---
abstract: 'Simulating direct current resistivity, frequency domain electromagnetics, and time domain electromagnetics in settings where steel-cased boreholes are present is of interest across a range of applications, including well-logging and monitoring subsurface injections such as hydraulic fracturing or carbon capture and storage. In some surveys, well-casings have been used as “extended electrodes” for near-surface environmental or geotechnical applications. Wells are often cased with steel, which has both a high conductivity and a significant magnetic permeability. The large physical property contrasts, as well as the large disparity in length scales, introduced when a steel-cased well is in a modeling domain make performing an electromagnetic forward simulation challenging. Using this setting as motivation, we present a finite volume approach for modeling electromagnetic problems on cylindrically symmetric and 3D cylindrical meshes which include an azimuthal discretization. The associated software implementation includes modeling capabilities for direct current resistivity, time domain electromagnetics, and frequency domain electromagnetics for models that include variable electrical conductivity and magnetic permeability. Electric and magnetic fields, fluxes, and charges are readily accessible in any simulation so that they can be visualized and interrogated. We demonstrate the value of being able to explore the behaviour of electromagnetic fields and fluxes through examples which revisit a number of foundational papers on direct current resistivity and electromagnetics in steel-cased wells. The software implementation is open source and included as a part of the `SimPEG` software ecosystem for simulation and parameter estimation in geophysics.'
address:
- '604-836-2715, [email protected]'
- 'Geophysical Inversion Facility, University of British Columbia'
author:
- 'Lindsey J. Heagy'
- 'Douglas W. Oldenburg'
bibliography:
- 'refs.bib'
title: 'Modeling electromagnetics on cylindrical meshes with applications to steel-cased wells'
---

Finite volume, Partial differential equations, Time domain electromagnetics, Frequency domain electromagnetics, Direct current resistivity, Boreholes

[^1]

Introduction
============

A number of geophysical electromagnetic (EM) problems lend themselves to cylindrical geometries. Airborne EM problems over a 1D layered earth or borehole-logging applications fall into this category; in these cases cylindrical modeling, which removes a degree of freedom in the azimuthal component, can be advantageous as it reduces the computational load. This is useful when running an inversion, where many forward simulations are required, and it is also valuable when exploring and building up an understanding of the behaviour of electromagnetic fields and fluxes in a variety of settings, such as the canonical model of an airborne EM sounding over a sphere, as it reduces feedback time between asking a question and visualizing results (e.g. [@Oldenburg2017]).

Beyond these simple settings, there is also a range of scenarios where the footprint of the survey is primarily cylindrical, but 2D or 3D variations in the physical property model may be present. For example, if we consider a single sounding in an airborne EM survey, the primary electric fields are rotational and the magnetic fields are poloidal, but the physical property model may have lateral variations or compact targets. More flexibility is required from the discretization to capture these features.
In this case, a 3D cylindrical geometry, which incorporates an azimuthal discretization, may be advantageous. It allows finer discretization near the source, where we have the most sensitivity and the fields are changing rapidly. Far from the source, the discretization is coarser, but it still conforms to the primary behaviour of the EM fields and fluxes and captures the rotational electric fields and poloidal magnetic flux.

In other cases, the most significant physical property variations may conform to a cylindrical geometry, for example in settings where vertical metallic well-casings are present, or in the emerging topic of using geophysics to “look ahead” of a tunnel boring machine. In particular, understanding the behavior of electromagnetic fields and fluxes in the presence of steel-cased wells is of interest across a range of applications, from characterizing lithologic units with well-logs [@Kaufman1990; @Kaufman1993; @Augustin1989], to identifying marine hydrocarbon targets [@Kong2009; @Swidinsky2013; @Tietze2015], to mapping changes in a reservoir induced by hydraulic fracturing or carbon capture and storage [@Pardo2013; @Borner2015; @Um2015; @Weiss2016; @hoversten2017borehole; @Zhang2018]. Carbon steel, a material commonly used for borehole casings, is highly electrically conductive ($10^6 - 10^7$ S/m) and has a significant magnetic permeability ($\geq 100$ $\mu_0$) [@wuhabashy1994]; it can therefore have a significant influence on electromagnetic signals. The large contrasts in physical properties between the casing and the geologic features of interest, along with the large range of scales that must be considered to model the millimeter-thick casing walls while also capturing geologic features, provide interesting challenges and context for electromagnetics in cylindrical geometries. As such, we will use EM simulations of conductive, permeable boreholes as motivation throughout this paper.

In much of the early literature, the casing was viewed as a nuisance which distorts the EM signals of interest. Distortion of surface direct current (DC) resistivity and induced polarization (IP) data, primarily in hydrocarbon settings, was examined in [@Wait1983; @Holladay1984; @Johnston1987] and later extended to grounded source EM and IP in [@Wait1985; @Williams1985; @Johnston1992]. Also in hydrocarbon applications, well-logging in the presence of steel-cased boreholes is motivation for examining the behavior of electromagnetic fields and fluxes in the vicinity of casing. Initial work focussed on DC resistivity with [@Kaufman1990; @Schenkel1990; @Kaufman1993; @Schenkel1994], and inductive-source frequency domain experiments with [@Augustin1989]. [@Kaufman1990] derives an analytical solution for the electric field at DC in an experiment where an electrode is positioned along the axis of an infinite-length well. The mathematical solution presented shows how, and under what conditions, horizontal currents leak into the formation outside the well. Moreover, [@Kaufman1990] showed, based upon asymptotic analysis, which fields to measure inside the well so that information about the formation resistivity could be obtained. This analysis is extended to include finite-length wells in [@Kaufman1993]. [@Schenkel1994] show the importance of considering the length of the casing in borehole resistivity measurements, and demonstrate the feasibility of cross-well DC resistivity. They also show that the presence of a steel casing can improve sensitivity to a target adjacent to the well.
In frequency domain EM, [@Augustin1989] consider a loop-loop experiment, where a large loop is positioned on the surface of the earth and a magnetic field receiver is within the borehole. Magnetic permeability is included in the analysis and a “casing correction”, effectively a filter due to the casing on inductive-source data, is introduced. This work was later built upon to consider cross-well frequency domain EM experiments [@Uchida1991; @Wilt1996].

For larger-scale geophysical surveys, steel-cased wells have been used as “extended electrodes.” [@Rocroi1985] used a pair of well casings as current electrodes for reservoir characterization in hydrocarbon applications. In near-surface settings, [@Ramirez1996; @Rucker2010; @Rucker2012] considered the use of monitoring wells as current and potential electrodes for a DC experiment aimed at imaging nuclear waste beneath a leaking storage tank. Similarly, [@Ronczka2015] considers the use of groundwater wells for monitoring a saltwater intrusion and investigates numerical strategies for simulating casings as long electrodes. Imaging hydraulic fractures has been a motivator for a number of DC and EM studies, among them [@Weiss2016; @hoversten2017borehole]. Some of these have suggested the use of casings that include resistive gaps so that currents may be injected in a segment of the well and potentials measured across the other gaps along the well [@Nekut1995; @Zhang2018]. There is also interest in modeling casings for casing-integrity applications, where the aim of the DC or EM survey is to diagnose if a well is flawed or intact based on data collected on the surface [@Wilt2018].

As computing resources have increased, our ability to forward-simulate more complex scenarios has improved. However, the large physical property contrasts and disparate length scales introduced when a steel-cased well is included in a model still present computational challenges. Even the DC problem, which is relatively computationally light, has posed challenges; those are exacerbated when solving the full Maxwell equations in the frequency (FDEM) or time domain (TDEM) and can become crippling for an inversion. For models where the source and borehole are axisymmetric, cylindrical symmetry may be exploited to reduce the dimensionality, and thus the number of unknowns, in the problem (e.g. [@Pardo2013; @Heagy2015]). To reduce computational load in a 3D simulation, a number of authors have employed simplifying assumptions. Several authors replaced the steel-cased well with a solid borehole, either with the same conductivity as the hollow-cased well (e.g. [@Um2015; @Puzyrev2017]) or preserving the cross-sectional conductance (e.g. [@Swidinsky2013; @Kohnke2017]), so that a coarser discretization may be used; [@Haber2016] similarly replaces the borehole with a coarser conductivity structure and adopts an OcTree discretization to locally refine the mesh around the casing. [@Yang2016] uses a circuit model and introduces circuit components to account for the steel-cased well in a 3D DC resistivity experiment; [@Weiss2017] adopts a similar strategy in a finite element scheme. Another approach has been to replace the well with an “equivalent source”, for example, a collection of representative dipoles, inspired by [@cuevas2014], or with a line-charge distribution for a DC problem [@Weiss2016]. For the frequency domain electromagnetic problem, a method of moments approach, which replaces the casing with a series of current dipoles, has been taken in [@Kohnke2017].
For 3D survey geometries, only a handful of forward simulations which accurately discretize the casing have been demonstrated, and they have been achieved at significant computational cost. Recent examples, including [@Commer2015; @Um2015; @Puzyrev2017], perform time and frequency domain simulations with finely-discretized boreholes; they required the equivalent of days of compute-time for a single forward simulation to complete. While these codes will undoubtedly see improvements in efficiency, what we present here is an alternative approach to the discretization which capitalizes on the cylindrical geometry of a borehole. Thus far, the majority of the literature has focussed on electrical conductivity, with little attention being paid to magnetic permeability, leaving open many questions about how it alters the behavior of the fields and fluxes and the impact it has on data measured at the surface. In our approach, we ensure that variable magnetic permeability can be included in order to facilitate exploration of these questions.

We introduce an approach and associated open-source software implementation for simulating Maxwell’s equations over conductive, permeable models on 2D and 3D cylindrical meshes. The software is written in Python [@van1995python] and is included as an extension to the `SimPEG` ecosystem [@Cockett2015; @Heagy2017]. Within the context of current research connected to steel-cased wells, our aim with the development and distribution of this software is two-fold: (1) to facilitate the exploration of the physics of EM in these large-contrast settings, and (2) to provide a simulation tool that can be used for testing other EM codes. The large physical property contrasts in both conductivity and permeability mean that the physics is complicated and often non-intuitive; as such, we ensure that the researcher can readily access and visualize fields, fluxes, and charges in the simulation domain. This is particularly useful when the software is used in conjunction with Jupyter notebooks, which facilitate exploration of numerical results [@Perez2015]. As the mesh conforms to the geometry of a vertical borehole, a fine discretization can be used in its vicinity without resulting in an onerous computation. This provides the opportunity to build an understanding of the physics of EM in settings with vertical boreholes prior to moving to settings with deviated and horizontal wells. We demonstrate the software with examples at DC, in the frequency domain, and in the time domain. Source code for all examples is provided as Jupyter notebooks at https://github.com/simpeg-research/heagy-2018-emcyl [@Heagy2018]; they are licensed under the permissive MIT license with the hope of reducing the effort required for a researcher to compare to or build upon this work.

Our paper is organized in the following manner. In section \[sec:numerical\_tools\], we introduce the governing equations, Maxwell’s equations, and describe their discretization in cylindrical coordinates. We then compare our numerical implementation to the finite element and finite difference results shown in [@Commer2015] as well as a finite volume OcTree simulation described in [@Haber2007]. Section \[sec:numerical\_examples\] contains numerical examples of the DC, frequency domain EM, and time domain EM implementations.
The two DC resistivity examples (sections \[sec:dc\_resistivity\_part1\] and \[sec:dc\_resistivity\_part2\]) are built upon the foundational work in [@Kaufman1990; @Kaufman1993], which uses asymptotic analysis to draw conclusions about the behavior of the electric fields, currents, and charges for a well where an electrode has been positioned along its axis. The next example, in section \[sec:TDEM\], is motivated by the interest in using a steel-cased well as an “extended electrode” in a time domain EM experiment. We perform a “top-casing” experiment, with one electrode connected to the top of the well, and examine the currents in the surrounding geologic formation through time. Our final two examples, in sections \[sec:FDEM\_part1\] and \[sec:FDEM\_part2\], consider a frequency domain experiment inspired by [@Augustin1989]. These examples demonstrate the impact of magnetic permeability on the character of the magnetic flux within the vicinity of the borehole and discuss the resulting magnetic field measurements made within a borehole.

Methodology {#sec:numerical_tools}
===========

Governing Equations
-------------------

The governing equations under consideration are Maxwell’s equations. Here we provide a brief overview and recommend [@Ward1988] for more detail. Under the quasi-static approximation, Maxwell’s equations are given by:
$$\begin{split} \nabla \times \vec{e} + \frac{\partial \vec{b}}{\partial t} &= 0 \\ \nabla \times \vec{h} - \vec{j} &= \vec{s}_e \end{split} \label{eq:MaxwellTime}$$
where $\vec{e}$ is the electric field, $\vec{b}$ is the magnetic flux density, $\vec{h}$ is the magnetic field, $\vec{j}$ is the current density and $\vec{s}_e$ is the source current density. Maxwell’s equations can also be formulated in the frequency domain; using the $e^{i \omega t}$ Fourier transform convention, they are
$$\begin{split} \nabla \times \vec{E} + i\omega\vec{B} &= 0 \\ \nabla \times \vec{H} - \vec{J} &= \vec{S}_e \end{split} \label{eq:MaxwellFreq}$$
The fields and fluxes are related through the physical properties: electrical conductivity ($\sigma$, or its inverse, resistivity $\rho$) and magnetic permeability ($\mu$), as described by the constitutive relations
$$\begin{split} \vec{J} &= \sigma \vec{E} \\ \vec{B} &= \mu \vec{H} \end{split} \label{eq:ConstitutiveRelations}$$
At the zero-frequency limit, we also consider the DC resistivity experiment, described by
$$\begin{split} \nabla \cdot \vec{j} &= I\left(\delta(\vec{r} - \vec{r}_{s^{+}}) - \delta(\vec{r} - \vec{r}_{s^{-}})\right) \\ \vec{e} &= - \nabla \phi \end{split} \label{eq:DCequations}$$
where $I$ is the magnitude of the injected current, $\vec{r}_{s^+}$ and $\vec{r}_{s^-}$ are the locations of the current electrodes, and $\phi$ is the scalar electric potential.

From our numerical tools, we require the ability to simulate large electrical conductivity contrasts, to include magnetic permeability, and to solve Maxwell’s equations at DC, in the frequency domain, and in the time domain in a computationally tractable manner. Finite volume methods are advantageous for modeling large physical property contrasts as they are conservative and the operators “mimic” properties of the continuous operators; that is, the discrete edge curl maps into the null space of the face divergence operator, and the discrete nodal gradient maps into the null space of the edge curl operator [@Hyman1999]. As such, they are common practice for many electromagnetic simulations (e.g. [@Horesh2011; @Haber2014; @Jahandari2014] and references within), and will be our method of choice.
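As a simple illustration, these mimetic identities can be checked numerically on a small cylindrical mesh using the `discretize` package described later in this section. The sketch below is illustrative only; the class and property names follow recent releases of `discretize` (older releases use `CylMesh` and camelCase operator names), so they may need adjusting for a particular version.

```python
# Minimal sketch: verify the mimetic properties of the discrete operators
# on a small 3D cylindrical mesh (names assume a recent discretize release).
import numpy as np
import discretize

hr = np.ones(8) * 2.0                # 2 m radial cells
ht = np.ones(8) * 2 * np.pi / 8      # azimuthal cell widths (radians), summing to 2*pi
hz = np.ones(10) * 2.0               # 2 m vertical cells
mesh = discretize.CylindricalMesh([hr, ht, hz])

C = mesh.edge_curl                   # edges -> faces
D = mesh.face_divergence             # faces -> cell centers
G = mesh.nodal_gradient              # nodes -> edges

# The discrete "divergence of the curl" and "curl of the gradient" vanish
# to machine precision, mirroring the continuous identities.
assert np.abs(D @ C).max() < 1e-12
assert np.abs(C @ G).max() < 1e-12
```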
Finite Volume Discretization
----------------------------

To represent a set of partial differential equations on the mesh, we use a staggered-grid approach [@Yee1966] and discretize fields on edges, fluxes on faces, and physical properties at cell centers, as shown in Figure \[fig:CylFiniteVolume\]. Scalar potentials can be discretized at cell centers or nodes. We consider both cylindrically symmetric meshes and fully 3D cylindrical meshes; the anatomy of a finite volume cell for these scenarios is shown in Figure \[fig:CylFiniteVolume\] (b) and (c).

![ Anatomy of a finite volume cell in (a) a cartesian, rectangular mesh, (b) a cylindrically symmetric mesh, and (c) a three-dimensional cylindrical mesh. []{data-label="fig:CylFiniteVolume"}](finiteVolume-02.png){width="\columnwidth"}

To discretize Maxwell’s equations in the time domain (equation \[eq:MaxwellTime\]) or in the frequency domain (equation \[eq:MaxwellFreq\]), we invoke the constitutive relations to formulate our system in terms of a single field and a single flux. This gives a system in either the electric field and magnetic flux (E-B formulation), or the magnetic field and the current density (H-J formulation). For example, in the frequency domain, the E-B formulation is
$$\begin{split} \mathbf{C} \mathbf{e} + i\omega\mathbf{b} &= \mathbf{0} \\ \mathbf{C}^\top \mathbf{M}_{\boldsymbol{\mu}^{-1}}^f \mathbf{b} - \mathbf{M}_{\boldsymbol{\sigma}}^e \mathbf{e} &= \mathbf{s_e} \end{split} \label{eq:DiscreteFDEMEB}$$
and the H-J formulation is
$$\begin{split} \mathbf{C}^\top \mathbf{M}_{\boldsymbol{\rho}}^f \mathbf{j} + i\omega\mathbf{M}_{\boldsymbol{\mu}}^e\mathbf{h} &= \mathbf{0} \\ \mathbf{C} \mathbf{h} - \mathbf{j} &= \mathbf{s_e} \end{split} \label{eq:DiscreteFDEMHJ}$$
where $\mathbf{e}, \mathbf{b}, \mathbf{h}, \mathbf{j}$ are vectors of the discrete EM fields and fluxes; $\mathbf{s_e}$ is the discrete electric source term (a discrete magnetic source term $\mathbf{s_m}$ can analogously be included in the first equation); $\mathbf{C}$ is the edge curl operator; and the matrices $\mathbf{M}_{\text{prop}}^{e,f}$ are the edge / face inner product matrices. In particular, variable electrical conductivity and variable magnetic permeability are captured in the discretization. The time domain equations are discretized in the same manner, as discussed in [@Heagy2017]; for time-stepping, a first-order backward Euler approach is used. Although the midpoint method, which is second-order accurate, could be considered, it is susceptible to oscillations in the solution, which reduce the order of accuracy, unless a sufficiently small time-step is used [@Haber2004; @Haber2014].

At the zero-frequency limit, each formulation has a complementary discretization for the DC equations. For the E-B formulation, the discretization leads to a nodal discretization of the electric potential $\boldsymbol{\phi}$, giving
$$\begin{split} - \mathbf{G}^\top \mathbf{M}_{\boldsymbol{\sigma}}^e \mathbf{e} &= \mathbf{q} \\ \mathbf{e} &= -\mathbf{G}\boldsymbol{\phi} \end{split} \label{eq:DiscreteDCNodal}$$
where $\mathbf{G}$ is the nodal gradient operator, and $\mathbf{q}$ is the source term, defined on nodes. Note that the nodal gradient takes the discrete derivative of nodal variables, and thus the output is on edges.
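As a concrete, minimal sketch of equation \[eq:DiscreteDCNodal\], the nodal DC system can be assembled and solved with `discretize` and `scipy` as below. The mesh dimensions and the source placement are purely illustrative, the operator and inner-product names assume a recent `discretize` release, and a single potential value is pinned to handle the null space associated with the Neumann boundary conditions; the production implementation in `SimPEG` handles these details internally and may differ.

```python
# Minimal sketch of the nodal DC discretization (equation DiscreteDCNodal):
# assemble G^T M_sigma^e G phi = q and solve for the potential phi on nodes.
import numpy as np
from scipy.sparse.linalg import spsolve
import discretize

hr = np.ones(20) * 2.5
ht = np.ones(8) * 2 * np.pi / 8
hz = np.ones(30) * 2.5
mesh = discretize.CylindricalMesh([hr, ht, hz], origin=[0.0, 0.0, -hz.sum() / 2.0])

sigma = 1e-2 * np.ones(mesh.n_cells)           # uniform conductivity (S/m)

G = mesh.nodal_gradient                        # nodes -> edges
Msig = mesh.get_edge_inner_product(sigma)      # edge inner-product matrix weighted by sigma
A = G.T @ Msig @ G                             # nodal DC operator

q = np.zeros(mesh.n_nodes)                     # source vector on nodes
q[mesh.n_nodes // 2] = 1.0                     # illustrative placement; in practice the electrode
                                               # location is interpolated onto nearby nodes

# With Neumann boundary conditions, A has a constant null space; pin one
# node to zero potential to obtain a unique solution.
A = A.tolil()
A[0, :] = 0.0
A[0, 0] = 1.0
q[0] = 0.0

phi = spsolve(A.tocsr(), q)                    # electric potential on nodes
e = -G @ phi                                   # electric field on edges
```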
The H-J formulation leads naturally to a cell-centered discretization of the electric potential
$$\begin{split} \mathbf{V} \mathbf{D} \mathbf{j} &= \mathbf{q} \\ \mathbf{M}_{\boldsymbol{\rho}}^f \mathbf{j} &= \mathbf{D}^\top \mathbf{V} \boldsymbol{\phi} \end{split} \label{eq:DiscreteDCCC}$$
where $\mathbf{D}$ is the face divergence operator, $\mathbf{V}$ is a diagonal matrix of the cell volumes, and $\mathbf{q}$ is the source term, which is defined at cell centers, as is $\boldsymbol{\phi}$. Here, the face divergence takes the discrete derivative from faces to cell centers; thus its transpose takes a variable from cell centers to faces. For a tutorial on the finite volume discretization of the DC equations, see [@Cockett2016].

For the EM simulations, natural boundary conditions are employed; in the E-B formulation, this means $\vec{B}\times\vec{n} = 0\vert_{\partial \Omega}$, and in the H-J formulation, we use $\vec{J}\times\vec{n} = 0\vert_{\partial \Omega}$. Within the DC simulations, there is flexibility in the choice of boundary conditions employed. In the simplest scenario, for the nodal discretization, we use Neumann boundary conditions, $\sigma\vec{E} \cdot \vec{n} = 0\vert_{\partial \Omega}$, and for the cell-centered discretization, we use Dirichlet boundary conditions, $\phi = 0\vert_{\partial \Omega}$.

When employing a cylindrical mesh, the distinction between where the electric and magnetic contributions are discretized in each formulation has important implications. If we consider the cylindrically symmetric mesh (Figure \[fig:CylFiniteVolume\]b) and a magnetic dipole source positioned along the axis of symmetry (sometimes referred to as the TE mode), we must use the E-B formulation of Maxwell’s equations to simulate the resulting poloidal magnetic flux and rotational electric fields. If instead a vertical current dipole is positioned along the axis of symmetry (also referred to as the TM mode), then the H-J formulation of Maxwell’s equations must be used in order to simulate poloidal currents and rotational magnetic fields. A fully 3D cylindrical mesh provides additional degrees of freedom, with the discretization in the azimuthal direction allowing us to simulate more complex responses. However, in order to avoid the need for very fine discretization in the azimuthal direction, we should select the most natural formulation of Maxwell’s equations given the source geometry being considered. For a vertical steel-cased well and a grounded source, we expect the majority of the currents to flow vertically and radially; thus the more natural discretization to employ is the H-J formulation of Maxwell’s equations.

[@Haber2014] provides derivations and discussion of the differential operators and inner product matrices; though they are described for a cartesian coordinate system and a rectangular grid, the extension to a three-dimensional cylindrical mesh is straightforward. Effectively, a cartesian mesh is wrapped so that the $x$ components become $r$ components, and $y$ components become $\theta$ components, as shown in Figure \[fig:cylwrap\].
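For concreteness, a 3D cylindrical tensor mesh of this kind is specified by cell widths in $r$, $\theta$, and $z$, in the same way that a cartesian tensor mesh is specified by widths in $x$, $y$, and $z$. The sketch below, using the `discretize` package described next, builds a mesh with fine radial cells near the axis and geometrically expanding padding cells outward; all cell sizes are illustrative rather than those used in the examples, and the class name follows recent `discretize` releases (older releases use `CylMesh`).

```python
# Illustrative construction of a 3D cylindrical tensor mesh with discretize:
# fine radial cells near the axis, geometric padding outward, and a uniform
# azimuthal discretization (cell sizes are made up for illustration).
import numpy as np
import discretize

hr_fine = np.ones(10) * 0.01                   # 1 cm radial cells near the axis
hr_pad = 0.01 * 1.3 ** np.arange(1, 25)        # expanding padding cells
hr = np.r_[hr_fine, hr_pad]

nt = 16                                        # azimuthal cells; widths must sum to 2*pi
ht = np.ones(nt) * 2 * np.pi / nt

hz_core = np.ones(100) * 2.5                   # 2.5 m vertical cells in the core region
hz_pad = 2.5 * 1.3 ** np.arange(1, 15)         # padding above and below
hz = np.r_[hz_pad[::-1], hz_core, hz_pad]

mesh = discretize.CylindricalMesh([hr, ht, hz], origin=[0.0, 0.0, -hz.sum() / 2.0])
print(mesh.n_cells, mesh.n_faces, mesh.n_edges)
```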
![Construction of a 3D cylindrical mesh from a cartesian mesh.[]{data-label="fig:cylwrap"}](cylwrap.png){width="0.9\columnwidth"}

The additional complications that are introduced are: (1) the periodic boundary condition introduced on boundary faces and edges in the azimuthal direction, (2) the removal of radial faces and azimuthal edges along the axis of symmetry, and (3) the elimination of the degrees of freedom of the nodes and edges at the azimuthal boundary, as well as of the nodes and vertical edges along the axis of symmetry. The implementation of the 3D cylindrical mesh is provided as a part of the `discretize` package (http://discretize.simpeg.xyz), which is an open-source Python package that contains finite volume operators and utilities for a variety of mesh types. All differential operators are tested for second-order convergence and for preservation of mimetic properties (as described in [@Haber2014]). `discretize` is developed in a modular, object-oriented manner and interfaces to all of the `SimPEG` forward modeling and inversion routines; thus, once the differential operators have been implemented, they can be readily used to perform forward simulations [@Cockett2015; @Heagy2017]. One of the benefits of `SimPEG` for forward simulations is that values of the fields and fluxes are readily computed and visualized, which enables researchers to examine the physics as well as to simulate data. Development within the `SimPEG` ecosystem follows best practices for modern, open-source software, including: peer review of code changes and additions, versioning, automated testing, documentation, and issue tracking.

Validation
----------

Testing for the DC, TDEM, and FDEM implementations includes comparison with analytic solutions for a dipole in a whole-space. These examples are included as supplementary examples with the distributed notebooks. We have also compared the cylindrically symmetric implementation at low frequency with a DC simulation from a resistor network solution developed in MATLAB (Figure 3 in [@Yang2016]). Here, we include a comparison with the time domain electromagnetic simulation shown in Figures 13 and 14 of [@Commer2015]. A 200 m long well, with a conductivity of $10^{6}$ S/m, an outer diameter of 135 mm, and a casing thickness of 12 mm, is embedded in a 0.0333 S/m background. For the material inside the casing, we use a conductivity equal to that of the background. The conductivity of the air is set to $3 \times 10^{-4}$ S/m and the permeability of the casing is ignored ($\mu = \mu_0$). A 10 m long inline electric dipole source is positioned on the surface, 50 m radially from the well. The radial electric field is sampled at 5 m, 10 m, 100 m, 200 m and 300 m along a line $180^{\circ}$ from the source. Two simulations are included in [@Commer2015]: a finite element (FE) and a finite difference (FD) solution. Both simulation meshes capture the thickness of the casing with a single cell or single tetrahedral element. Additionally, we include a comparison with the 3D UBC finite volume OcTree time domain code [@Haber2007]. The OcTree mesh allows for adaptive refinement of the mesh around sources, receivers, and conductivity structures within the domain, thus reducing the number of unknowns in the domain as compared to a tensor mesh.
For the 3D cylindrical simulation (SimPEG), we use a mesh that has 4 cells radially across the width of the casing, 2.5 m vertical discretization, and azimuthal refinement near the source and receivers (along the $\theta=90^\circ$ line), as shown in Figure \[fig:commer\_model\]. To solve the system matrix, the direct solver PARDISO was used [@Petra2014; @Cosmin2016]. The simulation took 14 minutes to run on a single Intel Xeon X5660 processor (2.80 GHz). The details of each simulation are shown in Table \[tab:commer\_comparison\].

![ Depth slice (left) and cross section (right) through the 3D cylindrical mesh used for the comparison with [@Commer2015]. The source and receivers are positioned along the $\theta = 90^\circ$ line. The mesh extends 17 km radially and 30 km vertically to ensure that the fields have sufficiently decayed before reaching the boundaries. []{data-label="fig:commer_model"}](commer_model.png){width="\columnwidth"}

|            | **Mesh**                       | **Timestepping**                                         | **Compute Resources**                    | **Compute Time** |
|------------|--------------------------------|----------------------------------------------------------|------------------------------------------|------------------|
| Commer FE  | 8 421 559 tetrahedral elements | 893 time steps, 9 factorizations                         | single core Intel Xeon X5550 (2.67 GHz)  | 63 hours         |
| Commer FD  | 2 182 528 cells                | $\Delta t = 3 \times 10^{-10}$ s, 120 598 277 time-steps | 512 cores Intel Xeon (2.33 GHz)          | 23.2 hours       |
| UBC OcTree | 5 011 924 cells                | 154 time steps, 10 factorizations                        | single core Intel Xeon X5660 (2.80 GHz)  | 57 minutes       |
| SimPEG     | 314 272 cells                  | 187 time-steps, 7 factorizations                         | single core Intel Xeon X5660 (2.80 GHz)  | 14 minutes       |

: Simulation details for the results shown in Figure \[fig:commer\_results\]. Note that the discretizations in the Commer FE and FD codes use one element or one cell across the width of the casing, as does the UBC code. The SimPEG simulation uses 4 cells across the width of the casing. For the time-stepping, each change in step length requires a matrix factorization. Values for the Commer FE and FD solutions are from [@Commer2015; @Um2015]. []{data-label="tab:commer_comparison"}

In Figure \[fig:commer\_results\], we show the absolute value of the radial electric field sampled at five stations; each of the different line colors is associated with a different location, and offsets are with respect to the location of the well. Solutions were interpolated to the same offsets using nearest-neighbor interpolation. The 3D cylindrical simulation (SimPEG) is plotted with a solid line and overlaps with the UBC solution (dash-dot line) for all times shown. The finite element (FE) solution from [@Commer2015] is shown with the dashed lines, and the finite difference (FD) solution is plotted with dotted lines. The 3D cylindrical (SimPEG) and UBC solutions are overall in good agreement with the solutions from [@Commer2015]. There is a difference in amplitude and position of the zero-crossing (the v-shape visible in the blue and orange curves) between the Commer solutions and the SimPEG / UBC solutions at the shortest two offsets in the early times. At such short offsets from a highly conductive target, details of the simulation and discretization, such as the construction of the physical property matrices in each of the various approaches, become significant; this likely accounts for the discrepancies, but a detailed code comparison is beyond the scope of this paper.
Our aim with this comparison is to provide evidence that our numerical simulation is performing as expected; the overall agreement with Commer’s and UBC’s results provides confidence that it is.

![Time domain EM response comparison with [@Commer2015]. Each of the different line colors is associated with a different location; offsets are with respect to the location of the well.[]{data-label="fig:commer_results"}](commer_results.png){width="0.8\columnwidth"}

This example demonstrates agreement between the 3D cylindrical solution and solutions obtained with independently developed codes. Importantly, it also shows how, by using a cylindrical discretization which conforms to the conductivity structure of interest, the size of the mesh and resultant cost of the computation can be greatly reduced. This is true even with relatively conservative spatial and temporal discretizations. Minimizing computation time was not a main focus in the development of the software and there are still opportunities for improving efficiency. As an open-source project, contributions from the wider community are encouraged.

Numerical Examples {#sec:numerical_examples}
==================

We demonstrate the implementation through examples using the DC, time domain EM and frequency domain EM codes. To focus discussion, each of the examples explores an aspect of the physical behavior of electromagnetic fields and fluxes in the presence of a steel-cased well.

DC Resistivity Part 1: Electric fields, currents and charges in a long well {#sec:dc_resistivity_part1}
---------------------------------------------------------------------------

In his two seminal papers on the topic, Kaufman uses transmission line theory to draw conclusions about the behaviour of the electric field when an electrode is positioned inside of an infinite casing. In this first example, we will revisit some of the physical insights discussed in [@Kaufman1990; @Kaufman1993] that followed from an analytical derivation and compare those to our numerical results. In the second example, we look at the distribution of current and charges as the length of the well is varied and compare those to the analytical results discussed in [@Kaufman1993].

We start by considering a 1 km long well ($10^6$ S/m) in a whole-space ($10^{-2}$ S/m), with the conductivity of the material inside the borehole equal to that of the whole-space. For modeling, we will use a cylindrically symmetric mesh. The positive electrode is positioned on the borehole axis at the mid-point of the 1 km long well; a distant return electrode is positioned 1 km away at the same depth. Kaufman discusses the behavior of the electric field by dividing the response into three zones: a near zone, an intermediate zone and a far zone [@Kaufman1990; @Kaufman1993]. In the near zone, the electric field has both radial and vertical components, negative charges are present on the inside of the casing, and positive charges are present on the outside of the casing. The near zone is quite localized and typically its vertical extent is no more than $\sim 10$ borehole radii away from the electrode. To examine these features in our numerical simulation, we have plotted in Figure \[fig:kaufman\_zones\]: (a) the total charge, (b) secondary charges, (c) electric field, and (d) current density in a portion of the model near the source. The behaviours expected by Kaufman are consistent with our numerical results.
Within the near zone, the total charge is dominated by the large positive charge at the current electrode location and negative charges that exist along the casing wall, where current is moving from a resistive region inside the borehole into a conductor. The extent of the negative charges along the inner casing wall is more evident when we look at the secondary charge, which is obtained by subtracting the charge that would be observed in a uniform half-space from the total charge (Figure \[fig:kaufman\_zones\]b). Inside the casing, we can see the transition from near-zone behavior to intermediate-zone behavior approximately 0.5 m above and below the source; that is equal to 10 borehole radii from the source location, which agrees with Kaufman’s conclusion.

In the intermediate zone, Kaufman discusses a number of interesting aspects with respect to the behavior of the electric fields and currents which we can compare with the observed behavior in Figure \[fig:kaufman\_zones\]. Among them, he shows that the electric field within the borehole and casing is directed along the vertical axis; as a result, no charges accumulate on the inner casing wall. Charges do, however, accumulate on the outer surface of the casing; these generate radially-directed electric fields and currents, often referred to as “leakage currents”, within the formation. At each depth slice through the casing and borehole, the electric field is uniform; however, due to the high conductivity of the casing, most of the current flows within the casing. The vertical extent of the intermediate zone depends on the resistivity contrast between the casing and the surrounding formation and extends beyond several hundred meters before transitioning to the far zone, where the influence of the casing disappears [@Kaufman1990].

![(a) Total charge density, (b) secondary charge density, (c) electric field, and (d) current density in a section of the pipe near the source at z=-500 m.[]{data-label="fig:kaufman_zones"}](kaufman_zones.png){width="0.7\columnwidth"}

The radially directed fields from the casing, and the length of the intermediate zone, have practical implications in the context of well-logging because they delineate the region in which measurements can be made to acquire information about the formation resistivity outside the well. Within the intermediate zone, fields behave like those due to a transmission line [@Kaufman1990], and multiple authors have adopted modeling strategies that approximate the well and surrounding medium as a transmission line [@Kong2009; @Aldridge2015]. We will extend this analysis in the next example and discuss how the length of the well impacts the behavior of the charges, fields, and fluxes.

DC Resistivity Part 2: Finite Length Wells {#sec:dc_resistivity_part2}
------------------------------------------

In [@Kaufman1993], the transmission-line analysis was extended to consider finite-length wells. Inspired by the interest in using the casing as an “extended electrode” for delivering current to depth (e.g. [@Schenkel1994; @Um2015; @Weiss2016; @hoversten2017borehole]), here we consider a 3D DC resistivity experiment where one electrode is connected to the top of the well. We will examine the current and charge distribution for wells ranging in length from 250 m to 4000 m and compare those to the observations in [@Kaufman1993]. The conductivity of the well is selected to be $10^6$ S/m.
A uniform background conductivity of $10^{-2}$ S/m is used and the return electrode is positioned 8000 m from the well; this is sufficiently far from the well that we do not need to examine the impact of the return electrode location in this example. A 3D cylindrical mesh was used for the simulation. [@Kaufman1993] derives a solution for the current within a finite-length well and discusses two end-member cases: a short well and a long well. “Short” versus “long” is defined based on the product $\alpha L_c$, where $L_c$ is the length of the casing and $\alpha = 1/\sqrt{S T}$; here, $S$ is the cross-sectional conductance of the casing and has units of S$\cdot$m ($S = \sigma_c 2\pi a \Delta a$, for a casing with radius $a$ and thickness $\Delta a$), and $T$ is the transverse resistance. The transverse resistance is approximately equal to the resistivity of the surrounding formation (for more discussion on where this approximation breaks down, see [@Schenkel1994]). For short wells, where $\alpha L_c \ll 1$, the current decreases linearly with distance, whereas for long wells, where $\alpha L_c \gg 1$, the current decays exponentially with distance from the source, with the rate of decay being controlled by the parameter $\alpha$.

In Figure \[fig:kaufman\_finite\_well\] (a), we show current in the well for 5 different borehole lengths. The x-axis is the distance from the source normalized by the length of the well. We also show the two end-member solutions (equations 45 and 53) from [@Kaufman1993]. There is significant overlap between the 250 m numerical solution and the short-well approximation. As the length of the well increases, exponential decay of the currents becomes evident. Since $\alpha$ is quite small, for this example $\alpha = 2 \times 10^{-3}$ m$^{-1}$, the borehole must be very long to reach the other end member, which corresponds to the exponentially decaying solution. In Figure \[fig:kaufman\_finite\_well\] (b), we have plotted the charges along the length of the well. In the short-well regime, the borehole is approximately an equipotential surface and the charges are uniformly distributed; in the long-well regime, the charges decay with depth. What was surprising to us was the noticeable increase in charge accumulation that occurs near the bottom of the well. This is especially evident for the short well. Initially, we were suspicious and thought this might be due to problems with our numerical simulation; there was no obvious physical explanation that we were aware of. However, investigation into the literature revealed that the increase in charge density at the ends of a cylinder is a real physical effect, but an exact theoretical solution still does not appear to exist [@Griffiths1997] (see figure 4, in particular).

The results shown in Figure \[fig:kaufman\_finite\_well\] have implications when testing approaches for reducing computational load by approximating a well with a solid cylinder or prism, as in [@Um2015], or replacing the well with a distribution of charges, as in [@Weiss2016]. For a short well, the behaviour of the currents is independent of conductivity, so, as long as the borehole is approximated by a sufficiently conductive target, the behaviour of the fields and fluxes will be representative of the fine-scale model. However, as the length of the well increases, the cross-sectional conductance of the well becomes relevant, as it controls the rate of decay of the currents in the well and thus the rate at which currents leak into the formation.
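To make the short-well versus long-well criterion concrete for this example, the decay constant $\alpha$ can be reproduced with a few lines of arithmetic. The casing radius and wall thickness are not stated explicitly here, so the sketch below assumes a 5 cm radius and a 1 cm wall (the dimensions used in the time domain example that follows); with these values, the computed $\alpha$ matches the $2 \times 10^{-3}$ m$^{-1}$ quoted above.

```python
# Back-of-the-envelope check of Kaufman's short-well / long-well criterion
# for this example (assumed casing geometry: 5 cm radius, 1 cm wall thickness).
import numpy as np

sigma_c = 1e6                       # casing conductivity (S/m)
a, delta_a = 0.05, 0.01             # assumed casing radius and wall thickness (m)
rho_background = 1.0 / 1e-2         # background resistivity (Ohm m)

S = sigma_c * 2 * np.pi * a * delta_a   # cross-sectional conductance (S*m)
T = rho_background                      # transverse resistance ~ background resistivity
alpha = 1.0 / np.sqrt(S * T)            # ~2e-3 1/m, as quoted in the text

for L_c in [250, 500, 1000, 2000, 4000]:
    # alpha*L_c << 1: "short" well (linear decay); alpha*L_c >> 1: "long" well (exponential decay)
    print(L_c, alpha * L_c)
```

With these assumed dimensions, $\alpha L_c$ ranges from about 0.45 for the 250 m well to about 7 for the 4000 m well, consistent with the transition from nearly linear to exponential current decay seen in Figure \[fig:kaufman\_finite\_well\](a).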
A similar result holds when a line of charges is used to approximate the well as a DC source; a uniform charge is suitable for a sufficiently short or sufficiently conductive well, whereas a distribution of charge which decays exponentially with depth needs to be considered for longer wells. Thus, when attempting to replace a fine-scale model of a well with a coarse-scale model, either with a conductivity structure or by some form of “equivalent source”, validations should be performed on models that have the same length-scale as the experiment to ensure that both behaviors are being accurately modeled.

![ (a) Current along a well for 5 different wellbore lengths. The x-axis is depth normalized by the length of the well. The black dashed line shows the short-well approximation (equation 45 in [@Kaufman1993]) for a 200 m long well. The black dash-dot line shows the long-well approximation (equation 53 in [@Kaufman1993]) for a 4000 m well. (b) Charge per unit length along the well for 5 different wellbore lengths. []{data-label="fig:kaufman_finite_well"}](kaufman_finite_well.png){width="\columnwidth"}

Time Domain Electromagnetics {#sec:TDEM}
----------------------------

In this example, we examine the behaviour of electric currents in an experiment where the casing is used as an “extended electrode”. Although the initial investigations with casings centered around using a DC source, greater information about the subsurface can be obtained by employing a frequency or time domain source. A particular application is the monitoring of hydraulic fracturing proppant and fluids, or CO$_2$; this is active research carried out by many groups worldwide (e.g. [@Hoversten2015; @Um2015; @Puzyrev2017; @Zhang2018] among others). The challenge is to have efficient and accurate forward modeling; solving the full Maxwell equations is much more demanding than solving the DC problem.

For our simulation, a positive electrode is connected to the top of the casing and a return electrode is positioned 1 km away. The well has a conductivity of $10^6$ S/m and is 1 km long; it has an outer diameter of 10 cm and a 1 cm thick casing wall. The mud which infills the well has the same conductivity as the background, $10^{-2}$ S/m. The conductivity of the air is set to $10^{-5}$ S/m; in numerical experiments, we have observed that conductivity contrasts near or larger than $\sim 10^{12}$ lead to erroneous numerical solutions. For this example, we will focus on electrical conductivity only and set the permeability of the well to $\mu_0$. A step-off waveform is used, and the currents within the formation are plotted through time in Figure \[fig:TDEM\_currents\]. Panel (a) shows a zoomed-in cross section of the casing, (b) shows a vertical cross section along the line of the wire, (c) shows a horizontal depth slice at 50 m depth, and (d) shows a depth slice at 800 m depth. The images in panels (b), (c) and (d) are on the same color scale.

We begin by examining Figure \[fig:TDEM\_currents\] (b), which shows the currents in the formation. At time $t=0$ s, we have the DC solution. Currents flow away from the well, and eventually curve back to the return electrode. Immediately after shut-off, we see an image current develop in the formation. The image current flows in the same direction as the original current in the wire; this is opposite to the currents in the formation, causing a circulation of current. The center of this circulation is visible as the null propagating downwards and to the right in Figure \[fig:TDEM\_currents\] (b).
In Figure \[fig:TDEM\_currents\] (a), we see the background circulating currents being channeled into the well and propagating downwards. The depth range over which currents enter the casing depends upon time. At t=0.01 ms, the zero crossing, which distinguishes the depth between incoming and outgoing current in the casing, occurs at z=90 m; at t=0.1 ms it is at 225 m; and by t=1 ms, the zero crossing approaches the midway point in the casing and is at 470 m depth. At later times, the downward propagation of this null slows as the image currents are channeled into the highly conductive casing; at 5 ms it is at 520 m depth, at 10 ms, 560 m depth, and by 100 ms (not shown), it is at 800 m depth. On the side of the well opposite to the wire, we also see a null develop; it is visible in the cross sections in panel (a). To help understand this, we examine the depth slices in panel (c). Behind the well, we see that as the image currents diffuse downwards and outwards, some of those currents are channeled back towards the well; this is visible in the depth slice at $10^{-4}$ s. These channeled currents are opposite in direction to the formation currents set up at t=0, which are also diffusing downwards and outwards; where these two processes intersect, there is a current shadow.

There are a number of points to highlight in this example. The first, which has been noted by several authors (e.g. [@Schenkel1994; @Hoversten2015]), is that the casing helps increase sensitivity to targets at depth. This occurs by two mechanisms: (1) at DC, prior to shut-off, the casing acts as an “extended electrode” leaking current into the formation along its length; (2) after shut-off, it channels the image currents and increases the current density within the vicinity of the casing. The second point to note is that there are several survey design considerations raised by examining the currents: targets that are positioned where there is significant current will be most illuminated. If the target is near the surface and offset from the well, a survey where the source wire runs along the same line as the target will have the added benefit of the excitation due to the image currents. These benefits are twofold: (1) the passing image current increases the current density for a period of time, and (2) the changing amplitude and direction of the currents with time generate different excitations of the target. This should provide enhanced information in an inversion, as compared to the single excitation that is available from a DC survey. For deeper targets in this experiment, the passing image current has diffused significantly, and thus it appears that the wire location has less impact on the magnitude of the current density. However, it is possible that increasing the wire-length could be beneficial. This extension is straightforward and could be examined with the provided script. There may also be added benefit to having the target positioned along the same line as the source wire, as at later times, the direction of current reverses, changing the excitation of the target. The final point to note from this example is that although this is a simple model, the behavior of the currents is not intuitive; visualizations of the currents, fields and fluxes allow researchers to explore the basic physics and prompt new questions.

![ Current density for a time domain experiment where one electrode is connected to the top of the casing and a return electrode is on the surface, 1000 m away.
Six different times are shown, corresponding to each of the six rows; the times are indicated in the plots in panel (d). Panel (a) shows a zoomed-in cross section of the current density in the immediate vicinity of the steel-cased well. Panel (b) shows a cross section through the half-space along the same line as the source wire. Panels (c) and (d) show depth slices of the currents at 54 m and 801 m depth. []{data-label="fig:TDEM_currents"}](TDEM_currents.png){width="0.95\columnwidth"}

Frequency Domain Electromagnetics Part 1: Comparison with scale model results {#sec:FDEM_part1}
------------------------------------------------------------------------------

In the DC example, we discussed how charges are distributed along the well and currents flow into the formation. The time domain example extended the analysis of grounded sources, showed the potential importance of EM induction effects and illuminated the underlying physics. From a historical perspective, however, practical developments in EM were pursued in the frequency domain; the mathematics is more manageable in the frequency domain, and technological advances were being made in the development of induction well-logging tools [@Doll1949; @Moran1962]. Although the conductivity of the pipe generally plays the most dominant role in attenuating the signal, the magnetic permeability is non-negligible [@Wait1977]; it is the product of the conductivity and permeability that appears in the description of EM attenuation. Also, the fact that permeable material becomes magnetized in the presence of an external field complicates the problem. [@Augustin1989] is one of the first papers on induction logging in the presence of steel-cased wells that aims to understand and isolate the EM response of the steel-cased well. Using a combination of scale modeling and analytical mathematical modeling, they examined the impacts of conductivity and magnetic permeability on the magnetic field observed in the pipe. In this example and the one that follows, we attempt to unravel this interplay between conductivity and magnetic permeability.

The first experiment [@Augustin1989] discuss is a scale model using two different pipes, a conductive copper pipe and a conductive, permeable iron pipe; each pipe is 9 m in length. The copper pipe had an inner diameter of 0.063 m and a thickness of 0.002 m, while the iron pipe had a 0.063 m inner diameter and a 0.0043 m wall thickness. A source loop with a radius of 0.6 m was co-axial with the pipe and, in one experiment, positioned at one end of the pipe (which they refer to as the “semi-infinite pipe” scenario). In another experiment the source loop is positioned at the midpoint of the pipe (which they refer to as the “infinite pipe” scenario); for both experiments, magnetic field data are measured as a function of frequency along the central axis of the pipe. Their results are presented in terms of a Field Strength Ratio (FSR), which is the ratio of the absolute value of the magnetic field at the receiver to the absolute value of the magnetic field if no pipe is present (Figure 3 in [@Augustin1989]). At low frequencies, for the data collected within the iron pipe, static shielding (FSR $<$ 1) was observed for the measurements where the receiver was in the plane of the source loop for both the “infinite” and “semi-infinite” scenarios.
When the receiver was positioned within the pipe, 1.49 m offset from the plane of the source loop, static enhancement effects (FSR $>$ 1) were observed for both the infinite and semi-infinite scenarios. Using this experiment for context, we will compare the behaviour of our numerical simulation with the observations in [@Augustin1989] and examine the nature of the static shielding and enhancement effects.

For our numerical setup, the pipes are 9 m in length and have an inner diameter of 0.06 m. The copper pipe has a casing-wall thickness of 0.002 m and the iron pipe has a thickness of 0.004 m. Following the estimated physical property values from [@Augustin1989], we use a conductivity of $3.5 \times 10^7$ S/m and a relative permeability of 1 for the copper pipe. For the iron pipe, a conductivity of $8.0 \times 10^6$ S/m and a relative permeability of 150 is used. A background resistivity of $10^4~\Omega$m is assumed. The computed FSR values for the axial magnetic field as a function of frequency are shown in Figure \[fig:AugustinFSR\].

![ Field strength ratio (FSR), the ratio of the measured vertical magnetic field to the free-space magnetic field, as a function of frequency for two different receiver locations. In (a), the receiver is in the same plane as the source; in (b), the receiver is 1.49 m offset from the source. []{data-label="fig:AugustinFSR"}](AugustinFSR.png){width="0.6\columnwidth"}

Consider the response of the conductive pipe. At low frequencies, the FSR for the copper pipe (blue lines) is 1 for both the infinite (solid line) and semi-infinite (dashed line) scenarios, as the field inside the copper pipe is equivalent to the free-space field. With increasing frequency, eddy currents are induced in the pipe which generate a magnetic field that opposes the primary, causing a decrease in the observed FSR. When the source and receiver are in the same plane (L=0.00 m), the rate of decrease is more rapid in the infinite scenario than the semi-infinite. Since there is conductive material on both sides of the receiver in the infinite case, we would expect attenuation of the fields to occur more rapidly than in the semi-infinite case. This observation is consistent with Figure 3a in [@Augustin1989]. For the offset receiver (L=1.49 m), they observed a slight separation between the infinite and semi-infinite curves which we do not; however, they attributed this to potential errors in magnetometer position. Thus, overall, the numerical results for the copper pipe are in good agreement with the scale model results observed by [@Augustin1989].

Next, we examine the response of the conductive, permeable pipe. In Figure \[fig:AugustinFSR\]b, we observe a static enhancement effect (FSR $>$ 1) at low frequencies. The enhancement is larger in the infinite scenario than the semi-infinite scenario; this is in agreement with Figure 3b in [@Augustin1989]. There is, however, a significant discrepancy between our numerical simulations and the scale model for the semi-infinite pipe when the source and receiver lie in the same plane (Figure \[fig:AugustinFSR\]a). [@Augustin1989] observed a static shielding effect for both the infinite and semi-infinite scenarios, whereas we observe static shielding for the infinite scenario, but a significant static enhancement for the semi-infinite case. To diagnose what might be the cause of this, we will examine the magnetic flux density in this region of the pipe.
In Figure \[fig:AugustinBfields\], we have plotted: (a) the secondary magnetic flux in the infinite-pipe scenario near the source (z=-4.5 m), (b) the secondary magnetic flux in the semi-infinite scenario (z=0 m for the source), (c) the top 5 cm of the semi-infinite pipe, and (d) the top 5 cm of the infinite pipe. All plots are at 0.1 Hz. The primary magnetic field is directed upwards within the regions we have plotted, so upward-going secondary magnetic flux indicates a static enhancement effect, and downward-oriented secondary magnetic flux indicates static shielding effects. In (a) we see a transition from the static shielding in the vicinity of the source to a static enhancement approximately 0.5 m above and below the plane of the source. Similarly, in (b), we notice a sign reversal in the z-component of the secondary magnetic flux at a depth of 0.6 m. The behaviors observed in Figure \[fig:AugustinBfields\] are quite comparable to Augustin et al.’s observation of a transition from shielding to enhancement occurring at distances greater than 0.8 m from the source. Numerical experiments show that the vertical extent of the region over which static shielding occurs increases with increasing pipe diameter, and similarly increases with increasing loop radius while the magnitude of the effect decreases. This can be understood by considering how the pipe is magnetized; for a small loop radius, the magnetization is largely localized near the plane of the source and rapidly falls off with distance from the plane of the source. Localized, large-amplitude magnetization causes the casing to act as a collection of dipoles around the circumference of the casing. As the radius of the loop increases, the magnetization spreads out along the length of the well, resulting in longer, lower-amplitude dipoles, thus both increasing the extent of the region over which static shielding occurs and decreasing its amplitude.

![ Magnetic flux density at 0.1 Hz in the region of the pipe near the plane of the source for (a) the “infinite” pipe, where the source is located at -4.5 m and the pipe extends from 0 m to -9 m, and (b) a “semi-infinite” pipe, where the source is located at 0 m and the pipe extends to -9 m. In (c), we zoom in to the top 5 cm of the “semi-infinite” pipe, and (d) shows the 5 cm at the top-end of the “infinite” pipe. []{data-label="fig:AugustinBfields"}](AugustinBfields.png){width="\columnwidth"}

This explains the nature of the static enhancement and static shielding effects, but to explain the discrepancy between the static shielding observed in the semi-infinite pipe when L=0 m by Augustin et al. and the static enhancement we observe in Figure \[fig:AugustinFSR\]a, we examine the magnetic flux density in the top few centimeters of the pipe. Figure \[fig:AugustinBfields\]c shows the top 5 cm of the secondary magnetic flux in the semi-infinite pipe; the source is in the z=0 m plane. Zooming in reveals that there is yet another sign reversal near the end of the pipe. This is evident even in the infinite-pipe scenario (Figure \[fig:AugustinBfields\]d), where the source is offset by several meters from the end of the pipe. This edge effect perhaps bears some similarities to what we observed in Figure \[fig:kaufman\_finite\_well\]b, where we saw a build-up of charge near the end of the pipe in the DC scenario.
At the end of the pipe, the normal components of the fluxes ($\vec{j}$, $\vec{b}$) from the pipe to the background must be continuous in both the radial and vertical directions, as must the tangential components of the fields ($\vec{e}$, $\vec{h}$). The interplay of these two constraints at the end of the pipe results in more complexity in the resultant fields and fluxes. Within the span of a few centimeters we transition from static enhancement at the top of the pipe to a static shielding further down. An error of as little as a few centimeters in the position of the magnetometer can cause a reversal in behaviour; in Figure \[fig:Augustin3cm\], we have plotted the FSR for a magnetometer positioned 3 cm beneath the plane of the source, and the static-shielding behaviour observed for the semi-infinite pipe is much more aligned with that observed in Figure 3a in [@Augustin1989]. ![ Field strength ratio, FSR, for a receiver positioned 3 cm beneath the plane of the source. For comparison, we have plotted the FSR for the permeable pipe when the source and receiver lie in the same plane (L=0.00m) with the semi-transparent orange lines. Note that the infinite-pipe solutions for L=0.03m and L=0.00m overlap. []{data-label="fig:Augustin3cm"}](Augustin3cm.png){width="0.6\columnwidth"} Frequency Domain Electromagnetics Part 2: Conductivity and permeability in the inductive response of a well {#sec:FDEM_part2} ----------------------------------------------------------------------------------------------------------- The experiments shown in the previous section revealed some insights into the complexity of the fields within the pipe and illustrated the role of permeability in the character of the responses at low frequency. Next, we move to larger scales and examine the role of conductivity and permeability in the responses we observe in the borehole. In this example, we consider a 2 km long well with an outer diameter of 10 cm and a wall thickness of 1 cm in a whole-space which has a resistivity of $10^4$ $\Omega$m. A loop with radius 100 m is coaxial with the well and positioned at the top-end of the well. A receiver measuring the z-component of the magnetic flux density is positioned 500 m below the transmitter loop, along the axis of the well. We will consider both time domain and frequency domain responses. In electromagnetics, it is often the product of permeability and conductivity that we consider to be the main controlling factor on the EM responses. To assess the contribution of each to the measured responses, we will investigate two scenarios. In the first, the well has a conductivity of $10^8$ S/m and a relative permeability of 1, and in the second, the well has a conductivity of $10^6$ S/m and a relative permeability of 100; thus the product of conductivity and permeability is equivalent for both wells. Similar to the analysis done by [@Augustin1989] when looking at the role of borehole radius in the behaviour of the magnetic response (e.g. figure 8), we will examine the normalized secondary field (NSF), which is the ratio of the secondary field to the amplitude of the primary, where the primary is defined to be the free-space response. In Figure \[fig:fdemNSF\], we have plotted the normalized secondary field for the two pipes considered, the conductive pipe (blue) and the conductive, permeable pipe (orange). Let us start by examining the response of the conductive pipe in Figure \[fig:fdemNSF\].
Where the value of the NSF is zero, the primary dominates the response; this is the case at low frequencies where induction is not yet contributing to the response. As frequency increases, currents are induced in the pipe which generate a secondary magnetic field that opposes the primary, hence the NSF becomes negative. When the real part of the NSF (solid line) is -1, the secondary magnetic field is equal in magnitude but opposite in direction to the free-space primary and the measured real field is zero. Values less than -1 indicate a sign reversal in the real magnetic field. Similarly, when the imaginary part of the response function goes above zero, there is a sign reversal in the imaginary component. Note that these sign reversals occur even in a half-space and are a result of sampling the fields within a conductive medium; in this case the receiver was 500 m below the surface. As compared to the conductive pipe, the frequency at which induction sets in is higher for the conductive, permeable pipe. We also notice that the amplitude variation of both the imaginary and real parts is larger for the permeable pipe. To examine the contribution of conductivity and permeability to the responses, we have plotted the real part of the secondary magnetic flux density, $\mathbf{b}$, in Figure \[fig:bfdem\]. The top row shows the response within the conductive pipe and the bottom row shows the conductive, permeable pipe. The primary magnetic flux is oriented upwards and we can see that all of the secondary fields generated are oriented downwards. Similar to the previous example, we see that at low frequencies, there is a magnetostatic response due to the permeable pipe. However, due to the larger length scales of the source loop and the casing in this example, there is no measurable contribution at the receiver. At 1 Hz, we can see that induction is starting to contribute to the signal for the conductive pipe, while for the permeable pipe, it is not until $\sim$10 Hz that we begin to observe the contribution of induction. At 100 Hz, the secondary magnetic field is stronger in amplitude than the primary, and the NSF is less than -1 for both the conductive and permeable pipes. The amplitude of the secondary within the permeable pipe is stronger than that in the conductive pipe. At 1000 Hz, we have reached the asymptote of NSF=-1 for both the conductive and permeable pipes; the secondary magnetic flux is equal in magnitude but opposite in direction to the primary. ![ Normalized secondary field, NSF, as a function of frequency for two wells. The NSF is the ratio of the secondary vertical magnetic field to the primary magnetic field at the receiver location (z=-500m); the primary is defined as the whole-space primary. []{data-label="fig:fdemNSF"}](fdemNSF.png){width="0.6\columnwidth"} ![ Secondary magnetic flux density (with respect to a whole-space primary) at five different frequencies for a conductive pipe (top row) and for a conductive, permeable pipe (bottom row). []{data-label="fig:bfdem"}](bfdem.png){width="\columnwidth"} Conducting a similar experiment in the time domain, we can compare the responses as a function of time. For this experiment, a step-off waveform is employed and data are measured after shut-off; the NSF is plotted in Figure \[fig:tdemNSF\]. Note here that the secondary field is in the same direction as the primary, so after the source has been shut off, the secondary field is oriented upwards, as shown in Figure \[fig:btdem\].
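The normalized quantities plotted in Figures \[fig:fdemNSF\] and \[fig:tdemNSF\] are simple post-processing steps on the simulated fields; a schematic sketch is given below. The array names (`b_total`, `b_primary`, `b0_primary`) are hypothetical placeholders for the vertical flux density at the receiver computed with and without the casing.

```python
import numpy as np

def nsf_frequency(b_total, b_primary):
    """Frequency-domain NSF: secondary field normalized by the primary amplitude.

    b_total, b_primary : complex arrays of b_z at the receiver (one entry per
    frequency), computed with and without the casing.
    """
    return (b_total - b_primary) / np.abs(b_primary)

def nsf_time(b_total, b_primary, b0_primary):
    """Time-domain NSF for a step-off source.

    b_total, b_primary : b_z at the receiver through time, with and without
    the casing; b0_primary : the primary b_z before the source is shut off.
    """
    return (b_total - b_primary) / b0_primary
```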
Shortly after shut-off, the rate of increase in the secondary field is the same for both the conductive and the conductive, permeable wells. A maximum normalized field strength of approximately 1 is reached for both cases. The responses begin to differ at $10^{-3}$ s, where the conductive well maintains an NSF $\sim 1$ for approximately 1 ms longer than the permeable well before the fields decay away. ![ Normalized secondary field (NSF) through time. In the time-domain, we compute the NSF by taking the difference between the total magnetic flux at the receiver and the whole-space response and then taking the ratio with the whole-space magnetic flux prior to shutting off the transmitter. []{data-label="fig:tdemNSF"}](tdemNSF.png){width="0.6\columnwidth"} ![ Secondary magnetic flux density for a conductive well (top row) and a conductive, permeable well (bottom row) through time. The source waveform is a step-off waveform. []{data-label="fig:btdem"}](btdem.png){width="\columnwidth"} It is important to note that although the product of the conductivity and permeability is identical for these wells, the geometry of the well and inducing fields results in different couplings for each of the parameters. For a vertical magnetic dipole source, the electric fields are purely rotational while the magnetic fields are primarily vertical. An approximation we can use to understand the implications of these geometric differences is to assume the inducing fields are uniform (e.g. the radius of the source loop is infinite) and to examine the conductance and permeance of the pipe. For rotational electric fields, the conductance is $$\mathcal{S} = \sigma \frac{t L}{2 \pi r} \label{eq:conductance}$$ where $t$ is the thickness of the casing, $r$ is the radius of the casing and $L$ is the length-scale of the pipe segment contributing to the signal. For vertical magnetic fields, the permeance is $$\mathcal{P} = \mu \frac{ t 2 \pi r}{L} \label{eq:permeance}$$ As the length-scale, $L$, is larger than the circumference of the pipe ($2\pi r$), the geometric contribution to the conductance is larger than that to the permeance. An important take-away from this example is that the contributions of conductivity and permeability to the observed EM signals are not simply governed by their product. The geometry of the source fields plays an important role in how each contributes. Thus to accurately model conductive, permeable pipes over a range of frequencies or times, a numerical code must allow both variable conductivity and variable permeability to be considered. Discussion ========== The behavior of EM fields and fluxes in the presence of highly conductive, permeable, steel-cased wells is often non-intuitive. In DC resistivity experiments, we demonstrated that there is a charge build-up near the end of the well. This has not previously been discussed in the geophysics literature, but it was recognized by [@Griffiths1997]. In practice, such a charge buildup might be consequential in an inversion as it will alter the sensitivities; this is an avenue for future research. Moving to EM experiments complicates matters in two ways: (1) the fields and fluxes vary through time, introducing inductive phenomena, and (2) variable magnetic permeability alters the fields and fluxes. With respect to magnetic permeability, [@Augustin1989] noticed “static shielding” and “static enhancement” effects in a scale-model experiment with an iron pipe subject to an inductive source.
They observed that “the truncated pipe is more effective at shielding static and low-frequency fields than the infinite pipe,” however, they offered no explanation as to why this is the case. In the numerical experiment in section \[sec:FDEM\_part1\], we were able to replicate the nature of their data, and furthermore, we demonstrated that near the ends of the pipe, the magnetic fields change rapidly over short length-scales. Although this may seem to be a detail that is unimportant unless measurements are being made near the end of a pipe, it demonstrates some of the complexity that is introduced when both conductivity and permeability are significant in a model. To date, the geophysics literature that considers the magnetic permeability of well-casings does so primarily in the context of inductive sources; very little research examines the impact of permeability on grounded-source experiments. Thus, there are open questions about how magnetic permeability alters the currents and impacts the data in such experiments. Building an understanding of these impacts is critical both for assessing the feasibility of using EM methods to delineate targets of interest as well as for developing strategies to reduce the computational cost of 3D simulations which include steel-cased wells. In many settings where DC or EM experiments are being considered, wells are deviated or horizontal, and several wells may be present. The cylindrical discretization strategy presented here does not accommodate such geometries. Recent advancements such as the hierarchical finite element approach presented in [@Weiss2017] make modeling these complex scenarios feasible for DC resistivity. However, it is not clear that approaches that work at DC will be suitable for EM. For example, [@Weiss2017] points out several challenges that arise when considering the inherently more complex PDE governing EM, and, in particular, notes that it is unclear how to include magnetic permeability in a hierarchical approach. For EM, other avenues for performing simulations include the upscaling approach suggested by [@Schwarzbach2018; @Caudillo-Mata2017] in which one inverts for the conductivity and permeability of a coarse-scale model of the casing, as well as the method of moments approach taken by [@Kohnke2017]. Irrespective of the strategy that is taken, it is important to have numerical tools that yield accurate, computationally efficient simulations and that are easy to use. Although tools such as COMSOL are versatile, the cognitive overhead for a researcher to set up a simple simulation to test their understanding of the physics is significant. The software presented here aims to bridge that gap and serve as a resource for researchers to calibrate their understanding of the physics, as well as to assess the assumptions that new approaches are making and to benchmark their accuracy. Summary and Outlook =================== We developed software for solving Maxwell’s equations on 2D and 3D cylindrical meshes. Variable electrical conductivity and magnetic permeability are considered. The 2D solution is especially computationally efficient and has a large number of practical applications. When cylindrical symmetry is not valid, the 3D solution can be employed; a judicious design of the mesh can often generate a problem with fewer cells than would be required with a tensor or OcTree mesh, thus reducing the computational cost of a simulation.
We demonstrated the versatility of the codes by modeling the electromagnetic fields that result when a highly conductive and permeable casing is embedded in the earth. We presented a number of different experiments involving DC, frequency-domain, and time-domain sources. The first two examples considered a simple DC resistivity experiment. In the first, we demonstrated that the numerically obtained currents, electric fields, and charges emulated those predicted by the asymptotic analysis in [@Kaufman1990] for long wells. The second example looked at the transition in behavior of currents and charges between short and long wells. Even in this relatively simple example, the physics was more complex than we originally anticipated. In the subsequent examples, we considered electromagnetic experiments. The third example presented a grounded-source time-domain experiment and showed the distribution of currents in the formation through time. It showed that the steel casing can help excite a target at depth by two mechanisms: (1) the casing provides a high-conductivity pathway for bringing DC currents to depth, and (2) the casing channels the image current that is created after shut-off of the source. With respect to survey design, one consequence of the second point is that there may be an advantage to positioning the wire and return electrode along the same line as where the target is expected to be located. In this way, the current direction is reversed as the image current passes; the target is thus excited from multiple directions and the resultant data can be beneficial in an inversion. The final two examples incorporated magnetic permeability in the simulations. We showed that for a conductive and permeable casing, excited by a circular current source, there is a complicated magnetic field that occurs in the top few centimeters of the pipe. Furthermore, the role of conductivity and permeability in the observed responses is more complex than their product; the source geometry and coupling with the casing are important to consider. In each of the examples, the ability to plot the charges, fields, and fluxes was of critical importance; these ground our understanding of the physics and provide a foundation for designing a field survey. The software implementation is included as a part of the `SimPEG` ecosystem. `SimPEG` also includes finite volume simulations on 3D tensor and OcTree meshes as well as machinery for solving inverse problems. Thus, the cylindrical codes can be readily connected to an inversion; additionally, simulations and inversions of more complex 3D geologic settings can be achieved by coupling the cylindrical simulation with a 3D tensor or OcTree mesh using a primary-secondary approach (e.g. example 3 in [@Heagy2017]). Beyond modeling steel cased wells, we envision that the 3D cylindrical mesh could prove to be useful in conducting 3D airborne EM inversions where a domain-decomposition approach, similar to that described in [@Yang2014], is adopted. `SimPEG` and all of the further developments described in this paper are open source and freely available. The examples have been provided as Jupyter notebooks. This not only allows all of the figures in the paper to be reproduced, but provides an avenue by which the reader can ask questions, change parameters, and use resultant images to confirm (or not) their presumed outcome. We hope that our efforts to make the software and examples accessible promote the utility of this work for the wider community.
Acknowledgements ================ The authors thank Michael Commer and Christoph Schwarzbach for providing the simulation results shown in Figure \[fig:commer\_results\] and for permission to distribute them. We also thank Thibaut Astic and Dikun Yang for their suggestions and input on the early draft of this paper. Finally, we are grateful to Rowan Cockett, Seogi Kang and the rest of the `SimPEG` community for their discussions and efforts on the development of `SimPEG`. We are also grateful to Raphael Rochlitz and the two other anonymous reviewers for their critiques and suggestions which improved the quality of the manuscript. The funding for this work is provided through the Vanier Canada Graduate Scholarships Program. Computer Code Availability ========================== All of the software used in this paper is open source and was made available in 2018. The examples are provided as Jupyter notebooks and the code is written in Python. The code and installation instructions are available at https://github.com/simpeg-research/heagy-2018-emcyl and have been archived at https://doi.org/10.5281/zenodo.1220427. The main developer is Lindsey Heagy (email: [email protected], phone: (604) 836-2715). References {#references .unnumbered} ========== [^1]: **Authorship statement**: Heagy developed the software and conducted the numerical experiments with guidance from Oldenburg who supervised the project. Heagy and Oldenburg contributed to the manuscript.
--- abstract: 'Scene models allow robots to reason about what is in the scene, what else should be in it, and what should not be in it. In this paper, we propose a hybrid Boltzmann Machine (BM) for scene modeling where relations between objects are integrated. To be able to do that, we extend BM to include tri-way edges between visible (object) nodes and make the network share the relations across different objects. We evaluate our method against several baseline models (Deep Boltzmann Machines and Restricted Boltzmann Machines) on a scene classification dataset, and show that it performs better in several scene reasoning tasks.' author: - 'İlker Bozcan$^{1}$, Yagmur Oymak$^{1}$, İdil Zeynep Alemdar$^{1}$ and Sinan Kalkan$^{1}$[^1]' bibliography: - 'references.bib' title: '**What is (missing or wrong) in the scene? A Hybrid Deep Boltzmann Machine For Contextualized Scene Modeling** ' --- Introduction ============ Modeling (representing) their environments is crucial for cognitive as well as artificial agents. For a robot, scene modeling pertains to representing a scene in such a way that the robot can reason about the scene and the objects in it in an efficient manner. A scene model should allow the robot to check, for example, (i) whether there is a certain object in the scene and where it is, (ii) whether it is in the right place in the scene, or (iii) whether there is something redundant in the scene to be moved somewhere else. Although there are many studies on scene modeling in robotics and computer vision (e.g., [@CelikkanatConceptWeb2014; @CelikkanatContext2014; @li2017context; @anand2013contextually]), to the best of our knowledge, ours is the first that uses a multi-way Boltzmann Machine for the task. Boltzmann Machines (BM) [@ackley1985learning] are stochastic generative models that offer many benefits for various modeling problems. The benefits of Boltzmann Machines include (among others) the presence of latent nodes, which function as context variables modulating the object activations, the ease with which they can be extended to meet the requirements of scene modeling, and their generative capability. Although BM existed beforehand, they became popular again with extensions to deep architectures or restricted connections (i.e., Restricted Boltzmann Machines). Related Work ------------ **Scene Modeling:** Many models have been proposed for scene modeling in computer vision and robotics using probabilistic models such as Markov or Conditional Random Fields [@CelikkanatConceptWeb2014; @CelikkanatContext2014; @anand2013contextually; @lin2013holistic], Bayesian Networks [@li2017context; @sheikh2005bayesian], Latent Dirichlet Allocation variants [@wang2008spatial; @Philbin08a], Dirichlet and Beta processes [@joho2013nonparametric], chain-graphs [@pronobis2012large], predicate logic [@mastrogiovanni2011robots; @hwang2006ontology], and Scene Graphs [@blumenthal2014towards]. There have also been many attempts for ontology-based scene modeling where objects and various types of relations are modeled [@hwang2006ontology; @saxena2014robobrain; @tenorth2009knowrob]. Among these, [@CelikkanatConceptWeb2014; @CelikkanatContext2014; @li2017context; @anand2013contextually] use context either in representing the scene or solving a task using the scene model for a robotics problem. These studies model context via local interactions between visible variables, except for [@CelikkanatContext2014], who proposed using Latent Dirichlet Allocation for modeling context.
**Relation Estimation and Reasoning:** Early studies on integrating relations into scene modeling and analysis tasks were rule-based. These approaches defined relations using rules based on 2D/3D distances between objects, e.g., [@stopp1994utilizing]. With advances in probabilistic graphical modeling, many approaches used models such as Markov Random Fields [@anand2013contextually; @celikkanat2015integrating], Conditional Random Fields [@lin2013holistic], Implicit Shape Models [@meissner2013recognizing], and latent generative models [@joho2013nonparametric]. Many studies also proposed formulating relation detection as a classification problem, e.g., using logistic regression [@guadarrama2013grounding], and deep learning [@johnson2016clevr]. Contributions ------------- The main contributions of our work are the following: - **Deep Boltzmann Machines for Scene Modeling:** We use Deep Boltzmann Machines (DBM) for modeling a scene in terms of objects and the relations between the objects. To the best of our knowledge, this is the first study that uses DBM with relations for the task. - **A Hybrid Triway Model - DBM with relations:** Adding relations to DBM is not straightforward, since there may be different relations between objects, and the same relation between different object pairs should represent the same thing. This leads to two extensions: (i) Tri-way nodes to represent relations in the DBM, (ii) Weight-sharing between the weights of relation nodes to enforce relations between different objects to represent the same relations. We evaluate our extended DBM model on many practical robot problems: determining (i) what is missing in a scene, (ii) relations between objects, (iii) what should not be in a scene, (iv) random scene generation given some objects or relations from the to-be-generated scene. We compare our model (Triway BM) against DBM [@salakhutdinov2009deep] with 2-way relations (GBM), and Restricted Boltzmann Machines (RBM) [@salakhutdinov2007restricted]. Background: General, Restricted and Higher-order Boltzmann Machines =================================================================== ![A schematic comparison of Boltzmann Machines, Restricted Boltzmann Machines and Deep Boltzmann Machines. \[fig:all\_bm\]](figures/All_BM.pdf){width="49.00000%"} A Boltzmann Machine (BM)[^2] [@ackley1985learning] is a graphical model composed of visible nodes $\mathbf{v}=\{v_i\}_{i=1}^{V} \subset \{0,1\}^V$ and hidden nodes $\mathbf{h}=\{h_i\}_{i=1}^{H} \subset \{0,1\}^H$ – see also Figure \[fig:all\_bm\]. In a BM, hidden nodes are connected to other hidden nodes with bi-directional weights, $W^{hh}= \{w^{hh}_{ij}\}^{H\times H}$; visible nodes to other visible nodes with $W^{vv}=\{w^{vv}_{ij}\}^{V\times V}$; and hidden nodes to visible nodes with $W^{hv}=\{w^{hv}_{ij}\}^{H\times V}$. With these connections, a BM tries to obtain an estimation of $p(\mathbf{v})=\sum_{\mathbf{h}} p(\mathbf{v}|\mathbf{h})p(\mathbf{h}) $ from a sample of the training data. For a BM, one can define a scalar, representing the negative harmony between the nodes given current weights: $$E(\textbf{v}, \textbf{h})=-\sum_{i<j}v_i w^{vv}_{ij} v_j -\sum_{i<j}h_i w^{hh}_{ij} h_j -\sum_{i<j}h_i w^{hv}_{ij} v_j.
\label{eqn:energy_BM}$$ A BM is inspired by physical systems, which favor states with lower energies; therefore, the probability of being in a certain state (i.e., $\{\mathbf{v}, \mathbf{h}\}$) is linked to the energy of the system via the Boltzmann distribution: $$p(\mathbf{v}, \mathbf{h})= \frac{1}{Z} \exp(-E(\textbf{v}, \textbf{h})),$$ where $Z=\sum_{\textbf{v'}, \textbf{h'}} \exp(-E(\textbf{v'}, \textbf{h'}))$ is called the partition function. Since the partition function is intractable to calculate for real problems, $p(\mathbf{v}, \mathbf{h})$ is iteratively estimated by stochastically activating nodes in the network with probability based on the change in the energy of the system after the update: $$p(x=1) = \frac{1}{1+e^{\Delta E_x/T}},$$ where $x$ is a visible or a hidden node; $\Delta E_x$ is the change in energy of the system if $x$ is turned on; and $T$ is the temperature of the system, gradually decreased (annealed) to a low value, controlling how stochastic the updates are. Since training is rather slow and limiting in BM, its restricted version (Restricted Boltzmann Machines) with only connections between hidden and visible nodes has been proposed [@salakhutdinov2007restricted]. In a Deep Boltzmann Machine [@salakhutdinov2009deep], on the other hand, there are layers of hidden nodes. See Figure \[fig:all\_bm\] for a schematic comparison of the alternative models. Some problems require the edges to combine more than two nodes at once, which has led to Higher-order Boltzmann Machines (HBM) [@sejnowski1986higher]. With HBM, one can introduce edges of any order to link multiple nodes together. Training a Boltzmann Machine ---------------------------- Training a BM minimizes the Kullback-Leibler divergence between $p^+(\textbf{v})$, the distribution over $\mathbf{v}$ when data is clamped on the visible nodes (called the positive phase), and $p^-(\mathbf{v})$, the distribution obtained when the network is run freely (called the negative phase). Taking the gradient of the divergence with respect to the weights leads to the following update rule: $$w_{ij} \leftarrow w_{ij} + \alpha (p^+_{ij} - p^-_{ij}),$$ where $p^+_{ij}$ and $p^-_{ij}$ are the expected joint activations of nodes $s_i$ and $s_j$ during the positive phase and the negative phase, respectively; and $\alpha$ is a learning rate. A Triway Hybrid Boltzmann Machine for Scene Modeling ==================================================== As shown in Figure \[fig:triway\_BM\], we extend Boltzmann Machines by adding relational (visible) nodes $r_i \in \mathbb{B}$ that (i) are shared across objects, and (ii) link two objects together with a single tri-way edge. In other words, a relation $r_i$ connects two objects, $v_j$ and $v_k$, with a weight $w^r_{ijk}$. The overall energy of the hybrid BM is then updated as follows: $$\begin{aligned} E(\textbf{v}, \textbf{h}, \textcolor{red}{\textbf{r}}) & = & {\setbox0=\hbox{$-\sum_{i<j}v_i w^{vv}_{ij} v_j$}\rlap{\raisebox{.45\ht0}{\textcolor{red}{\rule{\wd0}{1pt}}}}-\sum_{i<j}v_i w^{vv}_{ij} v_j} -\sum_{i<j}h_i w^{hh}_{ij} h_j \nonumber \\ & & -\sum_{i<j}h_i w^{hv}_{ij} v_j \nonumber \\ & & \textcolor{red}{-\sum_{i,j,k} w^{r}_{ijk} r_i v_j v_k} \textcolor{red}{-\sum_{i<j} r_i w^{rh}_{ij} h_j} , \label{eqn:energy_hybridBM}\end{aligned}$$ where the changes compared to the energy definition in Equation \[eqn:energy\_BM\] are highlighted in red.
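To make the energy function above concrete, a small NumPy sketch is given below as an illustration; the array shapes and names are illustrative and do not correspond to the actual implementation used for the experiments reported here, and the cross terms (hidden-visible and relation-hidden) are implemented as sums over all index pairs.

```python
import numpy as np

def triway_energy(v, h, r, W_hh, W_hv, W_r, W_rh):
    """Energy of the hybrid triway BM, following Eq. (energy_hybridBM).

    v : (V,) binary object units      h : (H,) binary hidden units
    r : (R,) binary relation units
    W_hh : (H, H) hidden-hidden weights (only the upper triangle is used)
    W_hv : (H, V) hidden-visible weights
    W_r  : (R, V, V) shared tri-way relation weights
    W_rh : (R, H) relation-hidden weights
    """
    e = -np.sum(np.triu(W_hh, k=1) * np.outer(h, h))    # sum_{i<j} h_i w^hh_ij h_j
    e -= h @ W_hv @ v                                    # hidden-visible term
    e -= np.einsum("i,ijk,j,k->", r, W_r, v, v)          # tri-way relation term
    e -= r @ W_rh @ h                                    # relation-hidden term
    return e

def p_on(delta_E, T=1.0):
    """Probability of turning a unit on given the energy change delta_E (cf. text)."""
    return 1.0 / (1.0 + np.exp(delta_E / T))
```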
Note that the definition in Equation \[eqn:energy\_hybridBM\] uses tri-way edges with the relation nodes, and that relations (in fact, the weights of relations) are shared across objects. Weight-sharing suggests that, e.g., a *left* relation between $v_i$ and $v_j$, and a *left* relation between $v_k$ and $v_l$ ($i\neq j \neq k \neq l$) represent the same relation. In order to do that, the gradients on the weights of relation $r_i$ coming from all pairs of objects in the scene are aggregated: $$\Delta w_{ijk} = \sum_{s_j,s_k\ \in\ U_{r_i}} (p^+_{ijk} - p^-_{ijk}),$$ where $U_{r_i}$ is the set of object tuples connected by relation $r_i$. Training and Inference ---------------------- In order to make training faster, we dropped the connections between the hidden neurons. For training our Triway Hybrid BM, in the positive phase, as usual, we clamp the visible units with the objects and the relations between the objects, and calculate $p^+$ for every edge in the network. In the negative phase, the object units are first sampled with a two-step Gibbs sampling using the activations of the hidden units and the relation units. In this way, relation units also contribute to the activation of object units, in addition to the hidden units. Then, the relation units are sampled from the freshly sampled object units and the hidden units. We calculate $p^-$ for every edge in the network in these two steps. For training the networks, we used gradient descent with a batch size of 32 and early stopping (i.e., the training process is finished when the validation accuracy begins to decrease). The learning rate and temperature are empirically set to 0.5 and 1, respectively, and two hidden layers are used: the bottom layer has 200 hidden units and the top layer has 100. For inference, we use Gibbs sampling [@geman1984stochastic], a Markov Chain Monte Carlo (MCMC) method, to approximate the true data distribution. We prefer an MCMC method over variational inference since our dataset is relatively small (totaling 3,485 samples) and the input vectors are very sparse (i.e., only a small number of relation nodes are active). Therefore, we need precise inferences, which MCMC methods can guarantee but variational inference cannot [@blei2017variational]. Dataset Collection ------------------ There are two datasets with labeled spatial relations [@golland2010game; @johnson2016clevr]. However, both datasets are simulated, and therefore, we collected a real dataset with relations. We use 3,485 samples (the ones acquired with the newest depth sensors) of the SUNRGBD dataset [@song2015sun]. Misspelled and redundant object labels were merged. In this way, the total number of objects is reduced to 417. We extended the original dataset by manually adding eight spatial relations (left, right, front, behind, on-top, under, above, below) among the annotated objects. All object pairs are considered for each relation. Therefore, the total number of relations that can be estimated is $8 \times 417\times 417=1,391,112$. Let us use $D=\{S_1,...,S_{3485}\}$, where $S_i$ denotes the $i^{th}$ sample, to denote the dataset. $S_i$ has a vector form that represents the presence of objects and relations among them in the scene. Active objects and relations have value 1, otherwise 0. Opposite relations (e.g., left and right) can be represented by one relation in BMs since if object $a$ is left of object $b$ then object $b$ is right of object $a$. As a result, each sample is represented by a binary vector that has length 695,973 $(417+4\times 417\times 417)$.
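As an illustration of how a sample $S_i$ can be laid out as such a binary vector, one possible encoding is sketched below. The indexing convention is only meant to be consistent with the stated vector length of 695,973 and is not necessarily the ordering used for the actual dataset.

```python
import numpy as np

N_OBJ, N_REL = 417, 4                      # merged object labels; relations after merging opposites
VEC_LEN = N_OBJ + N_REL * N_OBJ * N_OBJ    # = 695,973

def relation_index(rel, obj_a, obj_b):
    """Index of the node encoding relation `rel` between objects (obj_a, obj_b)."""
    return N_OBJ + (rel * N_OBJ + obj_a) * N_OBJ + obj_b

def encode_scene(objects, relations):
    """objects: iterable of object ids; relations: iterable of (rel, obj_a, obj_b) triples."""
    s = np.zeros(VEC_LEN, dtype=np.uint8)
    s[list(objects)] = 1
    for rel, a, b in relations:
        s[relation_index(rel, a, b)] = 1
    return s
```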
The dataset contains 33 indoor scene types (kitchen, dining room, etc.) in which robots could be used for a variety of tasks in place of humans. We split the dataset into three parts: $60\%$ for training, $30\%$ for testing and $10\%$ for validation during training. All sets include samples from each scene category. Experiments and Results ======================= In this section, we evaluate and compare the methods on several tasks. Network Training Performance ---------------------------- We calculated an error based on the difference between the original data clamped to the visible units and the reconstructed visible states sampled in the negative phase, and observed how it changes during training: $$E_{train} = \frac{1}{|\mathbf{V}|}\sum_{\mathbf{v}\in\mathbf{V}}\sum_{j}(p(v_j^{+})-p(v_j^{-}))^2 , \label{train_error_equation}$$ where $p(v^{+})$ is the activation probability of the original data clamped to the visible units at the beginning of the positive phase and $p(v^{-})$ is the activation probability of the visible nodes at the end of the negative phase. The cumulative sum over all samples is normalized with the total number of samples ($|\mathbf{V}|$). We look at the error separately for the objects and the relations, as shown in Figure \[fig:loss\]. We see that the network consistently decreases the error, and learns to represent objects and the relations between them. However, it learns relations much faster. This difference is because the space of all possible relations is much larger than the set of objects, and very sparse; therefore, the network quickly learns to estimate 0 (zero) for relations, which leads to a sudden decrease in the loss. ![Reconstruction error vs. epochs plot during Triway BM training. \[fig:loss\]](figures/errorvsepochs.pdf){width="49.00000%"} Task 1: Relation Estimation --------------------------- Our model can estimate possible relations among the active concepts in the scene. For testing, the active objects in the scene are clamped to the visible object nodes and the model is run freely. Initially, the model sees the environment as a “bag of objects” and samples the hidden units (i.e., the context). The context is determined using the objects in the scene without the relations among them. Then, the relation nodes are sampled from the context and the objects. For this task, we define accuracy as the percentage of relations correctly estimated with respect to the labeled relations in the test dataset.

  RBM      GBM       Triway BM     Chance level
  -------- --------- ------------- ----------------------
  5.60%    14.18%    **23.35%**    $1.43\times10^{-4}$%

  : Performance (accuracy) of the methods on estimating relations between a given set of objects (Task 1). Higher is better. \[tbl:relationestimation\]

We evaluate this task with RBM, General BM and our Hybrid model, as shown in Table \[tbl:relationestimation\]. We see that our model provides the highest accuracy. We do not consider inactive relation nodes in the original data since the network has already learned which relation nodes should be inactive. We provide some visual examples in Figure \[fig:relationestimation\], where we see that our model nicely finds out how to place a set of objects together. The chance level of activation of one relation node is $\frac{1}{\mbox{\# of relation nodes}} \approx 1.43\times10^{-6}$. Task 2: What is missing in the scene? ------------------------------------- In this task, we randomly de-activate an object from the scene and expect the network to find the missing object.
For this task, we define accuracy as the percentage of the missing objects found correctly in the reconstructed sample. As shown in Table \[tbl:whatismissing\], our hybrid model performs better than RBM and GBM. See also Figure \[fig:whatismissing\], which shows some visual examples of the most likely objects found for a target position in the scene.

  RBM       GBM       Triway BM     Chance level
  --------- --------- ------------- ----------------------
  35.12%    40.94%    **43.28%**    $5.75\times10^{-6}$%

  : Performance (accuracy) of the methods on finding the missing object in the scene (Task 2). Higher is better.[]{data-label="tbl:whatismissing"}

Task 3: What is out of context in the scene? -------------------------------------------- Next, we evaluate how well the methods find an object that is out of context in the scene. To this end, we select scenes, randomly remove an object, and randomly add another object not in the scene. Of course, during this process, the network might disrupt other objects in $\mathbf{v}$ as well. To take this into account, for this task, we define the error measure for a sample as the number of objects that are incorrectly sampled or changed. Let $\mathbf{v}^{+x}$ be the current scene representation, where $x$ is the randomly selected active object, and $\mathbf{v}^{-x}$ be the scene representation where $x$ is removed (set to zero). After $\mathbf{v}^{-x}$ is clamped, the network settles to $\mathbf{v}^{?x}$, where object $x$ is hopefully recovered, but there might be other unwanted changes on nodes other than $x$. We can define our measure formally as follows: $$\textrm{Task 3 error measure} = \frac{1}{|\mathbf{v}|\times|\mathbf{V}|}\sum_{\mathbf{v}^{+x} \in \mathbf{V}} \sum_i \textrm{abs}(\mathbf{v}^{+x}_i - \mathbf{v}^{?x}_i),$$ where $\textrm{abs}(\cdot)$ is the absolute value function. Table \[tbl:whatisextra\] compares the methods, and shows that our hybrid model produces the lowest error. See also Figure \[fig:whatisextra\] for some visual examples. In this task, models may tend to give higher contextual importance to particular objects for different scenes (e.g., “dishwasher” is a dominant object for the “kitchen” context and provides more contextual information than a “chair” in the same context). Therefore, they can remove objects that have lower contextual information and thereby corrupt the original input data.

  RBM       GBM       Triway BM     Chance level
  --------- --------- ------------- --------------
  0.6446    0.1404    **0.0789**    0.5

  : Performance (error) of the methods on finding what is out of context in the scene (Task 3). Lower is better. \[tbl:whatisextra\]

Task 4: Generate a scene ------------------------ In this task, we demonstrate another generative ability of our Triway BM: we can select a hidden node (or several, leaving the other hidden neurons randomly initialized or set to zero), and sample visible nodes (including relations) that describe a scene. Figure \[fig:randomscenegeneration\] shows some visual examples. Conclusion ========== We have proposed a novel method based on Boltzmann Machines for contextualized scene modeling. To this end, we extended BM by adding spatial relations between objects that are shared across different objects in the scene. Compared to RBM and DBM, we show that our hybrid model performs better in several scene analysis and reasoning tasks, such as finding relations, missing objects and out-of-context objects.
Moreover, being generative, our model allows generating new scenes given a context or a part of the scene (as a set of objects). Acknowledgment {#acknowledgment .unnumbered} ============== This work was supported by the Scientific and Technological Research Council of Turkey (TÜBİTAK) through the project “Context in Robots” (project no. 215E133). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research. [^1]: $^{1}$All authors are with the KOVAN Research Lab at the Department of Computer Engineering, Middle East Technical University, Ankara, Turkey [{ilker.bozcan, yagmur.oymak, idil.alemdar, skalkan}@metu.edu.tr]{} [^2]: Although this is textbook material, it is essential for us to be able to describe our extensions.
--- author: - 'L. Boehnke and F. Lechermann' bibliography: - 'bibextra.bib' title: | Getting back to Na$_x$CoO$_2$:\ spectral and thermoelectric properties --- Introduction ============ The quasi two-dimensional sodium cobaltate system Na$_x$CoO$_2$ marks one milestone in the investigation of realistic strongly correlated electron systems. It consists of stacked triangular CoO$_2$ layers, glued together by Na ions in between. Depending on the doping $x$, the nominal oxidation state of the cobalt ion lies between Co$^{4+}$($3d^5$) and Co$^{3+}$($3d^6$). While the $x$=1 compound is a band insulator with a filled low-spin Co($t_{2g}$) subshell, for $x$=0 a single hole resides therein. Stimulated by findings of a large thermoelectric response at higher doping $x$ [@ter97; @mot01] and superconductivity for $x$$\sim$0.3 upon intercalation with water [@tak03], the phase diagram of Na$_x$CoO$_2$ attracted enormous interest, both experimentally as well as theoretically, in the pre-pnictide era of the early 2000s. The relevance of strong correlation effects due to the partially filled Co($3d$) shell for $x$$<$1 has been motivated by several experiments, e.g., from optics [@wan04], photoemission [@val02; @has04; @yan07; @gec07] and transport [@foo04] measurements. Although much progress has been made in the understanding of layered cobaltates, after more than ten years of extensive research many problems are still open. We here want to address selected matters of debate, namely the nature of the low-energy electronic one-particle spectral function, the peculiarities of the dynamic spin response as well as the temperature- and doping-dependent behavior of the Seebeck coefficient. Theoretical approach ==================== Effective single-particle methods based on the local density approximation (LDA) to density functional theory (DFT) are known to be insufficient to cover the rich physics of strongly correlated materials. Tailored model-Hamiltonian approaches to be treated within a powerful many-body technique such as dynamical mean-field theory (DMFT) are thus useful to reveal important insight into the dominant processes at high and low energy. Nowadays the DFT+DMFT methodology (see e.g. [@kotliar_review] for a review) opens the possibility to tackle electronic correlations with the benefit of the fully interlaced LDA materials chemistry. In this work a charge self-consistent DFT+DMFT scheme [@gri12] built up on an efficient combination of a mixed-basis pseudopotential framework [@mbpp_code] with a hybridization-expansion continuous-time quantum Monte-Carlo solver [@rub05; @wer06; @triqs_code; @boe11] is utilized to retrieve spectral functions and the thermopower. Thereby the correlated subspace consists of the projected [@ama08; @ani05] $t_{2g}$ orbitals, i.e. a three-orbital many-body treatment is performed within the single-site DMFT part. The generic multi-orbital Coulomb interactions include density-density as well as spin-flip and pair-hopping terms, parametrized [@cas78; @fre97] by a Hubbard $U$=5 eV and a Hund’s exchange $J$=0.7 eV. Since the physics of sodium cobaltate is intrinsically doping dependent, we constructed Na pseudopotentials with fractional nuclear charge in order to cope therewith in DFT+DMFT. A simplistic structural approach was undertaken, utilizing a primitive hexagonal cell allowing for only one formula unit of Na$_x$CoO$_2$, with the fractional-charge Na in the so-called Na2 position.
Note that the bilayer splitting therefore does not occur in the electronic structure. Our calculations are straightforwardly extendable to more complex unit cells and geometries; however, the present approach already suits the purpose of allowing for some general qualitative statements. The resulting 3$\times 3$ DFT+DMFT Green’s function in Bloch space with one correlated Co ion in the primitive cell hence reads here [@gri12] $$\begin{aligned} \label{Eqn:GBLDEFINITION} {\bf G}^{\mathrm{bl}} ({\bf k}, i \omega_n)&=& \Bigl[ (i \omega_n + \mu) {\bf 1} - \varepsilon_{\bf k}^{\mathrm{KS}}-\nonumber\\ &&-\, {\bf P}^{\dagger}({\bf k}) \cdot{\bf \Delta\Sigma}^{\rm imp} (i\omega_n)\cdot{\bf P}({\bf k})\Bigr]^{-1}\;,\end{aligned}$$ where $\varepsilon_{\bf k}^{\mathrm{KS}}$ denotes the Kohn-Sham (KS) dispersion part, $\mu$ is the chemical potential and ${\bf P}({\bf k})$ the $t_{2g}$ projection matrix mediating between the Co correlated subspace and the crystal Bloch space. The impurity self-energy term ${\bf \Delta\Sigma}^{\rm imp}$ includes the DMFT self-energy modified by the double-counting correction. For the latter the fully-localized form [@ani93] has been utilized. To extract the one-particle spectral function $A(\vec{k},\omega)$=$-\pi^{-1}\,{\rm Im}\,G^{\mathrm{bl}}({\bf k},\omega)$ as well as the thermopower, an analytical continuation of the impurity self-energy term ${\bf \Delta\Sigma}^{\rm imp}$ in Matsubara space $\omega_n$ was performed via Padé approximation. Note that via the upfolding procedure within eq. (\[Eqn:GBLDEFINITION\]), the resulting real-frequency self-energy term in Bloch space carries $k$ dependence, i.e. ${\bf \Delta\Sigma}^{\rm bl}$=${\bf \Delta\Sigma}^{\rm bl}({\bf k},\omega)$. For the investigation of the thermoelectric response, the Seebeck coefficient is calculated within the Kubo formalism from $$S=-\frac{k_B}{|e|}\frac{A_1}{A_0}\;,$$ where the correlation functions $A_n$ are given by $$\begin{aligned} A_n=\sum_{\vec{k}}\int d\omega&\,\beta^n(\omega-\mu)^n \left(-\frac{\partial f_\mu}{\partial\omega}\right)\notag\times\\ &\times\operatorname{Tr}\left[\vec{v}(\vec{k})A(\vec{k},\omega)\vec{v} (\vec{k})A(\vec{k},\omega)\right]\;. \label{eqn:An}\end{aligned}$$ Here $\beta$ is the inverse temperature, $\vec{v}(\vec{k})$ denotes the Fermi velocity calculated from the charge self-consistent KS part and $f_{\mu}$ marks the Fermi-Dirac distribution. Due to subtle refinements in the low-energy regime close to the Fermi level within the charge self-consistent DFT+DMFT scheme, computing $A_n$ through eq. (\[eqn:An\]) for the three-orbital system at hand is quite challenging. It requires great care both in the handling of the frequency dependence of the spectral function through the analytical continuation of the local self-energy term and in the evaluation of the $k$ sum. The difficulty with the latter is the sharp structure of the summand, especially for low temperatures. Even a tetrahedron summation is problematic due to the double appearance of the spectral function in the sum [@pal01]. We overcome this problem by using an adaptive numerical integration method separately for each $A_n$, where $\Delta\Sigma^{\rm bl}(\vec{k},\omega)$, $\vec{v}(\vec{k})$ and $\varepsilon_{\vec{k}}^{\rm KS}$ are linearly interpolated in reciprocal space. Since all these quantities are relatively smooth in $k$, the resulting $A_n$ show only weak dependence on the underlying mesh for these interpolations.
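To indicate how eq. (\[eqn:An\]) translates into a numerical procedure, a schematic single-band sketch is given below. It assumes that the $\vec{k}$ sum has already been folded into a transport kernel $\operatorname{Tr}[\vec{v}A\vec{v}A](\omega)$ tabulated on a fixed frequency grid; the adaptive $k$-space integration described above is deliberately not reproduced here.

```python
import numpy as np

K_B = 8.617333e-5   # Boltzmann constant in eV/K

def seebeck(omega, mu, T, transport_kernel):
    """Seebeck coefficient S = -(k_B/|e|) A_1/A_0 for a precomputed kernel.

    omega            : frequency grid in eV
    mu               : chemical potential in eV
    T                : temperature in K
    transport_kernel : sum over k of Tr[v A v A], tabulated on `omega`
    """
    beta = 1.0 / (K_B * T)
    x = beta * (omega - mu)
    f = 0.5 * (1.0 - np.tanh(0.5 * x))      # Fermi-Dirac distribution (stable form)
    minus_df = beta * f * (1.0 - f)         # -df/domega

    def integrate(y):                       # simple trapezoidal rule over omega
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(omega))

    A0 = integrate(minus_df * transport_kernel)
    A1 = integrate(beta * (omega - mu) * minus_df * transport_kernel)
    # K_B in eV/K numerically equals k_B/|e| in V/K, so S is returned in V/K
    return -K_B * A1 / A0
```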
Finally, the numerically expensive two-particle-based dynamic spin response with relevant local vertex corrections was studied. Thereby a simplified single-band tight-binding parametrization [@ros03] of the realistic dispersion, including hopping integrals up to third-nearest neighbor, entered the DMFT self-consistency cycle. For more details on the utilized DMFT+vertex technique see [@boe11; @boe12]. ![image](specfunc-0_3){width="45.00000%"} ![image](specfunc-0_7){width="45.00000%"} One-particle spectral function ============================== The low-energy electronic states of Na$_x$CoO$_2$ close to the Fermi level $\varepsilon_{\rm F}$ have been subject to many discussions. LDA calculations for single-formula-unit cells reveal a threefold bandstructure of about 1.5 eV total bandwidth, dominantly originating from the Co $3d(t_{2g})$ states [@sin00]. The resulting LDA Fermi surface (FS) consists of an $a_{1g}$-like hole sheet with additional $e_{g}'$-like hole pockets near the $K$ point of the hexagonal first Brillouin zone (BZ). For larger doping these hole pockets become more and more filled and their existence for $x$$\gtrsim$0.6 subtly depends on the very structural details [@joh04]. Not only does the measured spectral function $A({\bf k},\omega)$ from angle-resolved photoemission (ARPES) experiments display a much narrower dispersion very close to $\varepsilon_{\rm F}$, but the measured FS also lacks the hole-pocket sheets for [*any*]{} doping $x$ [@has04; @qia06_2; @yan07; @gec07]. Usually LDA works surprisingly well for the FS of strongly correlated metals, even if the method does not allow for the proper renormalization and the appearance of Hubbard sidebands. Hence sodium cobaltate seems to belong to the rare cases of correlated metals where the LDA FS topology does not agree with experiment. Many attempts have been made to either explain the non-existence of the hole pockets or to prove the ARPES data wrong. Without going into the very details of this rather long story, no definite final decision has been made on either line of argument. Concerning proper correlated methodologies, DFT+DMFT without charge self-consistency, i.e. in the traditional post-processing manner, may even [*increase*]{} the strength of the hole pockets (see e.g. [@mar07]). It appeared that the size of the $a_{1g}$-$e_{g}'$ crystal-field splitting plays an important role when turning on correlations [@lecproc; @mar07]. From an LDA+Gutzwiller study by Wan [*et al.*]{} [@wan08] it became clear that charge self-consistency may have a relevant influence on the correlated FS, and the hole pockets indeed disappeared for $x$$>$0.3 in their work. In order to touch base with these results we computed the one-particle spectral function $A({\bf k},\omega)$ within charge self-consistent DFT+DMFT for $x$=0.3 and $x$=0.7. Thereby within our simplified structural treatment the apical oxygen position was chosen so as to allow for a single-sheet $a_{1g}$-like hole FS within LDA at $x$=0.7. Though the impact of charge order on the spectral function is also believed to be important [@pei11], we here neglect this influence and concentrate on the multi-orbital interplay and its impact on the correlated Fermi surface. Figure \[ospec\] shows the obtained three-orbital spectral functions close to the Fermi level. Note that there is also a lower Hubbard band, but due to the strong doping from half filling it is located in the energy range $[-6,-4]$ eV.
As expected, the less-doped $x$=0.3 case shows a stronger total renormalization of the $t_{2g}$ derived manifold. The most important observation is the clear shift of the potential pocket-forming $e_{g}'$-like quasiparticle bands away from $\varepsilon_{\rm F}$ compared to the LDA result. Here this amounts to a (nearly complete) vanishing of the pockets for $x$=0.3, where they are still sizable in LDA. Even if there are some ambiguities concerning possible modifications due to structural details, the main trend that charge self-consistent DFT+DMFT (notably without invoking long-range order) tends to suppress the $e_{g}'$ derived pockets is evident. Additionally the $e_g'$-like states exhibit a substantial broadening also with (nearly) total filling, a multi-orbital effect discussed already for filled $t_{2g}$ states in LaNiO$_3$ [@den12]. This altogether brings the theoretical description in line with the available ARPES data. Thus charge self-consistency can be an important ingredient in the calculations, accounting for shifts of the level structure in sensitive crystal-field environments. Transport: Seebeck coefficient ============================== ![Seebeck coefficient $S(T)$ within charge self-consistent DFT+DMFT for larger doping. Full lines correspond to the inplane thermopower, dashed lines to $S(T)$ along the $c$-axis.[]{data-label="seebeck-pic"}](thermopower){height="6cm"} ![image](dynsus_new){height="6cm"} ![Dynamic spin susceptibility $\chi_s(\omega,{\bf q},$$T$=580K$)$ for only nearest-neighbor hopping $-t$ at $x$=0.67.[]{data-label="dynsus_tb"}](tb){height="4cm"} The increased thermoelectric response at larger doping $x$ marks one of the key aspects of Na$_x$CoO$_2$ [@ter97; @mot01; @kau09; @lee06]. Although the more complex, related so-called misfit cobaltates appear to display even larger thermopower and increased figure of merit (see e.g. [@heb13] for a recent review), the sodium cobaltate system still holds most of the main physics ready in its simplest structural form. There have been various theoretical modelings of the Seebeck coefficient for this system [@kos01; @ham07; @kur07; @pet07; @wis10; @san12], ranging from the use of the Heikes formula to Boltzmann-equation approaches and Kubo-formula-oriented modelings. Albeit details may matter for a full account of thermoelectricity [@wis10; @san12], for the doping regime 0.6$<$$x$$<$0.75 nearly all different theoretical descriptions yield thermopower values within the ranges of the experimental data. However open modeling questions remain for the highly increased Seebeck values in the regime of very large doping $x$$>$0.8 [@lee06; @pet07] as well as for lower dopings $x$$\lesssim$0.5, where e.g. a nonmonotonic $S(T)$ with decreasing temperature is observed [@kau09]. Here we exhibit results for the thermopower as obtained within our charge self-consistent DFT+DMFT-based Kubo-formalism approach which builds on the $t_{2g}$ correlated subspace for Na$_x$CoO$_2$. Data is provided for $x$=0.7 and $x$=0.75 in Fig. \[seebeck-pic\]; the other, more challenging doping regimes concerning the thermoelectric response will be addressed in future studies. For instance, we expect that the low-lying $e_g'$ bands leave some fingerprints in the thermopower for small $x$. Longer-ranged FM spin fluctuations and charge-ordering tendencies [@pei11] may influence $S(T)$ for $x$$>$0.8. Our inplane Seebeck values are in good agreement with the experimental data of Kaurav [*et al.*]{} [@kau09].
The increased response for $x$=0.75 compared to $x$=0.7 is also verified, albeit the experimental tendency towards stronger enhancement with increased doping is still somewhat underestimated by theory. In addition to the inplane values, Fig. \[seebeck-pic\] depicts the $S(T)$ tensor part along the $c$-axis of the system. Besides the $n$-like response with a change of sign, the absolute value becomes reduced at larger $T$, related to the different (rather incoherent) transport between layers at elevated temperature [@val02]. Two-particle function: dynamic spin susceptibility ================================================== Besides the one-particle spectral properties and the thermopower, a further intriguing cobaltate issue is the magnetic behavior with doping. Within the frustrated triangular CoO$_2$ layers superexchange may dominate the low-doping regime due to nominal Mott proximity, but competing exchange processes set in at larger doping. The work by Lang [*et al.*]{} [@lan08] based on nuclear-magnetic-resonance measurements nicely summarized the magnetic phase diagram of Na$_x$CoO$_2$ with temperature $T$, showing the inplane crossover from antiferromagnetic (AFM) to ferromagnetic (FM) correlations with the eventual onset of A-type AFM order for $x$$>$0.75. In line with the results for the spectral function we computed the spin susceptibility $\chi_s(\omega,{\bf q},T)$ for an effective single-band model using DMFT with local vertex contributions [@boe12]. This allows for the theoretical verification of the AFM-to-FM crossover. It can directly be retrieved from the shift of maxima in the static part $\chi_s(\omega$=$0,{\bf q},T)$ for ${\bf q}$ at the BZ $K$ point at small $x$ towards maxima at ${\bf q}$=0 ($\Gamma$ point) at larger doping. Figure \[dynsus\] displays the full dynamic spin susceptibility with increasing $x$ in the paramagnetic regime. Below $x$=0.5 the strong two-particle spectral intensity close to $M$ and $K$ at the BZ boundary is indeed visible. For rather large doping the intensity accumulates at small frequency $\omega$ near the $\Gamma$ point, with clear paramagnon branches due to the proximity towards inplane FM order. We note that the vertex contributions are essential for the qualitative as well as quantitative signatures in the doping-dependent spin susceptibility [@boe12]. Most interestingly, there is also a high-intensity mode near the $K$ point with maximum spectral weight well located around the commensurate doping $x$=0.67 on the frustrated CoO$_2$ triangular lattice. The corresponding excitation energy of about 1 eV for $T$=580K decreases with decreasing temperature. Thus albeit the low-energy spin excitations for that larger doping have already shifted towards the FM kind, a rather stable finite-$\omega$ AFM-like mode becomes available. This intriguing doping and frequency dependence of the effective exchange $J$ can be linked to the specific hopping-integral structure of sodium cobaltate. In this respect Fig. \[dynsus\_tb\] shows the dynamic spin susceptibility at $x$=0.67 for the Hubbard model on the triangular lattice with only nearest-neighbor hopping $-t$. While the FM paramagnon modes close to $\Gamma$ seem even strengthened in that case, the high-energy feature close to $K$ is now completely absent. Still, it is not obvious how to draw a straightforward connection between the one-particle spectral function and the two-particle dynamic spin susceptibility.
Summary ======= We have presented a state-of-the-art DFT+DMFT investigation of the multi-orbital one-particle spectral properties as well as the thermoelectric behavior of Na$_x$CoO$_2$. The charge self-consistent scheme brings the one-particle spectral function, concerning both the correlated Fermi surface and the broadening of the occupied part, into good agreement with available ARPES data. Further extensions of the realistic methodology towards the proper inclusion of charge-order effects, possibly by incorporating relevant intersite Coulomb terms, are still needed for a comprehensive understanding. Nonetheless, the present framework is capable of addressing the temperature- and doping-dependent thermopower in line with experimental data for the larger doping regime. Detailed studies of the more critical regions in this respect, at low and very high dopings, are envisaged. Through the inclusion of sophisticated vertex contributions in a simplified tight-binding-based approach, details of the dynamic spin susceptibility, e.g. the prediction of a rather stable high-energy AFM-like mode close to $x$=0.67, have been revealed. Additional experimental work is needed to verify our results and to stimulate future work. Finally, the investigation of the direct impact of two-particle correlations on the one-particle spectrum is a challenging goal; for this it will be necessary to go beyond the local-correlation viewpoint of DMFT. We thank D. Grieger, A. I. Lichtenstein and O. E. Peil for helpful discussions. Financial support from the DFG-SPP1386 and the DFG-FOR1346 is acknowledged. Computations were performed at the local computing center of the University of Hamburg as well as the North-German Supercomputing Alliance (HLRN) under the grant hhp00026.
--- abstract: | A two-variable generalization of the Big $-1$ Jacobi polynomials is introduced and characterized. These bivariate polynomials are constructed as a coupled product of two univariate Big $-1$ Jacobi polynomials. Their orthogonality measure is obtained. Their bispectral properties (eigenvalue equations and recurrence relations) are determined through a limiting process from the two-variable Big $q$-Jacobi polynomials of Lewanowicz and Woźny. An alternative derivation of the weight function using Pearson-type equations is presented. **Keywords:** Bivariate Orthogonal Polynomials, Big $-1$ Jacobi polynomials\ **AMS classification numbers:** 33C50 author: - 'Vincent X. Genest' - 'Jean-Michel Lemay' - Luc Vinet - Alexei Zhedanov title: 'Two-variable $-1$ Jacobi polynomials' --- ------------------------------------------------------------------------ ------------------------------------------------------------------------ Introduction ============ The purpose of this paper is to introduce and study a family of bivariate Big $-1$ Jacobi polynomials. These two-variable polynomials, which shall be denoted by $\mathcal{J}_{n,k}(x,y)$, depend on four real parameters $\alpha,\beta,\gamma,\delta$ such that $\alpha,\beta,\gamma>-1$, $\delta\neq 1$ and are defined as $$\begin{aligned} \label{Poly} \mathcal{J}_{n,k}(x,y)=J_{n-k}\left(y;\alpha,2k+\beta+\gamma+1,(-1)^{k}\delta\right)\,\rho_{k}(y)\,J_{k}\left(\frac{x}{y};\gamma,\beta,\frac{\delta}{y}\right),\end{aligned}$$ with $k=0,1,\ldots$ and $n=k,k+1,\ldots$, where $$\begin{aligned} \rho_{k}(y)= \begin{cases} y^{k}\left(1-\frac{\delta^2}{y^2}\right)^{\frac{k}{2}}, & \text{$k$ even}, \\ y^{k}\left(1-\frac{\delta^2}{y^2}\right)^{\frac{k-1}{2}} \left(1+\frac{\delta}{y}\right), & \text{$k$ odd}, \end{cases}\end{aligned}$$ and where $J_{n}(x;a,b,c)$ denotes the one-variable Big $-1$ Jacobi polynomials [@Vinet_Zhedanov_2012-07] (see section 1.1). It will be shown that these polynomials are orthogonal with respect to a positive measure defined on the disjoint union of four triangular domains in the real plane. The polynomials $\mathcal{J}_{n,k}(x,y)$ will also be identified as a $q\rightarrow -1$ limit of the two-variable Big $q$-Jacobi polynomials introduced by Lewanowicz and Woźny in [@Lewanowicz_Wozny_2010-01], which generalize the bivariate little $q$-Jacobi polynomials introduced by Dunkl in [@Dunkl_1980]. The bispectral properties of the Big $-1$ Jacobi polynomials will be determined from this identification. The polynomials $\mathcal{J}_{n,k}(x,y)$ will be shown to satisfy an explicit vector-type three-term recurrence relation and it will be seen that they are the joint eigenfunctions of a pair of commuting first-order differential operators involving reflections. By solving the Pearson-type system of equations arising from the symmetrization of these differential/difference operators, the weight function for the polynomials $\mathcal{J}_{n,k}(x,y)$ will be recovered. The defining formula of the two-variable Big $-1$ Jacobi polynomials is reminiscent of the expressions found in [@Kwon_Lee_Littlejohn_2001] for the Krall-Sheffer polynomials [@Krall_Sheffer_1967], which, as shown in [@Harnad-2001], are directly related to two-dimensional superintegrable systems on spaces with constant curvature (see [@Miller_Post_Winter_2013] for a review of superintegrable systems). The polynomials $\mathcal{J}_{n,k}(x,y)$ do not belong to the Krall-Sheffer classification, as they will be seen to obey *first order* differential equations with reflections.
The results of [@Harnad-2001] however suggest that the polynomials $\mathcal{J}_{n,k}(x,y)$ could be related to two-dimensional integrable systems with reflections such as the ones recently considered in [@Genest_Ismail_Vinet_Zhedanov_2013; @Genest_Vinet_Zhedanov_2013; @Genest_CMP_2014; @Genest_2015_1]. This fact motivates our examination of the polynomials $\mathcal{J}_{n,k}(x,y)$. The Big $-1$ Jacobi polynomials ------------------------------- Let us now review some of the properties of the Big $-1$ Jacobi polynomials which shall be needed in the following. The Big $-1$ Jacobi polynomials, denoted by $J_{n}(x;a,b,c)$, were introduced in [@Vinet_Zhedanov_2012-07] as a $q=-1$ limit of the Big $q$-Jacobi polynomials [@Koekoek-2010]. They are part of the Bannai-Ito scheme of $-1$ orthogonal polynomials [@Genest-2013-02-1; @Genest-2013-09-02; @Tsujimoto-2012-03; @Vinet-2011-01]. They are defined by $$\begin{aligned} \label{Jacobi} J_{n}(x;a,b,c)= \begin{cases} {}_{2}F_{1}\!\left(-\frac{n}{2},\frac{n+a+b+2}{2};\frac{a+1}{2};\frac{1-x^2}{1-c^2}\right)+\frac{n(1-x)}{(1+c)(a+1)}\; {}_{2}F_{1}\!\left(1-\frac{n}{2},\frac{n+a+b+2}{2};\frac{a+3}{2};\frac{1-x^2}{1-c^2}\right), & \text{$n$ even}, \\[3mm] {}_{2}F_{1}\!\left(-\frac{n-1}{2},\frac{n+a+b+1}{2};\frac{a+1}{2};\frac{1-x^2}{1-c^2}\right)-\frac{(n+a+b+1)(1-x)}{(1+c)(a+1)} \; {}_{2}F_{1}\!\left(-\frac{n-1}{2},\frac{n+a+b+3}{2};\frac{a+3}{2};\frac{1-x^2}{1-c^2}\right),& \text{$n$ odd}, \end{cases}\end{aligned}$$ where ${}_{2}F_{1}$ is the standard Gauss hypergeometric function [@Andrews_Askey_Roy_1999]; when no confusion can arise, we shall simply write $J_{n}(x)$ instead of $J_{n}(x;a,b,c)$. The polynomials satisfy the recurrence relation $$\begin{aligned} x\,J_{n}(x)=A_{n}\, J_{n+1}(x)+(1-A_{n}-C_{n})\,J_{n}(x)+ C_{n}\,J_{n-1}(x),\end{aligned}$$ with coefficients $$\begin{aligned} A_{n}= \begin{cases} \frac{(n+a+1)(c+1)}{2n+a+b+2}, & \text{$n$ even}, \\ \frac{(1-c)(n+a+b+1)}{2n+a+b+2}, & \text{$n$ odd}, \end{cases} \qquad C_{n}= \begin{cases} \frac{n(1-c)}{2n+a+b}, & \text{$n$ even}, \\ \frac{(n+b)(1+c)}{2n+a+b}, & \text{$n$ odd}. \end{cases}\end{aligned}$$ It can be seen that for $a,b>-1$ and $|c|\neq 1$ the polynomials $J_{n}(x)$ are positive-definite. The Big $-1$ Jacobi polynomials satisfy the eigenvalue equation $$\begin{aligned} \label{Eigen-Uni} \mathcal{L} J_{n}(x)=\left\{(-1)^{n}\left(n+a/2+b/2+1/2\right)\right\} J_{n}(x),\end{aligned}$$ where $\mathcal{L}$ is the most general first-order differential operator with reflection preserving the space of polynomials of a given degree. This operator has the expression $$\begin{aligned} \mathcal{L}=\left[\frac{(x+c)(x-1)}{x}\right]\partial_{x}R+\left[\frac{c}{2x^2}+\frac{c a-b}{2x}\right](R-\mathbb{I})+\left[\frac{a+b+1}{2}\right]R,\end{aligned}$$ where $R$ is the reflection operator, i.e. $Rf(x)=f(-x)$, and $\mathbb{I}$ stands for the identity. The orthogonality relation of the Big $-1$ Jacobi polynomials is as follows.
For $|c|<1$, one has $$\begin{aligned} \label{Ortho-1} \int_{\mathcal{C}}J_{n}(x;a,b,c)\,J_{m}(x;a,b,c) \;\omega(x;a,b,c)\;\mathrm{d}x=\left[\frac{(1-c^2)^{\frac{a+b+2}{2}}}{(1+c)}\right] h_{n}(a,b)\,\delta_{nm},\end{aligned}$$ where the interval is $\mathcal{C}=[-1,-|c|]\cup [|c|,1]$ and the weight function reads $$\begin{aligned} \label{Weight-1} \omega(x;a,b,c)=\theta(x)\,(1+x)\,(x-c)\,(x^2-c^2)^{\frac{b-1}{2}}\,(1-x^2)^{\frac{a-1}{2}},\end{aligned}$$ where $\theta(x)$ is the sign function. The normalization factor $h_{n}$ is given by $$\begin{aligned} \label{hn} h_{n}(a,b)= \begin{cases} \frac{2\;\Gamma\left(\frac{n+b+1}{2}\right)\Gamma\left(\frac{n+a+3}{2}\right) \left(\frac{n}{2}\right)!}{(n+a+1)\;\Gamma\left(\frac{n+a+b+2}{2}\right)\left(\frac{a+1}{2}\right)_{\frac{n}{2}}^2}, & \text{$n$ even}, \\[3mm] \frac{(n+a+b+1)\;\Gamma\left(\frac{n+b+2}{2}\right)\Gamma \left(\frac{n+a+2}{2}\right) \left(\frac{n-1}{2}\right)!}{2\,\Gamma\left(\frac{n+a+b+3}{2}\right) \left(\frac{a+1}{2}\right)_{\frac{n+1}{2}}^2}, & \text{$n$ odd}, \end{cases}\end{aligned}$$ where $(a)_{n}$ stands for the Pochhammer symbol [@Andrews_Askey_Roy_1999]. For $|c|>1$, one has $$\begin{aligned} \label{Ortho-Tilde} \int_{\widetilde{\mathcal{C}}}J_{n}(x;a,b,c)\,J_{m}(x;a,b,c) \;\widetilde{\omega}(x;a,b,c)\;\mathrm{d}x=\left[\frac{\theta(c)(c^2-1)^{\frac{a+b+2}{2}}}{1+c}\right]\widetilde{h}_{n}(a,b)\,\delta_{nm},\end{aligned}$$ where the interval is $\widetilde{\mathcal{C}}=[-|c|,-1]\cup [1,|c|]$ and the weight function reads $$\begin{aligned} \label{Weight-Tilde} \widetilde{\omega}(x;a,b,c)=\theta(c\,x)\,(1+x)\,(c-x)\,(c^2-x^2)^{\frac{b-1}{2}}\,(x^2-1)^{\frac{a-1}{2}}.\end{aligned}$$ In this case the normalization factor has the expression $$\begin{aligned} \label{Hn-Tilde} \widetilde{h}_{n}(a,b)= \begin{cases} \frac{2\;\Gamma\left(\frac{n+b+1}{2}\right)\Gamma\left(\frac{n+a+3}{2}\right)\left(\frac{n}{2}\right)! }{(n+a+1) \Gamma\left(\frac{n+a+b+2}{2}\right)\,\left(\frac{a+1}{2}\right)_{\frac{n}{2}}^2}, & \text{$n$ even}, \\[3mm] \frac{(n+a+b+1)\,\Gamma\left(\frac{n+b+2}{2}\right)\Gamma\left(\frac{n+a+2}{2}\right)\left(\frac{n-1}{2}\right)!}{2\;\Gamma\left(\frac{n+a+b+3}{2}\right)\,\left(\frac{a+1}{2}\right)_{\frac{n+1}{2}}^2}, & \text{$n$ odd}. \end{cases}\end{aligned}$$ The normalization factors $h_{n}$ and $\widetilde{h}_{n}$ were not derived in [@Vinet_Zhedanov_2012-07]. They have been obtained here using the orthogonality relation for the Chihara polynomials provided in [@Genest-2013-09-02] and the fact that the Big $-1$ Jacobi polynomials are related to the latter by a Christoffel transformation. The details of this derivation are presented in appendix A. Orthogonality of the two-variable Big $-1$ Jacobi polynomials ============================================================= We now prove the orthogonality property of the two-variable Big $-1$ Jacobi polynomials. Let $\alpha, \beta,\gamma>-1$ and $|\delta|<1$.
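The univariate formulas collected so far lend themselves to a direct numerical consistency check. The short sketch below (with arbitrary test parameters $a$, $b$, $c$ that are not singled out in the text) evaluates $J_{n}(x;a,b,c)$ from the terminating hypergeometric series, verifies the three-term recurrence, and compares the orthogonality integrals over $\mathcal{C}$ for $|c|<1$ against the normalization factors $h_{n}(a,b)$ quoted above.

```python
# Minimal numerical consistency check of the univariate Big -1 Jacobi polynomials:
# terminating-2F1 definition, three-term recurrence, and |c| < 1 orthogonality.
# The values of a, b, c are arbitrary test parameters (a, b > -1, |c| < 1).
from mpmath import mp, mpf, hyp2f1, gamma, rf, quad, sign

mp.dps = 30
a, b, c = mpf(1), mpf(2), mpf('0.4')

def J(n, x):
    z = (1 - x**2) / (1 - c**2)
    if n % 2 == 0:
        return (hyp2f1(-(n//2), (n + a + b + 2)/2, (a + 1)/2, z)
                + n*(1 - x)/((1 + c)*(a + 1))*hyp2f1(1 - n//2, (n + a + b + 2)/2, (a + 3)/2, z))
    return (hyp2f1(-((n - 1)//2), (n + a + b + 1)/2, (a + 1)/2, z)
            - (n + a + b + 1)*(1 - x)/((1 + c)*(a + 1))*hyp2f1(-((n - 1)//2), (n + a + b + 3)/2, (a + 3)/2, z))

def A(n): return (n + a + 1)*(c + 1)/(2*n + a + b + 2) if n % 2 == 0 else (1 - c)*(n + a + b + 1)/(2*n + a + b + 2)
def C(n): return n*(1 - c)/(2*n + a + b) if n % 2 == 0 else (n + b)*(1 + c)/(2*n + a + b)

# three-term recurrence x J_n = A_n J_{n+1} + (1 - A_n - C_n) J_n + C_n J_{n-1}
rec = max(abs(x*J(n, x) - A(n)*J(n + 1, x) - (1 - A(n) - C(n))*J(n, x) - C(n)*J(n - 1, x))
          for n in range(1, 6) for x in (mpf('-0.8'), mpf('0.3'), mpf('0.9')))
print("max recurrence residual:", rec)

def w(x):   # weight function for |c| < 1
    return sign(x)*(1 + x)*(x - c)*(x**2 - c**2)**((b - 1)/2)*(1 - x**2)**((a - 1)/2)

def h(n):   # normalization factor h_n(a, b)
    if n % 2 == 0:
        return (2*gamma((n + b + 1)/2)*gamma((n + a + 3)/2)*gamma(n/2 + 1)
                / ((n + a + 1)*gamma((n + a + b + 2)/2)*rf((a + 1)/2, n//2)**2))
    return ((n + a + b + 1)*gamma((n + b + 2)/2)*gamma((n + a + 2)/2)*gamma((n + 1)/2)
            / (2*gamma((n + a + b + 3)/2)*rf((a + 1)/2, (n + 1)//2)**2))

pref = (1 - c**2)**((a + b + 2)/2)/(1 + c)
err = mpf(0)
for n in range(4):
    for m in range(4):
        inner = quad(lambda x: J(n, x)*J(m, x)*w(x), [-1, -c]) + quad(lambda x: J(n, x)*J(m, x)*w(x), [c, 1])
        err = max(err, abs(inner - (pref*h(n) if n == m else 0)))
print("max orthogonality deviation:", err)
```

If the expressions above are implemented correctly, both printed residuals should sit at the level of the working precision.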
The two-variable Big $-1$ Jacobi polynomials defined by satisfy the orthogonality relation $$\begin{aligned} \label{Ortho-2Var} \int_{D_{y}}\int_{D_{x}}\;\mathcal{J}_{n,k}(x,y)\;\mathcal{J}_{m,\ell}(x,y)\;W(x,y)\;\mathrm{d}x\;\mathrm{d}y=H_{nk} \;\delta_{k\ell}\delta_{nm},\end{aligned}$$ with respect to the weight function $$\begin{aligned} \label{Weight-2Var} W(x,y)=\theta(x\,y)|y|^{\beta+\gamma}(1+y)\left(1+\frac{x}{y}\right)\left(\frac{x-\delta}{y}\right)(1-y^2)^{\frac{\alpha-1}{2}}\left(1-\frac{x^2}{y^2}\right)^{\frac{\gamma-1}{2}}\left(\frac{x^2-\delta^2}{y^2}\right)^{\frac{\beta-1}{2}}.\end{aligned}$$ The integration domain is prescribed by $$\begin{aligned} \label{Domain} D_{x}=[-|y|,-|\delta|]\cup [|\delta|, |y|], \qquad D_{y}=[-1,-|\delta|]\cup[|\delta|,1],\end{aligned}$$ and the normalization factor $H_{nk}$ has the expression $$\begin{aligned} H_{nk}=\left[\frac{(1-\delta^2)^{\frac{2k+\alpha+\beta+\gamma+3}{2}}}{(1+(-1)^{k}\delta)}\right]\;h_{k}(\gamma,\beta)\;h_{n-k}(\alpha,2k+\gamma+\beta+1),\end{aligned}$$ where $h_{n}(a,b)$ is given by . We proceed by a direct calculation. We denote the orthogonality integral by $$\begin{aligned} I=\int_{D_{y}}\int_{D_{x}}\;\mathcal{J}_{n,k}(x,y)\;\mathcal{J}_{m,\ell}(x,y)\;W(x,y)\;\mathrm{d}x\;\mathrm{d}y.\end{aligned}$$ Upon using the expressions and in the above, one writes $$\begin{gathered} I =\int_{D_{y}} J_{n-k}(y;\alpha,2k+\gamma+\beta+1,(-1)^{k}\delta)\;\;J_{m-\ell}(y,\alpha,2k+\gamma+\beta+1,(-1)^{k}\delta) \\ \times \left[\rho_{k}(y)\rho_{\ell}(y)|y|^{\beta+\gamma}(1+y)(1-y^2)^{\frac{\alpha-1}{2}}\right]\;\mathrm{d}y\;\; \\ \times \int_{D_{x}} J_{k}\left(\frac{x}{y}; \gamma,\beta,\frac{\delta}{y}\right)J_{\ell}\left(\frac{x}{y}; \gamma,\beta,\frac{\delta}{y}\right) \left[\theta\left(\frac{x}{y}\right)\left(1+\frac{x}{y}\right)\left(\frac{x-\delta}{y}\right)\left(1-\frac{x^2}{y^2}\right)^{\frac{\gamma-1}{2}}\left(\frac{x^2-\delta^2}{y^2}\right)^{\frac{\beta-1}{2}}\right]\;\mathrm{d}x.\end{gathered}$$ The integral over $D_{x}$ is directly evaluated using the change of variables $u=x/y$ and comparing with the orthogonality relation . 
The result is thus $$\begin{gathered} I=h_{k}(\gamma,\beta)\;\delta_{k\ell}\times \int_{D_y}J_{n-k}(y;\alpha,2k+\gamma+\beta+1,(-1)^{k}\delta)\;\;J_{m-\ell}(y,\alpha,2k+\gamma+\beta+1,(-1)^{k}\delta) \\ \times \rho_{k}(y)\;\rho_{\ell}(y) \left[\theta(y)\;(1+y)\;(1-y^2)^{\frac{\alpha-1}{2}}\;\frac{(y^2-\delta^2)^{\frac{2+\gamma+\beta}{2}}}{(y+\delta)}\right]\;\mathrm{d}y.\end{gathered}$$ Assuming that $k=\ell$ is an even integer, the integral takes the form $$\begin{aligned} I=h_{k}(\gamma,\beta)\delta_{k\ell}\times \int_{D_y}J_{n-k}(y;\alpha,2k+\gamma+\beta+1,\delta)\;\;J_{m-\ell}(y,\alpha,2k+\gamma+\beta+1,\delta) \\ \times (y^2-\delta^2)^{k} \left[\theta(y)\;(1+y)\;(1-y^2)^{\frac{\alpha-1}{2}}\;\frac{(y^2-\delta^2)^{\frac{2+\gamma+\beta}{2}}}{(y+\delta)}\right]\;\mathrm{d}y,\end{aligned}$$ which in view of yields $$\begin{aligned} I=h_{k}(\gamma,\beta)h_{n-k}(\alpha,2k+\gamma+\beta+1)\left[\frac{(1-\delta^2)^{\frac{2k+\alpha+\beta+\gamma+3}{2}}}{1+\delta}\right]\delta_{k\ell}\delta_{mn}.\end{aligned}$$ Assuming that $k=\ell$ is an odd integer, the integral takes the form $$\begin{aligned} I=h_{k}(\gamma,\beta)\delta_{k\ell}\times \int_{D_y}J_{n-k}(y;\alpha,2k+\gamma+\beta+1,-\delta)\;\;J_{m-\ell}(y,\alpha,2k+\gamma+\beta+1,-\delta) \\ \times (y^2-\delta^2)^{k-1}(y+\delta)^{2} \left[\theta(y)\;(1+y)\;(1-y^2)^{\frac{\alpha-1}{2}}\;\frac{(y^2-\delta^2)^{\frac{2+\gamma+\beta}{2}}}{(y+\delta)}\right]\;\mathrm{d}y,\end{aligned}$$ which given gives $$\begin{aligned} I=h_{k}(\gamma,\beta)h_{n-k}(\alpha,2k+\gamma+\beta+1)\left[\frac{(1-\delta^2)^{\frac{2k+\alpha+\beta+\gamma+3}{2}}}{1-\delta}\right]\delta_{k\ell}\delta_{mn}.\end{aligned}$$ Upon combining the $k$ even and $k$ odd cases, one finds . This completes the proof. It is not difficult to see that the region corresponds to the disjoint union of four triangular domains. The $|\delta|=1/5$ case is illustrated in Figure 1. \ Figure 1: Orthogonality region for $|\delta|=1/5$ For $\alpha,\beta,\gamma>-1$ and $|\delta|<1$, it can be verified that the weight function is positive on . The orthogonality relation for $|\delta|>1$ can be obtained in a similar fashion. The result is as follows. Let $\alpha, \beta,\gamma>-1$ and $|\delta|>1$. The two-variable Big $-1$ Jacobi polynomials defined by satisfy the orthogonality relation $$\begin{aligned} \int_{\widetilde{D}_{y}}\int_{\widetilde{D}_{x}}\;\mathcal{J}_{n,k}(x,y)\;\mathcal{J}_{m,\ell}(x,y)\;\widetilde{W}(x,y)\;\mathrm{d}x\;\mathrm{d}y=\widetilde{H}_{nk} \;\delta_{k\ell}\delta_{nm},\end{aligned}$$ with respect to the weight function $$\begin{aligned} \label{Weight-2Var-2} \widetilde{W}(x,y)=\theta(\delta\,x\,y) |y|^{\gamma+\beta}(1+y)\left(1+\frac{x}{y}\right)\left(\frac{\delta-x}{y}\right)(y^2-1)^{\frac{\alpha-1}{2}}\left(\frac{x^2}{y^2}-1\right)^{\frac{\gamma-1}{2}}\left(\frac{\delta^2-x^2}{y^2}\right)^{\frac{\beta-1}{2}}.\end{aligned}$$ The integration domain is $$\begin{aligned} \label{Domain-2} \widetilde{D}_{x}=[-|\delta|, -|y|]\cup [|y|,|\delta|],\qquad \widetilde{D}_{y}=[-|\delta|,1]\cup [1, |\delta|],\end{aligned}$$ and the normalization factor is of the form $$\begin{aligned} \widetilde{H}_{nk}=(-1)^{k}\theta(\delta)\left[\frac{(\delta^2-1)^{\frac{2k+\alpha+\beta+\gamma+3}{2}}}{1+(-1)^{k}\delta}\right]\widetilde{h}_{k}(\gamma,\beta)\,\widetilde{h}_{n-k}(\alpha,2k+\beta+\gamma+1),\end{aligned}$$ where $\widetilde{h}_{n}(a,b)$ is given by . Similar to proposition 2.1 using instead , and . 
It can again be seen that the weight function is positive on the domain provided that $\alpha,\beta,\gamma>-1$ and $|\delta|>1$. The orthogonality region defined by again corresponds to the disjoint union of four triangular domains, as illustrated by Figure 2 for the case $|\delta|=3$. \ Figure 2: Orthogonality region for $|\delta|=3$ A special case: the bivariate Little $-1$ Jacobi polynomials ------------------------------------------------------------ When $c=0$, the Big $-1$ Jacobi polynomials $J_{n}(x;a,b,c)$ defined by reduce to the so-called Little $-1$ Jacobi polynomials $j_{n}(x;a,b)$ introduced in [@Vinet-2011-01]. These polynomials have the hypergeometric representation $$\begin{aligned} j_{n}(x;a,b)= \begin{cases} {}_{2}F_{1}\!\left(-\frac{n}{2},\frac{n+a+b+2}{2};\frac{a+1}{2};1-x^2\right)+\frac{n(1-x)}{(a+1)}\; {}_{2}F_{1}\!\left(1-\frac{n}{2},\frac{n+a+b+2}{2};\frac{a+3}{2};1-x^2\right), & \text{$n$ even}, \\[3mm] {}_{2}F_{1}\!\left(-\frac{n-1}{2},\frac{n+a+b+1}{2};\frac{a+1}{2};1-x^2\right)-\frac{(n+a+b+1)(1-x)}{(a+1)} \; {}_{2}F_{1}\!\left(-\frac{n-1}{2},\frac{n+a+b+3}{2};\frac{a+3}{2};1-x^2\right),& \text{$n$ odd}. \end{cases}\end{aligned}$$ Taking $\delta=0$ in leads to the following definition for the two-variable Little $-1$ Jacobi polynomials: $$\begin{aligned} \label{2-Var-Little} q_{n,k}(x,y)=j_{n-k}(y;\alpha,2k+\beta+\gamma+1)\;y^{k}\;j_{k}\left(\frac{x}{y};\gamma,\beta\right),\quad k=0,1,2,\ldots,\quad n=k,k+1,\ldots.\end{aligned}$$ It is seen from that the two-variable Little $-1$ Jacobi polynomials have the structure corresponding to one of the methods to construct bivariate orthogonal polynomial systems proposed by Koornwinder in [@Koornwinder-1975]. For the polynomials , the weight function, which can be obtained by taking $\delta=0$ in either or , can also be recovered using the general scheme given in [@Koornwinder-1975]. For $\delta=0$, the region reduces to two vertically opposite triangles. Bispectrality of the bivariate Big $-1$ Jacobi polynomials ========================================================== In this section, the two-variable Big $-1$ Jacobi polynomials $\mathcal{J}_{n,k}(x,y)$ are shown to be the joint eigenfunctions of a pair of first-order differential operators involving reflections. Their recurrence relations are also derived. The results are obtained through a limiting process from the corresponding properties of the two-variable Big $q$-Jacobi polynomials introduced by Lewanowicz and Woźny [@Lewanowicz_Wozny_2010-01]. Bivariate Big $q$-Jacobi polynomials ------------------------------------ Let us review some of the properties of the bivariate $q$-polynomials introduced in [@Lewanowicz_Wozny_2010-01]. The two-variable Big $q$-Jacobi polynomials, denoted $\mathcal{P}_{n,k}(x,y;a,b,c,d;q)$, are defined as $$\begin{aligned} \label{Biv-Q-Jacobi} \mathcal{P}_{n,k}(x,y;a,b,c,d;q)=P_{n-k}(y;a,bc q^{2k+1}, dq^{k};q)\;y^{k}\;\left(\frac{d q}{y}; q\right)_{k}\;P_{k}\left(\frac{x}{y}; c, b,\frac{d}{y};q\right),\end{aligned}$$ where $(a;q)_{n}$ stands for the $q$-Pochhammer symbol [@Gasper-2004] and where $P_{n}(x;a,b,c;q)$ are the Big $q$-Jacobi polynomials [@Koekoek-2010].
The two-variable Big $q$-Jacobi polynomials satisfy the eigenvalue equation [@Lewanowicz_Wozny_2010-01] $$\begin{aligned} \label{Eigen-q} \Omega\,\mathcal{P}_{n,k}(x,y)=\left[\frac{q^{1-n}(q^{n}-1)(abcq^{n+2}-1)}{(q-1)^2}\right]\,\mathcal{P}_{n,k}(x,y),\end{aligned}$$ where $\Omega$ is the $q$-difference operator $$\begin{gathered} \Omega=(x-dq)(x-acq^2)\mathbf{D}_{q,x}\mathbf{D}_{q^{-1},x}+(y-aq)(y-dq)\mathbf{D}_{q,y}\mathbf{D}_{q^{-1},y} \\ +q^{-1}(x-dq)(y-aq)\mathbf{D}_{q^{-1},x}\mathbf{D}_{q^{-1},y}+acq^{3}(bx-d)(y-1)\mathbf{D}_{q,x}\mathbf{D}_{q,y} \\ +\frac{(abcq^{3}-1)(x-1)-(acq^{2}-1)(dq-1)}{q-1}\mathbf{D}_{q,x}+ \frac{(abcq^{3}-1)(y-1)-(aq-1)(dq-1)}{q-1}\mathbf{D}_{q,y},\end{gathered}$$ and where $\mathbf{D}_{q,x}$ stands for the $q$-derivative $$\begin{aligned} \mathbf{D}_{q,x}f(x,y)=\frac{f(qx,y)-f(x,y)}{x(q-1)}.\end{aligned}$$ The bivariate Big $q$-Jacobi polynomials also satisfy the pair of recurrence relations [@Lewanowicz_Wozny_2010-01] $$\begin{aligned} y\,\mathcal{P}_{n,k}(x,y)=a_{nk}\,\mathcal{P}_{n+1,k}(x,y)+b_{nk}\,\mathcal{P}_{n,k}(x,y)+c_{nk}\,\mathcal{P}_{n-1,k}(x,y),\end{aligned}$$ $$\begin{gathered} \label{Recurrence-q} x\,\mathcal{P}_{n,k}(x,y)=e_{nk}\,\mathcal{P}_{n+1,k-1}(x,y)+f_{nk}\,\mathcal{P}_{n+1,k}(x,y)+g_{nk}\,\mathcal{P}_{n+1,k+1}(x,y) \\ +r_{nk}\,\mathcal{P}_{n,k-1}(x,y)+s_{nk}\,\mathcal{P}_{n,k}(x,y)+t_{nk}\,\mathcal{P}_{n,k+1}(x,y) \\ +u_{nk}\,\mathcal{P}_{n-1,k-1}(x,y)+v_{nk}\,\mathcal{P}_{n-1,k}(x,y)+w_{nk}\,\mathcal{P}_{n-1,k+1}(x,y),\end{gathered}$$ where the recurrence coefficients read $$\begin{aligned} {2} a_{nk}&=\frac{(1-a q^{n-k+1})(1-abcq^{n+k+2})(1-d q^{n+1})}{(a b c q^{2n+2};q)_{2}},\quad & b_{nk}&=1-a_{nk}-c_{nk}, \\ w_{nk}&=\frac{abc\sigma_k q^{n+2k+3}(abcq^{n+1}-d)(q^{n-k-1};q)_2}{(1-dq^{k+1})(a b c q^{2n+1};q)_2},\quad & f_{nk}&= a_{nk}(b c q^{k}\tau_{k}-\sigma_k+1), \\ e_{nk}&=\frac{\tau_k bcq^{k}(dq^{k}-1)(1-dq^{n+1})(aq^{n-k+1};q)_2}{(abcq^{2n+2};q)_{2}},\quad & v_{nk}&=c_{nk}(bc q^{k}\tau_k-\sigma_k+1), \\ g_{nk}&=\frac{\sigma_k (1-dq^{n+1})(a b c q^{n+k+2};q)_{2}}{(1-d q^{k+1})(abcq^{2n+2};q)_{2}},\quad &s_{nk}&=b_{nk}(bcq^{k}\tau_k-\sigma_k+1)+d(q^{k+1}\sigma_k-\tau_k),\end{aligned}$$ with $$\begin{aligned} {2} t_{nk}&=\frac{q^{k+1}\sigma_{k}z_{n}(1-q^{n-k})(1-abcq^{n+k+2})}{1-d q^{k+1}}, \\ r_{nk}&=\tau_{k}z_{n}(dq^{k}-1)(1-aq^{n-k+1})(1-bcq^{n+k+1}), \\ u_{nk}&=\frac{\tau_{k}a q^{n-k+1}(dq^{k}-1)(abcq^{n+1}-d)(bcq^{n+k};q)_{2}}{(abcq^{2n+1};q)_{2}}, \\ c_{nk}&=\frac{a d q^{n+1}(q^{n-k}-1)(1-b c q^{n+k+1})(1- a b c d^{-1} q^{n+1})}{(abcq^{2n+1};q)_{2}},\end{aligned}$$ and where $\sigma_k$, $\tau_k$ and $z_{n}$ are given by $$\begin{aligned} \sigma_{k}&=\frac{(1-c q^{k+1})(1- b c q^{k+1})}{(bcq^{2k+1};q)_{2}},\quad \tau_{k}=-\frac{cq^{k+1}(1-q^{k})(1-bq^{k})}{(bcq^{2k};q)_{2}}, \\ z_{n}&=\frac{a b c q^{n+1}(1+q-dq^{n+1})-d}{(1-a b c q^{2n+1})(1-a b c q^{2n+3})}.\end{aligned}$$ $\mathcal{J}_{n,k}(x,y)$ as a $q\rightarrow -1$ limit of $\mathcal{P}_{n,k}(x,y)$ --------------------------------------------------------------------------------- The two-variable Big $-1$ Jacobi polynomials $\mathcal{J}_{n,k}(x,y)$ can be obtained from the bivariate Big $q$-Jacobi by taking $q\rightarrow -1$. 
Indeed, a direct calculation using the expression shows that $$\begin{aligned} \label{Limit} \lim_{\epsilon\rightarrow 0}\;\mathcal{P}_{n,k}(x,y;-e^{\epsilon \alpha},-e^{\epsilon \beta},-e^{\epsilon \gamma},\delta, -e^{\epsilon})= \mathcal{I}_{n,k}(x,y; \alpha,\beta,\gamma,\delta),\end{aligned}$$ where we have used the notation $\mathcal{I}_{n,k}(x,y; \alpha,\beta,\gamma,\delta)$ to exhibit the parameters appearing in the Big $-1$ Jacobi polynomials defined in . A similar limit was considered in [@Vinet_Zhedanov_2012-07] to obtain the univariate Big $-1$ Jacobi polynomials in terms of the Big $q$-Jacobi polynomials. Eigenvalue equation for the Big $-1$ Jacobi polynomials ------------------------------------------------------- The eigenvalue equation for the Big $q$-Jacobi polynomials and the relation between the Big $q$-Jacobi polynomials and the Big $-1$ Jacobi polynomials can be used to obtain an eigenvalue equation for the latter. Let $L_{1}$ be the first-order differential/difference operator $$\begin{gathered} \label{L1-Operator} L_{1}=G_{5}(x,y)\,R_{x}R_{y}\partial_{y}+G_{6}(x,y)\,R_{y}\partial_{y}+G_{7}(x,y)\,R_{x}R_{y}\partial_{x}+G_{8}(x,y)\,R_{x}\partial_{x} \\ +G_{1}(x,y)\,R_{x}R_{y}+G_2(x,y)\,R_{x}+G_{3}(x,y)\,R_{y}-(G_1(x,y)+G_2(x,y)+G_3(x,y))\,\mathbb{I},\end{gathered}$$ where $R_{x}, R_{y}$ are reflection operators and where the coefficients read \[Coef\] $$\begin{aligned} {2} G_{1}(x,y)&=\frac{x[1+\beta+\gamma-y(\alpha+\beta+\gamma+2)]-\delta[y(\alpha+\gamma+1)-\gamma]}{4xy}, \quad & G_{8}(x,y)&=\frac{(\delta+x)(x-y)}{2 x y}, \\ G_{2}(x,y)&=-\frac{x[x(\beta+\gamma+1)-\beta y]+\delta(y+\gamma x)}{4 x^2 y}, \quad & G_{7}(x,y)&=\frac{(\delta+x)(y-1)}{2y}, \\ G_3(x,y)&=-\delta\left( \frac{x+\alpha xy-y[y(\alpha+\gamma+1)-\gamma]}{4x y^2}\right) \quad & G_{6}(x,y)&=\frac{\delta(x-y)(y-1)}{2 xy}, \\ G_{5}(x,y)&=\frac{(\delta+x)(y-1)}{2x}.\end{aligned}$$ The Big $-1$ Jacobi polynomials satisfy the eigenvalue equation $$\begin{aligned} L_{1}\,\mathcal{P}_{n,k}(x,y)=\mu_n \,\mathcal{P}_{n,k}(x,y),\qquad \mu_{n}= \begin{cases} -\frac{n}{2}, & \text{$n$ even}, \\ \frac{n+\alpha+\beta+\gamma+2}{2}, & \text{$n$ odd}. \end{cases}\end{aligned}$$ Furthermore, let $L_2$ be the differential/difference operator $$\begin{aligned} L_2=\frac{2(y-x)(x+\delta)}{x}R_{x}\partial_{x}+\frac{(\gamma+\beta+1)x^2+(\delta\gamma-\beta y)x+\delta y}{x^2}(R_{x}-\mathbb{I}),\end{aligned}$$ The Big $-1$ Jacobi polynomials satisfy the eigenvalue equation $$\begin{aligned} L_2\,\mathcal{P}_{n,k}(x,y)=\nu_k \,\mathcal{P}_{n,k}(x,y),\qquad \nu_k= \begin{cases} 2k,& \text{$k$ even}, \\ -2(k+\beta+\gamma+1), & \text{$k$ odd}, \end{cases}.\end{aligned}$$ The eigenvalue equation with respect to $L_1$ is obtained by dividing both sides of by $(1+q)$ and taking the $q\rightarrow -1$ limit according to . The eigenvalue equation with respect to $L_2$ is obtained by combining , and . The two-variable Big $-1$ Jacobi polynomials are thus the joint eigenfunctions of the first order differential operators with reflections $L_1$ and $L_2$. It is directly verified that these operators commute with one another, as should be. The $q\rightarrow -1$ limit of the recurrence relations can also be taken to obtain the recurrence relations satisfied by the Big $-1$ Jacobi polynomials. The result is as follows. 
The Big $-1$ Jacobi polynomials satisfy the recurrence relations $$\begin{aligned} y\,\mathcal{J}_{n,k}(x,y)=\widetilde{a}_{nk}\,\mathcal{J}_{n+1,k}(x,y)+\widetilde{b}_{nk}\,\mathcal{J}_{n,k}(x,y)+\widetilde{c}_{nk}\,\mathcal{J}_{n-1,k}(x,y),\end{aligned}$$ $$\begin{gathered} x\,\mathcal{J}_{n,k}(x,y)=\widetilde{e}_{nk}\,\mathcal{J}_{n+1,k-1}(x,y)+\widetilde{f}_{nk}\,\mathcal{J}_{n+1,k}(x,y)+\widetilde{g}_{nk}\,\mathcal{J}_{n+1,k+1}(x,y) \\ +\widetilde{r}_{nk}\,\mathcal{J}_{n,k-1}(x,y)+\widetilde{s}_{nk}\,\mathcal{J}_{n,k}(x,y)+\widetilde{t}_{nk}\,\mathcal{J}_{n,k+1}(x,y) \\ +\widetilde{u}_{nk}\,\mathcal{J}_{n-1,k-1}(x,y)+\widetilde{v}_{nk}\,\mathcal{J}_{n-1,k}(x,y)+\widetilde{w}_{nk}\,\mathcal{J}_{n-1,k+1}(x,y).\end{gathered}$$ With $$\begin{aligned} \widetilde{\tau}_k=\frac{k+\beta \phi_k}{2k+\beta+\gamma},\quad \widetilde{\sigma}_{k}=\frac{k+\beta\phi_k+\gamma+1}{2k+\beta+\gamma+2},\quad \widetilde{z}_{n}=\frac{(-1)^{n}-\delta(2n+\alpha+\beta+\gamma+2)}{(2n+\alpha+\beta+\gamma+1)(2n+\alpha+\beta+\gamma+3)},\end{aligned}$$ where $\phi_{k}=(1-(-1)^{k})/2$ is the characteristic function for odd numbers, the recurrence coefficients read $$\begin{aligned} \widetilde{a}_{n,k}&=\frac{1+\delta_{n}}{2n+\alpha+\beta+\gamma+3} \times \begin{cases} n-k+\alpha+1, & \text{$n+k$ even}, \\ n+k+\alpha+\beta+\gamma+2, & \text{$n+k$ odd}, \end{cases} \\ \widetilde{c}_{n,k}&=\frac{1+\delta_{n+1}}{2n+\alpha+\beta+\gamma+1} \times \begin{cases} n-k, & \text{$n+k$ even}, \\ n+k+\beta+\gamma+1, &\text{$n+k$ odd}, \end{cases} \\ \widetilde{e}_{n,k}&=\frac{\widetilde{\tau}_k(1-\delta_{k})(1+\delta_{n})}{2n+\alpha+\beta+\gamma+3} \times \begin{cases} n-k+\alpha+1, & \text{$n+k$ even}, \\ n-k+\alpha+2, & \text{$n+k$ odd}, \end{cases} \\ \widetilde{g}_{n,k}&=\frac{\widetilde{\sigma}_k(1+\delta_n)}{(1+\delta_k)(2n+\alpha+\beta+\gamma+3)} \times \begin{cases} n+k+\alpha+\beta+\gamma+3, & \text{$n+k$ even}, \\ n+k+\alpha+\beta+\gamma+2, & \text{$n+k$ odd}, \end{cases} \\ \widetilde{r}_{n,k}&=2\widetilde{\tau}_{k}\widetilde{z}_n((-1)^{k}-\delta) \times \begin{cases} n-k+\alpha +1 & \text{$n+k$ even} \\ n+k+\beta+\gamma+1 & \text{$n+k$ odd} \end{cases}\end{aligned}$$ $$\begin{aligned} \widetilde{t}_{n,k}&=\frac{2(-1)^{k+1}\widetilde{\sigma}_k\widetilde{z}_n}{1+\delta_k} \times \begin{cases} n-k, & \text{$n+k$ even}, \\ n+k+\alpha+\beta+\gamma+2, & \text{$n+k$ odd}, \end{cases} \\ \widetilde{u}_{nk}&=\frac{\widetilde{\tau}_k(1-\delta_k)(1-\delta_n)}{(2n+\alpha+\beta+\gamma+1)} \times \begin{cases} n+k+\beta+\gamma, & \text{$n+k$ even}, \\ n+k+\beta+\gamma+1, & \text{$n+k$ odd}, \end{cases} \\ \widetilde{w}_{n,k}&=\frac{\widetilde{\sigma}_k(1-\delta_n)}{(1+\delta_k)(2n+\alpha+\beta+\gamma+1)} \times \begin{cases} n-k, & \text{$n+k$ even}, \\ n-k-1, & \text{$n+k$ odd}, \end{cases}\end{aligned}$$ with $\delta_{n}=(-1)^{n}\delta$ and $$\begin{aligned} &\widetilde{b}_{n,k}=1-\widetilde{a}_{n,k}-\widetilde{c}_{n,k},\quad \widetilde{f}_{n,k}=\widetilde{a}_{n,k}(1-\widetilde{\sigma}_k-\widetilde{\tau}_{n,k}),\quad \widetilde{s}_{n,k}=\widetilde{b}_{n,k}(1-\widetilde{\sigma}_{k}-\widetilde{\tau}_{k})-\delta_{k} (\widetilde{\sigma}_{k}-\widetilde{\tau}_{k}) \\ & \widetilde{v}_{n,k}=\widetilde{c}_{n,k}(1-\widetilde{\sigma}_k-\widetilde{\tau}_{k})\end{aligned}$$ The result is obtained by applying the limit to the recurrence relations . 
A Pearson-type system for Big $-1$ Jacobi ========================================= Let us now show how the weight function $W(x,y)$ for the polynomials $\mathcal{J}_{n,k}(x,y)$ can be recovered as the symmetry factor for the operator $L_1$ given in . The symmetrization condition for $L_1$ is $$\begin{aligned} \label{Sym-Cond} (W(x,y)L_1)^{*}=W(x,y)L_1,\end{aligned}$$ where $M^{*}$ denotes the Lagrange adjoint. For an operator of the form $$\begin{aligned} M=\sum_{\mu,\nu,k,j}A_{k,j}(x,y)\;\partial_{x}^{k}\;\partial_{y}^{j}\;R_{x}^{\mu}\;R_{y}^{\nu},\end{aligned}$$ for $\mu,\nu\in \{0,1\}$ and $k,j=0,1,2,\ldots$, the Lagrange adjoint reads $$\begin{aligned} M^{*}=\sum_{\mu,\nu, k, j}(-1)^{k+j}\;R_{x}^{\mu}R_{y}^{\nu}\;\partial_{y}^{j}\;\partial_{x}^{k}\;A_{k,j}(x,y),\end{aligned}$$ where we have assumed that $W(x,y)$ is defined on a symmetric region with respect to $R_x$ and $R_y$. Imposing the condition , one finds the following system of Pearson-type equations: $$\begin{aligned} \label{A} W(x,y)\,G_{8}(x,y)&=W(-x,y)\,G_{8}(-x,y), \\ \label{B} W(x,y)\,G_{7}(x,y)&=W(-x,-y)\,G_{7}(-x,-y), \\ W(x,y)\,G_{6}(x,y)&= W(x,-y)\,G_{6}(x,-y), \\ W(x,y)\,G_{5}(x,y)&=W(-x,-y)\,G_{5}(-x,-y) \\ \label{D} W(x,y)\,G_{3}(x,y)&=W(x,-y)\,G_{3}(x,-y)-\partial_{y}(W(x,-y)\,G_{6}(x,-y)), \\ \label{C} W(x,y)\,G_{2}(x,y)&=W(-x,y)\, G_2(-x,y)-\partial_{x}(W(-x,y)\,G_{8}(-x,y)), \\ \nonumber W(x,y)\,G_{1}(x,y)&=W(-x,-y)\,G_1(-x,-y) \\ &\qquad -\partial_{x}(W(-x,-y)\,G_{7}(-x,-y))-\partial_{y}(W(-x,-y)\,G_5(-x,-y)),\end{aligned}$$ where the functions $G_{i}(x,y)$, $i=1,\ldots,8$, are given by . We assume that $W(x,y)>0$ and moreover that $|\delta|\leqslant |x|\leqslant y\leqslant 1$. Upon substituting in , one finds $$\begin{aligned} (x+\delta)\,(x-y)\,W(x,y)=-(x-\delta)\,(x+y)\,W(-x,y),\end{aligned}$$ for which the general solution is of the form $$\begin{aligned} \label{Inter-1} W(x,y)=\theta(x)\,(x-\delta)\,(x+y)\,f_1(x^2,y),\end{aligned}$$ where $f_1$ is an arbitrary function. Using in yields $$\begin{aligned} (y-1)\,f_1(x^2,y)=(y+1)\,f_1(x^2,-y),\end{aligned}$$ which has the general solution $$\begin{aligned} f_1(x^2,y)=\theta(y)\,(y+1)\,f_2(x^2,y^2),\end{aligned}$$ where $f_2$ is an arbitrary function. The function $W(x,y)$ is thus of the general form $$\begin{aligned} \label{Dompe} W(x,y)=\theta(x)\theta(y)\,(x-\delta)\,(x+y)\,(y+1)\,f_2(x^2,y^2).\end{aligned}$$ Upon substituting the above expression in , one finds after simplifications $$\begin{aligned} \left[\frac{\beta-1}{x^2-\delta^2}+\frac{\gamma-1}{x^2-y^2}\right]\,x\,f_2(x^2,y^2)=\partial_{x}\,f_2(x^2,y^2).\end{aligned}$$ After separation of variable, the result is $$\begin{aligned} f_2(x^2,y^2)=(x^2-\delta^2)^{\frac{\beta-1}{2}}(y^2-x^2)^{\frac{\gamma-1}{2}}f_3(y^2).\end{aligned}$$ Finally, upon substituting the above equation in one finds $$\begin{aligned} y(\alpha-1)f_3(y^2)=(y^2-1)\partial_{y}f_3(y^2),\end{aligned}$$ which gives $f_3=(1-y^2)^{\frac{\alpha-1}{2}}$ and thus $$\begin{aligned} f_2(x^2,y^2)=(x^2-\delta^2)^{\frac{\beta-1}{2}}(y^2-x^2)^{\frac{\gamma-1}{2}}(1-y^2)^{\frac{\alpha-1}{2}}.\end{aligned}$$ Upon combining the above expression with , we find $$\begin{aligned} W(x,y)=\theta(x\,y)|y|^{\beta+\gamma}(1+y)\left(1+\frac{x}{y}\right)\left(\frac{x-\delta}{y}\right)(1-y^2)^{\frac{\alpha-1}{2}}\left(1-\frac{x^2}{y^2}\right)^{\frac{\gamma-1}{2}}\left(\frac{x^2-\delta^2}{y^2}\right)^{\frac{\beta-1}{2}},\end{aligned}$$ which indeed corresponds to the weight function of proposition 2.1. 
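The first of the Pearson-type equations above can also be verified symbolically. On the part of the orthogonality region where $x,y>0$ one has $\theta(xy)=1$ and $\theta(-xy)=-1$, so the relation becomes an identity between explicit algebraic expressions; the small sympy sketch below checks it (the remaining equations can be treated along the same lines).

```python
import sympy as sp

x, y, al, be, ga, de = sp.symbols('x y alpha beta gamma delta', positive=True)

def W_abs(xx, yy):
    # weight function without the overall sign factor theta(x*y); |y| -> y since y > 0 here
    return (yy**(be + ga)*(1 + yy)*(1 + xx/yy)*((xx - de)/yy)
            *(1 - yy**2)**((al - 1)/2)*(1 - xx**2/yy**2)**((ga - 1)/2)
            *((xx**2 - de**2)/yy**2)**((be - 1)/2))

def G8(xx, yy):
    return (de + xx)*(xx - yy)/(2*xx*yy)

# for x, y > 0: W(x, y) = +W_abs(x, y) and W(-x, y) = -W_abs(-x, y)
lhs = W_abs(x, y)*G8(x, y)
rhs = -W_abs(-x, y)*G8(-x, y)
print(sp.simplify(sp.powsimp(lhs/rhs, force=True)))   # expected output: 1
```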
The weight function $W(x,y)$ for the two-variable Big $-1$ Jacobi polynomials thus corresponds to the symmetry factor for $L_1$. Conclusion ========== In this paper, we have introduced and characterized a new family of two-variable orthogonal polynomials that generalize the Big $-1$ Jacobi polynomials. We have constructed their orthogonality measure and we have derived explicitly their bispectral properties. We have furthermore shown that the weight function for these two-variable polynomials can be recovered by symmetrization of the first-order differential operator with reflections that these polynomials diagonalize. The two-variable orthogonal polynomials introduced here constitute the first example of a multivariate generalization of the Bannai-Ito scheme. It would be of great interest to construct multivariate extensions of the other families of polynomials of this scheme. Acknowledgments {#acknowledgments .unnumbered} =============== VXG holds an Alexander Graham Bell fellowship from the Natural Sciences and Engineering Research Council of Canada (NSERC). JML holds a scholarship from NSERC. The research of LV is supported in part by NSERC. AZ would like to thank the Centre de recherches mathématiques for its hospitality. Normalization coefficients for Big $-1$ Jacobi polynomials ========================================================== In this appendix, we present a derivation of the normalization coefficients and appearing in the orthogonality relation of the univariate Big $-1$ Jacobi polynomials. The result is obtained by using their kernel partners, the Chihara polynomials. These polynomials, denoted by $C_{n}(x;\alpha,\beta,\gamma)$, have the expression [@Genest-2013-09-02] $$\begin{aligned} &C_{2n}\left(x;\alpha,\beta,\gamma\right)= (-1)^n\frac{(\alpha+1)_n}{(n+\alpha+\beta+1)_n}\;{}_{2}F_{1}\!\left(-n,n+\alpha+\beta+1;\alpha+1;x^2-\gamma^2\right), \\ &C_{2n+1}\left(x;\alpha,\beta,\gamma\right)= (-1)^n\frac{(\alpha+2)_n}{(n+\alpha+\beta+2)_n}(x-\gamma)\;{}_{2}F_{1}\!\left(-n,n+\alpha+\beta+2;\alpha+2;x^2-\gamma^2\right).\end{aligned}$$ For $\alpha,\beta>-1$, they satisfy the orthogonality relation $$\begin{aligned} \label{Ortho-A} \int_{\mathcal{E}} C_{n}\left(x;\alpha,\beta,\gamma\right)C_{m}\left(x;\alpha,\beta,\gamma\right)\theta(x)(x+\gamma)(x^2-\gamma^2)^\alpha(1+\gamma^2-x^2)^{\beta}\;\mathrm{d}x=\eta_n\delta_{nm},\end{aligned}$$ on the interval $\mathcal{E}=[-\sqrt{1+\gamma^2},-|\gamma|]\cup[|\gamma|,\sqrt{1+\gamma^2}]$ and their normalization coefficients read $$\begin{aligned} \eta_{2n}=\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+1)}\frac{n!}{(2n+\alpha+\beta+1)[(n+\alpha+\beta+1)_n]^2}, \\ \eta_{2n+1}=\frac{\Gamma(n+\alpha+2)\Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+2)}\frac{n!}{(2n+\alpha+\beta+2)[(n+\alpha+\beta+2)_n]^2}.\end{aligned}$$ Let $\widehat{J}_{n}(x)$ be the monic Big $-1$ Jacobi polynomials: $$\begin{aligned} \widehat{J}_n(x)= \kappa_n J_n(x) = x^n+\mathcal{O}(x^{n-1}),\end{aligned}$$ where $\kappa_n$ is given by $$\begin{gathered} \kappa_n(a,b,c)=\begin{cases} \qquad \frac{(1-c^2)^{\frac{n}{2}}\left(\frac{a+1}{2}\right)_{\frac{n}{2}}}{\left(\frac{n+a+b+2}{2}\right)_{\frac{n}{2}}} , &\text{$n$ even,} \\ \frac{(1+c)(1-c^2)^{\frac{n-1}{2}}\left(\frac{a+1}{2}\right)_{\frac{n+1}{2}}}{\left(\frac{n+a+b+1}{2}\right)_{\frac{n+1}{2}}}, &\text{$n$ odd.} \end{cases}\end{gathered}$$ The kernel polynomials $\widetilde{K}_n(x;a,b,c)$ associated to
$\widehat{J}_{n}(x;a,b,c)$ are defined through the Christoffel transformation [@Chihara-2011]: $$\begin{aligned} \label{Chris} \widetilde{K}_n(x;a,b,c)=\frac{\widehat{J}_{n+1}(x)-\frac{\widehat{J}_{n+1}(\nu)}{\widehat{J}_n(\nu)}\widehat{J}_n(x)}{x-\nu}.\end{aligned}$$ For $\nu=1$, the monic polynomials $\widetilde{K}_n(x;a,b,c)$ can be expressed in terms of the Chihara polynomials. Indeed, one can show that [@Genest-2013-09-02] $$\begin{aligned} \label{Relation} \widetilde{K}_n(x;a,b,c)=(\sqrt{1-c^2})^{n}\,C_n\left(\frac{x}{\sqrt{1-c^2}};\frac{b-1}{2},\frac{a+1}{2},\frac{-c}{\sqrt{1-c^2}}\right).\end{aligned}$$ Let $\mathcal{M}$ be the orthogonality functional for the Big $-1$ Jacobi polynomials. In view of the relation , one can write [@Chihara-2011] $$\begin{aligned} \mathcal{M}[(x-\nu)\;\widetilde{K}_n(x;a,b,c)\;x^k]=-\frac{\widehat{J}_{n+1}(\nu)}{\widehat{J}_{n}(\nu)}\mathcal{M}[\widehat{J}^2_n(x;a,b,c)]\,\delta_{kn}.\end{aligned}$$ By linearity, one thus finds $$\begin{aligned} \widetilde{\eta}_n=-\frac{\widehat{J}_{n+1}(\nu)}{\widehat{J}_{n}(\nu)}\hat{h}_n.\end{aligned}$$ where $\widetilde{\eta}_n=\mathcal{M}[(x-\nu)\;\widetilde{K}_n^2(x)]$ and $ \hat{h}_n=\mathcal{M}[\widehat{P}_n^2(x)]$. For $\nu=1$, the value of $\widetilde{\eta}_n$ is easily computed from and . The above relation then gives the desired coefficients. [2]{} [10]{} G. Andrews, R. Askey, and R. Roy. . Cambridge University Press, 1999. T. Chihara. . Dover Publications, 2011. C. Dunkl. . , 1:137–151, 1980. G. Gasper and M. Rahman. . . Cambridge University Press, 2^nd^ edition, 2004. V. X. Genest, M. Ismail, L. Vinet, and A. Zhedanov. . , 46:145201, 2013. V. X. Genest, M. Ismail, L. Vinet, and A. Zhedanov. . , 329:999–1029, 2014. V. X. Genest, L. Vinet, and A. Zhedanov. . , 46:325201, 2013. V. X. Genest, L. Vinet, and A. Zhedanov. . , 10:38–55, 2014. V. X. Genest, L. Vinet, and A. Zhedanov. . , 2015. V.X. Genest, L. Vinet, and A. Zhedanov. . , 9:18–37, 2013. J. Harnad, L. Vinet, O. Yermolayeva, and A. Zhedanov. . , 34:10619–10625, 2001. R. Koekoek, P.A. Lesky, and R.F. Swarttouw. . Springer, 1^st^ edition, 2010. T. Koornwinder. . In R. Askey, editor, [*[Theory and applications of special functions]{}*]{}, pages 435–495. Academic Press, 1975. H. L. Krall and I. M. Sheffer. . , 76:325–376, 1967. K. H. Kwon, J. K. Lee, and L. L. Littlejohn. . , 353:3629–3647, 2001. S. Lewanowicz and P. Wo[ź]{}ny. . , 233, 2010. W. Miller, S. Post, and P. Winternitz. . , 46:423001, 2013. S. Tsujimoto, L. Vinet, and A. Zhedanov. . , 229:2123–2158, 2012. L. Vinet and A. Zhedanov. . , 44:085201, 2011. L. Vinet and A. Zhedanov. . , 364:5491–5507, 2012.
--- author: - 'T. Oba' - 'T.L. Riethm[ü]{}ller' - 'S. K. Solanki' - 'Y. Iida' - 'C. Quintero Noda' - 'T. Shimizu' bibliography: - 'myrefs.bib' title: 'The small-scale structure of photospheric convection retrieved by a deconvolution technique applied to *Hinode*/SP data' --- Introduction ============ Discussions =========== The continuum contrast of *Hinode*/SP is 7.2% for the original observation and 13.0% after the deconvolution. We considered whether these values are realistic by comparing them with the results obtained using hydrodynamic simulations. The contrast of the synthesized image in the original pixel sampling ($\approx$10 km), the summed sampling ($\approx$100 km), and the summed sampling after the convolution with the instrument’s PSF is 16.8%, 16.1%, and 8.2%, respectively. The third one is similar to the one observed by *Hinode*/SP, i.e., 7.2%, although the *Hinode* value is somewhat lower. According to @Danilovic2008 this difference may be due to a small defocus in the *Hinode*/SP, as the best focus is generally determined using the filtergraph. The contrast value of 13.0% in the deconvolved observation is lower than 16.1% in the original simulation. This suggests that the image is not over-restored, so that the deconvolution provided a realistic image in terms of the contrast. Moreover, these values are consistent with the previous work of @Danilovic2008, who found RMS contrasts of 14.4%, 7.5%, and 7.0% in the original pixel sampling ($\approx$10 km), the summed sampling ($\approx$100 km) with spatial convolution, and the *Hinode*/SP observation used in their work, respectively. Their contrast values are slightly lower than ours because their simulation included magnetic flux, which hinders the granulation and, hence, reduces the intensity contrast (@Biermann1941 [@Riethmuller2014]). The contrast of 13.0% found in our work is also consistent with the value of 12.8% estimated by the observational approach of @Mathew2009, who deconvolved *Hinode* images using a PSF of the *Hinode*/SOT derived from images taken during the Mercury transit.\ Before the deconvolution, the results displayed stronger convective upflows and weaker downflows. However, after the deconvolution, these amplitudes of up- and downflows became comparable, resulting from a preferential enhancement of the downflows. This enhancement can be explained by considering the difference in emergent intensity in granules and intergranular lanes. Convective blue- and red-shifted signals within a sub-arcsec spatial scale are unavoidably mixed because of the imperfect imaging performance of a telescope. Thus, they partially cancel each other, leading to spatially degraded signals. The higher intensities of granules compared to intergranular lanes imply that more light from the brighter granules is scattered into the darker intergranular lanes than the other way around. Since radiation from granules is blue-shifted, it reduces the redshift of the radiation from intergranular lanes. This effect is larger than the decrease in the blueshift of radiation coming from granules, due to the smaller amount of straylight from intergranules, so that finally the downflows are decreased more strongly than the upflows. After deconvolution, the two velocities are better separated from each other again, so that downflows are more strongly enhanced than upflows. The preferential enhancement in downflows by the deconvolution leads to some weak upflows (mainly at the boundaries of granules) turning into downflows after deconvolution.
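The velocities discussed in this section are obtained from a bisector analysis of the observed Stokes $I$ profiles of the Fe I 630 nm lines. As a reminder of what a bisector level measures, the minimal sketch below computes the bisector velocity of a synthetic, purely Gaussian absorption line; the rest wavelength, the line parameters, and the definition of the level as a fraction of the line depth are illustrative assumptions and do not reproduce the actual *Hinode*/SP reduction.

```python
import numpy as np

c_kms = 299792.458
lambda0 = 6301.5           # nominal rest wavelength (Angstrom) used here for illustration

# synthetic absorption profile: continuum = 1, Gaussian dip, imposed Doppler red-shift
wl = np.linspace(lambda0 - 0.4, lambda0 + 0.4, 401)
v_true = 1.2               # km/s
shift = lambda0 * v_true / c_kms
depth, width = 0.6, 0.08
profile = 1.0 - depth * np.exp(-0.5 * ((wl - lambda0 - shift) / width) ** 2)

def bisector_velocity(wl, prof, level):
    """Midpoint wavelength of the two flank crossings at a given bisector level
    (0 = line core, 1 = continuum), converted to a line-of-sight velocity."""
    core = prof.argmin()
    i_level = prof[core] + level * (prof.max() - prof[core])
    blue = np.interp(i_level, prof[:core + 1][::-1], wl[:core + 1][::-1])
    red = np.interp(i_level, prof[core:], wl[core:])
    return c_kms * (0.5 * (blue + red) - lambda0) / lambda0

for lev in (0.3, 0.5, 0.7):
    print(f"bisector level {lev:.2f}: v_LOS = {bisector_velocity(wl, profile, lev):+.3f} km/s")
```

For a symmetric line all levels return the same imposed shift; in real granulation profiles the convective asymmetry makes the bisector velocity level-dependent, which is what makes the level-by-level comparison in this work meaningful.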
This either decreases the areas of particular granules somewhat, or brings to light usually narrow downflow lanes that are not visible in the original data. These effects increase the area covered by downflows at the cost of that covered by upflows.\ Before the deconvolution, we found stronger upflows and weaker downflows, in agreement with previous works (@Yu2011b [@Jin2009; @Oba2017]). After deconvolution, there was still an asymmetry between up- and downflows, but now with downflows being on average faster (although still covering a somewhat smaller area). The LOS flow speed averaged over the full FOV now lay within $\pm$0.05 km/s at all bisector levels, compared with an offset of up to -0.2 km/s in the original data. We found that the effect of the deconvolution on the data is similar in magnitude (but of opposite sign) to the changes introduced by convolving the data from the MHD simulations. In Fig.\[fig:24\] for the observation and Fig.\[fig:13\] for the numerical simulation, small differences, e.g., in the exact shape of the velocity histograms, are present between simulations and observations, but these are minor compared with the otherwise surprisingly good agreement. The consistency between the observations and the numerical simulation is a good indication that *Hinode* resolves much of the granular structure. It also implies that the MURaM simulations provide a good description of solar granulation.\ We found that the magnitude of the convective velocity fields decreases with increasing height (see Fig.\[fig:24\]), which is consistent with overshooting convection. The photosphere is convectively stable, so that buoyancy acts against the material’s movement, and thus weakens the convective motions [@Stix2004]. We also detected that the effect of the deconvolution process on the convective motions is height-dependent; at the higher layers, the changes through the deconvolution process are less prominent. We believe that the reason behind this behavior is the photospheric temperature stratification, in which, towards higher layers, the intensity contrast in the granulation becomes smaller and eventually reverses, i.e., *reversed granulation* is seen. In this layer, dark blue-shifted granules are surrounded by bright red-shifted intergranular lanes [@Kostik2009]. The transition from the normal granulation to the reversed one occurs at roughly 130 to 140 km according to the numerical simulations of @Wedemeyer2004 and @Cheung2007 and at roughly 140 km in the observational work of @RuizCobo1996. The lowest bisector level investigated in the present work samples an average height (135 km) roughly corresponding to the height of this transition. At around this geometrical height, the first moments before and after the deconvolution process show no significant difference, which is consistent with this temperature distribution because the intensity from granules and intergranular lanes should be almost identical. Summary ======= To correct the instrumental imaging performance, we applied a deconvolution technique based on the Richardson-Lucy algorithm to *Hinode*/SP data. We incorporated a regularization term into the algorithm, which minimizes the risk of enhancing the noise component. Using a synthesized image, we confirmed that the RL algorithm in a regularized form restricted the noise enhancement. The deconvolution of the data led to a major change in the magnitude of the LOS velocity obtained via bisector analysis, in particular at higher intensities in the spectral line.
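The deconvolution summarized above can be sketched compactly. The fragment below implements the classical Richardson-Lucy iteration together with one common way of adding a regularization factor that damps the growth of high-curvature, noise-dominated structure; the synthetic scene, the PSF and all parameter values are placeholders, and the specific regularization functional is an assumption rather than the exact term used for the *Hinode*/SP data.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import gaussian_filter, laplace

rng = np.random.default_rng(0)

# synthetic "granulation-like" scene: smoothed random field, normalized to mean 1
truth = gaussian_filter(rng.normal(size=(128, 128)), 3)
truth = 1.0 + 0.15 * truth / truth.std()

# toy PSF: narrow core plus a weak broad stray-light halo (placeholder, not the Hinode PSF)
yy, xx = np.mgrid[-15:16, -15:16]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2)) + 0.02 * np.exp(-(xx**2 + yy**2) / (2 * 8.0**2))
psf /= psf.sum()

data = fftconvolve(truth, psf, mode="same") + 1e-3 * rng.normal(size=truth.shape)

def rl_deconvolve(data, psf, n_iter=50, lam=0.0, eps=1e-12):
    """Richardson-Lucy iteration; lam > 0 adds a simple curvature-penalizing
    regularization factor (lam = 0 recovers the classical algorithm)."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(data, data.mean())
    for _ in range(n_iter):
        model = fftconvolve(estimate, psf, mode="same")
        correction = fftconvolve(data / np.maximum(model, eps), psf_mirror, mode="same")
        if lam > 0.0:
            # dividing by (1 - lam * Laplacian) suppresses growth of small-scale noise
            correction = correction / np.maximum(1.0 - lam * laplace(estimate), eps)
        estimate = estimate * correction
    return estimate

restored = rl_deconvolve(data, psf, n_iter=50, lam=0.01)

def rms_contrast(img):
    return 100.0 * img.std() / img.mean()

print(f"rms contrast: truth {rms_contrast(truth):.1f}%  "
      f"degraded {rms_contrast(data):.1f}%  restored {rms_contrast(restored):.1f}%")
```

The printed RMS contrasts illustrate the same qualitative behaviour as discussed above: the convolution lowers the contrast and the regularized deconvolution recovers a large part of it.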
The resulting convective velocity amplitudes are almost the same for up- and downflows (ranging from approximately -3.0 km/s to 3.0 km/s at bisector level 0.70). Whereas the original data showed a global upflow (i.e., net LOS velocity averaged over all pixels) of up to -0.2 km/s, after the deconvolution this value dropped to below $\pm$0.05 km/s. This indicates that the downflows were preferentially increased by the deconvolution, while the upflows were only moderately enhanced. The upflow obtained from the original data is consistent with the earlier investigations based on *Hinode* data (@Yu2011b, @Jin2009, and @Oba2017). We found the observed LOS velocities before and after deconvolution matched well with those derived from the numerical simulations, obtained by applying bisector analysis to the synthesized spectral profiles after and before convolution with the *Hinode* PSF, respectively. This agreement with MHD simulations is very heartening and suggests that the best current MHD simulations provide a relatively good description of solar granulation. Some remaining disagreement in the RMS contrast and velocity distributions may be due to a residual defocus in the *Hinode* data. Observations at higher spatial resolution (e.g. with the Sunrise balloon-borne solar observatory; @Solanki2010 [@Solanki2017; @Barthol2011]) and covering more spectral lines will provide a more sensitive test.\ The deconvolution also changed the spatial distribution of the velocity field, such that the area occupied by upflowing material at the granular edges was trimmed off by downflows pertaining to the intergranular lanes. By revealing more downflow lanes, a deconvolution technique such as ours will provide new insights into magneto-convection by allowing to see the immediate surroundings of small flux tubes, preferentially located in intergranular lanes (@Title1987 [@Grossmann-Doerth1988; @Solanki1989]). This should enable the magnetic features to be better probed, e.g., as done by @Dominguez2003 and @Buehler2015 using complementary approaches. Possible future applications of deconvolution include catching the detailed process of convective collapse coinciding with a downflow within the magnetic features [@Parker1978], as observed by @Nagata2008 and @Requerey2014, and described using MHD simulations by @Danilovic2010, or the magneto-acoustic waves generated intermittently in flux tubes (e.g., @Stangalini2014, @Jess2016), which are excited by the surrounding convective motions [@Kato2016]. *Hinode* is a Japanese mission developed and launched by ISAS/JAXA, collaborating with NAOJ as a domestic partner, NASA and STFC (UK) as international partners. Scientific operation of the *Hinode* mission is conducted by the *Hinode* science team organized at ISAS/JAXA. This team mainly consists of scientists from institutes in the partner countries. Support for the post-launch operation is provided by JAXA and NAOJ (Japan), STFC (U.K.), NASA, ESA, and NSC (Norway). We are grateful to the *Hinode* team for performing observations on August 25, 2009, which were well suited to this analysis. The present study was supported by the Advanced Research Course Program of SOKENDAI and by JSPS KAKENHI Grant Number JP16J07106. 
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 695075) and has been supported by the BK21 plus program through the National Research Foundation (NRF) funded by the Ministry of Education of Korea. ![Evolution of the observed continuum intensity. The vertical axis corresponds to the slit-direction, and the horizontal axis represents the scanning-direction. The spatial sampling is roughly 100 km and the numbers above each map designate the time sequence with a cadence of 62 s.[]{data-label="fig:1"}](Figure_1v2.eps){width="12cm"} ![Illustration of the making of a pseudo-instantneous two-dimensional spectral map. Spectra are recorded along the slit at each x-position, starting from the left (at position A) and moving to the right with time, i.e., scanning along the x-direction. After the last slit position (position N) of a scan, the scanning position goes back to the first position (position A) and the process is repeated. To make a pseudo-instantaneous spatio-spectral map at a given time at the end of the first scan (indicated by the larger rectangle), the slit data at each spatial x-position are computed by linearly interpolating between the 1st and 2nd scans, as indicated by the vertical double-headed arrow and the dotted rectangle lying in the larger rectangle. []{data-label="fig:20"}](Figure_2ai2.eps){width="12cm"} ![Emergent continuum intensity map for the simulation box at one snapshot considered in this paper.[]{data-label="fig:21"}](Figure_3.eps){width="13cm"} ![PSF of *Hinode*/SP provided by S. Danilovic and obtained from the *Hinode*/SOT pupil noted in @Suematsu2008. One pixel corresponds to a spatial sampling of 100 km, the same as in the original *Hinode* data used here.[]{data-label="fig:2"}](Figure_4.eps){width="13cm"} ![Total fraction of the PSF contained within a given square area, expressed in pixels, i.e. fraction of the total energy within a given square area on the detector. Diamond symbols correspond to the fraction of the PSF contained within squares with an area of 1, 4, 9, 16, ..., 169 pixels (equivalent to a length of one side of the square of 1, 2, 3, 4, ..., 13 pixels). []{data-label="fig:3"}](Figure_5_v2ai.eps){width="13cm"} ![Comparison of the original image to the one retrieved through deconvolution. *Panel (a)*: *the answer* image, which is the simulated original continuum intensity with a spatial pixelation of $\approx$ 100 km. *Panel (b)*: *the imitated* image, which is produced by convolving the answer-image by the PSF and additionally introducing Gaussian random noise. *Panel (c)*: the intensity map retrieved by the RL algorithm without any regularization at an iteration number of 50. *Panel (d)*: the intensity map retrieved by the extended RL algorithm including the regularization term. *Panel (e)* is a scatter plot between *the answer-image* (*Panels (a)*) and the intensity retrieved through the RL algorithm without regularization (*Panel (c)*). *Panel (f)* depicts a scatter plot between *the answer-image* (*Panel (a)*) and the intensity retrieved through the RL algorithm with regularization (*Panel (d)*).[]{data-label="fig:5"}](Figure_6_v3ai.eps){width="12cm"} ![Standard deviation between *the answer-image* and the deconvolved image as a function of the iteration number. 
The solid line represents the standard deviation achieved by the original RL algorithm, and the dashed one that by the RL algorithm with regularization using $\lambda_{TM}$ = 0.016 (see main text and Fig.\[fig:22\] for the reasons behind this choice). []{data-label="fig:23"}](Figure_7v2.eps){width="13cm"} ![Standard deviation between *the answer-image* and the image retrieved by the RL algorithm with regularization as a function of the regularization parameter, $\lambda_{TM}$. The horizontal dashed line indicates the minimum standard deviation reached by the RL algorithm without regularization.[]{data-label="fig:22"}](Figure8_v3.eps){width="13cm"} ![ Granulation and p-mode oscillation images obtained from original and deconvolved *Hinode*/SP data. *The left column* shows the continuum intensity, *the center column* displays the convective velocity field at a bisector level of 0.70, and *the right column* depicts the 5-minute oscillation at the same level. *The top row* corresponds to the observations before the deconvolution, while *the bottom row* displays the deconvolved ones. Positive velocity signifies a red-shift (downward), a negative one a blue-shift (upward). []{data-label="fig:7"}](Figure_9.eps){width="13cm"} ![Upper panel: Scatter plot of the continuum intensity after deconvolution vs. that before deconvolution. Lower panel: the same for the LOS component of the convective velocity field at a bisector level of 0.70. Solid lines correspond to the best-fit line, and dashed lines indicate a slope of unity. []{data-label="fig:14"}](Figure10_v2.eps){width="13cm"} ![Histogram of the convective velocity field before the deconvolution (top) and after the deconvolution (bottom). The sign of the velocity has the same meaning as in Fig.\[fig:14\]. The colors represent different bisector levels, listed in the upper right corners of the frames. []{data-label="fig:24"}](Figure_11ai2.eps){width="13cm"} ![First moment of the convective velocity fields at the 6 bisector levels. Diamonds indicate the first moment of the velocity distribution before the deconvolution, whereas crosses represent it after the deconvolution.[]{data-label="fig:25"}](Figure_12_ai1.eps){width="13cm"} ![Histogram of the LOS velocity, derived via bisector analysis from spectral lines synthesized in the MHD simulation. Upper panel: after the convolution with the *Hinode* PSF. Bottom panel: before the convolution. Positive values correspond to downflows and negative values mean upflows. The colors depict the different bisector levels identified in the upper right corners of the frames. These levels are the same as in Fig.\[fig:24\]. []{data-label="fig:13"}](figure_13_v2ai2.eps){width="13cm"}
--- abstract: | We prove that a circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group admits a self-map of absolute degree greater than one if and only if it is the trivial bundle. This generalizes in every dimension the case of circle bundles over hyperbolic surfaces, for which the result was known by the work of Brooks and Goldman on the Seifert volume. As a consequence, we verify the following strong version of a problem of Hopf for the above class of manifolds: Every self-map of non-zero degree of a circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group is either homotopic to a homeomorphism or homotopic to a non-trivial covering and the bundle is trivial. As another application, we derive the first examples of non-vanishing numerical invariants that are monotone with respect to the mapping degree on non-trivial circle bundles over aspherical manifolds with hyperbolic fundamental groups of any dimension. Moreover, we obtain the first examples of manifolds (given by the aforementioned bundles with torsion Euler class) which do not admit self-maps of absolute degree greater than one, but admit maps of infinitely many different degrees from other manifolds. address: 'Section de Mathématiques, Université de Genève, 2-4 rue du Lièvre, Case postale 64, 1211 Genève 4, Switzerland' author: - Christoforos Neofytidis title: | On a problem of Hopf for circle bundles over\ aspherical manifolds with hyperbolic fundamental groups --- Introduction ============ A long-standing question of Hopf (cf. Problem 5.26 in Kirby’s list [@Kirby]) asks the following: [(Hopf).]{} \[Hopfprob\] Given a closed oriented manifold $M$, is every self-map $f\colon M\longrightarrow M$ of degree $\pm 1$ a homotopy equivalence? A complete solution to Hopf’s problem seems to be currently out of reach. Nevertheless, some affirmative answers are known for certain classes of manifolds and dimensions, most notably for simply connected manifolds (by Whitehead’s theorem), for manifolds of dimension at most four with Hopfian fundamental groups [@Haus] (recall that a group is called Hopfian if every surjective endomorphism is an isomorphism), and for aspherical manifolds with hyperbolic fundamental groups (e.g. negatively curved manifolds). The latter groups are Hopfian [@Ma; @Sela]; thus, the asphericity assumption, together with the simple fact that any map of degree $\pm 1$ is $\pi_1$-surjective, answers Problem \[Hopfprob\] in the affirmative for closed aspherical manifolds with hyperbolic fundamental groups. In fact, the assumption about degree $\pm 1$ is unnecessary in verifying Problem \[Hopfprob\] for aspherical manifolds with hyperbolic fundamental groups, because those manifolds cannot admit self-maps of degree other than $\pm1$ or zero [@BHM; @Sela1; @Sela; @Min; @Min1]; cf. Section \[ss:hyperbolicmappings\]. Hence, every self-map of non-zero degree of a closed oriented aspherical manifold with hyperbolic fundamental group is a homotopy equivalence. Of course, the latter statement does not hold for all (aspherical) manifolds, because, for example, the circle admits self-maps of any degree. Nevertheless, every self-map of the circle of degree greater than one is homotopic to a (non-trivial) covering. The same is true for every self-map of a nilpotent manifold [@Bel] and for certain solvable mapping tori of homeomorphisms of the $n$-dimensional torus [@Wangorder; @Neoorder].
In addition, every non-zero degree self-map of a $3$-manifold $M$ is either a homotopy equivalence or homotopic to a covering map, unless the fundamental group of each prime summand of $M$ is finite or cyclic [@Wang]. The above results suggest the following question for aspherical manifolds: \[Hopfstrong\] Is every non-zero degree self-map of a closed oriented aspherical manifold either a homotopy equivalence or homotopic to a non-trivial covering? In dimension three, hyperbolic manifolds and manifolds containing a hyperbolic piece in their JSJ decomposition do not admit any self-map of degree greater than one[^1] due to the positivity of the simplicial volume [@Gromov]. (Recall that the simplicial volume $\|\cdot\|$ satisfies $\|M'\|\geq|\deg(f)|\cdot\|M\|$ for every map $f\colon M'\longrightarrow M$.) The other classes of aspherical $3$-manifolds which do not admit self-maps of degree greater than one are $\widetilde{SL_2}$-manifolds [@BG] and graph manifolds [@DW2], since those manifolds have another (virtually) positive invariant that is monotone with respect to mapping degrees, namely the Seifert volume (introduced in [@BG] by Brooks and Goldman). In particular, non-trivial circle bundles over closed hyperbolic surfaces (which are modeled on the $\widetilde{SL_2}$ geometry) do not admit self-maps of degree greater than one. At the other end, it is clear that trivial circle bundles over (hyperbolic) surfaces, i.e. products $S^1\times \Sigma$, admit self-maps of any degree (and those maps are either homotopy equivalences or homotopic to non-trivial coverings [@Wang]). Recall that a circle bundle $M\stackrel{\pi}\longrightarrow N$ is classified by its Euler class $e\in H^2(N;{\mathbb{Z}})$; in particular, it is trivial if and only if $e=0$. The main result of this paper is that the non-existence of self-maps of degree greater than one on non-trivial circle bundles over closed oriented aspherical $2$-manifolds with hyperbolic fundamental groups (i.e. over closed oriented hyperbolic surfaces) can be extended to every dimension: \[t:main\] Every self-map of a non-trivial circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group has degree at most one. It is a long-standing question (motivated by Problem \[Hopfprob\]) whether the Hopf property characterizes aspherical manifolds. More concretely, it is conjectured that the fundamental group of every aspherical manifold is Hopfian (see [@Neu] for a discussion). If this conjecture is true, then every self-map of an aspherical manifold of degree $\pm 1$ is a homotopy equivalence. In the course of the proof of Theorem \[t:main\], we will see that every self-map of a circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group is homotopic to a fiberwise covering map, and this alone shows that Problems \[Hopfprob\] and \[Hopfstrong\] have indeed affirmative answers for self-maps of those manifolds. More interestingly, Theorem \[t:main\] implies the following complete characterization with respect to Problem \[Hopfstrong\]: \[c:Hopf\] Every self-map of non-zero degree of a circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group either is a homotopy equivalence or is homotopic to a non-trivial covering and the bundle is trivial. \[r:homeo\] An even stronger conclusion holds for the homotopy equivalences of Corollary \[c:Hopf\]. 
Recall that the Borel conjecture asserts that any homotopy equivalence between two closed aspherical manifolds is homotopic to a homeomorphism. (Note that the Borel conjecture does not hold in the smooth category or for non-aspherical manifolds; see for example the related references in the survey paper [@Lue1] and the discussion in [@Sul].) A complete affirmative answer to the Borel conjecture is known in dimensions less than four (see again [@Lue1] for a survey). Moreover, by [@BL; @BLR], the fundamental group of a circle bundle $M$ over a closed aspherical manifold $N$ with $\pi_1(N)$ hyperbolic and $\dim(N)\geq 4$ satisfies the Farrell-Jones conjecture, and therefore the Borel conjecture, and so every homotopy equivalence of such a circle bundle is in fact homotopic to a homeomorphism. (See also [@BHM] for self-maps of the base $N$.) Beyond the Seifert volume for non-trivial circle bundles over hyperbolic surfaces [@BG], no other non-vanishing monotone invariant respecting the degree seemed to be known on higher dimensional circle bundles over aspherical manifolds with hyperbolic fundamental groups (note that the simplicial volume vanishes as well [@Gromov]). A consequence of Theorem \[t:main\] is that such a monotone invariant exists and it is given by the domination semi-norm. Recall that the domination semi-norm is defined by $$\nu_M(M'):=\sup\{|\deg(f)|\ | \ f\colon M'\longrightarrow M\},$$ and it was introduced in [@CL]. Theorem \[t:main\] implies the following: If $M$ is a non-trivial circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group, then $\nu_M(M)=1$. However, the domination semi-norm is not finite in general, because $M$ might admit maps of infinitely many different degrees from another manifold $M'$. Indeed, this is the case for non-trivial circle bundles over closed oriented aspherical manifolds with hyperbolic fundamental groups with torsion Euler class. Those manifolds do not admit self-maps of degree greater than one by Theorem \[t:main\], but they are finitely covered by trivial circle bundles (see [@NeoIIPP Section 4.2] and the related references there), which admit self-maps of any degree. It seems that those non-trivial circle bundles are the first examples of manifolds that do not admit self-maps of degree greater than one, but admit maps of infinitely many different degrees from other manifolds. Nevertheless, Theorem \[t:main\] and the non-vanishing of the Seifert volume for non-trivial circle bundles over hyperbolic surfaces suggest the following: \[conj\] In every dimension $n$, there is a numerical homotopy invariant $I_n$ of $n$-manifolds satisfying the inequality $I_n(M)\geq |\deg(f)|\cdot I_n(N)$ for each map $f\colon M\longrightarrow N$, which is positive and finite on every non-virtually-trivial circle bundle over a closed aspherical manifold with hyperbolic fundamental group. Outline of the proof of the main theorem {#outline-of-the-proof-of-the-main-theorem .unnumbered} ---------------------------------------- The proof of Theorem \[t:main\] amounts to showing that if an oriented circle bundle $M$ over a closed oriented aspherical manifold $N$ with $\pi_1(N)$ hyperbolic admits a self-map $f$ of degree greater than one, then this bundle must be trivial. 
We will show that such $f$ is in fact homotopic to a fiberwise non-trivial self-covering of $M$, and thus the powers of $f$ induce a purely decreasing sequence $$\label{eq.sequenceintro} \pi_1(M)\supsetneq f_*(\pi_1(M))\supsetneq\cdots\supsetneq f^m_*(\pi_1(M))\supsetneq f^{m+1}_*(\pi_1(M))\supsetneq\cdots.$$ Using this sequence, we will be able to obtain an infinite index subgroup of $\pi_1(M)$ given by $$G:=\mathop{\cap}_{m}f^m_*(\pi_1(M)).$$ The last part of the proof uses the concept of groups infinite index presentable by products (IIPP) and characterizations of groups fulfilling this condition [@NeoIIPP]. More precisely, we will see that the multiplication map $$\varphi\colon C(\pi_1(M))\times G\longrightarrow \pi_1(M)$$ defines a surjective presentation by products for $\pi_1(M)$, where both $G$ and the center $C(\pi_1(M))$ have infinite index in $\pi_1(M)$. This will lead us to the conclusion that $\pi_1(M)$ is in fact isomorphic to a product and $M$ is the trivial circle bundle. In the proof of Theorem \[t:main\] we will use the fact that the base is an aspherical manifold which does not admit self-maps of degree greater than one, and its fundamental group is Hopfian with trivial center. Thus we can extend Theorem \[t:main\] (and its consequences) to any circle bundle over a closed oriented manifold $N$ that fulfills the aforementioned properties. For instance, if $N$ is an irreducible locally symmetric space of non-compact type, then it is aspherical, it has positive simplicial volume [@LS; @Bu] (and thus does not admit self-maps of degree greater than one), and $\pi_1(N)$ is Hopfian [@Ma] without center [@Ra]. A decreasing sequence (\[eq.sequenceintro\]) exists whenever an aspherical manifold $M$ admits a self-map $f$ of degree greater than one and $\pi_1(\overline{M})$ is Hopfian for every finite cover $\overline{M}$ of $M$ (which is conjectured to be true as mentioned above). This gives further evidence towards an affirmative answer to Problem \[Hopfstrong\], since the existence of such a sequence is a necessary condition for $f$ to be homotopic to a non-trivial covering. Now, every finite index subgroup of the fundamental group of a circle bundle over a closed aspherical manifold with hyperbolic fundamental group is indeed Hopfian, and therefore this gives us an alternative way of obtaining sequence (\[eq.sequenceintro\]). We will discuss the Hopf property for those circle bundles and Problem \[Hopfstrong\] more generally in Section \[s:Hopfdiscussion\]. Acknowledgments {#acknowledgments .unnumbered} --------------- I would like to thank Michelle Bucher, Pierre de la Harpe, Jean-Claude Hausmann, Wolfgang Lück, Jason Manning, Dennis Sullivan and Shmuel Weinberger for useful comments and discussions. I am especially thankful to Wolfgang Lück for suggesting to extend the results of a previous version of this paper to circle bundles over aspherical manifolds with hyperbolic fundamental groups. The support of the Swiss National Science Foundation is gratefully acknowledged. Infinite sequences of coverings =============================== In this section, we reduce our discussion to self-coverings of a circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group and thus obtain a purely decreasing sequence of finite index subgroups of the fundamental group of this bundle. 
Self-maps of aspherical manifolds with hyperbolic fundamental groups {#ss:hyperbolicmappings} -------------------------------------------------------------------- First, we observe that the hyperbolicity of the fundamental group of the base implies strong restrictions on the possible degrees of its self-maps: [([@BHM]).]{}\[p:mappinghyperbolic\] Every self-map of non-zero degree of a closed aspherical manifold with non-elementary hyperbolic fundamental group is a homotopy equivalence. There are two ways to see this. The first one (given in [@BHM]) is purely algebraic, using the co-Hopf property of torsion-free, non-elementary hyperbolic groups [@Sela1; @Sela]. The other way uses bounded cohomology and the simplicial volume; cf. [@Min; @Min1] and [@Gromov1]. Let $N$ be a closed oriented aspherical manifold whose fundamental group is non-elementary hyperbolic and $f\colon N\longrightarrow N$ be a map of non-zero degree. By [@Sela1; @Sela] (see also [@BHM Lemma 4.2]), $\pi_1(N)$ is co-Hopfian (i.e. every injective endomorphism is an isomorphism), and so by the asphericity of $N$ it suffices to show that $f_*$ is injective. Suppose the contrary, and let $x\in \ker(f_*)$ be a non-trivial element. Since $f_*(\pi_1(N))$ has finite index in $\pi_1(N)$, there is some $n\in{\mathbb{N}}$ such that $x^n\in f_*(\pi_1(N))$, i.e. there is some $y\in\pi_1(N)$ such that $f_*(y)=x^n$. Clearly, $x^n\neq1$, because $\pi_1(N)$ is torsion-free, and so $y\notin\ker(f_*)$. Now, $f_*^2(y)=f_*(x^n)=1$, which means that $y\in\ker(f_*^2)$. By iterating this process, we obtain a purely increasing sequence $$\ker(f_*)\subsetneq \ker(f_*^2)\subsetneq\cdots\subsetneq \ker(f_*^{m}) \subsetneq \ker(f_*^{{m+1}})\subsetneq\cdots.$$ But the latter sequence contradicts Sela’s result [@Sela1; @Sela] that for every endomorphism $\psi$ of a torsion-free hyperbolic group, there exists $m_0\in{\mathbb{N}}$ such that $\ker(\psi^k)=\ker(\psi^{m_0})$ for all $k\geq m_0$. We deduce that $f_*$ is injective, and therefore an isomorphism as required. Alternatively to the above argument, since $\pi_1(N)$ is non-elementary hyperbolic, the comparison map from bounded cohomology to ordinary cohomology $$\psi_{\pi_1(N)}\colon H_b^n(\pi_1(N);{\mathbb{R}})\longrightarrow H^n(\pi_1(N);{\mathbb{R}})$$ is surjective; cf. [@Min; @Min1; @Gromov1]. Thus, by the duality of the simplicial $\ell^1$-semi-norm and the bounded cohomology $\ell^\infty$-semi-norm (cf. [@Gromov]), we deduce that $N$ has positive simplicial volume. This implies that every non-zero degree map $f\colon N\longrightarrow N$ has degree $\pm1$. In particular, $f$ is $\pi_1$-surjective, and thus $f_*$ is an isomorphism, because $\pi_1(N)$ is Hopfian [@Ma; @Sela]. Fundamental group and finite covers {#ss:cover} ----------------------------------- Let $M\stackrel{\pi}\longrightarrow N$ be an oriented circle bundle, where $N$ is a closed aspherical manifold with $\pi_1(N)$ hyperbolic. We may assume that $\dim (N)\geq2$, otherwise we deal with the well-known case of $T^2$. The fundamental group of $M$ fits into the central extension (cf. [@Bo; @CR]) $$\label{eq.fundamentalgroup} 1\longrightarrow C(\pi_1(M))\longrightarrow \pi_1(M)\stackrel{\pi_*}\longrightarrow\pi_1(N) \longrightarrow 1,$$ where $C(\pi_1(M))={\mathbb{Z}}$ (note that $C(\pi_1(N))=1$, because $\pi_1(N)$ is non-cyclic hyperbolic). It is easy to observe that every finite covering of $M$ is of the same type. 
More precisely: [([@NeoIIPP Lemma 4.6]).]{}\[l:cover\] Every finite cover $\overline{M}\stackrel{p}\longrightarrow M$ is a circle bundle $\overline{M}\stackrel{\overline{\pi}}\longrightarrow \overline{N}$, where $\overline{N}\stackrel{\overline{p}}\longrightarrow N$ is a finite covering. In particular, $p$ is a bundle map covering $\overline{p}$ and the (infinite cyclic) center of $\pi_1(\overline{M})$ is mapped under $p_*$ into the center of $\pi_1(M)$. Reduction to covering maps {#covers} -------------------------- Now, let $f\colon M\longrightarrow M$ be a map of non-zero degree. We observe that $f$ is homotopic to a covering map: \[p:fibercover\] $f$ is homotopic to a fiberwise covering where the induced map $f_{S^1}\colon S^1\longrightarrow S^1$ has degree $\pm\deg(f)$. Consider the composite map $\pi\circ f\colon M\longrightarrow N$ and the induced homomorphism $$(\pi\circ f)_*\colon \pi_1(M)\longrightarrow\pi_1(N).$$ Since the center of $\pi_1(N)$ is trivial, we derive, after lifting $f$ to a $\pi_1$-surjective map $\overline{f}\colon M\longrightarrow \overline{M}$ (where $\overline{M}\stackrel{p}\longrightarrow M$ corresponds to $f_*(\pi_1(M))$), that the center of $\pi_1(M)$ is mapped under $(\pi\circ f)_*$ to the trivial element of $\pi_1(N)$; cf. Section \[ss:cover\]. Thus $f$ factors up to homotopy through a self-map $g\colon N\longrightarrow N$, i.e. $\pi\circ f =g\circ\pi$ (that is, $f$ is a bundle map covering $g$). Clearly, $\deg(g)\neq 0$, otherwise $f$ would factor through the degree zero map from the pull-back bundle of $g$ along $\pi$ to $M$, which is impossible because $\deg(f)\neq 0$. Now, Proposition \[p:mappinghyperbolic\] implies that $g$ is a homotopy equivalence of $N$ (in particular $\deg(g)=\pm 1$). Hence, since $f$ is a bundle map, we conclude that the induced map $f_{S^1}$ on the $S^1$ fiber is of degree $$\deg(f_{S^1})=\pm\deg(f).$$ In particular, $f$ is homotopic to a fiberwise covering. Since every map of degree $\pm1$ is $\pi_1$-surjective, the above proposition answers in the affirmative Problems \[Hopfprob\] and \[Hopfstrong\] for $M$: \[c:Hopfprob\] Every self-map of non-zero degree of a circle bundle over a closed aspherical manifold with hyperbolic fundamental group is either a homotopy equivalence or homotopic to a non-trivial covering. As pointed out in Remark \[r:homeo\], every homotopy equivalence of a circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group is homotopic to a homeomorphism; in most cases this follows from a result of Bartels and Lück [@BL] on the Borel conjecture. Consider now the iterates $$f^m\colon M\longrightarrow M, \ m\geq1.$$ By Proposition \[p:fibercover\], each $f^m$ is homotopic to a fiberwise covering of degree $$(\deg(f))^m=[\pi_1(M):f^m_*(\pi_1(M))],$$ i.e. for each $m$, the homomorphism $$f^m_*\colon\pi_1(M)\longrightarrow\pi_1(M)$$ maps every element $x\in C(\pi_1(M))={\mathbb{Z}}$ to $x^{\pm\deg(f^m)}\in C(\pi_1(M))$ and induces an isomorphism on $\pi_1(N)=\pi_*(\pi_1(M))$. 
In particular, when $\deg(f)>1$, we obtain the following: \[c:sequence\] If $f\colon M\longrightarrow M$ has degree greater than one, then there is a purely decreasing infinite sequence of subgroups of $\pi_1(M)$ given by $$\label{eq.sequence} \pi_1(M)\supsetneq f_*(\pi_1(M))\supsetneq\cdots\supsetneq f^m_*(\pi_1(M))\supsetneq f^{m+1}_*(\pi_1(M))\supsetneq\cdots.$$ Distinguishing between trivial and non-trivial bundle ===================================================== Now we will show that the existence of sequence (\[eq.sequence\]) implies that $\pi_1(M)$ is isomorphic to a product and $M$ is the trivial bundle. To this end, we will construct a surjective presentation of $\pi_1(M)$ by a product of two infinite index subgroups. Groups infinite index presentable by products --------------------------------------------- An infinite group $\Gamma$ is said to be [*infinite index presentable by products*]{} or [*IIPP*]{} if there exist two infinite subgroups $\Gamma_1,\Gamma_2\subset\Gamma$ that commute elementwise, such that $[\Gamma:\Gamma_i]=\infty$ for both $\Gamma_i$ and the multiplication homomorphism $$\label{eq.IIPP} \Gamma_1\times\Gamma_2\longrightarrow\Gamma$$ surjects onto a finite index subgroup of $\Gamma$. The notion of groups IIPP was introduced in [@NeoIIPP] in the study of maps of non-zero degree from direct products to aspherical manifolds with non-trivial center. The concept of groups presentable by products (i.e. without the constraint on the index) was introduced in [@KL]. It is clear that when $\Gamma$ is a [*reducible*]{} group, that is, virtually a product of two infinite groups, then $\Gamma$ is IIPP. Thus, a natural problem is to determine when these two properties are equivalent. In general, they are not equivalent as shown in [@NeoIIPP Section 8], however their equivalence is achieved under certain assumptions: [([@NeoIIPP Theorem D]).]{}\[t:IIPPequiv\] Suppose $\Gamma$ fits into a central extension $$1\longrightarrow C(\Gamma)\longrightarrow\Gamma\longrightarrow\Gamma/C(\Gamma)\longrightarrow1,$$ where $\Gamma/C(\Gamma)$ is not presentable by products. Then $\Gamma$ is IIPP if and only if it is reducible. The following theorem characterizes aspherical circle bundles, when the fundamental group of the base is not presentable by products: [([@NeoIIPP Theorem C]).]{}\[t:char\] Let $M \stackrel{\pi}\longrightarrow N$ be a circle bundle over a closed aspherical manifold $N$ whose fundamental group $\pi_1(N)$ is not presentable by products. Then the following are equivalent: - $M$ admits a map of non-zero degree from a direct product; - $M$ is finitely covered by a product $S^1 \times \overline{N}$, for some finite cover $\overline{N} \longrightarrow N$; - $\pi_1(M)$ is reducible; - $\pi_1(M)$ is IIPP. Since non-elementary hyperbolic groups are not presentable by products [@KL], each circle bundle $M$ over a closed aspherical manifold $N$ with $\pi_1(N)$ hyperbolic fulfills the assumptions of Theorems \[t:IIPPequiv\] and \[t:char\]. Using this, we will be able to deduce that $M$ is virtually trivial. However, our presentation by products for $\pi_1(M)$ will be already surjective and this, together with the fact that $C(\pi_1(M))={\mathbb{Z}}$, will imply that $M$ is actually the trivial circle bundle (see also Remark \[r:notPP\]). 
An infinite index presentation by products ------------------------------------------ Under the assumption of the existence of $f^m\colon M\longrightarrow M$ with $\deg(f^m)=(\deg(f))^m>1$ for all $m\geq 1$, and thus of sequence (\[eq.sequence\]), we consider the subgroup of $\pi_1(M)$ defined by $$G:=\mathop{\cap}_{m}f^m_*(\pi_1(M)).$$ First, we observe that $G$ has infinite index in $\pi_1(M)$. Let us suppose the contrary, i.e. that $[\pi_1(M):G]<\infty$. Then by $$[\pi_1(M):f^m_*(\pi_1(M))]\leq[\pi_1(M):G]$$ for all $m$, and the fact that $\pi_1(M)$ contains only finitely many subgroups of a fixed index, we deduce that there exists $n$ such that $f^n_*(\pi_1(M))=f^k_*(\pi_1(M))$ for all $k\geq n$. This is however impossible by Corollary \[c:sequence\]. Now, we will show that $\pi_1(M)$ admits a presentation by the product $C(\pi_1(M))\times G$. Let $$\label{eq.presentation} \varphi\colon C(\pi_1(M))\times G\longrightarrow \pi_1(M)$$ be the multiplication map. Since each element of $C(\pi_1(M))$ commutes with every element of $G$, we deduce that $\varphi$ is in fact a well-defined homomorphism. We claim that $\varphi$ is surjective. Let $x\in\pi_1(M)$. If $x\in C(\pi_1(M))$, then $\varphi(x,1)=x$. If $x\notin C(\pi_1(M))$, then $\pi_*(x)$ is not trivial in $\pi_1(N)$. For simplicity, we can moreover assume that $x$ does not contain any power of the generator $z$ of the infinite cyclic $C(\pi_1(M))$. In Section \[covers\], we have seen that for every $m$, the composite $f^m$ induces an isomorphism on $\pi_1(N)$, and in fact it covers a homotopy equivalence of $N$. Thus, for each $m$, there is a $y_m\in \pi_1(M)$ such that $x=z^{l_m}f^m_*(y_m)$. Since $f^m_*(z)=z^{\pm\deg(f^m)}$, we have that $\deg(f^m)$ must divide $l_m$, and so $$x=f^m_*(z^{\pm\frac{l_m}{\deg(f^m)}}y_m)\in G.$$ Hence $\varphi(1,x)=x$ and $\varphi$ is surjective. Since moreover $C(\pi_1(M))$ and $G$ have infinite index in $\pi_1(M)$, we conclude that the presentation given in (\[eq.presentation\]) is an infinite index presentation by products. Theorem \[t:char\] already implies that $\pi_1(M)$ is reducible and $M$ is virtually the trivial circle bundle. Furthermore, the kernel of $\varphi$ must be trivial, because it is isomorphic to $C(\pi_1(M))\cap G$, which is torsion-free, central in $C(\pi_1(M))={\mathbb{Z}}$, and satisfies $$[C(\pi_1(M)):C(\pi_1(M))\cap G]=[\pi_1(M):G]=\infty.$$ We deduce that $$\pi_1(M)\cong C(\pi_1(M))\times G.$$ In particular, $G$ is isomorphic to $\pi_1(N)$ and $M$ is the trivial circle bundle. This finishes the proof of Theorem \[t:main\]. \[r:notPP\] Note that the fact that $\pi_1(N)$ is not presentable by products was not necessary for our proof. The infinite index presentation by products of $\pi_1(M)$ given in (\[eq.presentation\]) is already surjective. Using this, together with the fact that the center of $\pi_1(M)$ is infinite cyclic, we obtained the desired product decomposition of $\pi_1(M)$. The proof of Corollary \[c:Hopf\] is now straightforward: Let $M$ be a circle bundle over a closed oriented aspherical manifold $N$ with $\pi_1(N)$ hyperbolic and $f\colon M\longrightarrow M$ be a map of non-zero degree. As we have seen in Section \[covers\], if $\deg(f)=\pm1$, then $f$ is a homotopy equivalence and, if $\deg(f)\neq\pm1$, then $f$ is homotopic to a non-trivial covering. In the latter case, Theorem \[t:main\] implies moreover that $M\simeq S^1\times N$. 
The Hopf property and the strong version of Hopf’s problem {#s:Hopfdiscussion} ========================================================== In this section we discuss the Hopf property for circle bundles over aspherical manifolds with hyperbolic fundamental groups and Problem \[Hopfstrong\] more generally. The Hopf property ----------------- First, we show that the fundamental groups of circle bundles over aspherical manifolds with hyperbolic fundamental groups are Hopfian: \[p:Hopfproperty1\] If $M$ is a circle bundle over a closed oriented aspherical manifold with hyperbolic fundamental group, then every finite index subgroup of $\pi_1(M)$ is Hopfian. Let $M\stackrel{\pi}\longrightarrow N$ be a circle bundle, where $N$ is a closed oriented aspherical manifold with $\pi_1(N)$ hyperbolic. (As before, we can assume that $\pi_1(N)$ is not cyclic.) Since every finite covering of $M$ is of the same type (cf. Lemma \[l:cover\]), it suffices to show that $\pi_1(M)$ is Hopfian. Let $\phi\colon\pi_1(M)\longrightarrow\pi_1(M)$ be a surjective homomorphism. Then $\phi(C(\pi_1(M)))\subseteq C(\pi_1(M))$, and so the composite homomorphism $\pi_*\circ\phi\colon \pi_1(M)\longrightarrow\pi_1(N)$ maps $C(\pi_1(M))$ to the trivial element of $\pi_1(N)$. In particular, there exists a surjective homomorphism $\overline{\phi}\colon\pi_1(N)\longrightarrow \pi_1(N)$ such that $\overline{\phi}\circ\pi_*=\pi_*\circ\phi$. Now $\overline{\phi}$ is injective as well (and so an isomorphism), because $\pi_1(N)$ is Hopfian, being hyperbolic and torsion-free [@Ma; @Sela]. $$\xymatrix{ 1 \ar[r]^{} & C(\pi_1(M)) \ar[d]^{\phi\vert_{C(\pi_1(M))}} \ar[r]^{} & \pi_1(M) \ar[d]^{\phi} \ar[r]^{\pi_*} & \pi_1(N)\ar[d]^{\overline{\phi}} \ar[r]^{} & 1 \\ 1 \ar[r]^{} &C(\pi_1(M)) \ar[r]^{} & \pi_1(M)\ar[r]^{\pi_*} & \pi_1(N) \ar[r]^{} & 1\\ }$$ Then, using again the surjectivity of $\phi$, we deduce that $$\phi\vert_{C(\pi_1(M))}\colon C(\pi_1(M))\longrightarrow C(\pi_1(M))$$ is also surjective. Since $C(\pi_1(M))={\mathbb{Z}}$ is Hopfian, we conclude that $\phi\vert_{C(\pi_1(M))}$ is in fact an isomorphism. Now, the five-lemma for the commutative diagram in Figure \[f:Hopf\] implies that $\phi$ is an isomorphism as well. In this way, we obtain also an alternative proof of the fact that every self-map of $M$ of degree $\pm 1$ is a homotopy equivalence. Of course, the above group theoretic argument uses the same line of argument as the proof of Proposition \[p:fibercover\], with the difference that it starts with a stronger assumption, namely that $\phi$ is surjective. Infinite decreasing sequences and Problem \[Hopfstrong\] -------------------------------------------------------- The fact that every finite index subgroup of the fundamental group of a circle bundle over an aspherical manifold $N$ with hyperbolic $\pi_1(N)$ has the Hopf property is actually conjectured to hold for all aspherical manifolds. Besides immediately verifying Problem \[Hopfprob\] for every aspherical manifold, this would also give evidence for an affirmative answer to Problem \[Hopfstrong\]. Namely, let $f\colon M\longrightarrow M$ be a map of degree $\deg(f)>1$ and suppose that every finite index subgroup of $\pi_1(M)$ is Hopfian. 
Then, as in the case of non-trivial coverings, there is a purely decreasing infinite sequence $$\pi_1(M)\supsetneq f_*(\pi_1(M))\supsetneq\cdots\supsetneq f^m_*(\pi_1(M))\supsetneq f^{m+1}_*(\pi_1(M))\supsetneq\cdots.$$ The proof of this claim can be found along the lines of the proof of Theorem 14.40 of [@Lue], but let us give the details for completeness: Suppose the contrary, i.e. that there is some $n$ such that $f^n_*(\pi_1(M))=f^k_*(\pi_1(M))$ for all $k\geq n$. Let $M_n\stackrel{p_n}\longrightarrow M$ be the finite covering of $M$ corresponding to $f^n_*(\pi_1(M))$ and denote by $\overline{f^n}\colon M\longrightarrow M_n$ the lift of $f^n$, which induces a surjection on $\pi_1$. Since $f^n_*(\pi_1(M))=f^{2n}_*(\pi_1(M))$, we deduce that the composite map $\overline{f^n}\circ p_n\colon M_n\longrightarrow M_n$ induces a surjection $$(\overline{f^n}\circ p_n)_*\colon \pi_1(M_n)\longrightarrow \pi_1(M_n).$$ Since $\pi_1(M_n)$ is Hopfian, we deduce that $(\overline{f^n}\circ p_n)_*$ is an isomorphism, and so a homotopy equivalence, because $M_n$ is aspherical. In particular, we obtain $$\deg(\overline{f^n}),\deg(p_n)\in\{\pm 1\},$$ which leads to the absurd conclusion that $\deg(f)=\pm 1$. [10]{} A. Bartels and W. Lück, [*The Borel conjecture for hyperbolic and CAT(0)-groups*]{}, Ann. of Math. (2) [**175**]{} no. 2 (2012) 631–689. A. Bartels, W. Lück and H. Reich, [*The K-theoretic Farrell-Jones conjecture for hyperbolic groups*]{}, Invent. Math., [**172**]{} (1) (2008), 29–70. I. Belegradek, [*On co-Hopfian nilpotent groups*]{}, Bull. London Math. Soc. [**35**]{} (2003), 805–811. A. Borel, [*On periodic maps of certain $K(\pi,1)$*]{}, Œuvres: Collected Papers III, 1969–1982, Springer, 1983. M. Bridson, A. Hinkkanen and G. Martin, [*Quasiregular self-mappings of manifolds and word hyperbolic groups*]{}, Compos. Math. [**143**]{} no. 6 (2007), 1613–1622. R. Brooks and W. Goldman, [*Volumes in Seifert space*]{}, Duke Math. J. [**51**]{} no. 3 (1984), 529–545. M. Bucher, [*Simplicial volume of locally symmetric spaces covered by ${\mathrm{SL}}_{3}{\mathbb{R}}/\mathrm{SO}(3)$*]{}, Geom. Dedicata [**125**]{} (2007), 203–224. P. Conner and F. Raymond, [*Actions of compact Lie groups on aspherical manifolds*]{}, Topology of Manifolds, University of Georgia, Athens, Georgia 1969, 227–264, Markham, Chicago 1970. D. Crowley and C. Löh, [*Functorial seminorms on singular homology and (in)flexible manifolds*]{}, Algebr. Geom. Topol. [**15**]{} no. 3 (2015), 1453–1499. P. Derbez and S. Wang, [*Graph manifolds have virtually positive Seifert volume*]{}, J. Lond. Math. Soc. [**86**]{} (2012), 17–35. M. Gromov, [*Volume and bounded cohomology*]{}, Inst. Hautes Études Sci. Publ. Math. [**56**]{} (1982), 5–99. M. Gromov, [*Hyperbolic groups*]{}, in “Essays in Group Theory”, Math. Sci. Res. Inst. Publ., Springer, New York-Berlin [**8**]{} (1987), 75–263. J.-C. Hausmann, [*Geometric Hopfian and non-Hopfian situations*]{}, Lecture notes in Pure and Applied Math. [**105**]{} (1987), 157–165. R. Kirby, [*Problems in low-dimensional topology*]{}, Berkeley 1995. D. Kotschick and C. Löh, [*Fundamental classes not representable by products*]{}, J. Lond. Math. Soc., [**79**]{} (2009), 545–561. J.-F. Lafont and B. Schmidt, [*Simplicial volume of closed locally symmetric spaces of non-compact type*]{}, Acta Math., [**197**]{} (2006), 129–143. W. Lück, [*$L^2$-invariants: theory and applications to geometry and K-theory*]{}, Springer-Verlag, Berlin, 2002. W. 
Lück, [*Survey on aspherical manifolds*]{}, European Congress of Mathematics, 53–82, Eur. Math. Soc., Zürich 2010. A. I. Mal’cev, [*On the faithful representation of infinite groups by matrices*]{}, Mat. Sb. [**8**]{} (50) (1940), 405–422. I. Mineyev, [*Straightening and bounded cohomology of hyperbolic groups*]{}, Geom. Funct. Anal. [**11**]{} (2001), 807–839. I. Mineyev, [*Bounded cohomology characterizes hyperbolic groups*]{}, Q. J. Math. [**53**]{} no. 1 (2002), 5–73. C. Neofytidis, [*Fundamental groups of aspherical manifolds and maps of non-zero degree*]{}, Groups Geom. Dyn. [**12**]{} (2018), 637–677. C. Neofytidis, [*Ordering Thurston’s geometries by maps of non-zero degree*]{}, J. Topol. Anal. (to appear). B. H. Neumann, [*On a problem of Hopf*]{}, J. Lond. Math. Soc. [**28**]{} (1953), 351–353. M. S. Raghunathan, [*Discrete subgroups of Lie groups*]{}, Ergebnisse der Mathematik und ihrer Grenzgebiete [**68**]{}, Springer, New York-Heidelberg, 1972. Z. Sela, [*Structure and rigidity in (Gromov) hyperbolic groups and discrete groups in rank 1 Lie groups. II.*]{}, Geom. Funct. Anal. [**7**]{} (1997), 561–593. Z. Sela, [*Endomorphisms of hyperbolic groups. I: The Hopf property*]{}, Topology [**38**]{} no. 2 (1999), 301–321. D. Sullivan, [*Infinitesimal computations in topology*]{}, Inst. Hautes Études Sci. Publ. Math. [**47**]{} (1977), 269–331. S. Wang, [*The existence of maps of nonzero degree between aspherical 3-manifolds*]{}, Math. Z. [**208**]{} no. 1 (1991), 147–160. S. Wang, [*The $\pi_1$-injectivity of self-maps of nonzero degree on 3-manifolds*]{}, Math. Ann. [**297**]{} (1993), 171–189. [^1]: Equivalently, of absolute degree greater than one, by taking $f^2$ whenever $\deg(f)<-1$.
--- abstract: | Whenever eye movements are measured, a central part of the analysis has to do with *where* subjects fixate, and *why* they fixated where they fixated. To a first approximation, a set of fixations can be viewed as a set of points in space: this implies that fixations are spatial data and that the analysis of fixation locations can be beneficially thought of as a spatial statistics problem. We argue that thinking of fixation locations as arising from *point processes* is a very fruitful framework for eye movement data, helping turn qualitative questions into quantitative ones. We provide a tutorial introduction to some of the main ideas of the field of spatial statistics, focusing especially on spatial Poisson processes. We show how point processes help relate image properties to fixation locations. In particular we show how point processes naturally express the idea that image features’ predictability for fixations may vary from one image to another. We review other methods of analysis used in the literature, show how they relate to point process theory, and argue that thinking in terms of point processes substantially extends the range of analyses that can be performed and clarifies their interpretation. author: - | Simon Barthelmé\ Psychology, University of Geneva. Boulevard du Pont-d’Arve 40 1205 Genève, Switzerland. - | Hans Trukenbrod, Ralf Engbert\ Psychology, University of Potsdam Karl-Liebknecht-Str. 24-25 14476 Potsdam OT Golm, Germany - | Felix Wichmann\ 1. Neural Information Processing Group, Faculty of Science, University of Tübingen,\ Sand 6, 72076 Tübingen, Germany\ 2. Bernstein Center for Computational Neuroscience Tübingen,\ Otfried-Müller-Str. 25, 72076 Tübingen, Germany\ 3. Max Planck Institute for Intelligent Systems, Empirical Inference Department,\ Spemannstr. 38, 72076 Tübingen, Germany bibliography: - 'ref.bib' title: Modelling fixation locations using spatial point processes --- Eye movement recordings are some of the most complex data available to behavioural scientists. At the most basic level they are long sequences of measured eye positions, a very high dimensional signal containing saccades, fixations, micro-saccades, drift, and their myriad variations [@CiuffredaTannen:EyeMovementBasics]. There are already many methods that process the raw data and turn it into a more manageable format, checking for calibration, distinguishing saccades from other eye movements (e.g., [@EngbertMergenthaler:MicrosaccadesTriggRetImSlip; @MergenthalerEngbert:MicrosaccDiffSaccScenePerception]). Our focus is rather on fixation locations. In the kind of experiment that will serve as an example throughout the paper, subjects were shown a number of pictures on a computer screen, under no particular instructions. The resulting data are a number of points in space, representing what people looked at in the picture—the fixation locations. The fact that fixations tend to cluster shows that people favour certain locations and do not simply explore at random. Thus one natural question to ask is why certain locations are preferred. We argue that a very fruitful approach to the problem is to be found in the methods of spatial statistics [@Diggle:StatisticalAnalysisSpatialPointPatterns; @Illian:StatAnalysisSpatPointPatterns]. A sizeable part of spatial statistics is concerned with how things are distributed in space, and fixations are “things” distributed in space. 
We will introduce the concepts of point processes and latent fields, and explain how these can be applied to fixations. We will show how this lets us put the important (and much researched) issue of low-level saliency on firmer statistical ground. We will begin with simple models and gradually build up to more sophisticated models that attempt to separate the various factors that influence the location of fixations and deal with non-stationarities. Using the point process framework, we replicate results obtained previously with other methods, but also show how the basic tools of point process models can be used as building blocks for a variety of data analyses. They also help shed new light on old tools, and we will argue that classical methods based on analysing the contents of image patches around fixated locations make the most sense when seen in the context of point process models. Analysing eye movement data =========================== Eye movements ------------- While looking at a static scene our eyes perform a sequence of rapid jerk-like movements (saccades) interrupted by moments of relative stability (fixations)[^1]. One reason for this fixation-saccade strategy arises from the inhomogeneity of the visual field [@Land.BOOK.2009]. Visual acuity is highest at the center of gaze, i.e. the fovea (within 1° eccentricity), and declines towards the periphery as a function of eccentricity. Thus saccades are needed to move the fovea to selected parts of an image for high resolution analysis. About 3–4 saccades are generated each second. An average saccade moves the eyes 4–5° during scene perception and, depending on the amplitude, lasts between 20 and 50 ms. Due to saccadic suppression (and the high velocity of saccades) vision is hampered during saccades [@Matin.PsycholBull.1974] and information uptake is restricted to the time in between, i.e. the fixations. During a fixation, gaze is on average held stationary for 250–300 ms, but individual fixation durations are highly variable and range from less than a hundred milliseconds to more than a second. For recent reviews on eye movements and eye movements during scene perception see @Rayner.QJExpPsychol.2009 and @Henderson.Liversedge.2011, respectively. During scene perception fixations cluster on “informative” parts of an image whereas other parts receive few or no fixations. This behavior has been observed between and within observers and has been associated with several factors. Due to the close coupling of stimulus features and attention [@Wolfe.NatRevNeurosci.2004] as well as eye movements and attention [@Deubel.VisionRes.1996], local image features like contrast, edges, and color are assumed to guide eye movements. In their influential model of visual saliency, @IttiKoch:ComputationalModellingVisualAttention combine several of these factors to predict fixation locations. However, rather simple calculations like edge detectors [@TatlerVincent:BehavBiasesEyeGuidance] or center-surround patterns combined with contrast-gain control [@Kienzle:CenterSurroundPatternsOptimalPredictors] seem to predict eye movements similarly well. The saliency approach has generated a lot of interest in research on the prediction of fixation locations and has led to the development of a broad variety of different models. A recent summary can be found in @Borji.IEEETPatternAnal.2013. Besides local image features, fixations seem to be guided by faces, persons, and objects [@Cerf.AdvNeuralInfoProcSyst.2008; @Judd.CompVis.2009]. 
Recently it has been argued that objects may be, on average, more salient than scene background [@Einhauser.JVis.2008; @Nuthmann:ObjectBasedAttentionalSelection] suggesting that saccades might primarily target objects and that the relation between objects, visual saliency and salient local image features is just correlative in nature. The inspection behavior of our eyes is further modulated by specific knowledge about a scene acquired during the last fixations or more general knowledge acquired on longer time scales [@Henderson.Henderson.2004]. Similarly, the same image viewed under differing instructions changes the distribution of fixation locations considerably [@Yarbus.BOOK.1967]. To account for top-down modulations of fixation locations at a computational level, @Torralba:ContextualGuidanceEyeMovements weighted a saliency map with a-priori knowledge about a scene. Finally, the spatial distribution of fixations is affected by factors independent of specific images. @Tatler:CentralFixationBias, for example, reported a strong bias towards central parts of an image. In conventional photographs the effect may largely be caused by the tendency of photographers to place interesting objects in the image center but, importantly, the center bias remains in less structured images. Eye movements during scene perception have been a vibrant research topic over the past years and the preceding paragraphs provide only a brief overview of the diverse factors that contribute to the selection of fixation locations. We illustrate the logic of spatial point processes by using two of these factors in the upcoming sections: local image properties—visual saliency—and the center bias. The concept of point processes can easily be extended to more factors and helps to assess the impact of various factors on eye movement control. Relating fixation locations to image properties {#sub:FixLocationLocalProp} ----------------------------------------------- There is already a rather large literature relating local image properties to fixation locations, and it has given rise to many different methods for analysing fixation locations. Some analyses are mostly descriptive, and compare image content at fixated and non-fixated locations. Others take a stronger modelling stance, and are built around the notion of a saliency map combining a range of interesting image features. Given a saliency map, one must somehow relate it to the data, and various methods have been used to check whether a given saliency map has something useful to say about eye movements. In this section we review the more popular of the methods in use. As we explain below, in Section \[sub:Relating-patch-statistics-to-PP-theory\] , the point process framework outlined here helps to unify and make sense of the great variety of methods in the field. @ReinagelZador:NatSceneStatsCenterGaze had observers view a set of natural images for a few seconds while their eye movements were monitored. They selected image patches around gaze points, and compared their content to that of control patches taken at random from the images. Patches extracted around the center of gaze had higher contrast and were less smooth than control patches. Reinagel and Zador’s work set a blueprint for many follow-up studies, such as @Krieger:ObjectSceneAnalysisSaccadicEyeMovements and @ParkhurstNiebur:SceneContentSelectedActiveVision, although they departed from the original by focusing on *fixated* vs *non-fixated* patches. 
Fixated points are presumably the points of higher interest to the observer, and to go from one to the next the eye may travel through duller landscapes. Nonetheless the basic analysis pattern remained: one compares the contents of selected patches to that of patches drawn from random control locations. Since the contents of the patches (e.g. their contrast) will differ both within and across categories, what one typically has is a distribution of contrast values in fixated and control patches. The question is whether these distributions differ, and asking whether the distributions differ is mathematically equivalent to asking whether one can guess, based on the contrast of a patch, whether the patch comes from the fixated or the non-fixated set. We call this problem patch classification, and we show in Section \[sub:The-patch-classification-problem\] that it has close ties to point process modelling—indeed, certain forms of patch classification can be seen as approximations to point process modelling. The fact that fixated patches have distinctive local statistics could suggest that it is exactly these distinctive local statistics that attract gaze to a certain area. Certain models adopt this viewpoint and assume that the visual system computes a bottom-up saliency map based on local image features. The bottom-up saliency map is used by the visual system (along with top-down influences) to direct the eyes [@KochUllman:ShiftsInSelectiveVisualAttention]. Several models of bottom-up saliency have been proposed (for a complete list see [@Borji.IEEETPatternAnal.2013]), based either on the architecture of the visual cortex [@IttiKoch:ComputationalModellingVisualAttention] or on computational considerations (e.g., [@Kanan:SUNTopDownSaliencyUsingNaturalStats]), but their essential feature for our purposes is that they take an image as input and yield as output a saliency map. The computational mechanism that produces the saliency map should ideally work out from the local statistics of the image which areas are more visually conspicuous and give them higher saliency scores. The model takes its validity from the correlation between the saliency maps it produces and actual eye movements. How one goes from a saliency map to a set of eye movements is not obvious, and @Wilming:MeasuresLimitsModelsFixationSelection have found in their extensive review of the literature as many as 8 different performance measures. One solution is to look at area counts [@Torralba:ContextualGuidanceEyeMovements]: if we pick the 20% most salient pixels in an image, they will define an area that takes up 20% of the picture. If much more than 20% of the recorded fixations are in this area, it is reasonable to say that the saliency model gives us useful information, because by chance we’d expect this proportion to be around 20%. A seemingly completely different solution is given by the very popular AUC measure [@Tatler:VisualCorrelatesFixSel], which uses the patch classification viewpoint: fixated patches should have higher salience than control patches. The situation is analogous to a signal detection paradigm: correctly classifying a patch as fixated is a Hit, incorrectly classifying a patch as fixated is a False Alarm, etc. A good saliency map should give both a high Hit Rate and a low rate of False Alarms, and therefore performance can be quantified by the area under the ROC curve (AUC): the higher the AUC, the better the model. 
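To make these two measures concrete, here is a minimal sketch (in Python, with hypothetical array names; this is not the evaluation code of any of the studies cited) that computes the area-count score and the patch-classification AUC for a single image, given a saliency map and the pixel coordinates of the recorded fixations.

```python
import numpy as np

def area_count_score(saliency, fix_x, fix_y, top_fraction=0.2):
    """Fraction of fixations landing in the top `top_fraction` most salient pixels.

    Under a chance model this fraction should be close to `top_fraction`.
    """
    fix_x, fix_y = np.asarray(fix_x), np.asarray(fix_y)
    threshold = np.quantile(saliency, 1.0 - top_fraction)
    hit_region = saliency >= threshold
    # saliency is indexed as [row, col] = [y, x]
    return hit_region[fix_y, fix_x].mean()

def patch_auc(saliency, fix_x, fix_y, n_control=10_000, rng=None):
    """Area under the ROC curve for 'fixated' vs 'control' saliency values.

    Control locations are drawn uniformly over the image, as in the
    patch-classification analyses described in the text.
    """
    fix_x, fix_y = np.asarray(fix_x), np.asarray(fix_y)
    rng = np.random.default_rng(rng)
    h, w = saliency.shape
    ctrl_x = rng.integers(0, w, n_control)
    ctrl_y = rng.integers(0, h, n_control)
    pos = saliency[fix_y, fix_x]      # salience at fixated pixels
    neg = saliency[ctrl_y, ctrl_x]    # salience at control pixels
    # The AUC equals the probability that a random fixated value exceeds a
    # random control value (ties counted as 1/2): the Mann-Whitney statistic.
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

# Hypothetical usage: `sal_map` is a 2-D saliency map, `fx` and `fy` are
# integer pixel coordinates of the fixations recorded on that image.
# score = area_count_score(sal_map, fx, fy, top_fraction=0.2)
# auc = patch_auc(sal_map, fx, fy)
```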
One contribution of the point process framework is that we can prove these two measures are actually tightly related, even though they are rather different in origin (Section \[sub:ROC-and-Area-counts\]). There are many other ways to relate stimulus properties to fixation locations, based for example on scanpaths [@Henderson:VisualSaliencyDoesNotAccountForEyeMovements], on the number of fixations before entering a region of interest [@Underwood:EyeMovementsDuringSceneInspection], on the distance between fixations and landmarks [@Mannan:RelationshipLocationSpatialFeaturesFixations], etc. We cannot attempt here a complete unification of all measures, but we hope to show that our proposed spatial point process framework is general enough that such unification is at least theoretically possible. In the next section we introduce the main ideas behind spatial point-process models. Point processes =============== We begin with a general overview of the art and science of generating random sets of points in space. It is important to emphasise at this stage that the models we will describe are entirely *statistical* in nature and not mechanistic: they do not assume anything about how saccadic eye movements are generated by the brain [@Sparks:BrainstemControlSaccEyeMovements]. In this sense they are more akin to linear regression models than, e.g., biologically-inspired models of overt attention during reading or visual search [@Engbert:SWIFTSaccadeGenDuringReading; @Zelinsky:TheoryEyeMovementsTargetAcquisition]. The goal of our modelling is to provide statistically sound and useful summaries and visualizations of data, rather than come up with a full story of how the brain goes about choosing where to allocate the next saccade. What we lose in depth, we gain in generality, however: the concepts that are highlighted here are applicable to the vast majority of experiments in which fixations locations are of interest. Definition and examples ----------------------- In statistics, point patterns in space are usually described in terms of point processes, which represent realisations from probability distributions over sets of points. Just like linear regression models, point processes have a deterministic and a stochastic component. In linear models, the deterministic component describes the average value of the dependent variable as a function of the independent ones, and the stochastic component captures the fact that the model cannot perfectly predict the value of the dependent variable, for example because of measurement noise. In the same way, point processes will have a latent *intensity function*, which describes the expected number of points that will be found in a certain area, and a stochastic part which captures prediction error and/or intrinsic variability. We focus on a certain class of point process models known as inhomogeneous Poisson processes. Some specific examples of inhomogeneous Poisson processes should be familiar to most readers. These are temporal rather than spatial, which means they generate random point sets in time rather than in space, but equivalent concepts apply in both cases. In neuroscience, Poisson processes are often used to characterize neuronal spike trains (see e.g., [@AbbottDayan]). 
The assumption is that the number of spikes produced by a neuron in a given time interval follows a Poisson distribution: for example, repeated presentation of the same visual stimulus will produce a variable number of spikes, but the variability will be well captured by a Poisson distribution. Different stimuli will produce different average spike rates, but spike rate will also vary over *time* during the course of a presentation, for example rising fast at stimulation onset and then decaying. A useful description, summarized in Figure \[fig:Example-Temporal-IPP\], is in terms of a latent intensity function $\lambda(t)$ governing the expected number of spikes observed in a certain time window. Formally, $\int_{\tau}^{\tau+\delta}\lambda\left(t\right)\mbox{d}t$ gives the expected number of spikes between times $\tau$ and $\tau+\delta$. If we denote by $\mathbf{t}=t_{1},t_{2},\ldots,t_{k}$ the times at which spikes occurred on a given trial, then $\mathbf{t}$ follows an inhomogeneous Poisson Process (from now on IPP) distribution if, for all intervals $\left(\tau,\tau+\delta\right)$, the number of spikes occurring in the interval follows a Poisson distribution (with mean given by the integral of $\lambda\left(t\right)$ over the interval). The temporal IPP therefore gives us a distribution over sets of points in time (in Figure \[fig:Example-Temporal-IPP\], over the interval $[0,1]$). Extending to the spatial case is straightforward: we simply define a new intensity function $\lambda(x,y)$ over space, and the IPP now generates point sets such that the expected number of points to appear in a certain area $A$ is $\int_{A}\lambda\left(x,y\right)\mbox{d}x\mbox{d}y$, with the actual quantity again following a Poisson distribution. The spatial IPP is illustrated in Figure \[fig:Example-Spatial-IPP\]. ![A first example of a point process: the Inhomogeneous Poisson Process (IPP) as a model for spike trains. **a.** The neuron is assumed to respond to stimulation at a varying rate over time. The latent rate is described by an intensity function, $\lambda(t)$. **b.** Spikes are stochastic: here we simulated spike trains from an IPP with intensity $\lambda(t)$. Different trials correspond to different realisations. Note that a given spike train can be seen simply as a set of points in $(0,1)$. **c.** The defining property of the IPP is that spike counts in a given interval follow a Poisson distribution. Here we show the probability of observing a certain number of spikes in two different time intervals. \[fig:Example-Temporal-IPP\]](Figures/example_spikes) The point of point processes ---------------------------- Given a point set, the most natural question to ask is, generally, “what latent intensity function could have generated the observed pattern?” Indeed, we argue that a lot of very specific research questions are actually special cases of this general problem. For mathematical convenience, we will from now on focus on the log-intensity function $\eta(x,y)=\log\lambda\left(x,y\right)$. The reason this is more convenient is that $\lambda\left(x,y\right)$ cannot be negative—and we are not expecting a negative number of points (fixations)—whereas $\eta\left(x,y\right)$, on the other hand, can take any value whatsoever, from minus to plus infinity. At this point we need to introduce the notion of *spatial covariates*, which are directly analogous to covariates in linear models. 
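Before turning to covariates, a short simulation may help make the definition concrete. The sketch below (Python; the intensity function is arbitrary and purely illustrative, not one estimated from any dataset) draws realisations of a spatial IPP on the unit square by thinning a homogeneous Poisson process, and checks informally that the number of points falling in a sub-region behaves like a Poisson count.

```python
import numpy as np

def simulate_ipp(log_intensity, lam_max, rng=None):
    """Simulate an inhomogeneous Poisson process on the unit square by thinning.

    `log_intensity(x, y)` is the log-intensity eta(x, y); `lam_max` must be an
    upper bound on exp(eta) over the domain.
    """
    rng = np.random.default_rng(rng)
    # Step 1: homogeneous Poisson process with rate lam_max (area of the square is 1).
    n = rng.poisson(lam_max)
    x = rng.uniform(0.0, 1.0, n)
    y = rng.uniform(0.0, 1.0, n)
    # Step 2: keep each candidate point with probability lambda(x, y) / lam_max.
    keep = rng.uniform(0.0, 1.0, n) < np.exp(log_intensity(x, y)) / lam_max
    return x[keep], y[keep]

# An arbitrary log-intensity, peaked near the centre of the domain:
eta = lambda x, y: 5.0 - 8.0 * ((x - 0.5) ** 2 + (y - 0.5) ** 2)
xs, ys = simulate_ipp(eta, lam_max=np.exp(5.0), rng=1)

# The defining property: counts in any region are Poisson-distributed, with
# mean equal to the integral of the intensity over that region. Checked
# informally here by repeated simulation (for a Poisson count, mean == variance).
counts = [np.sum(simulate_ipp(eta, np.exp(5.0), rng=i)[0] < 0.5) for i in range(200)]
print(np.mean(counts), np.var(counts))
```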
In statistical parlance, the *response* is what we are interested in predicting, and *covariates* are what we use to predict the response with. In the case of point processes covariates are often spatial functions too. One of the classical questions in the study of overt attention is the role of low-level cues in attracting gaze (i.e. visual saliency). Among low-level cues, local contrast may play a prominent role, and it is a classical finding that observers tend to be more interested in high-contrast regions when viewing natural images, e.g. [@Rajashekar:FoveatedAnalysisImgFeatFixation]. Imagine that our point set $\mathbf{F}=\left\{ \left(x_{1},y_{1}\right),\ldots,\left(x_{n},y_{n}\right)\right\} $ represents observed fixation locations on a certain image, and we assume that these fixation locations were generated by an IPP with log-intensity function $\eta\left(x,y\right)$. We suppose that the value of $\eta\left(x,y\right)$ at location $x,y$ has something to do with the local contrast $c(x,y)$ at the location. In other words, the image contrast function $c(x,y)$ will enter as a *covariate* in our model. The simplest way to do so is to posit that $\eta\left(x,y\right)$ is a linear function of $c(x,y)$, i.e.: $$\eta\left(x,y\right)=\beta_{c}\times c(x,y)+\beta_{0}\label{eq:simple-contrast-covariate}$$ We have introduced two free parameters, $\beta_{c}$ and $\beta_{0}$, that will need to be estimated from the data. $\beta_{c}$ is the more informative of the two: for example, a positive value indicates that high contrast is predictive of high intensity, and a near-zero value indicates that contrast is not related to intensity (or at least not monotonically). We will return to this idea below when we consider analysing the output of low-level saliency models. Another example that will come up in our analysis is the well-documented issue of the “centrality bias”, whereby human observers in psychophysical experiments in front of a centrally placed computer screen tend to fixate central locations more often regardless of what they are shown [@Tatler:CentralFixationBias]. Again this has an influence on the intensity function that needs to be accounted for. One could postulate another spatial (intrinsic) covariate, $d\left(x,y\right)$, representing the distance to the centre of the display: e.g., $d(x,y)=\sqrt{x^{2}+y^{2}}$ assuming the centre is at $(0,0)$. As in Equation (\[eq:simple-contrast-covariate\]), we could write $$\eta\left(x,y\right)=\beta_{d}\times d(x,y)+\beta_{0}$$ However, in a given image, both centrality bias and local contrast will play a role, resulting in: $$\eta\left(x,y\right)=\beta_{d}\times d(x,y)+\beta_{c}\times c(x,y)+\beta_{0}\label{eq:example-additive-decomp}$$ The relative contribution of each factor will be determined by the relative values of $\beta_{d}$ and $\beta_{c}$. Such additive decompositions will be central to our analysis, and we will cover them in much more detail below. Case study: Analysis of low-level saliency models \[sec:Case-study-1\] ====================================================================== If eye movement guidance is a relatively inflexible system which uses local image cues as heuristics for finding interesting places in a stimulus, then low-level image cues should be predictive of where people look when they have nothing particular to do. 
This has been investigated many times (see [@Schuetz:EyeMovementsPerceptionReview]), and there are now many datasets available of “free-viewing” eye movements in natural images [@VanDerLinde:DOVES; @Torralba:ContextualGuidanceEyeMovements]. Here we use the dataset of @Kienzle:CenterSurroundPatternsOptimalPredictors because the authors were particularly careful to eliminate a number of potential biases (photographer’s bias, among other things). In @Kienzle:CenterSurroundPatternsOptimalPredictors, subjects viewed photographs taken in a zoo in Southern Germany. Each image appeared for a short, randomly varying duration of around 2 sec[^2]. Subjects were instructed to “look around the scene”, with no particular goal given. The raw signal recorded from the eye-tracker was processed to yield a set of saccades and fixations, and here we focus only on the latter. We have already mentioned in the introduction that such data are often analysed in terms of a patch classification problem: can we tell fixated from non-fixated image patches based on their content? We now explain how to replicate the main features of such an analysis in terms of the point process framework. Understanding the role of covariates in determining fixated locations --------------------------------------------------------------------- To be able to move beyond the basic statement that local image cues somehow *correlate* with fixation locations, it is important that we clarify how covariates could enter into the latent intensity function. There are many different ways in which this could happen, with important consequences for the modelling. Our approach is to build a model gradually, starting from simplistic assumptions and introducing complexity as needed. To begin with we imagine that local contrast is the only cue that matters. A very unrealistic but drastically simple model assumes that the more contrast there is in a region, the more subjects’ attention will be attracted to it. In our framework we could specify this model as: $$\eta\left(x,y\right)=\beta_{0}+\beta_{1}c(x,y)$$ However, surely other things besides contrast matter - what about average luminance, for example? Couldn’t brighter regions attract gaze? This would lead us to expand our model to include luminance as another spatial covariate, so that the log-intensity function becomes:\ $$\eta\left(x,y\right)=\beta_{0}+\beta_{1}c(x,y)+\beta_{2}l(x,y)$$ in which $l(x,y)$ stands for local luminance. But perhaps edges matter, so why not include another covariate corresponding to the output of a local edge detector $e(x,y)$? This results in: $$\eta\left(x,y\right)=\beta_{0}+\beta_{1}c(x,y)+\beta_{2}l(x,y)+\beta_{3}e(x,y)$$ It is possible to go further down this path, and add as many covariates as one sees fit (although with too many covariates, problems of variable selection do arise, see [@Hastie:ESL]), but to make our lives simpler we can also rely on some prior work in the area and use pre-existing, off-the-shelf *image-based saliency models* [@FecteauMunoz:SalienceRelevancePriorityMap]. Such models combine many local cues into one interest map, which saves us from having to choose a set of covariates and then estimate their relative weight (although see [@Vincent:DoWeLookAtLights] for work in a related direction). 
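Whether one hand-picks a few covariates as above or uses a single interest map, the resulting log-linear model can be fit with standard tools. The computational details of the analyses reported below are given in the appendix; the following sketch (hypothetical names, and not necessarily the procedure used in this paper) illustrates one common approximation: discretize the image into bins, count fixations per bin, and run a Poisson regression of the counts on the bin-averaged covariate, with the log of the bin area as an offset. As the bins shrink, this Poisson regression approaches the IPP likelihood.

```python
import numpy as np

def fit_poisson_glm(X, counts, offset, n_iter=50):
    """Poisson regression with log link via Newton-Raphson.

    Maximizes sum_i [counts_i * eta_i - exp(eta_i)] with eta_i = offset_i + X_i @ beta,
    which is the binned approximation to the IPP log-likelihood.
    """
    beta = np.zeros(X.shape[1])
    beta[0] = np.log(counts.mean() + 1e-3) - offset.mean()   # crude starting value
    for _ in range(n_iter):
        eta = offset + X @ beta
        mu = np.exp(eta)
        grad = X.T @ (counts - mu)
        hess = X.T @ (X * mu[:, None]) + 1e-9 * np.eye(X.shape[1])
        step = np.linalg.solve(hess, grad)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-8:
            break
    return beta

def fit_saliency_model(sal, fix_x, fix_y, n_bins=32):
    """Fit eta(x, y) = alpha + beta * sal(x, y) by binned Poisson regression.

    `sal` is a 2-D covariate map (e.g. an interest map rescaled to [0, 1]);
    `fix_x`, `fix_y` are the fixation pixel coordinates for that image.
    """
    h, w = sal.shape
    by, bx = h // n_bins, w // n_bins
    # Count fixations per spatial bin (bins aligned with the covariate blocks below).
    counts, _, _ = np.histogram2d(fix_y, fix_x, bins=n_bins,
                                  range=[[0, by * n_bins], [0, bx * n_bins]])
    # Block-average the covariate over the same bins.
    sal_bin = sal[: by * n_bins, : bx * n_bins] \
        .reshape(n_bins, by, n_bins, bx).mean(axis=(1, 3))
    X = np.column_stack([np.ones(n_bins * n_bins), sal_bin.ravel()])
    offset = np.full(n_bins * n_bins, np.log(by * bx))   # log bin area, in pixels
    alpha, beta = fit_poisson_glm(X, counts.ravel(), offset)
    return alpha, beta
```

The fitted slope plays the role of the coefficient on the covariate in the log-intensity; the virtue of this rough approximation is that it makes explicit that such coefficients are ordinary regression coefficients in a log-linear model for the intensity.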
Here we focus on the perhaps most well-known among these models, described in @IttiKoch:ComputationalModellingVisualAttention and @WaltherKoch:ModelingAttentionSalientProtoObjects, although many other interesting options are available (e.g., [@BruceTsotsos:SaliencyAttentionVisualSearch], [@ZhaoKoch:LearningSaliencyMap], or [@Kienzle:CenterSurroundPatternsOptimalPredictors]). ![An image from the dataset of @Kienzle:CenterSurroundPatternsOptimalPredictors, along with an “interest map” - local saliency computed according to the Itti-Koch model [@IttiKoch:ComputationalModellingVisualAttention; @WaltherKoch:ModelingAttentionSalientProtoObjects]. Fixations made by the subjects are overlaid in red. How well does the interest map characterise this fixation pattern? This question is not easily answered by eye, but may be given a more precise meaning in the context of spatial processes. \[fig:IttiKochSaliencyKienzle\]](Figures/itti_example) The model computes several feature maps (orientation, contrast, etc.) according to physiologically plausible mechanisms, and combines them into one master map which aims to predict what the interesting features in image $i$ are. For a given image $i$ we can obtain the interest map $m_{i}\left(x,y\right)$ and use that as the unique covariate in a point process: $$\eta_{i}\left(x,y\right)=\alpha_{i}+\beta_{i}m_{i}(x,y)\label{eq:itti-saliency-simple}$$ This last equation will be the starting point of our modelling. We have changed the notation somewhat to reflect some of the adjustments we need to make in order to learn anything from applying model to data. To summarise: - $\eta_{i}(x,y)$ denotes the log-intensity function for image $i$, which depends on the spatial covariate $m_{i}\left(x,y\right)$ that corresponds to the interest map given by the low-level saliency of @IttiKoch:ComputationalModellingVisualAttention. - $\beta_{i}$ is an image-specific coefficient that measures to what extent spatial intensity can be predicted from the interest map. $\beta_{i}=0$ means no relation, $\beta_{i}>0$ means that higher low-level saliency is associated on average with more fixations, $\beta_{i}<0$ indicates the opposite - people looked more often at low points of the interest map. We make $\beta_{i}$ image-dependent because we anticipate that how well the interest map predicts fixations depends on the image, an assumption that is borne out, as we will see. - $\alpha_{i}$ is an image specific intercept. It is required for technical reasons but otherwise plays no important role in our analysis. We fitted the model given by Equation (\[eq:itti-saliency-simple\]) to a dataset consisting of the fixations recorded in the first 100 images of the dataset of @Kienzle:CenterSurroundPatternsOptimalPredictors [see Fig. ref[fig:IttiKochSaliencyKienzle]{}]. Computational methods are described in the appendix. We obtained a set of posterior estimates for the $\beta_{i}$’s, of which a summary is given in Figure \[fig:Variability-IK-coeffs\]. ![Variability in the predictivity of the Itti-Koch model across images. We estimate $\beta_{i}$ in Equation \[eq:itti-saliency-simple\] for 100 different images from the dataset of @Kienzle:CenterSurroundPatternsOptimalPredictors. We plot the sorted mean-a-posteriori estimates along with a 95% Bayesian credible interval. 
The results show clearly that the “interestingness” given by low-level saliency is of variable value when predicting fixations: for some images $\beta_{i}$ is very close to 0, which indicates that there is no discernible association between low-level saliency and fixation intensity in these images. In other images the association is much stronger. \[fig:Variability-IK-coeffs\]](Figures/sorted_coeffs_itti_simple) To make the coefficients shown on Figure \[fig:Variability-IK-coeffs\] more readily interpretable, we have scaled $m_{i}\left(x,y\right)$ so that in each image the most interesting points (according to the Itti-Koch model) have value 1 and the least interesting 0. In terms of the estimated coefficients $\beta_{i}$, this implies that the intensity ratio between a maximally interesting region and a minimally interesting region is equal to $e^{\beta_{i}}$: for example, a value of $\beta_{i}$ of 1 indicates that in image $i$ on average a region with high “interestingness” receives roughly 2.5 more fixations than a region with very low “interestingness”. At the opposite end of the spectrum, in images in which the Itti-Koch model performs very well, we have values of $\beta_{i}\approx6$, which implies a ratio of 150 to 1 for the most interesting regions compared to the least interesting. It is instructive to compare the images in which the model does well[^3], to those in which it does poorly. On Figure \[fig:Itti-best\] we show the 8 images with highest $\beta_{i}$ value, and on Figure \[fig:Itti-worst\] the 8 images with lowest $\beta_{i}$, along with the corresponding Itti-Koch interest maps. It is evident that, while on certain images the model does extremely well, for example when it manages to pick up the animal in the picture (see the lion in images 52 and 53), in others it gets fooled by high-contrast edges that subjects find highly uninteresting. Foliage and rock seem to be particularly difficult, at least from the limited evidence available here. Given a larger annotated dataset, it would be possible to confirm whether the model performs better for certain categories of images than others. Although this is outside the scope of the current paper, we would like to point out that the model in Equation (\[eq:itti-saliency-simple\]) can be easily extended for that purpose: If we assume that images are encoded as being either “foliage” or “not foliage”, we may then define a variable $\phi_{i}$ that is equal to 1 if image $i$ is foliage and 0 if not. We may re-express the latent log-intensity as: $$\eta_{i}\left(x,y\right)=\alpha_{i}+\left(\phi_{i}\gamma+\delta_{i}\right)m_{i}(x,y)$$ which decomposes $\beta_{i}$ as the sum of an image-specific effect ($\delta_{i}$) and an effect of belonging to the foliage category $\left(\gamma\right)$. Having $\gamma<0$ would indicate that pictures of foliage are indeed more difficult on average[^4]. We take foliage here only as an illustration of the technique, as it is certainly not the most useful categorical distinction one could make (for a taxonomy of natural images, see [@FeiFei:WhatDoWePercInAGlance], and, e.g., [@KasparKoenig:ViewingBehaviorImpactLowLevelImgProp] for a discussion of image categories). A related suggestion [@Torralba:ContextualGuidanceEyeMovements] is to augment low-level saliency models with some higher-level concepts, adding face detectors, text detectors, or horizon detectors. 
Within the limits of our framework, a much easier way to improve predictions is to take into account the *centrality bias* [@TatlerVincent:BehavBiasesEyeGuidance], i.e. the tendency for observers to fixate more often at the centre of the image than around the periphery. One explanation for the centrality bias is that it is essentially a side-effect of photographer’s bias: people are interested in the centre because the centre is where photographers usually put the interesting things, unless they are particularly incompetent. In @Kienzle:CenterSurroundPatternsOptimalPredictors photographic incompetence was simulated by randomly clipping actual photographs so that central locations were not more likely to be interesting than peripheral ones. The centrality bias persists (see Fig. \[fig:Centrality-bias\]), which shows that central locations are preferred regardless of image content (a point already made in [@Tatler:CentralFixationBias]). We can use this fact to make better predictions by making the required modifications to the intensity function. Before we can explain how to do that, we need to introduce a number of additional concepts. A central theme in the proposed spatial point process framework is to develop tools that help us to understand performance of our models in detail. In the next section we introduce some relatively user-friendly graphical tools for assessing fit. We will also show how one can estimate an intensity function in a non-parametric way, that is, without assuming that the intensity function has a specific form. Nonparametric estimates are important in their own right for visualisation (see for example the right-hand-side of Fig. \[fig:Centrality-bias\]), but also as a central element in more sophisticated analyses. ![Out of first 100 pictures in the Kienzle et al. dataset, we show the 8 ones in which Itti&Koch interestingness has the strongest link to fixation density (according to the value of $\beta_{i}$ in Equation \[eq:itti-saliency-simple\]). The I&K interest map is displayed below each image, and fixation locations are in red. \[fig:Itti-best\]](Figures/itti_best_8) ![Same as Figure \[fig:Itti-best\] above, but with the 8 images with lowest value for $\beta_{i}$ . \[fig:Itti-worst\]](Figures/itti_worst_8) ![The centrality bias. On the left panel, we plot every fixation recorded in @Kienzle:CenterSurroundPatternsOptimalPredictors. On the right, a non-parametric Bayesian estimate of the intensity function. Central locations are much more likely to be fixated than peripheral ones. \[fig:Centrality-bias\]](Figures/marginal_fixations) Graphical model diagnostics\[sub:Graphical-model-diagnostics\] -------------------------------------------------------------- Once one has fitted a statistical model to data, one has to make sure the fitted model is actually at least in rough agreement with the data. A good fit alone is not the only thing we require of a model, because fits can in some cases be arbitrarily good if enough free parameters are introduced (see e.g., [@BishopPRML], ch. 3). But assessing fit is an important step in model criticism [@GelmanHill:DataAnalysisUsingRegression], which will let us diagnose model failures, and in many cases will enable us to obtain a better understanding of the data itself. In this section we will focus on informal, graphical diagnostics. More advanced tools are described in @Baddeley:ResidualAnalysisSpatialPP. @Ehinger:ModelingSearchForPeopleIn900Scenes use a similar model-criticism approach in the context of saliency modelling. 
Since a statistical model is in essence a recipe for how the data are generated, the most obvious thing to do is to compare data simulated from the model to the actual data we measured. In the analysis presented above, the assumption is that the data come from a Poisson process whose log-intensity is a linear function of Itti-Koch interestingness: $$\eta_{i}\left(x,y\right)=\alpha_{i}+\beta_{i}m_{i}(x,y)\label{eq:itti-saliency-simple-1}$$ For a given image, we have estimated values $\hat{\alpha}_{i},\hat{\beta}_{i}$ (mean a posterior estimate). A natural thing to do is to ask what data simulated from a model with those parameters look like[^5]. In Figure \[fig:simulations-fitted-model\], we take the image with the maximum estimated value for $\beta_{i}$ and compare the actual recorded fixation locations to four different simulations from an IPP with the fitted intensity function. What is immediately visible from the simulations is that, while the real data present one strong cluster that also appears in the simulations, the simulations have a higher proportion of points outside of the cluster, in areas far from any actual fixated locations. Despite these problems, the fit seems to be quite good compared to other examples from the dataset: Figure \[fig:Simulations-fitted-less-good\] shows two other examples, image 45, which has a median $\beta$ value of about 4, and image 32, which had a $\beta$ value of about 0. In the case of image 32, since there is essentially no relationship between the interestingness values and fixation locations, the best possible intensity function of the form given by Equation (\[eq:itti-saliency-simple\]) is one with $\beta=0$, a uniform intensity function. ![Simulating from a fitted point process model. The fixations on image 82 (rightmost in Figure \[fig:Itti-best\]) were fitted with the model given by Equation (\[eq:itti-saliency-simple\]), resulting in an estimated log-intensity $\eta(x,y)$ which is plotted as a heatmap in the background of each panel. In red we plot the actual fixation locations (the same in every of the four panels), and in green simulations from the fitted model, four different realizations, one in each panel. \[fig:simulations-fitted-model\]](Figures/itti_koch_compare_simul_82) ![Same as in Figure \[fig:simulations-fitted-model\], with the fixations measured on image 45 (left) and 32 (right) of the dataset. The agreement between data and simulations is of distinctly poorer quality than in image 82.\[fig:Simulations-fitted-less-good\]](Figures/itti_koch_compare_simul_45 "fig:") ![Same as in Figure \[fig:simulations-fitted-model\], with the fixations measured on image 45 (left) and 32 (right) of the dataset. The agreement between data and simulations is of distinctly poorer quality than in image 82.\[fig:Simulations-fitted-less-good\]](Figures/itti_koch_compare_simul_32 "fig:") It is also quite useful to inspect some of the *marginal* distributions. By marginal distributions we mean point distributions that we obtain by merging data from different conditions. In Figure \[fig:marginal-fixation-locations-simple\], we plot the fixation locations across all images in the dataset. In the lower panel we compare it to simulations from the fitted model, in which we generated fixation locations from the fitted model for each image so as to simulate an entire dataset. 
This brings to light a failure of the model that would not be obvious from looking at individual images: based on Itti-Koch interestingness alone we would predict a distribution of fixation locations that is almost uniform, whereas the actual distribution exhibits a central bias, as well as a bias for the upper part of the screen. ![Comparing marginal fixation locations. On the left panel, we plot each fixation location in images 1 to 100 (each dot corresponds to one fixation). On the right panel, we plot simulated fixation locations from the fitted model corresponding to Equation (\[eq:itti-saliency-simple\]). A strong spatial bias is visible in the data, not captured at all by the model.\[fig:marginal-fixation-locations-simple\]](Figures/marginal_locations_simple-0) Overall, the model derived from fitting Equation (\[eq:itti-saliency-simple\]) seems rather inadequate, and we need to account at least for what seems to be some prior bias favouring certain locations. Explaining how to do so requires a short detour through the topic of non-parametric inference, to which we turn next. Inferring the intensity function non-parametrically\[sub:Inferring-the-intensity-function-nonparam\] ---------------------------------------------------------------------------------------------------- Consider the data in Figure \[fig:Centrality-bias\]: to get a sense of how much observers prefer central locations relative to peripheral ones, we could define a central region **$\mathcal{A}$**, count how many fixations fall in it, compared to how many fixations fall outside. From the theoretical point of view, what we are doing is directly related to estimating the intensity function: the expected number of fixations in $\mathcal{A}$ is after all $\int_{\mathcal{A}}\lambda\left(x,y\right)\mbox{d}x\mbox{d}y$, the integral of the intensity function over $\mathcal{A}$. Seen the other way, counting how many sample points are in $\mathcal{A}$ is a way of estimating the integral of the intensity over $\mathcal{A}$. Modern statistical modelling emphasizes non-parametric estimation. If one is trying to infer the form of an unknown function $f(x)$, one should not assume that $f(x)$ has a certain parametric form unless there is very good reason for this choice (interpretability, actual prior knowledge or computational feasibility). Assuming a parametric form means assuming for example that $f(x)$ is linear, or quadratic: in general it means assuming that $f(x)$ can be written as $f(x)=\phi(x;\beta)$, where $\beta$ is a finite set of unknown parameters, and $\phi\left(x;\beta\right)$ is a family of functions over $x$ parameterised by $\beta$. Nonparametric methods make much weaker assumptions, usually assuming only that $f$ is smooth at some spatial scale. We noted above that estimating the integral of the intensity function over a spatial region could be done by counting the number of points the region contains. Assume we want to estimate the intensity $\lambda(x,y)$ at a certain point $x_{0},y_{0}$. We have a realisation $S$ of the point process (for example a set of fixation locations). If we assume that $\lambda(x,y)$ is spatially smooth, it implies that $\lambda(x,y)$ varies slowly around $x_{0},y_{0}$, so that we may consider it roughly constant in a small region around $x_{0},y_{0}$, for instance in a circle of radius $r$ around $(x_{0},y_{0})$. 
Call this region $\mathcal{C}_{r}$ - the integral of the intensity function over $\mathcal{C}_{r}$ is related to the intensity at $(x_{0}$,$y_{0}$) in the following way: $$\int_{\mathcal{C}_{r}}\lambda(x,y)\mbox{d}x\mbox{d}y\approx\int_{\mathcal{C}_{r}}\lambda\left(x_{0},y_{0}\right)\mbox{d}x\mbox{d}y=\lambda\left(x_{0},y_{0}\right)\times\int_{\mathcal{C}_{r}}\mbox{d}x\mbox{d}y$$ $\int_{\mathcal{C}_{r}}\mbox{d}x\mbox{d}y$ is just the area of circle $\mathcal{C}_{r}$, equal to $\pi r$. Since we can estimate $\int_{\mathcal{C}_{r}}\lambda(x,y)\mbox{d}x\mbox{d}y$ via the number of points in $\mathcal{C}_{r}$, it follows that we can estimate $\lambda(x_{0},y_{0})$ via: $$\hat{\lambda}\left(x_{0},y_{0}\right)=\frac{\left|S\cap\mathcal{C}_{r}\right|}{\pi r}$$ $\left|S\cap\mathcal{C}_{r}\right|$ is the cardinal of the intersection of the point set $S$ and the circle $\mathcal{C}_{r}$ (note that they are both sets), shorthand for “number of points in $S$ that are also in $\mathcal{C}_{r}$”. What we did for $(x_{0},y_{0})$ remains true for all other points, so that a valid strategy for estimating $\lambda(x,y)$ at any point is to count how many points in $S$ are in the circle of radius $r$ around the location. The main underlying assumption is that $\lambda\left(x,y\right)$ is roughly constant over a region of radius $r$. This method will be familiar to some readers in the context of non-parametric density estimation, and indeed it is almost identical[^6]. It is a perfectly valid strategy, detailed in @Diggle:StatisticalAnalysisSpatialPointPatterns, and its only major shortcoming is that the amount of smoothness (represented by $r$) one arbitrarily uses in the analysis may change the results quite dramatically (see Figure \[fig:Nonparametric-estimation-classical\]). Although it is possible to also estimate $r$ from the data, in practice this may be difficult (see [@Illian:StatAnalysisSpatPointPatterns], Section 3.3). ![Nonparametric estimation of the intensity function using a moving window. The data are shown on the leftmost panel. The intensity function at $(x,y)$ is estimated by counting how many points are within a radius $r$ of $(x,y)$. We show the results for $r=20,50,100$. Note that with $r\rightarrow0$ we get back the raw data. For easier visualization, we have scaled the intensity values such that the maximum intensity is 1 in each panel. \[fig:Nonparametric-estimation-classical\]](Figures/nonparam_circles) ![Nonparametric Bayesian estimation of the intensity function. We use the same data as in Figure \[fig:Nonparametric-estimation-classical\]. Inference is done by placing a Gaussian process prior on the log-intensity function, which enforces smoothness. Hyperparameters are integrated over. See text and appendix \[sub:Intro-to-GPs\] for details. ](Figures/nonparam_bayes) There is a Bayesian alternative: put a prior distribution on the intensity $\lambda$ and base the inference on the posterior distribution of $\lambda(x,y)$ given the data, with $$p(\lambda|S)\propto p(S|\lambda)p(\lambda)$$ as usual. We can use the posterior expectation of $\lambda(x,y)$ as an estimator (the posterior expectation is the mean value of $\lambda(x,y)$ given the data), and the posterior quantiles give confidence intervals[^7]. The general principles of Bayesian statistics will not be explained here, the reader may refer to @Kruschke:DoingBayesianAnalysis or any other of the many excellent textbooks on Bayesian statistics for an introduction. 
To be more precise, the method proceeds by writing down the very generic model: $$\log\lambda\left(x,y\right)=f(x,y)+\beta_{0}$$ and effectively forces $f(x,y)$ to be a relatively smooth function, using a Gaussian Process prior. Exactly how this is achieved is explained in Appendix \[sub:Intro-to-GPs\], but roughly, Gaussian Processes let one define a probability distribution over functions such that smooth functions are much more likely than non-smooth functions. The exact spatial scale over which the function is smooth is unknown but can be averaged over. To estimate the intensity function of one individual point process, there is little cause to prefer the Bayesian estimate over the classical non-parametric estimate we described earlier. As we will see however, using a prior that favours smooth functions becomes invaluable when one considers *multiple point processes* with shared elements. Including a spatial bias, and looking at predictions for new images\[sub:Including-a-spatial-bias\] --------------------------------------------------------------------------------------------------- We have established that models built from interest maps do not fit the data very well, and we have hypothesized that one possible cause might be the presence of a spatial bias. Certain locations might be fixated despite having relatively uninteresting contents. A small modification to our model offers a solution: we can hypothesize that all latent intensity functions share a common component. In equation form: $$\eta_{i}\left(x,y\right)=\alpha_{i}+\beta_{i}m_{i}(x,y)+g(x,y)\label{eq:itti-koch-spatial-bias}$$ As in the previous section, we do not force $g(x,y)$ to take a specific form, but only assume smoothness. Again, we use the first 100 images of the dataset to estimate the parameters. The estimated spatial bias is shown on Figure \[fig:itti-spatial-bias\]. It features the centrality bias and the preference for locations above the midline that were already visible in the diagnostics plot of Section \[sub:Graphical-model-diagnostics\] (Fig. \[fig:marginal-fixation-locations-simple\]). ![Estimated spatial bias $g(x,y)$ (Eq. \[eq:itti-koch-spatial-bias\]).\[fig:itti-spatial-bias\] ](Figures/itti_estimated_spatial_bias) From visual inspection alone it appears clear that including a spatial bias is necessary, and that the model with spatial bias offers a significant improvement over the one that does not. However, things are not always as clear-cut, and one cannot necessarily argue from a better fit that one has a better model. There are many techniques for statistical model comparison, but given sufficient data the best is arguably to compare the predictive performance of the different models in the set (see [@Pitt:TowardMethodSelectCompModelsCog], and [@Robert:TheBayesianChoice], ch. 7 for overviews of model comparison techniques). In our case we could imagine two distinct prediction scenarios: 1. For each image, one is given, say, 80% of the recorded fixations, and must predict the remaining 20%. 2. One is given all fixation locations in the first $n$ images, and must predict fixations locations in the next $k$. To use low-level saliency maps in engineering applications [@Itti:AutomaticFoveationVideoCompression], what is needed is a model that predicts fixation locations on arbitrary images—i.e. the model needs to be good at the second prediction scenario outlined above. 
The model is tuned on recorded fixations on a set of training images, but to be useful it must make sensible predictions for images outside the original training set. From the statistical point of view, there is a crucial difference between prediction problems (1) and (2) above. Problem (1) is easy: to make predictions for the remaining fixations on image $i$, estimate $\beta_{i}$ and $\alpha_{i}$ from the available data, and predict based on the estimated values (or using the posterior predictive distribution). Problem (2) is much more difficult: for a new image $j$ we have no information about the values of $\beta_{j}$ or $\alpha_{j}$. In other words, in a new image the interest map could be very good or worthless, and we have no way of knowing that in advance. We do however have information about what values $\beta_{j}$ and $\alpha_{j}$ are likely to take from the estimated values for the images in the training set. If in the training set nearly all values $\beta_{1},\beta_{2},\ldots,\beta_{n}$ were above 0, it is unlikely that $\beta_{j}$ will be negative. We can represent our uncertainty about $\beta_{j}$ with a probability distribution, and this probability distribution may be estimated from the estimated values for $\beta_{1},\beta_{2},\ldots,\beta_{n}$. We could, for example, compute their mean and standard deviation, and assume that $\beta_{j}$ is Gaussian distributed with this particular mean and standard deviation[^8]. Another way, which we adopt here, is to use a kernel density estimator so as not to impose a Gaussian shape on the distribution. As a technical aside: for the purpose of prediction the intercept $\alpha_{j}$ can be ignored, as its role is to modulate the intensity function globally, and it has no effect on where fixations happen, simply on *how many* fixations are predicted. Essentially, since we are interested in fixation locations, and not in how many fixations we get for a given image, we can safely ignore $\alpha_{i}.$ A more mathematical argument is given in Appendix \[sub:Conditioning-on-n\]. Thus how to predict? We know how to predict fixation locations *given* a certain value of $\beta_{j}$, as we saw earlier in Section \[sub:Graphical-model-diagnostics\]. Since $\beta_{j}$ is unknown we need to average over our uncertainty. A recipe for generating predictions is to sample a value for $\beta_{j}$ from $p(\beta_{j})$, and conditional on that value, sample fixation locations. Please refer to Figure \[fig:Explanation-Predictions\] for an illustration. ![Predictions for a novel image. The latent intensity function in Equation \[eq:itti-koch-spatial-bias\] has two important components: the interest map $m(x,y)$, shown here in panel **(a)** for image 103, and a general spatial component $g(x,y)$, shown in **(b)**. Image 103 does not belong to the training set, and the value of $\beta_{103}$ is therefore unknown: we do not know if $m(x,y)$ will be a strong predictor or not, and must therefore take this uncertainty into account. Uncertainty is represented by the distribution the $\beta$ coefficient takes over images, and we can estimate this distribution from the estimated values from the training set. In **(c)** we show those values as dashes, along with a kernel density estimate. Conditional on a given value for $\beta_{103}$, our predictions come from a point process with log-intensity function given by $\beta_{103}m_{103}\left(x,y\right)+g\left(x,y\right)$: in **(d)**, we show the intensity function for $\beta_{103}=0,3,6$. 
In **(e)**, we show simulations from the corresponding point processes (conditional on $n=100$ fixations, see \[sub:Conditioning-on-n\]). In general the strategy for simulating from the predictive distribution will be to sample a value of $\beta$ from $p(\beta)$, and sample from the corresponding point process as is done here. \[fig:Explanation-Predictions\]](Figures/illustrate_prediction) In Figure \[fig:Predicting-marginal-fixations\] we compare predictions for marginal fixation locations (over all images), with and without a spatial bias term. We simulated fixations from the predictive distribution for images 101 to 200. We plot only one simulation, since all simulations yield for all intents and purposes the same result: without a spatial bias term, we replicate the problem seen in Figure \[fig:marginal-fixation-locations-simple\]. We predict fixations distributed more or less uniformly over the monitor. Including a spatial bias term solves the problem. ![Predicting marginal fixation locations. In the first panel we plot the location of all fixations for images 101 to 200. In the second panel we plot simulated fixation locations for the same images from the naive model of Equation \[eq:itti-saliency-simple\]. In the second panel we plot simulated fixation locations for the same images from the model of Equation \[fig:itti-spatial-bias\], which includes a spatial bias. Note that these are predicted fixation locations for entirely new images, and not a fit. Including a spatial bias improves predictions enormously. \[fig:Predicting-marginal-fixations\]](Figures/predictive_marginal_locations) What about predictions for individual images? Typically in vision science we are attempting to predict a one-dimensional quantity: for example, we might have a probability distribution for somebody’s contrast threshold. If this probability distribution has high variance, our predictions for any *individual* trial or the average of a number of trials are by necessity imprecise. In the one-dimensional case it is easy to visualise the degree of certainty by plotting the distribution function, or providing a confidence interval. In a point process context, we do not deal with one-dimensional quantities: if the goal is to predict where 100 fixations on image $j$ might fall, we are dealing with a 200 dimensional space—100 points times 2 spatial dimensions. A maximally confident prediction would be represented by a probability distribution that says that all points will be at a single location. A minimally confident prediction would be represented by the uniform distribution over the space of possible fixations, saying that all possible configurations are equally likely. Thus the question that needs to be addressed is, where do the predictions we can make from the Itti-Koch model fall along this axis? It is impossible to provide a probability distribution, or to report confidence intervals. A way to visualise the amount of uncertainty we have is by drawing samples from the predictive probability distribution, to see if the samples vary a lot. Each sample is a set of a 100 points: if we notice for example that over 10 samples all the points in each sample systematically cluster at a certain location, it indicates that our predictive distribution is rather specific. If we see at lot of variability across samples, it is not. This mode of visualisation is better adapted to a computer screen than to be printed on paper, but for five examples we show eight samples in Figure \[fig:Samples-predictive\]. 
To better understand the level of uncertainty involved, imagine that the objective is to perform (lossy) image compression. We picked this example because saliency models are sometimes advocated in the compression context [@Itti:AutomaticFoveationVideoCompression]. Lossy image compression works by discarding information and hoping people will not notice. The promise of image-based saliency models is that if we can predict what part of an image people find interesting, we can get away with discarding more information where people will not look. Let us simplify the problem and assume that either we compress an area or we do not. The goal is to find the largest possible section of the image we can compress, under the constraint that if a 100 fixations are made in the image, less than 5 fall in the compressed area (with high probability). If the predictive distribution is uniform, we can afford to compress less than 5% of the area of the image. A full formalisation of the problem for other distributions is rather complicated, and would carry us outside the scope of this introduction, but looking at the examples of Figure \[fig:Samples-predictive\] it is not hard to see qualitatively that for most images, the best area we can find will be larger than 5% but still rather small: in the predictive distributions, points have a tendency of falling in most places except around the borders of the screen. ![Samples from the predictive distributions for the model including spatial bias. We picked five images at random, and generated 8 samples from the predictive distribution from each, using the technique outlined in Figure \[fig:Explanation-Predictions\]. Each row corresponds to one image, with samples along the columns. This lets us visualise the uncertainty in the predictive distributions, see text.\[fig:Samples-predictive\]](Figures/predictive_examples) The reason we see such imprecision in the predictive distributions is essentially because we have to hedge our bets: since the value of $\beta$ varies substantially from one image to another, our predictions are vague by necessity. In most cases, models are evaluated in terms of average performance (for example, average AUC performance over the dataset). The above results suggest that looking just at average performance is insufficient. A model that is consistenly mediocre may for certain applications be preferable than a model that is occasionally excellent but sometimes terrible. If we cannot tell in advance when the latter model does well, our predictions about fixation locations may be extremely imprecise. Dealing with non-stationarities: dependency on the first fixation\[sub:Dealing-with-non-stationarities:\] --------------------------------------------------------------------------------------------------------- One very legitimate concern with the type of models we have used so far is the independence assumption embedded into the IPP: all fixations are considered independent of each other. Since successive fixations tend to occur close to one another, we know that the independence assumption is at best a rough approximation to the truth. There are many examples of models in psychology that rather optimistically assume independence and thus neglect nonstationarities: when fitting psychometric functions for example, one conveniently ignores sequential dependencies, learning, etc. but see @Fruend:InfPsychFunNonstatBehav or Schönfelder and Wichmann (in press). 
One may argue, however, that models assuming independence are typically simpler, and therefore less likely to overfit, and that the presence of dependencies effectively acts as an additional (zero-mean) noise source that does not bias the results. This latter assumption requires to be explicitly checked, however. In this section we focus specifically on a source of dependency that could bias the results (dependency on the first fixation), and show that (a) our models can amended to take this dependency into account, (b) that the dependency indeed exists, although (c) results are not affected in a major way. We conclude with a discussion of dependent point processes and some pointers to the relevant literature. In the experiment of @Kienzle:CenterSurroundPatternsOptimalPredictors, subjects only had a limited amount of time to look at the pictures: generally less than 4 seconds. This does not always allow enough time to explore the whole picture, so that subjects may have only explored a part of the picture limited to an certain area around the initial fixation. As explained on Figure \[fig:dependence-on-initial-fixation\], such dependence may cause us to underestimate the predictive value of a saliency map. Supposing that fixations are indeed driven by the saliency map, there might be highly salient regions that go unexplored because they are too far away from the initial fixation. In a model such as we have used so far, this problem would lead to under-estimating the $\beta$ coefficients. As with spatial bias, there is again a fairly straightforward solution: we add an additional spatial covariate, representing distance to the original fixation. The log-intensity functions are now modelled as: $$\eta_{ij}\left(x,y\right)=\alpha_{ij}+\beta_{i}m_{i}(x,y)+\gamma d_{ij}(x,y)+\nu c(x,y)\label{eq:initial-fixation-model}$$ Introducing this new spatial covariate requires some changes. Since each subject started at a different location, we have one point process per subject and per image, and therefore the log-intensity functions $\eta_{ij}$ are now indexed both by image $i$ and subject $j$ (and we introduce the corresponding intercepts $\alpha_{ij}$). The covariate $d_{ij}\left(x,y\right)$ corresponds to the Euclidean distance to the initial fixation, i.e. if the initial fixation of subject $j$ on image $i$ was at $x=10,y=50$, $d_{ij}\left(x,y\right)=\sqrt{\left(x-10\right)^{2}+\left(y-50\right)^{2}}$. The coefficient $\gamma$ controls the effect of the distance to the initial fixation: a negative value of $\gamma$ means that intensity decreases away from the initial location, or in other words that fixations tend to stay close to the initial location. For the sake of computational simplicity, we have replaced the non-parametric spatial bias term $g(x,y)$ with a linear term $\nu c(x,y)$ representing an effect of the distance to the center ($c(x,y)=\sqrt{x^{2}+y{}^{2}}$). Coefficient $\nu$ plays a role similar to $\gamma$: a negative value for $\nu$ indicates the presence of a centrality bias. We have scaled $c$ and $d_{ij}$ so that a distance of 1 corresponds to the width of the screen. In this analysis we exclude the initial fixations from the set of fixations, they are used only as covariates. The model does not have any non-parametric terms, so that we can estimate the coefficients using maximum likelihood (standard errors are estimated from the usual normal approximation at the mode). ![Dependence on the initial fixation location: a potential source of bias in estimation? 
We show here a random saliency map, and suppose that fixation locations depend on saliency but are constrained by how far away they can move from the original fixation location. The original fixation is in grey, the circle shows a constraint region and the other points are random fixations. The area at the bottom half of the picture is left unexplored not because it is not salient but because it is too far away from the original fixation location. Such dependencies on the initial location may lead to an underestimation of the role of saliency. We describe in the text a method for overcoming the problem. \[fig:dependence-on-initial-fixation\]](Figures/example_dep_initial_fix) Again we use the first 100 images in the set to estimate the parameters, and keep the next 100 for model comparison. The fitted coefficient for distance to the initial location is $\hat{\gamma}=-3.2$ (std. err 0.1), and for distance to the center we find $\hat{\nu}=-1.6$ (std. err. 0.1). The value of $\gamma$ indicates a clear dependency on initial location: everything else being equal, at a distance of half the width of the screen the intensity has dropped by a factor 5. To see if the presence of this dependency induces a bias in the estimation of coefficients for the saliency map, we also fitted the model without the initial location term (setting $\gamma$ to 0). ![Effect of correcting for dependence on the initial fixation location (see Fig. \[fig:dependence-on-initial-fixation\]). We compare the estimated value of $\beta_{i}$ using the model of Equation \[eq:initial-fixation-model\], to coefficients estimated from a constrained model that does not include the initial fixation as a covariate. The dots are coefficient pairs corresponding to an image, and in blue we have plotted a trend line. The diagonal line corresponds to equality. Although we do find evidence of dependence on initial fixation location (see text), it does not seem to cause any estimation bias: if anything the coefficients associated with saliency are slightly higher when estimated by the uncorrected model.\[fig:Effect-of-correcting-coefficients\]](Figures/correction_nonstat_coeffs) We compared the estimated values for $\beta_{i}$ in both models, and show the results on Figure \[fig:Effect-of-correcting-coefficients\]: the differences are minimal, although as we have argued there is certainly some amount of dependence on the initial fixation. The lack of an observed effect on the estimation of $\beta$ is probably due to the fact that different observers fixate initially in different locations, and that the dependency effectively washes out in the aggregate. An interesting observation is that in the reduced model the coefficient associated with distance to the center is estimated at -4.1 (std. err. 0.1), which is much larger than when distance to initial fixation is included as a covariate. Since initial fixations are usually close to the center, being close to the center is correlated with being close to the initial fixation location, and part of the centrality bias observed in the dataset might actually be better recast as dependence on the initial location. In this we have managed to capture a source of dependence between fixations, while seemingly still saving the IPP assumption. We have done so by positing conditional independence: in our improved model all fixations in a sequence are independent given the initial one. 
An alternative is to drop the independence assumption altogether and use dependent point process models, in which the location of each point depends on the location of its neighbours. These models are beyond the scope of this paper, but they are discussed extensively in @Diggle:StatisticalAnalysisSpatialPointPatterns and @Illian:StatAnalysisSpatPointPatterns, along with a variety of non-parametric methods that can diagnose interactions. Discussion ========== We introduced spatial point processes, arguing that they provide a fruitful statistical framework for the analysis of fixation locations and patterns for researchers interested in eye movements. In our exposition we analyzed a low-level saliency model by @IttiKoch:ComputationalModellingVisualAttention, and we were able to show that—although the model had predictive value on average—it had varying usefulness from one image to another. We believe that the consequences of this problem for prediction are under-appreciated: as we stated in Section \[sub:Including-a-spatial-bias\], when trying to predict fixations over an arbitrary image, this variability in quality of the predictor leads to predictions that are necessarily vague. Although insights like this one could be arrived at starting from other viewpoints, they arise very naturally from the spatial point process framework presented here. Indeed, older methods of analysis can be seen as approximations to point process model, as we shall see below. Owing to the tutorial nature of the material, there are some important issues we have so far set swept under the proverbial rug. The first is the choice of log-additive decompositions: are there other options, and should one use them? The second issue is that of causality [@Henderson:HumGazeControlRealWorldScenePerception]: can our methods say anything about what drives eye movements? Finally, we also need to point out that the scope of point process theory is more extensive than what we have been able to explore in this article, and the last part of the discussion will be devoted to other aspects of the theory that could be of interest to eye movement researchers. Point processes versus other methods of analysis ------------------------------------------------ In Section \[sub:FixLocationLocalProp\], we described various methods that have been used in the analysis of fixation locations. Many treat the problem of determining links between image covariates and fixation locations as a patch classification problem: one tries to tell from the contents of an image patch whether it was fixated or not. In the appendix (Section \[sub:The-patch-classification-problem\]), we show that patch classification has strong ties to point process models, and under some specific forms can be seen as an approximation to point process modelling. In a nutshell, if one uses logistic regression to discriminate between fixated and non-fixated patches, then one is effectively modelling an intensity function. This fact is somewhat obscured by the way the problem is usually framed, but comes through in a formal analysis a bit too long to be detailed here. This result has strong implications for the logic of patch classification methods, especially regarding the selection of control patches, and we encourage interested readers to take a look at Section \[sub:The-patch-classification-problem\]. Point process theory also allows for a rigorous examination of earlier methods. 
We take a look in the appendix at AROC values and area counts, two metrics that have been used often in assessing models of fixations. We ask for instance what the ideal model would be, according to the AROC metric, if fixations come from a point process with a certain intensity function. The answer turns out to depend on how the control patches are generated, which is rather crucial to the correct interpretation of the AROC metric. This result and related ones are proved in Section \[sub:ROC-and-Area-counts\]. We expect that more work will uncover more links between existing methods and the point process framework. One of the benefits of thinking in terms of the more general framework is that many results have already been proven, and many problems have already been solved. We strongly believe that the eye movement community will be able to benefit from the efforts of others who work on spatial data. Decomposing the intensity function ---------------------------------- Throughout the article we have assumed a log-additive form for our models, writing the intensity function as $$\lambda\left(x,y\right)=\exp\left(\sum\alpha_{i}v_{i}(x,y)\right)\label{eq:log-add-decomp}$$ for a set of covariates $v_{1},\ldots,v_{n}$. This choice may seem arbitrary - for example, one could use $$\lambda\left(x,y\right)=\sum\alpha_{i}v_{i}(x,y)\label{eq:add-decomp}$$ a type of mixture model similar to those used in @Vincent:DoWeLookAtLights. Since $\lambda$ needs to be always positive, we would have to assume restrictions on the coefficients, but in principle this decomposition is just as valid. Both (\[eq:log-add-decomp\]) and (\[eq:add-decomp\]) are actually special cases of the following: $$\lambda\left(x,y\right)=\Phi\left(\sum\alpha_{i}v_{i}(x,y)\right)$$ for some function $\Phi$ (analoguous to the inverse link function in Generalised Linear Models, see [@McCullaghNelderGLMs]). In the case of Equation \[eq:log-add-decomp\] we have $\Phi\left(x\right)=\exp\left(x\right)$ and in the case of \[eq:add-decomp\] we have $\Phi\left(x\right)=x$. Other options are available, for instance @Park:ActiveLearningNeuralResponseFunc use the following function in the context of spike train analysis: $$\Phi\left(x\right)=\log(\exp\left(x\right)+1)$$ which approximates the exponential for small values of $x$ and the identity for large ones. Single-index models treat $\Phi$ as an unknown and attempt to estimate it from the data [@McCullaghNelderGLMs]. From a practical point of view the log-additive form we use is the most convenient, since it makes for a log-likelihood function that is easy to compute and optimise, and does not require restrictions on the space of parameters. From a theoretical perspective, the log-additive model is compatible with a view that sees the brain as combining multiple interest maps $v_{1},v_{2},...$ into a master map that forms the basis of eye movement guidance. The mixture model implies on the contrary that each saccade comes from a roll of dice in which one chooses the next fixation according to one of the $v_{i}$’s. Concretely speaking, if the different interest maps are given by, e.g. contrast and edges, then each saccade is either contrast-driven with a certain probability or on the contrary edges-driven. We do not know which model is the more realistic, but the question could be addressed in the future by fitting the models and comparing predictions. 
It could well be that the situation is actually even more complex, and that the data are best described by both linear and log-linear mixtures: this would be the case, for example, if occasional re-centering saccades are interspersed with saccades driven by an interest map. Causality --------- We need to stress that the kind of modelling we have done here does not address causality. The fact that fixation locations can be predicted from a certain spatial covariate does not imply that the spatial covariate causes points to appear. To take a concrete example, one can probably predict the world-wide concentration of polar bears from the quantities of ice-cream sold, but that does not imply that low demand for ice-cream causes polar bears to appear. The same caveat apply in spatial point process models as in regression modelling, see @GelmanHill:DataAnalysisUsingRegression. Regression modelling has a causal interpretation only under very restrictive assumptions. In the case of determining the causes of fixations in natural images, the issue may actually be a bit muddled, as different things could equally count as causing fixations. Let us go back to polar bears and assume that, while rather indifferent to ice cream, they are quite partial to seals. Thus, the presence of seals is likely to cause the appearance of polar bears. However, due to limitations inherent to the visual system of polar bears, they cannot tell between actual seals and giant brown slugs. The presence of giant brown slugs then also causes polar bears to appear. Both seals and giant brown slugs are valid causes of the presence of polar bears, in the counterfactual sense: no seal, no polar bear, no slug, no polar bear either. A more generic description at the algorithmic level is that polar bears are drawn to anything that is brown and has the right aspect ratio. At a functional level, polar bears are drawn to seals because that is what the polar bear visual system is designed to do. The same goes for saccadic targeting: an argument is sometimes made that fixated and non-fixated patches only differ in some of their low-level statistics because people target objects, and the presence of objects tend to cause these statistics to change [@Nuthmann:ObjectBasedAttentionalSelection]. While the idea that people want to look at objects is a good *functional* account, at the algorithmic level they may try to do so by targeting certain local statistics. There is no confounding in the usual sense, since both accounts are equally valid but address different questions: the first is algorithmic (how does the visual system choose saccade targets based on an image?) and the other one teleological (what is saccade targeting supposed to achieve?). Answering either of these questions is more of an experimental problem than one of data analysis, and we cannot—and do not want to—claim that point process modelling is able to provide anything new in this regard. Scope and limitations of point processes ---------------------------------------- Naturally, there is much we have left out, but at least we would like to raise some of the remaining issues. First, we have left the temporal dimension completely out of the picture. Nicely, adding a temporal dimension in point process models presents no conceptual difficulty; and we could extend the analyses presented here to see in detail whether, for example, low-level saliency predicts earlier fixations better than later ones. 
We refer the reader to @RodriguesDiggle:BayesEstPredSpatTempLGCoxProc and @ZammitMangion:PointProcModellingAfghanWarDiary for recent work in this direction. Second, in this work we have considered that a fixation is nothing more than a dot: it has spatial coordinates and nothing more. Of course, this is not true: a fixation lasted a certain time, during which particular fixational eye movements occured, etc. Studying fixation duration is an interesting topic in its own right, because how long one fixates might be tied to the cognitive processes at work in a task [@Nuthmann:CRISP]. There are strong indications that when reading, gaze lingers longer on parts of text that are harder to process. Among other things, the less frequent a word is, the longer subjects tend to fixate it [@Kliegl:TrackingTheMindDuringReading]. Saliency is naturally not a direct analogue of word frequency, but one might nonetheless wonder whether interesting locations are also fixated longer. We could take our data to be fixations coupled with their duration, and we would have what is known in the spatial statistics literature as a *marked point process*. Marked point processes could be of extreme importance to the analysis of eye movements, and we refer the reader to @Illian:HierarchicalSPPAnalysisPlantCommunity for some ideas on this issue. Third, another limitation we need to state is that the point process models we have described here do not deal very well with high measurement noise. We have assumed that what is measured is an actual fixation location, and not a noisy measurement of an actual measurement location. In addition, the presence of noise in the oculomotor system means that actual fixation location may not be the intended one, which of course adds an intractable source of noise to the measurements. Issues do arise when the scale of measurement error is larger than the typical scale at which spatial covariates change. Although there are theoretical solutions to this problem (involving mixture models), they are rather cumbersome from a computational point of view. An less elegant work-around is to blur the covariates at the scale of measurement error. Finally, representing the data as a set of locations may not always be the most appropriate way to think of the problem. In a visual search task for example, a potentially useful viewpoint would be to think of a sequence of fixations as covering a certain area of the stimulus. This calls for statistical models that address random shapes rather than just random point sets, an area known as stochastic geometry [@Stoyan:StochasticGeometryAndItsApplications], and in which point processes play a central role, too. Appendices ========== The messy details of spatial statistics, and how to get around them -------------------------------------------------------------------- The field of applied spatial statistics has evolved into a powerful toolbox for the analysis of eye movements. There are, however, two main hurdles in terms of accessibility. First, compared to eye-movement research, the more traditional application fields (ecology, forestry, epidemiology) have a rather separate set of problems. Consequently, textbooks (e.g., @Illian:StatAnalysisSpatPointPatterns), focus on non-Poisson processes, since corresponding problems often involve mutual interactions of points, e.g., how far trees are from one another and whether bisons are more likely to be in groups of three than all by themselves. 
Such questions have to do with the second-order properties of point processes, which express how points attract or repel one another. The formulation of point process models with non-trivial second-order properties, however, requires rather sophisticated mathematics, so that the application to eye-movement data is no longer straight-forward. Second, while the formal properties of point process models are well-known, practical use is hindered by computational difficulties. A very large part of the literature focuses on computational techniques (maximum likelihood or Bayesian) for fitting point process models. Much progress has been made recently (see, among others, [@HaranTierney:AutomatingMCMCSpatialModels], or [@Rue:INLA]). Since we these technical difficulties might not be of direct interest to most eye-movement researchers, we developed a toolkit for the R environment that attempts to mathematical details under the carpet. We build on one of the best techniques available (INLA) [@Rue:INLA] to provide a generic way to fit multiple point process models without worrying too much about the underlying mathematics. The toolkit and a manual in the form of the technical report has been made available for download on the first author’s webpage. Gaussian processes and Gauss-Markov processes\[sub:Intro-to-GPs\] ----------------------------------------------------------------- Gaussian Processes (GPs) and related methods are tremendously useful but not the easiest to explain. We will stay here at a conceptual level, computational details can be found in the monograph of @RasmussenWilliamsGP. Rather than directly state how we use GPs in our models, we start with a detour on non-parametric regression (see Figure \[fig:non-parametric-regression\]), which is were Gaussian processes are most natural. In non-parametric regression, given the (noisy) values of a function $f\left(x\right)$ measured at points $x_{1},\ldots,x_{n}$, we try to infer what the values of $f$ are at other points. *Interpolation* and *extrapolation* can be seen as special cases of non-parametric regression - ones where noise is negligible. The problem is non-parametric because we do not wish to assume that $f(x)$ has a known parametric form (for example, that $f$ is linear). For a statistical solution to the problem, we need a likelihood, and usually it is assumed that $y_{i}|x_{i}\sim\N\left(f\left(x_{i}\right),\sigma^{2}\right)$ which corresponds to observing the true value corrupted by Gaussian noise of variance $\sigma^{2}$. This is not enough, since there are uncountably many functions $f$ that have the same likelihood, namely all those that have the same value at the sampling points $x_{1},\ldots,x_{n}$ (Fig. \[fig:non-parametric-regression\]). Thus, we need to introduce some constraints. Parametric methods constrain $f$ to be in a certain class, and can be thought of as imposing “hard” constraints. Nonparametric methods such as GP regression impose *soft* constraints, by introducing an a priori probability on possible functions such that reasonable functions are favoured (Fig. \[fig:non-parametric-regression\] and \[fig:GPs-nonparam-reg\]). In a Bayesian framework, this works as follows. What we are interested in is the posterior probability of $f$ given the data, which is as usual given by $p(f|\mathbf{y})\propto p(\mathbf{y}|f)p(f)$. 
As we mentioned above, $p(\mathbf{y}|f)=\prod_{i=1}^{n}\N\left(y_{i}|f(x_{i}),\sigma^{2}\right)$ is equal for all functions that have the same values at the sampled points $x_{1},\ldots,x_{n}$, so what distinguishes them in the posterior is how likely they are a priori—which is, of course, provided by the prior distribution $p(f)$. How to formulate $p(f)$? We need a probability distribution that is defined over a space of functions. The idea of a process that generates random functions may not be as unfamiliar as it sounds: a Wiener process, for example, can be interpreted as generating random functions (Fig. \[fig:Samples-Wiener-Process\]). A Wiener process is a diffusion: it describes the random motion of a particle over time. To generate the output of a Wiener process, you start at time $t_{0}$ with a particle at position $z(t_{0})$, and for each infinitesimal time increment you move the particle by a random offset, so that over time you generate a “sample path” $z(t)$. This sample path might as well be seen as a function, just like the notation $z(t)$ indicates, so that each time one runs a Wiener process, one obtains a different function. This distribution will probably not have the required properties for most applications, since samples from a Wiener process are much too noisy - the functions they trace out look very rough and jagged. The Wiener process is however a special case of a GP, and this more general family has some much more nicely-behaved members. A useful viewpoint on the Wiener process is given by how successive values depend on each other. Suppose we simulate many sample paths of the Wiener process, and each time measure the position at times $t_{a}$ and $t_{b}$, so that we have a collection of $m$ samples $\left\{ \left(z_{1}\left(t_{a}\right),z_{1}\left(t_{b}\right)\right),\ldots,\left(z_{m}\left(t_{a}\right),z_{m}\left(t_{b}\right)\right)\right\} $. It is clear that $z\left(t_{a}\right)$ and $z\left(t_{b}\right)$ are not independent: if $t_{a}$ and $t_{b}$ are close, then $z\left(t_{a}\right)$ and $z\left(t_{b}\right)$ will be close too. We can characterise this dependence using the covariance between these two values: the higher the covariance, the more likely $z\left(t_{a}\right)$ and $z\left(t_{b}\right)$ are to be close in value. Figure \[fig:Cov-Wiener-process\] illustrates this idea. If we could somehow specify a process such that the correlation between two function values at different places does not decay too fast with the distance between these two places, then presumably the process would generate rather smooth functions. This is exactly what can be achieved in the GP framework. The most important element of a GP is the covariance function $k(x,x')$[^9], which describes how the covariance between two function values depends on where the function is sampled: $k(x,x')=\mbox{Cov}\left(f(x),f(x')\right)$. We now have the necessary elements to define a GP formally. A GP with mean 0 and covariance function $k(x,x')$ is a distribution on the space of functions from some input space $\mathcal{X}$ into $\mathbb{R}$, such that for every finite set of points $\left\{ x_{1},\ldots,x_{n}\right\} $, the sampled values $f(x_{1}),\ldots,f(x_{n})$ are such that $$\begin{aligned} f(x_{1}),\ldots,f(x_{n}) & \sim & \N\left(0,\mathbf{K}\right)\\ \mathbf{K}_{ij} & = & k(x_{i},x_{j})\end{aligned}$$ In words, the sampled values have a multivariate Gaussian distribution with a covariance matrix given by the covariance function $k$.
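To make the definition concrete, here is a minimal Python sketch (our own illustration, not part of the R toolkit mentioned above; the grid, the data values and the hyperparameter settings are arbitrary choices) that draws functions from a zero-mean GP prior with a squared-exponential covariance and computes the posterior mean for a toy non-parametric regression problem.

```python
import numpy as np

def sq_exp_cov(x1, x2, nu=1.0, lam=10.0):
    """Squared-exponential covariance k(x, x') = nu * exp(-lam * (x - x')^2)."""
    return nu * np.exp(-lam * (x1[:, None] - x2[None, :]) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)                       # evaluation grid
K = sq_exp_cov(x, x) + 1e-10 * np.eye(x.size)        # small jitter for numerical stability

# Draw three functions from the prior: (f(x_1), ..., f(x_n)) ~ N(0, K).
prior_samples = rng.multivariate_normal(np.zeros(x.size), K, size=3)

# Non-parametric regression: posterior mean of f on the grid given noisy
# observations y_i = f(x_obs_i) + eps_i with noise variance sigma2.
x_obs = np.array([0.1, 0.3, 0.5, 0.8])               # hypothetical data
y_obs = np.array([0.2, 0.9, 0.1, -0.5])
sigma2 = 0.05 ** 2
K_oo = sq_exp_cov(x_obs, x_obs) + sigma2 * np.eye(x_obs.size)
post_mean = sq_exp_cov(x, x_obs) @ np.linalg.solve(K_oo, y_obs)
```

The posterior mean interpolates the observations where they are informative and relaxes back to the prior mean (zero) away from them, which is the "soft constraint" behaviour described above.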
This definition should be reminiscent of that of the IPP: here too we define a probability distribution over infinite-dimensional objects by constraining every finite-dimensional marginal to have the same form. A shorthand notation is to write that $$f\sim\mathcal{GP}\left(0,k\right)$$ and this is how we define our prior $p(f)$. Covariance functions are often chosen to be Gaussian in shape[^10] (sometimes called the “squared exponential” covariance function, to avoid giving Gauss too much credit): $$k(x,x')=\nu\exp\left(-\lambda\left(x-x'\right)^{2}\right)$$ It is important to be aware of the roles of the hyperparameters, here $\nu$ and $\lambda$. Since $k(x,x)=\mbox{Var}\left(f(x)\right)$, we see that $\nu$ controls the marginal variance of $f$. This gives the prior a scale: for example, if $\nu=1$, the variance of $f(x)$ is 1 for all $x$, and because $f(x)$ is normally distributed this implies that we do not expect $f$ to take values much larger than 3 in magnitude. $\lambda$ plays the important role of controlling how fast we expect $f\left(x\right)$ to vary: the greater $\lambda$ is, the faster the covariance decays. What this implies is that for very low values of $\lambda$ we expect $f$ to be locally almost constant, while for very large values we expect it to vary much faster (Fig. \[fig:GP-cov-function\]). In practice it is often better (when possible) not to set the hyperparameters to pre-specified values, but to infer them from the data as well (see [@RasmussenWilliamsGP] for details). One of the concrete difficulties of working with Gaussian Processes is the need to invert large covariance matrices when performing inference. Inverting a large, dense matrix is an expensive operation, and a lot of research has gone into finding ways of avoiding that step. One of the most promising approaches is to approximate the Gaussian Process such that the *inverse* covariance matrix (the precision matrix) is sparse, which leads to large computational savings. Gauss-Markov processes are a class of distributions with sparse inverse covariance matrices, and the reader may consult @RueHeld:GMRFTheoryApplications for an introduction.

![A non-parametric regression problem. We have measured some output $y$ for 10 different values of some covariate $x$. We plot these data as open circles. Our assumption is that $y=f(x)+\epsilon$, where $\epsilon$ is zero-mean noise. We wish to infer the underlying function $f$ from the data without assuming a parametric form for $f$. The two functions shown as dotted curves both have the same likelihood - they are equally close to the data. In most cases the function in red will be a far worse guess than the one in black. We need to inject that knowledge into our inference, and this can be done by imposing a prior on possible latent functions $f$, for example using a GP. \[fig:non-parametric-regression\]](Figures/example_nonparamreg)

Details on Inhomogeneous Poisson Processes\[sub:Details-on-IPPs\]
-----------------------------------------------------------------

We give below some details on the likelihood function for inhomogeneous Poisson processes (IPPs), as well as the techniques we used for performing Bayesian inference.
### The likelihood function of an IPP

An IPP is formally characterised as follows: given a spatial domain $\Omega$, e.g. here $\Omega=[0,1]^{2}$, and an intensity function $\lambda:\,\Omega\rightarrow\mathbb{R}^{\text{+}}$, an IPP is a probability distribution over finite subsets $S$ of $\Omega$ such that, for all subsets $\mathcal{D}\subseteq\Omega$, $$\left|S\cap\mathcal{D}\right|\sim Poi\left(\int_{\mathcal{D}}\lambda\left(x\right)\mbox{d}x\right)\label{eq:IPP-poisson-marginal-property}$$ $\left|S\cap\mathcal{D}\right|$ is short-hand for the cardinality of $S\cap\mathcal{D}$, the number of points sampled from the process that fall in region $\mathcal{D}$. Note that for an IPP, for disjoint subsets $\mathcal{D}_{1},\ldots,\mathcal{D}_{r}$ the distributions of $\left|S\cap\mathcal{D}_{1}\right|,\ldots,\left|S\cap\mathcal{D}_{r}\right|$ are independent[^11]. For purposes of Bayesian inference, we need to be able to compute the likelihood, which is the probability of the sampled point set $S$ viewed as a function of the intensity function $\lambda\left(\centerdot\right)$. We will try to motivate and summarize the necessary concepts without a rigorous mathematical derivation; interested readers should consult @Illian:StatAnalysisSpatPointPatterns for details. We note first that the likelihood can be approximated by *gridding* the data: we divide $\Omega$ into a discrete set of regions $\Omega_{1},\ldots,\Omega_{r}$, and count how many points in $S$ fell in each of these regions. The likelihood function for the gridded data is given directly by Equation \[eq:IPP-poisson-marginal-property\] along with the independence assumption: noting $k_{1},\ldots,k_{r}$ the bin counts we have $$\begin{aligned} p\left(k_{1},\ldots,k_{r}|\lambda\right) & = & \prod_{j=1\ldots r}\frac{\left(\lambda_{j}\right)^{k_{j}}}{k_{j}!}\exp\left(-\lambda_{j}\right)\label{eq:Poisson-counts}\\ \lambda_{j} & = & \int_{\Omega_{j}}\lambda\left(x\right)\mbox{d}x\nonumber \end{aligned}$$ Also, since $\Omega_{1},\ldots,\Omega_{r}$ is a partition of $\Omega$, $\prod\exp\left(-\lambda_{j}\right)=\exp(-\sum\lambda_{j})=\exp\left(-\int_{\Omega}\lambda\left(x\right)\mbox{d}x\right)$. As we make the grid finer and finer we should recover the true likelihood, because ultimately, if the grid is fine enough, we will for all intents and purposes have the true locations. As we increase the number of grid points $r$, and the area of each region $\Omega_{j}$ shrinks, two things will happen:

- the counts will be either 0 (in the vast majority of empty regions), or 1 (around the locations $s_{1},\ldots,s_{n}$ of the points in $S$).

- the integrals $\int_{\Omega_{j}}\lambda\left(x\right)\mbox{d}x$ will tend to $\lambda\left(x_{j}\right)\mbox{d}x$, with $x_{j}$ any point in region $\Omega_{j}$. In dimension 1, this corresponds to saying that the area under a curve can be approximated by height $\times$ length for small intervals.

Injecting this into Equation (\[eq:Poisson-counts\]), in the limit we have: $$p(S|\lambda\left(\cdot\right))=\frac{1}{n}\left\{ \prod_{i=1}^{n}\lambda\left(s_{i}\right)\mbox{d}x\right\} \exp\left(-\int_{\Omega}\lambda\left(x\right)\mbox{d}x\right)\label{eq:likelihood-function-IPP}$$ Since the factors $dx$ and $n^{-1}$ are independent of $\lambda$ we can neglect them in the likelihood function.
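As a concrete illustration (a minimal Python sketch of ours, with a made-up intensity function; this is not the code used for the analyses in the paper), one can simulate an IPP on the unit square using the thinning algorithm of @LewisShedler:SimulationNonhomogenPP mentioned in the footnotes, and evaluate the log of the likelihood (\[eq:likelihood-function-IPP\]), up to the constant factors just discussed, by approximating the integral with a Riemann sum over a fine grid.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = lambda x, y: np.exp(2.0 + 1.5 * np.cos(2 * np.pi * x))   # made-up intensity on [0,1]^2
lam_max = np.exp(3.5)                                           # an upper bound on lam

def sample_ipp_thinning(lam, lam_max, rng):
    """Lewis-Shedler thinning on the unit square: draw a homogeneous Poisson
    process with rate lam_max, keep each point (x, y) with prob. lam(x, y) / lam_max."""
    n_prop = rng.poisson(lam_max)                # number of candidate points (domain area is 1)
    pts = rng.random((n_prop, 2))
    keep = rng.random(n_prop) < lam(pts[:, 0], pts[:, 1]) / lam_max
    return pts[keep]

def ipp_loglik(points, lam, grid=200):
    """log p(S | lam) up to an additive constant: sum_i log lam(s_i) - integral of lam,
    with the integral approximated by a Riemann sum over grid x grid cells."""
    xs = (np.arange(grid) + 0.5) / grid
    X, Y = np.meshgrid(xs, xs)
    integral = lam(X, Y).mean()                  # cell average times total area (= 1)
    return np.sum(np.log(lam(points[:, 0], points[:, 1]))) - integral

S = sample_ipp_thinning(lam, lam_max, rng)
print(len(S), ipp_loglik(S, lam))
```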
### Conditioning on the number of datapoints, computing predictive distributions\[sub:Conditioning-on-n\]

The Poisson process has a remarkable property [@Illian:StatAnalysisSpatPointPatterns]: conditional on sampling $n$ points from a Poisson process with intensity function $\lambda\left(x,y\right)$, these $n$ points are distributed independently with density $$\bar{\lambda}\left(x,y\right)=\frac{\lambda\left(x,y\right)}{\int\lambda\left(x,y\right)\mbox{d}x\mbox{d}y}$$ Intuitively speaking, this means that if you know the point process produced one point, then this point is more likely to be where the intensity is high. This property is the direct analogue of its better known discrete variant: if $z_{1},z_{2},\ldots,z_{n}$ are independently Poisson distributed with means $\lambda_{1},\ldots,\lambda_{n}$, then their joint distribution conditional on their sum $\sum z_{i}$ is multinomial with probabilities $\pi_{i}=\frac{\lambda_{i}}{\sum\lambda_{j}}$. Indeed, the continuous case can be seen as the limit case of the discrete case. We bring up this point because it has an important consequence for prediction. If the task is to predict where the next 100 points are going to be, then the relevant predictive distribution is: $$p(S|S\mbox{ has size }n)=\prod_{i=1}^{n}\frac{\lambda\left(x_{i},y_{i}\right)}{\int\lambda\left(x,y\right)\mbox{d}x\mbox{d}y}\label{eq:distribution-n-known}$$ where $S$ is a point set of size $n$, whose points have $x$ coordinates $x_{1},\ldots,x_{n}$ and $y$ coordinates $y_{1},\ldots,y_{n}$. Equation \[eq:distribution-n-known\] is the right density to use when evaluating the predictive abilities of the model with $n$ known (for example if one wants to compute the predictive deviance). In the main text we had models of the form: $$\log\lambda_{i}\left(x,y\right)=\eta_{i}\left(x,y\right)=\alpha_{i}+\beta_{i}m_{i}(x,y)$$ and we saw that when predicting data for a new image $j$ we do not know the values of $\alpha_{j}$ and $\beta_{j}$, and need to average over them. The good news is that when $n$ is known we need not worry about the intercept $\alpha_{j}$: all values of $\alpha_{j}$ lead to the same predictive distribution, because $\alpha_{j}$ disappears in the normalisation in Equation \[eq:distribution-n-known\]. Given a distribution $p(\beta)$ for possible slopes, the predictive distribution is given by: $$\begin{aligned} p(S_{j}|S_{j}\mbox{ has size }n) & = & \int p(\beta_{j})\prod_{i=1}^{n}\frac{\exp\left(\beta_{j}m_{j}(x_{i},y_{i})\right)}{\int\exp\left(\beta_{j}m_{j}(x,y)\right)\mbox{d}x\mbox{d}y}\mbox{d}\beta_{j}\end{aligned}$$ It is important to realise that the distribution above does *not* factorise over points, unlike (\[eq:distribution-n-known\]) above. Computation requires numerical or Monte Carlo integration over $\beta$ (as far as we know).

Approximations to the likelihood function
-----------------------------------------

One difficulty immediately arises when considering Equation (\[eq:likelihood-function-IPP\]): we require the integral $\int_{\Omega}\lambda\left(x\right)\mbox{d}x$. While this is not a problem when $\lambda\left(\cdot\right)$ has some convenient closed form, in the cases we are interested in $\lambda(x)=\exp\left(\eta\left(x\right)\right)$, with $\eta\left(\cdot\right)$ a GP sample. The integral is therefore not analytically tractable. A more fundamental difficulty is that the posterior distribution $p(\lambda\left(\cdot\right)|S)$ is over an infinite-dimensional space of functions - how are we to represent it?
All solutions use some form of discretisation. A classical solution is to use the approximate likelihood obtained by binning the data (Eq. \[eq:Poisson-counts\]), which is an ordinary Poisson count likelihood. The bin intensities $\lambda_{j}=\int_{\Omega_{j}}\lambda\left(x\right)\mbox{d}x$ are approximated by assuming that bin area is small relative to the variation in $\lambda\left(\cdot\right)$, so that: $$\lambda_{j}=\lambda\left(x_{j}\right)\left|\Omega_{j}\right|$$ with $|\Omega_{j}|$ the area of bin $\Omega_{j}$ and $x_{j}$ its centre. The approximate likelihood then only depends on the value of $\lambda\left(\cdot\right)$ at bin centres, so that we can now represent the posterior as the finite-dimensional distribution $p(\lambda_{1}\ldots\lambda_{r}|S)$. In practice we rather target $p(\eta_{1}\ldots\eta_{r}|S)$, for which the prior distribution is given by $$\eta(x_{1}),\ldots,\eta(x_{r})\sim\N\left(0,\mathbf{K}_{\theta}\right)$$ Here $\mathbf{K}_{\theta}$ is the covariance matrix corresponding to the covariance function $k_{\theta}\left(\cdot,\cdot\right)$, and $\theta$ represents hyperparameters (e.g., marginal variance and length-scale of the process). A disadvantage of the binning approach is that fine gridding in 2D requires many, many bins, which means that good spatial resolution requires dealing with very large covariance (or precision) matrices, slowing down inference. Another solution, due to @BermanTurner:ApproxPointProcessLikelihoods, again uses the values of $\eta\left(\cdot\right)$ sampled at $r$ grid points, but approximates directly the original likelihood (\[eq:likelihood-function-IPP\]). The troublesome integral $\int_{\Omega}\lambda\left(x\right)\mbox{d}x$ is dealt with using simple numerical quadrature: $$\int_{\Omega}\lambda\left(x\right)\mbox{d}x\approx\sum w_{j}\exp\left(\eta\left(x_{j}\right)\right)$$ where the $w_{j}$’s are quadrature weights. The values $\lambda\left(s_{i}\right)$ at the sampled points are interpolated from the known values at the grid points: $$\lambda\left(s_{i}\right)=\exp\left(\sum_{j=1\ldots r}a_{ij}\eta\left(x_{j}\right)\right)$$ where the $a_{ij}$ are interpolation weights. Injecting into (\[eq:likelihood-function-IPP\]) we have the approximate log-likelihood function: $$\mathcal{L}\left(\eta\right)=\sum_{i,j}a_{ij}\eta\left(x_{j}\right)-\sum w_{j}\exp\left(\eta\left(x_{j}\right)\right)\label{eq:approx-IPP-likelihood-interp}$$ This log-likelihood function is compatible with the INLA framework for inference in latent Gaussian models (see [@Rue:INLA] and the website www.r-inla.org). The same log-likelihood function can also be used in the classical maximum-likelihood framework. Approximate confidence intervals and p-values for coefficients can be obtained from nested model comparisons, or using resampling techniques (these techniques are explained in most statistics textbooks, including [@Wasserman:AllOfStats]).

Relating patch statistics to point process theory\[sub:Relating-patch-statistics-to-PP-theory\]
-----------------------------------------------------------------------------------------------

As noted above, much of the previous work in the literature has focused on the statistics of image patches as a way of characterising the role of local image features in saliency. In this section we will show that these analyses can be related to point process modelling, leading to new insights. First, the patch classification task approximates IPP models under certain assumptions.
Second, the common practice of measuring performance by the area under the ROC curve can be grounded in point process assumptions. Third, the alternative procedure, which consists in measuring performance by the proportion of fixations in the most salient region, is in fact just a variant of the ROC procedure when seen in the point process context.

### The patch classification problem as an approximation to point process modeling\[sub:The-patch-classification-problem\]

We show here that patch classification can be interpreted as an approximation to point process modeling. Although we focus on the techniques used in the eye movement literature, our analysis is very close in spirit to that of @Baddeley:SpatialLogisticRegAndChangeOfSupport on the spatial logistic regression methods used in Geographic Information Systems. Note that @Baddeley:SpatialLogisticRegAndChangeOfSupport contains in addition many interesting results not detailed here and a much more careful mathematical analysis. Performing a classical patch statistics analysis involves collecting, for a set of $n$ fixations, $n$ “positive” image patches around the fixation locations and $n$ “negative” image patches around $n$ control locations, and comparing the contents of the patches. To avoid certain technical issues to do with varying intercepts (which we discuss later), we suppose that all fixations come from the same image. The usual practice is to compute some summary statistics on each patch, for example its luminance and contrast, and we note $\mathbf{v}(x,y)\in\R^{d}$ the value of those summary statistics at location $x,y$. We will see that $\mathbf{v}(x,y)$ plays the role of spatial covariates in point process models. We assume that the $n$ control locations are drawn uniformly from the image. The next step in the classical analysis is to compare the conditional distributions of $\mathbf{v}$ for positive and negative patches, which we will denote $p(\mathbf{v}|s=1)$ and $p(\mathbf{v}|s=0)$. If these conditional distributions are different, then they afford us some way to distinguish between selected and non-selected locations based on the contents of the patch. Equivalently, $p\left(s|\mathbf{v}\right)$, the probability of the patch label given the covariates, is different from the base rate of 50% for certain values of $\mathbf{v}$. Patch classification is concerned with modelling $p(s|\mathbf{v})$, a classical machine learning problem that can be tackled via logistic regression or a support vector machine, as in @Kienzle:CenterSurroundPatternsOptimalPredictors. Under certain conditions, statistical analysis of the patch classification problem can be related to point process modelling. Patch classification can be related to *spatial logistic regression*, which in turn can be shown to be an approximation of the IPP model [@Baddeley:SpatialLogisticRegAndChangeOfSupport]. We give here a simple proof sketch that deliberately ignores certain technical difficulties associated with the smoothness (or lack thereof) of $\mathbf{v}\left(x,y\right)$. In spatial logistic regression, a set of fixations is turned into binary data by dividing the domain into a large set of pixels $a_{1}\ldots a_{r}$ and defining a binary vector $z_{1}\ldots z_{r}$ such that $z_{i}=1$ if pixel $a_{i}$ contains a fixation and $z_{i}=0$ otherwise.
The second step is to regress these binary data onto the spatial covariates using logistic regression, which implies the following statistical model: $$\begin{aligned} p\left(\mathbf{z}|\bb\right) & =\prod_{i=1}^{r}\Lambda\left(\bb^{t}\mathbf{v}_{i}+\beta_{0}\right)^{z_{i}}\left(1-\Lambda\left(\bb^{t}\mathbf{v}_{i}+\beta_{0}\right)\right)^{1-z_{i}}\label{eq:spatial-logistic-model}\\ \Lambda\left(\eta\right) & =\frac{1}{1+e^{-\eta}}\label{eq:logistic-function}\end{aligned}$$ Equation \[eq:logistic-function\] is just the logistic function, and the value $\mathbf{v}_{i}$ of the covariates at the $i-$th pixel can be either the average value over the pixel or the value at the center of the pixel. By the Poisson limit theorem, the independent Bernoulli likelihood of Equation \[eq:spatial-logistic-model\] becomes that of a Poisson process as pixel size tends to 0. In this limit the probability of observing any individual $z_{i}=1$ will be quite small, so that $\eta_{i}=\bb^{t}\mathbf{v}_{i}+\beta_{0}$ will be well under 0 around the peak of the likelihood. For small values of $\eta$, $\Lambda\left(\eta\right)\approx\exp\left(\eta\right)$, so that: $$p\left(\mathbf{z}|\bb\right)\approx\prod_{i=1}^{r}\exp\left(\bb^{t}\mathbf{v}_{i}+\beta_{0}\right)^{z_{i}}\left(1-\exp\left(\bb^{t}\mathbf{v}_{i}+\beta_{0}\right)\right)^{1-z_{i}}$$ We take the log of $p\left(\mathbf{z}|\bb\right)$ and split the indices of the pixels according to the value of $z_{i}$: we note $\mathcal{I}^{+}$ the set of selected pixels (pixels such that $z_{i}=1$), $\mathcal{I}^{-}$ its complement. This yields: $$\log\, p\left(\mathbf{z}|\bb\right)\approx\sum_{\mathcal{I}^{+}}\left\{ \bb^{t}\mathbf{v}_{i}+\beta_{0}\right\} +\sum_{\mathcal{I}^{-}}\left\{ \log\left(1-\exp\left(\bb^{t}\mathbf{v}_{i}+\beta_{0}\right)\right)\right\}$$ We again make use of the fact that Bernoulli probabilities will be small, which implies that $1-\exp\left(\bb^{t}\mathbf{v}_{i}+\beta_{0}\right)$ will be close to 1. Since $\log\left(x\right)\approx x-1$ for $x\approx1$, the approximation can be further simplified to: $$\log\, p\left(\mathbf{z}|\bb\right)\approx\sum_{\mathcal{I}^{+}}\left\{ \bb^{t}\mathbf{v}_{i}+\beta_{0}\right\} -\sum_{\mathcal{I}^{-}}\exp\left(\bb^{t}\mathbf{v}_{i}+\beta_{0}\right)$$ If the pixel grid is fine enough the second part of the sum will cover almost every point, and will therefore be proportional to the integral of $\exp\left(\bb^{t}\mathbf{v}\left(x,y\right)+\beta_{0}\right)$ over the domain. This shows that the approximate likelihood given in Equation (\[eq:spatial-logistic-model\]) tends to the likelihood of an IPP (Eq. \[eq:likelihood-function-IPP\]) with log intensity function $\log\lambda\left(x,y\right)=\bb^{t}\mathbf{v}\left(x,y\right)+\beta_{0}$, which is exactly the kind used in this manuscript. This establishes the small-pixel equivalence of spatial logistic regression and IPP modeling. It remains to show that spatial logistic regression and patch classification are in some cases equivalent. In the case described above, one has data for one image only, collects $n$ patches at random as examples of non-fixated locations, and then performs logistic regression on the patch labels based on the patch statistics $\mathbf{v}_{i}$. This is essentially the same practice often used in spatial logistic regression, where people simply throw out some of the (overabundant) negative examples at random. Throwing out negative examples leads to a loss in efficiency, as shown in @Baddeley:SpatialLogisticRegAndChangeOfSupport.
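The small-pixel equivalence can be checked numerically. The Python sketch below is our own illustration (the covariate map, parameter values and function names are made up, and this is not the procedure used in the manuscript): it simulates per-pixel fixation counts from a log-linear intensity, then fits the binary spatial logistic regression of Equation (\[eq:spatial-logistic-model\]) and the corresponding pixel-level Poisson (IPP) likelihood. On a fine grid the two slope estimates nearly coincide, while the intercepts differ by the log of the pixel area.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
grid = 200
a = 1.0 / grid ** 2                                   # pixel area
v = rng.normal(size=(grid, grid))                     # hypothetical covariate map v(x, y)
eta_true = 5.0 + 0.8 * v                              # log-intensity per unit area
counts = rng.poisson(a * np.exp(eta_true))            # fixations falling in each pixel
z = (counts > 0).astype(float).ravel()                # binary labels: pixel contains a fixation
vv, cc = v.ravel(), counts.ravel().astype(float)

def nll_logistic(p):                                  # negative log-likelihood of Eq. (spatial-logistic-model)
    eta = np.clip(p[0] + p[1] * vv, -30, 30)
    return np.sum(np.log1p(np.exp(eta)) - z * eta)

def nll_poisson(p):                                   # pixel-level Poisson / IPP negative log-likelihood
    eta = np.clip(p[0] + p[1] * vv, -30, 30)
    return np.sum(a * np.exp(eta) - cc * eta)

b_logistic = minimize(nll_logistic, x0=np.zeros(2)).x
b_poisson = minimize(nll_poisson, x0=np.zeros(2)).x
print(b_logistic[1], b_poisson[1])                    # the two fitted slopes agree closely
```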
It is interesting to note that under the assumption that fixations are generated from an IPP with log-intensity $\log\lambda\left(x,y\right)=\bb^{t}\mathbf{v}\left(x,y\right)+\beta_{0}$, giving us $n$ positive examples, and assuming that the $n$ negative examples are generated uniformly, the logistic likelihood becomes exact (this is a variant of lemma 12 in [@Baddeley:SpatialLogisticRegAndChangeOfSupport]). The log odds-ratio that a point at location $x,y$ was one of the original fixations is simply: $$\log\,\frac{p(z=1|x,y)}{p(z=0|x,y)}=\log\,\frac{\lambda\left(x,y\right)}{A^{-1}}=\bb^{t}\mathbf{v}\left(x,y\right)+\beta_{0}+\log\left(A\right)\label{eq:logistic-regression-exact}$$ Here $A$ is the total area of the domain. Equation \[eq:logistic-regression-exact\] shows in another way the close relationship between patch classification and spatial point process models: patch classification using logistic regression is a correct (but inefficient) model under IPP assumptions. In actual practice patch classification involves a) combining patches from multiple images and b) possibly classification techniques other than logistic regression. The first issue may be problematic, since as we have shown above the coefficients relating covariates to intensity, and certainly intercept values, may vary substantially from one image to another. This can be fixed by changing the covariate matrix appropriately in the logistic regression. The second issue is that, rather than logistic regression, other classification methods may be used: does the point process interpretation remain valid? If one uses support vector machines, then the answer is yes, at least in the limit of large datasets (SVM and logistic regression are asymptotically equivalent, see [@Steinwart:ConsistencySVMAndOtherRegKernelClass]): the coefficients will differ only in magnitude. The same holds for probit, complementary log-log or even least-squares regression: the correct coefficients will be asymptotically recovered up to a scaling factor. In actual practice, the difference between classification methods is often small and all of them may equally be thought of as approximating point process modeling.

### ROC performance measures, area counts, and minimum volume sets\[sub:ROC-and-Area-counts\]

Many authors have used the area under the ROC curve as a performance measure for patch classification. Another technique, used for example in @Torralba:ContextualGuidanceEyeMovements, is to take the image region with the 20% most salient pixels and count the proportion of fixations that occurred in this region (a proportion over 20% is counted as above-chance performance). We show below that the two procedures are related to each other and to point process models. The notion of minimum volume regions will be needed repeatedly in the development below, and we therefore begin with some background material.

#### Minimum-volume sets

A minimum-volume set with confidence level $\alpha$ is the smallest region that will contain a point with probability at least $\alpha$ (it extends the notion of a confidence interval to arbitrary domains, see Fig. \[fig:Minimum-volume-sets\]).
Formally, given a probability density $\pi(s)$ over some domain, the minimum volume set $F_{\alpha}$ is such that: $$\begin{aligned} F_{\alpha} & =\underset{F\in\mathcal{M}\left(\Omega\right)}{\mbox{argmin}}\mbox{Vol}\left(F\right)\label{eq:minimum-volume-set}\\ \mbox{subject to} & \int_{F}\pi\geq\alpha\nonumber \end{aligned}$$ where the minimisation is over all measurable subsets of the domain $\Omega$ and $\mbox{Vol}\left(F\right)=\int_{F}1$ is the total volume of set $F$. Intuitively speaking, the smaller the minimum-volume sets of $\pi$, the less uncertainty there is: if 99% of the probability is concentrated in just 20% of the domain, then $\pi$ is in some sense 5 times more concentrated than the uniform distribution over $\Omega$. In the case of Gaussian distributions the minimum volume sets are ellipses, but for arbitrary densities they may have arbitrary shapes. In @NunezGarcia:LevelSetsMinimumVolumeSetsPDF it is shown that the family of minimum volume sets of $\pi$ is equal to the family of contour sets of $\pi$: that is, every set $F_{\alpha}$ is equal to a set $\left\{ s\in\Omega|\pi\left(s\right)\geq\pi_{0}\right\} $ for some value $\pi_{0}$ that depends on $\alpha$[^12]. We can therefore measure the amount of uncertainty in a distribution by looking at the volume of its contour sets, a notion which will arise below when we look at optimal ROC performance.

#### ROC measures

In ROC-based performance measures, the output of a saliency model $m(x,y)$ is used to classify patches as fixated or not, and classification performance is measured using the area under the ROC curve. The ROC curve is computed by varying a criterion $\xi$ and counting the rate of False Alarms and Hits that result from classifying a patch as fixated. In this section we relate this performance measure to point process theory and show that:

1. If non-fixated patches are sampled from a homogeneous PP, and fixated patches from an IPP with intensity $\lambda\left(x,y\right)$, then the optimal saliency model in the sense of the AUC metric is $\lambda(x,y)$ (or any monotonic transformation thereof). In this case the AUC metric measures the precision of the IPP, i.e. how different the intensity function $\lambda\left(x,y\right)$ is from the uniform one.

2. If non-fixated patches are not sampled from a uniform distribution, but from some other PP with intensity $\varphi\left(x,y\right)$, then the optimal saliency model in the sense of the AUC metric is no longer $\lambda\left(x,y\right)$ but $\frac{\lambda\left(x,y\right)}{\varphi\left(x,y\right)}$. In other words, a saliency model could correctly predict fixation locations but perform sub-optimally according to the AUC metric if non-fixated locations are sampled from a non-uniform distribution (for example when non-fixated locations are taken from other pictures).

3. Since AUC performance is invariant to monotonic transformations of $m(x,y)$, it says nothing about how intensity scales with $m(x,y)$.

In the following we will simplify notation by noting spatial locations as $s=(x,y)$, and change our notation for functions accordingly ($m(s),\lambda(s)$, etc.). We assume that fixated patches are drawn from a PP with intensity $\lambda(s)$ and non-fixated patches from a PP with intensity $\varphi\left(s\right)$. In ROC analysis locations are examined independently from one another, so that all that matters are the normalised intensity functions (probability densities for single points, see Section \[sub:Conditioning-on-n\]).
Without loss of generality we assume that $\lambda$ and $\varphi$ integrate to 1 over the domain. By analogy with psychophysics we define the task of deciding whether a single, random patch $s$ was drawn from $\lambda$ (the fixated distribution) or $\varphi$ (the non-fixated distribution) as the Yes/No task. Correspondingly, the 2AFC task is the following: two patches $s_{1},s_{2}$ are sampled at random, one from $\lambda$ and one from $\varphi$, and one must guess which of the two patches came from $\lambda$. The Y/N task is performed by comparing the “saliency” of the patch $m(s)$ to a criterion $\xi$. The 2AFC task is performed by comparing the relative saliency of $s_{1}$ and $s_{2}$: if $m(s_{1})>m(s_{2})$ the decision is that $s_{1}$ is the fixated location. We will use the fact that the area under the ROC curve is equal to 2AFC performance [@GreenSwets:SDT]. 2AFC performance can be computed as follows: we note $O=\left(1,0\right)$ the event corresponding to $s_{1}\sim\lambda$ and $s_{2}\sim\varphi$. The probability of a correct decision under the event $O=(1,0)$ is: $$p_{c}=\int_{\Omega}\lambda(s_{1})\left\{ \int_{\Omega}\varphi\left(s_{2}\right)\I\left(m\left(s_{1}\right)>m(s_{2})\right)\mbox{d}s_{2}\right\} \mbox{d}s_{1}\label{eq:2AFC-prob-correct}$$ where $\I\left(m\left(s_{1}\right)>m(s_{2})\right)$ is the indicator function of the event that $s_{1}$ has higher saliency than $s_{2}$ (according to $m$). Note that the $O=\left(0,1\right)$ event is exactly symmetrical, so that we do not need to consider both. We first consider the case where non-fixated locations are drawn uniformly ($\varphi\left(s\right)=V^{-1}$, where $V$ is the area or volume of the observation window), and ask what the optimal saliency map $m$ is - in the sense of giving maximal 2AFC performance and therefore maximal AUC. 2AFC can be viewed as a categorisation task over a space of stimulus pairs, where the two categories are $O=(1,0)$ and $O=(0,1)$ in our notation. The so-called “Bayes rule” is the optimal rule for categorisation [@DudaHartStork:PatternClassification], and in our case it takes the following form: answer $O=(1,0)$ if $p\left(O=(1,0)|s_{1},s_{2}\right)>p\left(O=(0,1)|s_{1},s_{2}\right)$. The prior probability $p\left(O=(0,1)\right)$ is $1/2$, so the decision rule only depends on the likelihood ratio: $$\frac{p\left(s_{1},s_{2}|O=(1,0)\right)}{p\left(s_{1},s_{2}|O=(0,1)\right)}=\frac{\lambda\left(s_{1}\right)\varphi\left(s_{2}\right)}{\lambda\left(s_{2}\right)\varphi\left(s_{1}\right)}=\frac{\lambda\left(s_{1}\right)}{\lambda\left(s_{2}\right)}\label{eq:2AFC-likelihood-ratio}$$ The optimal decision rule consists in comparing $\lambda(s_{2})$ to $\lambda\left(s_{1}\right)$, which is equivalent to using $\lambda$ as a saliency map (or any monotonic transformation of $\lambda$). The probability of a correct decision (\[eq:2AFC-prob-correct\]) is then: $$p_{c}=\int_{\Omega}\lambda(s_{1})\left\{ V^{-1}\int_{\Omega}\I\left(\lambda\left(s_{1}\right)\geq\lambda(s_{2})\right)\mbox{d}s_{2}\right\} \mbox{d}s_{1}\label{eq:2AFC-optimal-uniform}$$ The inner integral $\int_{\Omega}\I\left(\lambda\left(s_{1}\right)\geq\lambda(s_{2})\right)\mbox{d}s_{2}$ corresponds to the total volume of the set of all locations with lower density than the value at $s_{1}$, i.e. the complement of a contour set of $\lambda$ (which, as we have seen, is also a minimum volume set).
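Equation (\[eq:2AFC-prob-correct\]) is straightforward to evaluate by Monte Carlo. The short Python sketch below (our own illustration, with a made-up intensity function) estimates 2AFC performance when the saliency map is the true intensity $\lambda$ itself and when it is a mismatched map, illustrating that $\lambda$ (or any monotonic transformation of it) is optimal when the control locations are uniform.

```python
import numpy as np

rng = np.random.default_rng(3)
lam = lambda x, y: np.exp(1.5 * np.cos(2 * np.pi * x) + np.sin(2 * np.pi * y))  # toy intensity
lam_max = np.exp(2.5)                                                            # bound on lam

def sample_from_lambda(n):
    """Draw n points on [0,1]^2 with density proportional to lam (rejection sampling)."""
    pts = np.empty((0, 2))
    while len(pts) < n:
        cand = rng.random((4 * n, 2))
        keep = rng.random(4 * n) < lam(cand[:, 0], cand[:, 1]) / lam_max
        pts = np.vstack([pts, cand[keep]])
    return pts[:n]

def auc_2afc(m, n_pairs=100_000):
    """Monte Carlo estimate of Eq. (2AFC-prob-correct): s1 is a 'fixated' location
    drawn from lam, s2 a uniform control location, and we answer s1 if m(s1) > m(s2)."""
    s1 = sample_from_lambda(n_pairs)
    s2 = rng.random((n_pairs, 2))
    m1 = m(s1[:, 0], s1[:, 1])
    m2 = m(s2[:, 0], s2[:, 1])
    return np.mean(m1 > m2) + 0.5 * np.mean(m1 == m2)

print(auc_2afc(lam))                                   # optimal saliency map: m = lam
print(auc_2afc(lambda x, y: np.cos(2 * np.pi * y)))    # a mismatched map scores closer to 0.5
```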
The observation that the inner integral in (\[eq:2AFC-optimal-uniform\]) equals the volume of the complement of a contour set leads to another way of expressing the integral: intuitively, each location $s_{1}$ will be on the boundary of a minimum volume set $F_{\alpha}$, for some value of $\alpha$, and the inner integral will correspond to the volume of the complement of that set. We re-express the integral by grouping together all values of $s_{1}$ that lead to the same set $F_{\alpha}$, and hence the same value of the inner integral: these are the values of $s_{1}$ that fall along a density contour $\lambda\left(s_{1}\right)=\lambda_{\alpha}$. Suppose that we generate a random value of $s_{1}$ and note the $\alpha$ value of the contour set $s_{1}$ falls on: call this random variable $a$. By definition the event $s_{1}\in F_{\alpha}$ happens with probability $\alpha$, so that $p(a\leq\alpha)=\alpha$, and therefore $a$ has a uniform distribution over $[0,1]$ (it is in fact a p-value). We can express Equation (\[eq:2AFC-optimal-uniform\]) as an expectation over $a$: $$p_{c}=\int_{0}^{1}p(a=\alpha)\left(1-V\left(\alpha\right)\right)\mbox{d}\alpha=1-\int_{0}^{1}V\left(\alpha\right)\mbox{d}\alpha\label{eq:AUC-precision}$$ Here $V\left(\alpha\right)$ is the relative volume of the minimum-volume set with confidence level $\alpha$. $V(0.9)=0.2$ means that, for a location $s$ sampled from $\lambda$, the smallest region of space we can find that includes 90% of the observations takes up just 20% of the observation window. For a uniform distribution $V\left(\alpha\right)=\alpha$. Whatever the density $\lambda$, $V\left(0\right)=0$, and if small regions include most of the probability mass we are going to see a slow increase in $V\left(\alpha\right)$ as $\alpha$ rises. This will lead in turn to a low value for the integral $\int_{0}^{1}V\left(\alpha\right)\mbox{d}\alpha$. Having small regions that contain most of the probability mass is the same as having a concentrated or precise point process, and therefore Equation (\[eq:AUC-precision\]) shows that under the optimal decision rule the AUC value measures the precision of the point process. We now turn to the case where non-fixated locations are not taken from the uniform distribution: for example, when those locations are randomly drawn from fixations observed on other pictures. The optimal rule is again Bayes’ rule, Equation \[eq:2AFC-likelihood-ratio\], which is equivalent to using $m(s)=\lambda(s)/\varphi\left(s\right)$ as a saliency map. Under an assumption of center bias this may artificially inflate the AUC value of saliency maps which have low values around the center. This problem can be remedied by computing an estimate of the intensity $\varphi$ and using $m(s)/\varphi(s)$ (or more conveniently $\log m(s)-\log\varphi\left(s\right)$) instead of $m(s)$ when computing AUC scores. ![Minimum volume sets. We show in the upper panel two density functions (A) and (B) along with minimal volume sets with $\alpha=0.8$: these are the smallest areas containing 80% of the probability, and correspond to contours of the density (see text). The $\alpha=0.8$ area is much larger in A than in B, which reflects higher uncertainty. In the lower panel, the area of the minimal volume set of level $\alpha$ is represented as a function of the confidence level $\alpha$. The integral of this function is shown in the text to equal optimal ROC performance in the patch classification problem, and reflects the underlying uncertainty in the point process.
\[fig:Minimum-volume-sets\]](Figures/illustrate_min_volum)

#### Area counts

In area counts the goal is to have a small region that contains as many fixations as possible. Given a discrete saliency map, we can build a region that contains the 20% most salient pixels, and count the number of fixations that occurred there. In this section we show that if fixations come from a point process with intensity $\lambda$, the optimal saliency map is again $\lambda$ (up to arbitrary monotonic transformations). In that context, we show further that if we remove the arbitrariness of setting a criterion at 20%, and integrate over criterion position, we recover exactly the AUC performance measure - the two become equivalent. Let us define the following optimisation problem: given a probability density $\pi$, we seek a measurable set $G$ such that $$\begin{aligned} G_{q} & =\underset{F\in\mathcal{M}\left(\Omega\right)}{\mbox{argmax}}\int_{F}\pi\label{eq:maximum-probability-set}\\ \mbox{s.t}\, & V\left(F\right)=q\nonumber \end{aligned}$$ that is, among all measurable subsets of $\Omega$ of relative volume $q$, we seek the one that has maximum probability under $\pi$. These maximum-probability sets and the minimum-volume sets defined above are related: indeed the optimisation problems that define them (\[eq:minimum-volume-set\] and \[eq:maximum-probability-set\]) are dual. This follows from writing down the Lagrangian of \[eq:maximum-probability-set\]: $$\mathcal{L}\left(F,\eta\right)=\int_{F}\pi+\eta\left(V\left(F\right)-q\right)$$ which is equivalent to that of (\[eq:minimum-volume-set\]) for some value of $\eta$. This result implies that the families of solutions of the two problems are the same: a maximum-probability set for some volume $q$ is a minimum-volume set for some confidence level $\alpha$. Since we know that the family of contour sets is the family of solutions of the minimum-volume problem, it follows that it is also the family of solutions of the maximum-probability problem. In turn, this implies that if fixations come from an IPP with intensity $\lambda$, the optimal saliency map according to the area count metric must have its top 20% of values in the same locations as $\lambda$. To remove the arbitrariness associated with the criterion, we measure the total probability in $G_{q}$ for each value of $q$ between 0 and 1 and integrate: $$A_{c}=\int_{0}^{1}\left(\int_{G_{q}}\lambda\left(s\right)\mbox{d}s\right)\mbox{d}q=\int_{0}^{1}\mbox{Prob}\left(q\right)\mbox{d}q$$ The integrand $\mbox{Prob}\left(q\right)$ measures the probability contained within the maximum-probability set of size $q$: because of the equivalence of maximum-probability and minimum-volume sets, $\mbox{Prob}(q)$ is the inverse of the function $V\left(\alpha\right)$, which measures the relative size of the minimum-volume set with confidence level $\alpha$. Therefore $A_{c}=1-\int_{0}^{1}V\left(\alpha\right)\mbox{d}\alpha$, which is exactly the optimal AUC performance under uniform samples, as shown in the previous section. Area counts and ROC performance are therefore tightly related.

The authors would like to thank Torsten Betz for insightful comments on the manuscript. This work was funded, in part, by the German Federal Ministry of Education and Research (BMBF) through the Bernstein Computational Neuroscience Programs FKZ 01GQ1001F (Ralf Engbert, Potsdam), FKZ 01GQ1001B (Felix Wichmann, Berlin) and FKZ 01GQ1002 (Felix Wichmann, Tübingen).
[^1]: Our eyes are never perfectly still and miniature eye movements (microsaccades, drift, tremor) can be observed during fixations [@CiuffredaTannen:EyeMovementBasics].

[^2]: The actual duration was sampled from a Gaussian distribution $\N\left(2,0.5^{2}\right)$ truncated at 1 sec.

[^3]: $\beta_{i}$ should not be interpreted as anything more than a rough measure of performance. It has a relatively subtle potential flaw: if the Itti-Koch map for an image happens by chance to match the typical spatial bias, then $\beta_{i}$ will likely be estimated to be above 0. This flaw is corrected when a spatial bias term is introduced, see Section \[sub:Including-a-spatial-bias\].

[^4]: This may not necessarily be an intrinsic flaw of the model: it might well be that in certain “boring” pictures, or pictures with very many high-contrast edges, people will fixate just about anywhere, so that even a perfect model—the true causal model in the head of the observers—would perform relatively badly.

[^5]: Simulation from an IPP can be done using the “thinning” algorithm of @LewisShedler:SimulationNonhomogenPP, which is a form of rejection sampling.

[^6]: Most often, instead of using a circular window, a Gaussian kernel will be used.

[^7]: For technical reasons Bayesian inference is easier when done on the log-intensity function $\eta(x,y)$, rather than on the intensity function, so we actually use the posterior mean and quantiles of $\eta(x,y)$ rather than those of $\lambda(x,y)$.

[^8]: There is a cleaner way of doing that, using multilevel/random effects modelling [@GelmanHill:DataAnalysisUsingRegression], but a discussion of these techniques would take us outside the scope of this work.

[^9]: A GP also needs a mean function, but here we will assume that the mean is uniformly 0. See @RasmussenWilliamsGP for details.

[^10]: For computational reasons we favour here the (also very common) Matern class of covariance functions, which leads to functions that are less smooth than with a squared exponential covariance.

[^11]: In other words, knowing how many fixations there were on the upper half of the screen should not tell you anything about how many there were in the lower half. This might be violated in practice but is not a central assumption for our purposes.

[^12]: This is no longer true in pathological cases where these sets are not uniquely defined, for example in the case of the uniform distribution, where minimum-volume sets may be constructed arbitrarily.
---
abstract: 'Two-dimensional quantum gravity, defined either via scaling limits of random discrete surfaces or via Liouville quantum gravity, is known to possess a geometry that is genuinely fractal with a Hausdorff dimension equal to $4$. Coupling gravity to a statistical system at criticality changes the fractal properties of the geometry in a way that depends on the central charge of the critical system. Establishing the dependence of the Hausdorff dimension on this central charge $c$ has been an important open problem in physics and mathematics in the past decades. All simulation data produced thus far has supported a formula put forward by Watabiki in the nineties. However, recent rigorous bounds on the Hausdorff dimension in Liouville quantum gravity show that Watabiki’s formula cannot be correct when $c$ approaches $-\infty$. Based on simulations of discrete surfaces encoded by random planar maps and a numerical implementation of Liouville quantum gravity, we obtain new finite-size scaling estimates of the Hausdorff dimension that are in clear contradiction with Watabiki’s formula for all simulated values of $c\in (-\infty,0)$. Instead, the most reliable data in the range $c\in [-12.5, 0)$ is in very good agreement with an alternative formula that was recently suggested by Ding and Gwynne. The estimates for $c\in(-\infty,-12.5)$ display a negative deviation from the latter formula, but the scaling is seen to be less accurate in this regime.'
address: '$^1$ Radboud University, Nijmegen, The Netherlands.'
author:
- Jerome Barkley and Timothy Budd$^1$
title: 'Precision measurements of Hausdorff dimensions in two-dimensional quantum gravity'
---

Introduction
============

The emergence of scale-invariance at criticality and the universality of the corresponding critical exponents is at the center of statistical physics, underlying for instance the self-similar features of spin clusters in the Ising model at critical temperature. In a purely gravitational theory, where the configuration of a system is encoded in the spacetime geometry, criticality and self-similarity may well be realized in an ultramicroscopic limit if the theory possesses a non-trivial ultraviolet fixed point. The presence of self-similarity generally means a departure from smooth (pseudo-)Riemannian geometry and forces one to reconsider the local structure of spacetime geometry. In particular, the dimension of spacetime is no longer a clear-cut notion, but depends on the practical definitions one employs and the fractal properties of the geometry to which these definitions are sensitive. Although fractal dimensions have been studied in many quantum gravity approaches (see e.g. [@Ambjorn2005; @Benedetti2009; @Reuter2011; @Calcagni2015; @Carlip2017]), putting such computations on a rigorous footing is difficult. Arguably the main reason for this is the lack of explicit examples of scale-invariant statistical or quantum-mechanical ensembles of geometries, a prerequisite for the existence of exact critical exponents and fractal dimensions. Two-dimensional Euclidean quantum gravity forms an important exception in that it provides a family of well-defined such ensembles as well as a variety of rigorous mathematical tools to study them. This makes it an ideal benchmark in the computational study of fractal dimensions in quantum gravity, albeit a toy model.
Generally speaking, two-dimensional quantum gravity, the problem of making sense of the path integral over metrics on a surface, can be approached from two directions, roughly categorized as the continuum Liouville quantum gravity approach and the lattice discretization approach. In the first case, one tries to make sense of the two-dimensional metric as a Weyl-transformation $\rmd s^2 = e^{\gamma \phi} \rmd\hat{s}^2$ of a fixed background metric $\rmd\hat{s}^2$, where the random field $\phi$ is governed by the Liouville conformal field theory with coupling $\gamma\in (0,2]$. In the latter case one considers random triangulations (or more general random planar maps) of increasing size in search of scale-invariant continuum limits. The famous KPZ relation [@Knizhnik1988] describing the gravitational dressing of conformal matter fields was derived in Liouville quantum gravity but shown to hold for lattice discretizations in many instances, providing a long list of exact critical exponents. These exponents, however, only indirectly witness the fractal properties of the metric. That fractal dimensions can differ considerably from the topological dimension was first demonstrated by Ambjørn and Watabiki [@Ambjorn1995a], where the volume of a geodesic ball of radius $r$ in a random triangulation was computed to scale as $r^4$ (later proven rigorously [@Chassaing2004; @Angel2003]), suggesting a Hausdorff dimension of $4$ for the universality class of two-dimensional quantum gravity in the absence of matter. The self-similar random metric space of this *Brownian* universality class was later established [@LeGall2013; @Miermont2013] in full generality as the continuum limit of random triangulations, and was recently shown [@Miller2016] to agree with Liouville quantum gravity at $\gamma=\sqrt{8/3}$. While many geometric properties of the Brownian universality class are known, much less can be said about the universality classes with $\gamma\neq\sqrt{8/3}$, which are supposed to describe scale-invariant random geometries in the presence of critical matter systems. Notably, the *spectral dimension*, a fractal dimension related to the diffusion of a Brownian particle, has been argued [@Ambjorn1998a] to equal $2$ for the full range of $\gamma\in(0,2)$, which has recently been proven rigorously [@Rhodes2014; @Gwynne2017]. The dependence of the Hausdorff dimension on the coupling $\gamma$, however, has been a wide open question since the nineties. A formula for the Hausdorff dimension was put forward by Watabiki [@Watabiki1993] based on a heuristic computation of heat kernels in Liouville quantum gravity. Written in terms of $\gamma$, which is related to the central charge $c$ via (\[eq:cgamma\]), it reads $$\label{eq:watabiki} d_{\gamma}^{\mathrm{W}} = 1+ \frac{\gamma^2}{4}+\sqrt{\left(1+\frac{\gamma^2}{4}\right)^2+\gamma^2}$$ and correctly predicts $d_{\sqrt{8/3}}^{\mathrm{W}}=4$ for the Brownian value and $d_{\gamma}^{\mathrm{W}}\to 2$ in the semi-classical limit $\gamma\to 0$. However, its main support came soon after from numerical simulations of spanning-tree-decorated triangulations, a model of random triangulations in the universality class corresponding to $\gamma=\sqrt{2}$. For this model the Hausdorff dimension was estimated at $d_{\sqrt{2}} = 3.58 \pm 0.04$ [@Ambjorn1995] (and $d_{\sqrt{2}} \approx 3.55$ in [@Kawamoto1992]), in good agreement with the prediction $d_{\sqrt{2}}^{\mathrm{W}} = \tfrac12(3+\sqrt{17})\approx 3.56$.
Since then all numerical estimates reported for these and other models are consistent with Watabiki’s prediction [@Ambjorn1995; @Anagnostopoulos1999; @Kawamoto2002; @Ambjorn2012a; @Ambjorn2013]. The most accurate estimates are summarized in Table \[tbl:previousresults\]. These values, as well as measurements from Liouville quantum gravity [@Ambjorn2014], are plotted in Figure \[fig:bounds\]b.

  Reference                $c$       $\gamma$        $d_\gamma$          $d_\gamma^{\mathrm{W}}$
  ------------------------ --------- --------------- ------------------- -------------------------
  [@Ambjorn2012a]          $-20$     $0.867\ldots$   $2.76\pm 0.07$      $2.6596\ldots$
  [@Anagnostopoulos1999]   $-5$      $1.236\ldots$   $3.36\pm 0.04$      $3.2360\ldots$
  [@Ambjorn2013]           $-2$      $\sqrt{2}$      $3.575\pm 0.003$    $3.5615\ldots$
  [@Ambjorn2013]           $1/2$     $\sqrt{3}$      $4.217\pm 0.006$    $4.2122\ldots$
  [@Ambjorn2013]           $4/5$     $\sqrt{10/3}$   $4.406\pm 0.007$    $4.4207\ldots$

  : \[tbl:previousresults\]Previous estimates of the Hausdorff dimension.

However, recent mathematical developments in Liouville quantum gravity have shown that (\[eq:watabiki\]) cannot be correct for small values of $\gamma$. Ding and Goswami [@Ding2018a] have proven a lower bound on the Hausdorff dimension $d_{\gamma}$ of the form $$\label{eq:boundsmallgamma} d_\gamma \geq 2 + C \frac{\gamma^{4/3}}{\log \gamma^{-1}}$$ for some unknown constant $C>0$ and sufficiently small $\gamma$, which is seen to be incompatible with $d_{\gamma}^{\mathrm{W}} = 2 + O(\gamma^2)$. How is it possible that an incorrect formula agrees so well with numerical data (in some cases at three-digit accuracy)? There are several conceivable explanations:

1. The fractal dimension we call the Hausdorff dimension in the case of triangulations measures something different compared to the one in Liouville quantum gravity.

2. The numerical estimate of the Hausdorff dimension in random triangulations is inaccurate.

3. The actual Hausdorff dimension is not given by (\[eq:watabiki\]) but is close enough in the tested regime to be compatible with the data.

The first option has been ruled out in a very precise sense in the works [@Ding2018; @Dubedat2019; @Gwynne2019a] (and references therein). In particular it is shown that Liouville quantum gravity possesses a unique realization as a scale-invariant random metric space for each value $\gamma\in(0,2)$. The Hausdorff dimension $d_\gamma$ of this space is a strictly increasing, continuous function of $\gamma$. Moreover, the Hausdorff dimension of various models of random triangulations, including spanning-tree-decorated triangulations and all other models considered in this paper, is shown to agree with this $d_\gamma$ (with the value of $\gamma$ depending on the universality class). This warrants us speaking about *the* Hausdorff dimension $d_\gamma$ without specifying the precise model.

![(a) The known bounds [@Gwynne2019; @Ding2018; @Ang2019] on the Hausdorff dimension $d_{\gamma}$ together with the predictions by Watabiki and Ding & Gwynne.
(b) A zoomed-in version shifted by Watabiki’s prediction together with numerical estimates: [@Ambjorn1995] in brown, [@Anagnostopoulos1999] in purple, [@Ambjorn2012a] in orange, [@Ambjorn2013] in black, [@Ambjorn2014] in green.\[fig:bounds\]](images/bounds){width="\linewidth"} Although only the value $d_{\sqrt{8/3}} = 4$ is known, rigorous bounds have recently been derived [@Gwynne2019; @Ding2018; @Ang2019]: $\underline{d}_\gamma \leq d_\gamma \leq \overline{d}_\gamma$ with $$\begin{aligned} \underline{d}_\gamma &= \begin{cases} 2+\frac{\gamma^2}{2} & 0 < \gamma \lesssim 0.576 \\ {\displaystyle\frac{12-\sqrt{6}\gamma+3\sqrt{10}\gamma+3\gamma^2}{4+\sqrt{15}}} & 0.576 \lesssim \gamma \leq \sqrt{8/3} \\ \frac{1}{3}\left(4+\gamma^2+\sqrt{16+2\gamma^2+\gamma^4}\right) & \gamma \geq \sqrt{8/3},\end{cases}\label{eq:dlower}\\ \overline{d}_\gamma &= \begin{cases} 2+\frac{\gamma^2}{2} + \sqrt{2}\gamma & 0 < \gamma \lesssim 0.460 \\ \frac{1}{3}\left(4+\gamma^2+\sqrt{16+2\gamma^2+\gamma^4}\right) & 0.460 \lesssim \gamma \leq \sqrt{8/3} \\ {\displaystyle\frac{12-\sqrt{6}\gamma+3\sqrt{10}\gamma+3\gamma^2}{4+\sqrt{15}}} & \gamma \geq \sqrt{8/3}.\end{cases}\label{eq:dupper}\end{aligned}$$ Contrary to (\[eq:boundsmallgamma\]), these bounds are still consistent with Watabiki’s formula, see Figure \[fig:bounds\]a. Note that for $\gamma=\sqrt{2}$ the bounds imply $d_{\sqrt{2}} \in (3.550,3.633) \subset d_{\sqrt{2}}^{\mathrm{W}} + (-0.012,0.072)$, so deviations from $d_{\sqrt{2}}^{\mathrm{W}}$ will only show up in the third digit. This lends credence to explanation (iii), but does make one wonder whether it is a coincidence that a relatively simple formula like (\[eq:watabiki\]) precisely fits the bounds (\[eq:dlower\]) and (\[eq:dupper\]). However, in [@Ding2018] Ding and Gwynne proposed an alternative, arguably even simpler, formula that satisfies all known bounds (\[eq:boundsmallgamma\]), (\[eq:dlower\]) and (\[eq:dupper\]), $$\label{eq:dinggwynne} d_\gamma^{\mathrm{DG}} = 2 + \frac{\gamma^2}{2} + \frac{\gamma}{\sqrt{6}}.$$ A quick glance at Figure \[fig:bounds\]b leads one to conclude that it fits the numerical data at least as well as Watabiki’s prediction does. The goal of the current paper is to produce new numerical data to differentiate between these predictions and hopefully rule out at least one of them. From Figure \[fig:bounds\] it is clear that our chances are better when looking at smaller values of $\gamma$, where $d_\gamma^{\mathrm{W}}$ and $d_\gamma^{\mathrm{DG}}$ differ more substantially and the window between the upper and lower bound is bigger. This is achieved by simulating a variety of random planar map models (Sections \[sec:planarmapmodels\] and \[sec:fssplanarmaps\]) as well as a discretized version of Liouville quantum gravity (Sections \[sec:lqg\] and \[sec:fsslqg\]). The first study focuses on four models of random planar maps: Schnyder-wood-decorated triangulations ($\gamma=1$), bipolar-oriented triangulations ($\gamma=\sqrt{4/3}$), spanning-tree-decorated quadrangulations ($\gamma=\sqrt{2}$) and uniform quadrangulations ($\gamma=\sqrt{8/3}$). These models, the first two of which have not been studied numerically before, are particularly convenient because they can be simulated very efficiently, allowing high statistics to be obtained for surfaces with millions of vertices. Moreover, their continuum limits in relation with Liouville quantum gravity are understood in considerable detail [@Kenyon2015; @Miller2016; @Gwynne2019b; @Li2017; @Ding2018].
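For orientation, the two candidate formulas (\[eq:watabiki\]) and (\[eq:dinggwynne\]) are easily tabulated at the couplings of the four models just listed. The short Python script below is an illustration of ours (not the simulation code of the paper); it also inverts the relation (\[eq:cgamma\]) between $c$ and $\gamma$ on the branch $\gamma\in(0,2]$.

```python
import numpy as np

def gamma_from_c(c):
    """Invert c = 25 - 6 (2/gamma + gamma/2)^2 on the branch gamma in (0, 2]."""
    Q = np.sqrt((25.0 - c) / 6.0)
    return Q - np.sqrt(Q ** 2 - 4.0)

def d_watabiki(g):
    return 1.0 + g ** 2 / 4.0 + np.sqrt((1.0 + g ** 2 / 4.0) ** 2 + g ** 2)

def d_ding_gwynne(g):
    return 2.0 + g ** 2 / 2.0 + g / np.sqrt(6.0)

# Central charges of the four planar map models (W), (B), (S), (U).
for c in [-12.5, -7.0, -2.0, 0.0]:
    g = gamma_from_c(c)
    print(f"c = {c:6.1f}   gamma = {g:.4f}   "
          f"d_W = {d_watabiki(g):.4f}   d_DG = {d_ding_gwynne(g):.4f}")
```

At $\gamma=\sqrt{8/3}$ both formulas reproduce the Brownian value $4$, while their difference grows as $\gamma$ decreases, in line with the discussion above.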
In the second half of the paper we perform a numerical investigation of Liouville quantum gravity on a regular lattice, allowing one in principle to tune $\gamma$ to any desired value.

#### Acknowledgments

We thank the Niels Bohr Institute, University of Copenhagen, for the use of their computing facilities.

Four statistical models of random planar maps {#sec:planarmapmodels}
=============================================

To describe the models we employ terminology that is customary in the mathematical literature on random surfaces. A *planar map* is a multi-graph, i.e. an unlabeled graph with multiple edges between pairs of vertices and loops allowed, together with an embedding in the 2-sphere, such that the edges are simple and disjoint except where they meet at vertices (see e.g. Figure \[fig:spanningtreealgorithm\](a)). Any two planar maps that can be continuously deformed into each other are considered identical. We always take planar maps to be *rooted*, meaning that they are equipped with a distinguished oriented edge called the *root edge*. In this way we ensure that a planar map does not have any non-trivial automorphisms, which greatly simplifies counting and random sampling. A region in the sphere delimited by $d$ edges is called a *face of degree $d$* (where the edges that are incident to the face on both sides are double-counted). We denote by $\mathcal{M}^{(d)}_n$ the set of $d$-angulations of size $n$, i.e. the set of planar maps with $n$ faces that are all of degree $d$. In particular $\mathcal{M}^{(3)}_n$ is the set of *triangulations* with $n$ triangles and $\mathcal{M}^{(4)}_n$ is the set of *quadrangulations* with $n$ squares (of genus $0$ by construction). Although we will not use this, it is convenient to think of a planar map as describing a piecewise-flat surface obtained by associating to each face of degree $d$ a regular $d$-gon of side length $1$ and performing the gluing of the polygons according to the incidence relations of the planar map (this underlies for instance the visualizations of the large planar maps in Figure \[fig:simulations3d\]). Since there are finitely many quadrangulations (or triangulations) of fixed size $n$, the simplest model of a random surface is to sample one such quadrangulation at random with equal probability, called the *uniform quadrangulation* of size $n$ (see Section \[sec:uniformquad\]). To obtain different distributions one may couple the planar map to a variety of statistical systems. In the cases at hand these systems consist of decorations of a planar map by a coloring of some of its edges subject to a number of constraints. The systems are relatively simple in the sense that each decoration occurs with equal probability. The number $Z^{*}({\mathfrak{m}})$ of available decorations differs from one planar map ${\mathfrak{m}}$ to another, which explains the effect of the statistical system on the geometry of the random surface. In the language of statistical physics we are dealing with the *canonical partition function* $$Z^*_n = \sum_{{\mathfrak{m}}\in \mathcal{M}_n^{(d)}} Z^*({\mathfrak{m}}) = \sum_{{\mathfrak{m}}\in \mathcal{M}_n^{(d)}} \,\,\sum_{\text{decorations of }{\mathfrak{m}}} 1.$$ From a combinatorial point of view it is often hard to determine the number of decorations $Z^*({\mathfrak{m}})$ of a given planar map ${\mathfrak{m}}$, while the total number $Z_n^*$ of decorated planar maps is more easily accessible.
Similarly, from an algorithmic point of view, it is often much easier to generate a random decorated planar map (with uniform Boltzmann weight $1$) and then to forget the decoration, than it is to directly generate a random planar map with the non-trivial Boltzmann weight $Z^*({\mathfrak{m}})$. Below we describe four models fitting this bill and detail the algorithms used to generate them. In general the canonical partition function will asymptotically be of the form $$Z_n^{*} \stackrel{n\to\infty}{\sim} C\, n^{\gamma_{\mathrm{s}}-2}\, \kappa^{n}$$ for some constants $C$, $\kappa$ and $\gamma_{\mathrm{s}}$. Whereas $C$ and $\kappa$ depend on the precise definition of the model, the *string susceptibility* $\gamma_{\mathrm{s}}$ is universal, meaning that it typically only depends on the universality class of the system. It is related via the KPZ formula [@Knizhnik1988] to the central charge $c$ of the coupled statistical system, $$\gamma_{\mathrm{s}} = \frac{c-1-\sqrt{(c-1)(c-25)}}{12}.$$ In the continuum limit the geometry of a random surface coupled to a statistical system of central charge $c \in (-\infty,1]$ is believed to be described by Liouville quantum gravity with parameter $\gamma \in (0,2]$ related to $c$ and $\gamma_{\mathrm{s}}$ via $$\label{eq:cgamma} c = 25 - 6 \left(\frac{2}{\gamma}+\frac{\gamma}{2}\right)^2, \quad \gamma_{\mathrm{s}} = 1 - \frac{4}{\gamma^2}.$$ Table \[tbl:couplings\] summarizes the relevant values for the four models. $\gamma_s$ $c$ $\gamma$ ------------------------------------------------------ ---------------- ----------------- -------------- Uniform quadrangulations [**(U)**]{} $-\frac{1}{2}$ $0$ $\sqrt{8/3}$ Spanning-tree-decorated quadrangulations [**(S)**]{} $-1$ $-2$ $\sqrt{2}$ Bipolar-oriented triangulations [**(B)**]{} $-2$ $-7$ $\sqrt{4/3}$ Schnyder-wood-decorated triangulations [**(W)**]{} $-3$ $-\frac{25}{2}$ $1$ : \[tbl:couplings\]Critical exponents of the four models. Uniform quadrangulations [**(U)**]{} {#sec:uniformquad} ------------------------------------ As mentioned the simplest situation corresponds to quadrangulations with no decoration, i.e. to the uniform case $Z^{\mathrm{U}}({\mathfrak{m}}) = 1$ for ${\mathfrak{m}}\in \mathcal{M}_n^{(d)}$. The enumeration $Z_n^{\mathrm{U}}$ of (rooted) quadrangulations of size $n$ goes back to Tutte in the sixties [@Tutte1963] and is given explicitly by $$Z_n^{\mathrm{U}} = \sum_{{\mathfrak{m}}\in\mathcal{M}_n^{(4)}} 1 = 2\, \frac{(2n)!}{n!(n+2)!}\, 3^n \quad\stackrel{n\to\infty}{\sim}\quad \frac{2}{\sqrt{\pi}} n^{-5/2}\, 12^n.$$ An efficient way to sample a quadrangulation of size $n$ uniformly at random uses the Cori-Vauquelin-Schaeffer bijection [@Schaeffer1998] (see e.g. [@Miermont2014 Section 2.3] for a review), which provides a $2$-to-$1$ map between quadrangulations with an additional marked vertex and certain labeled trees. Such trees can be generated easily via standard algorithms, after which the corresponding quadrangulations can be reconstructed. Spanning-tree-decorated quadrangulations [**(S)**]{} ---------------------------------------------------- The first type of decorations we consider is that of spanning trees on a quadrangulation ${\mathfrak{m}}\in\mathcal{M}_n^{(4)}$. Any quadrangulation admits a bipartition of its vertices, i.e. a black-white coloring of its vertices such that no two vertices of the same color are adjacent, that is unique if we specify that the origin of the root edge is colored white.
A decoration of ${\mathfrak{m}}$ amounts to a choice of diagonal in each face of ${\mathfrak{m}}$ such that all diagonals combined form a pair of trees, one spanning the black vertices and the other spanning the white vertices (the red respectively blue tree in Figure \[fig:spanningtreealgorithm\]b). The exact enumeration also goes back to the sixties, in this case to Mullin [@Mullin1967], and reads $$\label{eqn:ZS} Z_n^{\mathrm{S}} = \sum_{{\mathfrak{m}}\in\mathcal{M}_n^{(4)}} Z^{\mathrm{S}}({\mathfrak{m}}) = C_n C_{n+1}\quad\stackrel{n\to\infty}{\sim}\quad \frac{4}{\pi} n^{-3}\, 16^n.$$ where $C_n = \frac{1}{n+1} \binom{2n}{n}$ is the $n$th Catalan number. The quantity $C_nC_{n+1}$ also counts the number of two-dimensional lattice walks of length $2n$ with unit steps (denoted by the cardinal directions $\{N,W,S,E\}$) starting and ending at the origin and staying in the quadrant ${\mathbb{Z}}_{\geq 0}^2$ (Figure \[fig:spanningtreealgorithm\]c). This is explained by a natural encoding [@Mullin1967; @Sheffield2016] of spanning-tree-decorated quadrangulations by such lattice walks. Starting from a lattice walk the corresponding quadrangulation is constructed iteratively by starting with just the root edge with the left side designated *active* (indicated in orange in Figure \[fig:spanningtreealgorithm\]e) and performing the operations in Figure \[fig:spanningtreealgorithm\]d consecutively for each step of the walk. ![(a) A rooted quadrangulation in $\mathcal{M}^{(4)}_{10}$. (b) A decoration by spanning trees. (c) The corresponding $\{N,W,S,E\}$-walk in the quadrant: N-E-E-S-W-N-E-N-E-N-W-E-S-S-S-N-W-W-W-S. (d) The schematic operations corresponding to the steps of the walk. (e) The first seven steps in the construction. \[fig:spanningtreealgorithm\]](images/spanningtreealgorithm){width="\linewidth"} With the bijection in hand we can generate a random quadrangulation according to the partition function from a uniform random $\{N,W,S,E\}$-walk in the quadrant of length $2n$. This can be achieved efficiently by using the decomposition $$Z_n^{\mathrm{S}} = C_nC_{n+1} = \sum_{\ell=0}^{n}\binom{2n}{2\ell}C_\ell C_{n-\ell},$$ where the summands count precisely the walks with $2\ell$ horizontal steps. We may thus first sample $\ell$ with probability distribution $\binom{2n}{2\ell}C_\ell C_{n-\ell}/Z_n^{\mathrm{S}}$ and then randomly interleave two random Dyck paths of lengths $2\ell$ and $2n-2\ell$ (one for the horizontal and one for the vertical steps). Bipolar-oriented triangulations [**(B)**]{} -------------------------------------------- ![(a) A triangulation in $\mathcal{M}^{(3)}_{10}$ with the root in green. (b) A bipolar orientation (all edges oriented to the north). (c) The corresponding $\{N,W,SE\}$-walk in the quadrant: N-N-SE-SE-N-W-SE-N-W-N-W-SE-SE-W-W. (d) The schematic operations corresponding to the steps of the walk. (e) The first ten steps in the construction. \[fig:bipolaralgorithm\]](images/bipolaralgorithm){width="\linewidth"} A bipolar orientation of a planar map ${\mathfrak{m}}$ is an assignment of directions to each edge of ${\mathfrak{m}}$ that is *acyclic*, i.e. has no directed cycles, and possesses a single *source* and *sink*, i.e. a vertex with no incoming respectively outgoing edges. Here we consider triangulations ${\mathfrak{m}}\in\mathcal{M}^{(3)}_n$ decorated with a bipolar orientation such that the origin and end-point of the root edge are respectively the source and the sink (Figure \[fig:bipolaralgorithm\]b). 
The total number of such bipolar-oriented triangulations with $2n$ triangles was also calculated by Tutte [@Tutte1973 Equation (32)] (see also [@Bousquet-Melou2011 Proposition 5.3]), $$Z_{2n}^{\mathrm{B}} = \sum_{{\mathfrak{m}}\in\mathcal{M}_{2n}^{(3)}} Z^{\mathrm{B}}({\mathfrak{m}}) = \frac{2(3n)!}{n!(n+1)!(n+2)!}\quad\stackrel{n\to\infty}{\sim}\quad \frac{\sqrt{3}}{\pi} n^{-4}\, 27^{n}.$$ Recently an encoding by lattice walks has been discovered [@Kenyon2015] (see also [@Gwynne2016; @Bousquet-Melou2019]) analogous to the spanning-tree-decorated quadrangulations. The lattice walks again start and end at the origin and stay in the first quadrant, but now consist of $3n$ steps in $\{N,W,SE\}$ (Figure \[fig:bipolaralgorithm\]c). To construct the bipolar-oriented triangulation from the walk, one starts with just the root edge with its endpoint designated as the *active* vertex (orange in Figure \[fig:bipolaralgorithm\]e) and applies the operations in Figure \[fig:bipolaralgorithm\]d according to the steps of the walk (except the very last). To generate a $\{N,W,SE\}$-walk efficiently, we make use of the fact that the number of walks from $(x,y)$ to $(0,0)$ of length $3m+x+2y$ is known [@Bousquet-Melou2010 Proposition 9] to be $$\frac{(x+1)(y+1)(x+y+2)\,(3m+x+2y)!}{m!(m+y+1)!(m+x+y+2)!}.$$ From this it follows that if a random $\{N,W,SE\}$-walk of length $3n$ is at $(x,y)$ after $3n-k$ steps, then the next step will be $N$, $W$ or $SE$ with probabilities $$\begin{aligned} &\text{N:}\quad\frac{(y+2)(x+y+3)(k-x-2y)}{3k(y+1)(x+y+2)}, \quad \text{W:}\quad\frac{x(x+y+1)(k+2x+y+6)}{3k(x+1)(x+y+2)}, \nonumber\\ &\text{SE:}\quad\frac{y(x+2)(k-x+y+3)}{3k(x+1)(y+1)}.\end{aligned}$$ This allows one to sample the walk iteratively in quasi-linear time. Schnyder-wood-decorated triangulations [**(W)**]{} -------------------------------------------------- Let ${\mathfrak{m}}\in \mathcal{M}_{2n}^{(3)}$ be a *simple* triangulation, meaning that it contains no double edges or loops (Figure \[fig:schnyderalgorithm\]a). Color the origin of the root edge, the endpoint of the root edge, and the remaining vertex incident to the triangle on the right of the root edge red, green, and blue respectively. The edges that are incident to at least one uncolored vertex are called *inner* edges. A *Schnyder wood* (also known as a *realizer*) [@Schnyder1989] on ${\mathfrak{m}}$ is a coloring in red, green, and blue and an orientation of all inner edges (Figure \[fig:schnyderalgorithm\]c) satisfying the following properties: - Each uncolored vertex has precisely one outgoing edge of each color. Moreover, the incoming and outgoing edges of the various colors are ordered around the vertex as in Figure \[fig:schnyderalgorithm\]b. - The inner edges incident to a colored vertex are all incoming and of the same color as the vertex. According to [@Bonichon2005 Corollary 19] the number of Schnyder-wood-decorated triangulations with $2n$ triangles is $$Z_{2n}^{\mathrm{W}} = \sum_{{\mathfrak{m}}\in\mathcal{M}_{2n}^{(3)}} Z^{\mathrm{W}}({\mathfrak{m}}) =C_{n-1}C_{n+1} - C_n^2\quad\stackrel{n\to\infty}{\sim}\quad \frac{3}{2\pi} n^{-5}\, 16^{n}.$$ Also for this model an encoding in terms of lattice walks in the quadrant is known [@Bonichon2005; @Bernardi2009; @Fusy2009; @Li2017], in this case consisting of $2n-2$ steps in $\{E,W,NW,SE\}$ starting and ending at the origin (Figure \[fig:schnyderalgorithm\]d). 
The way the encoding works is a bit different compared to the spanning-tree-decorated quadrangulations and bipolar-oriented triangulations. First of all one represents the $\{E,W,NW,SE\}$-walk as a *double Dyck path* of length $2n-2$, i.e. a pair of walks in the quadrant from the origin to $(2n-2,0)$ with steps in $\{NE,SE\}$ such that the first path does not go below the second (Figure \[fig:schnyderalgorithm\]e). This is achieved by plotting the graph of $x$ and $x+2y$ where $(x,y)$ ranges over the coordinates of the $\{E,W,NW,SE\}$-walk. To construct the triangulation we start with a single triangle with colored vertices and attach a red tree to the red vertex as encoded by the lower Dyck path in the usual fashion (“apply glue to the bottom of the Dyck path and squash horizontally”). We then label the uncolored vertices of the red tree from $0$ to $n-2$ in the order in which they are encountered when tracing the contour of the tree in clockwise direction, and assign label $n-1$ to the green vertex (Figure \[fig:schnyderalgorithm\]f). The upper Dyck path is then used to determine the position of the endpoints of the green edges: for each $SE$-step that is preceded by a total of $k$ $NE$-steps we add an endpoint to the vertex with label $k$. Since a single green edge has to start at each uncolored vertex (and end at the indicated positions), it is easy to see that there is a unique way to draw them while satisfying the condition in Figure \[fig:schnyderalgorithm\]b. As soon as the red and green edges are drawn (Figure \[fig:schnyderalgorithm\]g) the blue edges are also uniquely determined by this condition. ![(a) A simple triangulation in $\mathcal{M}_{12}^{(3)}$. (b) The condition on the ordering of edges around inner vertices. (c) A Schnyder wood. (d) The corresponding $\{E,W,NW,SE\}$-walk of length $10$: E-E-W-NW-E-W-E-SE-W-W. (e) The corresponding double Dyck path. (f) The red tree together with the endpoints of the green edges as encoded by the double Dyck path. (g) As soon as the red and green edges are known, the blue edges (one for each uncolored vertex) are determined. \[fig:schnyderalgorithm\]](images/schnyderalgorithm){width="\linewidth"} As in the case of the bipolar-oriented triangulations, there is an efficient method to generate random $\{E,W,NW,SE\}$-walks of length $2n-2$. The total number of such walks of length $2m+x$ starting at $(x,y)$ and ending at the origin is [@Bousquet-Melou2011 Proposition 11] $$\frac{(x+1)(y+1)(x+y+2)(x+2y+3)}{(2m+x+1)(2m+x+2)(2m+x+3)^2}\binom{2m+x+3}{m-y}\binom{2m+x+3}{m+1}.$$ It follows that if a random $\{E,W,NW,SE\}$-walk of length $2n-2$ is at $(x,y)$ after $2n-2-k$ steps that the next step will be $E,W,NW,SE$ with probabilities $$\begin{aligned} \text{E:}&\quad\frac{(x+2) (k-x+2) (x+y+3) (x+2 y+4) (k-x-2 y)}{4 k (k+2) (x+1) (x+y+2) (x+2y+3)},\\ \text{W:}&\quad\frac{x (k+x+4) (x+y+1) (x+2 y+2) (k+x+2 y+6)}{4 k (k+2) (x+1) (x+y+2) (x+2y+3)},\\ \text{NW:}&\quad\frac{x (y+2) (k+x+4) (x+2 y+4) (k-x-2 y)}{4 k (k+2) (x+1) (y+1) (x+2y+3)},\\ \text{SE:}&\quad\frac{(x+2)y (k-x+2) (x+2 y+2) (k+x+2 y+6)}{4 k (k+2) (x+1) (y+1) (x+2 y+3)}.\end{aligned}$$ Finite-size scaling analysis of (dual) graph distances {#sec:fssplanarmaps} ====================================================== As discussed in the introduction, the Hausdorff dimension $d_\gamma$ for $\gamma = \sqrt{8/3},\sqrt{2},\sqrt{4/3},1$ agrees with the growth exponent of the volume $|\mathrm{Ball}_r({\mathfrak{m}}_n)|$ of the ball of radius $r$, i.e. 
the number of vertices that have graph distance at most $r$ from a random initial vertex, in a random map ${\mathfrak{m}}_n$ of size $n$ sampled from model (U), (S), (B), (W) respectively,[^1] $$\label{eq:growth} \frac{\log |\mathrm{Ball}_r({\mathfrak{m}}_n)|}{\log r} \xrightarrow[\substack{n,r\to\infty \\n\gg r}]{} d_\gamma.$$ Since we cannot attain the limit $n,r\to\infty$ in simulations, we employ the finite-size scaling method to estimate the exponents. To this end, we need to make a few additional, but reasonable, assumptions. Let $R_n$ be the graph distance between two uniformly sampled vertices in a random planar map of size $n$ (sampled from one of the models). For integer $r$ we set $\rho_n^{(*)}(r) = \mathbb{P}(R_n = r)$ to be the probability that this distance is $r$, and we extend $\rho_n^{(*)}$ to a continuous function $\rho_n^{(*)} : (0,\infty)\to[0,\infty)$ by interpolation. We assume that for any $x>0$ $$\label{eq:scalingassumption} \lim_{n\to\infty} n^{1/d_\gamma}\rho_n^{(*)}(x \,n^{1/d_\gamma}) = \rho^{(*)}(x)$$ for a continuous probability distribution $\rho^{(*)}$ on $(0,\infty)$ that depends only on the model $(\ast)$. This is slightly stronger than the requirement that $R_n / n^{1/d_\gamma}$ converges in distribution as $n\to\infty$. As we will see shortly, this assumption is well supported by our data and known to be correct for uniform quadrangulations (with an explicit limit [@Bouttier2003]).[^2] Note, however, that it does not quite imply the growth relation above, nor is it implied by it. ![The graph distance (left) and dual graph distance (right) in a quadrangulation.\[fig:graphdistance\]](images/graphdistance){width=".6\linewidth"} We estimate $\rho_n^{(*)}$ with high accuracy for each model $(\ast)$ and sizes $n$ ranging from $2^8$ up to $2^{24}$ ($\approx 17$ million) by sampling a large ensemble of random planar maps ($10^7$ for small $n$ and $10^5$ for $n\geq 2^{19}$). In each random planar map we pick a single uniform random vertex and determine the graph distance to all other vertices in the map (Figure \[fig:graphdistance\]). All these distances for the planar maps in an ensemble are included in a histogram, which upon normalization and interpolation provides our estimate of $\rho_n^{(*)}$. It turns out to be convenient to supplement the analysis with another distribution $\rho_n^{(*)\dagger}$ that relies on a different notion of distance: the *dual* graph distance between two uniformly sampled faces in the planar map (right side of Figure \[fig:graphdistance\]). It is estimated in an analogous way, this time picking a uniform random face and determining the distances to all other faces. To test the convergence we choose optimal parameters $k_n$ to “collapse” the functions $x \mapsto k_n^{-1}\,\rho_n^{(*)}(k_n^{-1}x)$. To be precise, we denote by $n_0 = 2^{24}$ the largest system size and take $\rho^{(*)}_{n_0}$ as the reference distribution. Then for each $n \leq n_0$, $k_n$ is obtained by fitting $x \mapsto k_n^{-1}\,\rho_n^{(*)}(k_n^{-1}x)$ to $x \mapsto \rho_{n_0}^{(*)}(x)$, such that $k_n > 1$ for $n<n_0$ and $k_{n_0}=1$ by construction. In the fit we choose to only take into account the portion of the histogram for which $\rho_n^{(*)}(r) \geq \frac{1}{5} \max_{r'} \rho_n^{(*)}(r')$, thus avoiding the tails of the distribution that are more prone to discretization effects. ![Finite-size scaling for the dual graph distance in bipolar-oriented triangulations without shift (left) and with shift $s=2.941$ (right). The effect in the case of graph distance is less pronounced.
\[fig:collapseshift\]](images/collapse_shift_bipolar_tri){width="\linewidth"} The collapse of $\rho_n^{(B)\dagger}$ in the case of dual graph distance on bipolar-oriented triangulations is shown in the left plot of Figure \[fig:collapseshift\]. A qualitative convergence is certainly observed, but one has to go to quite large sizes for the curves to become indistinguishable. A common technique [@Ferrenberg1991; @Ambjorn1995; @Ambjorn1998] to improve the collapse is to introduce a *shift* $\rho_n^{(*)}(r) \to \rho_n^{(*)}(r-s)$ in the histograms before performing the scaling, where $s\in{\mathbb{R}}$ is independent of $n$. For any fixed $s$, the convergence $n^{1/d_\gamma} \rho_n^{(*)}(x n^{1/d_\gamma} - s) \xrightarrow{n\to\infty} \rho^{(*)}(x)$ is of course equivalent to our scaling ansatz above. One may think of this shift as absorbing a subleading correction to the scaling ansatz or, if you like, accounting for the freedom we have in the discrete setting to assign distance $s$ instead of $0$ to the initial vertex/face. The optimal shift $s$ is determined by fitting $x\mapsto k_n^{-1} \rho_n^{(*)}(k_n^{-1} (x+s_n) - s_n)$ to $x \mapsto \rho_{n_0}^{(*)}(x)$ for each $n$ and taking $s$ to be a (weighted) average of the values $s_n$. This way we fix $s$ once and for all to the values in Table \[tbl:shift\]. Model Graph distance Dual graph distance ------- ---------------- --------------------- (U) $s=0.940$ $s=4.608$ (S) $s=0.557$ $s=3.019$ (B) $s=0.359$ $s=2.941$ (W) $s=0.439$ $s=2.629$ : \[tbl:shift\]Optimal shift parameters ![Finite-size scaling of the graph distance in the four models with sizes ranging from $2^{10}=1024$ to $2^{24}=16\,777\,216$ and shifts as listed in the table. \[fig:fss\] ](images/collapse_vert){width="\linewidth"} With $s$ fixed we determine the optimal scaling $k_n$ yet again by fitting $x\mapsto k_n^{-1} \rho_n^{(*)}(k_n^{-1} (x+s) - s)$ to $x \mapsto \rho_{n_0}^{(*)}(x)$. The result in the case of $\rho_n^{(B)\dagger}$ is shown in the right plot of Figure \[fig:collapseshift\]. This time all curves for $n\gtrsim 2^{13}$ become indistinguishable at the resolution of the plot. The finite-size scaling of the graph distance including the shift is shown in Figure \[fig:fss\] for all four models. The very accurate scaling lends support to the existence of a continuous probability distribution $\rho^{(*)}$ in the scaling limit. ![Plots of $\frac{\log(n_{0}/n)}{\log(k_{n}/k_{n_{0}})}$ with statistical error bars for all four models, showing both graph distance (purple) and dual graph distance (green) measurements. Watabiki’s and Ding-Gwynne’s predictions are indicated by horizontal lines. \[fig:ratioplot\]](images/ratioplot){width="\linewidth"} If the scaling ansatz is satisfied then $$\label{eq:knasymptotics} k_n \,\stackrel{n\to\infty}{\sim}\, c\, (n/n_0)^{-1/d_\gamma}$$ for some constant $c \approx 1$. To get a first idea of the rate of convergence to this asymptotic behaviour, we plot in Figure \[fig:ratioplot\] the logarithmic ratios $$\frac{\log(n_{0}/n)}{\log(k_{n}/k_{n_{0}})}$$ with statistical error bars for the four models using both the graph distance (purple) and its dual (green). The advantage of considering the different distance measurements becomes clear upon inspecting these plots: the deviations from the scaling relation appear with different sign, allowing one in principle to estimate the exponent $d_\gamma$ by eye at the point where the two curves converge.
It is also immediately clear that the data is incompatible with $d_\gamma^{\mathrm{W}}$ in the case of bipolar-oriented and Schnyder-wood-decorated triangulations, and is much closer to $d_\gamma^{\mathrm{DG}}$. To accurately estimate $d_\gamma$, we make an ansatz for the leading-order correction to of the form $$\label{eqn:kfitleadingorder} k_n \approx \left(\frac{n}{n_0}\right)^{-\frac{1}{d}} \left(a + b\left(\frac{n}{n_0}\right)^{-\delta}\right),$$ where $a$ is close to $1$, $b$ is small and $\delta > 0$. The best fits are given in Table \[tbl:correctionfit\], including the statistical errors on $d$. Finally, combining the data from both distance measures yields the estimates for the Hausdorff dimension recorded in Table \[tbl:pmdimensions\] and plotted in Figure \[fig:result\]. ![Plots of $k_n (n/n_0)^{1/d^{\mathrm{W}}_\gamma}$ with the scaling parameters $k_n$ established via finite-size scaling of the graph distance (purple) and dual graph distance (green). The solid curves correspond to best fits of the ansatz . \[fig:correctionplot\] ](images/correctionplot){width="\linewidth"} Model $d$ $\delta$ $a$ $b$ -------------- --------------------- ---------- ----------- ------------- (U) $3.9969 \pm 0.0013$ $0.57$ $0.99992$ $-0.000020$ (U)$\dagger$ $4.037 \pm 0.024$ $0.16$ $0.9795$ $0.0206$ (S) $3.575 \pm 0.006$ $0.26$ $1.0025$ $-0.00275$ (S)$\dagger$ $3.581 \pm 0.004$ $0.26$ $0.9982$ $0.00190$ (B) $3.136 \pm 0.002$ $0.29$ $1.00081$ $-0.00084$ (B)$\dagger$ $3.141 \pm 0.003$ $0.27$ $0.9984$ $0.0018$ (W) $2.9077 \pm 0.0010$ $0.41$ $0.99968$ $-0.00014$ (W)$\dagger$ $2.906 \pm 0.002$ $0.33$ $0.99985$ $0.00038$ : \[tbl:correctionfit\]Parameters of the best fit of the data to the ansatz . Model $\gamma$ $d_\gamma$ $d_\gamma^{\mathrm{W}}$ $d_\gamma^{\mathrm{DG}}$ ------- -------------- -------------------- ------------------------- -------------------------- (U) $\sqrt{8/3}$ $3.9970\pm 0.0013$ 4.0000 4.0000 (S) $\sqrt{2}$ $3.5791\pm 0.0033$ 3.5616 3.5774 (B) $\sqrt{4/3}$ $3.1375\pm 0.0017$ 3.0972 3.1381 (W) 1 $2.9074\pm 0.0009$ 2.8508 2.9083 : \[tbl:pmdimensions\]Hausdorff dimension estimates for the four planar map models. ![The plot on the left shows the estimates of $d_\gamma$ in comparison to $d_\gamma^{\mathrm{W}}$ in the same scale as Figure \[fig:bounds\]b. The right plot shows the individual estimates of $d_\gamma$ using the graph distance (purple) and dual graph distance (green) in comparison with $d_\gamma^{\mathrm{DG}}$. \[fig:result\] ](images/result){width="\linewidth"} Hausdorff dimensions in Liouville quantum gravity {#sec:lqg} ================================================= Simulations of the four planar maps models have provided accurate estimates of the Hausdorff dimension $d_\gamma$ for $\gamma = 1,\sqrt{4/3},\sqrt{2},\sqrt{8/3}$. In principle one can extend this method by exploring new discrete models that live in other universality classes. However, finding such models that allow for efficient simulation is a non-trivial task[^3]. An alternative route is to start from the continuum description of Liouville quantum gravity. Formally one thinks of the random two-dimensional metric as a Weyl-transformation $g_{ab}=e^{\gamma \phi}\hat{g}_{ab}$ of a fixed background metric $\hat{g}_{ab}$ on the surface. 
The field $\phi$ is sampled with probability proportional to $e^{-S_{\mathrm{L}}[\phi]}$ with the Liouville action given by [@Knizhnik1988; @David1988; @Distler1989] $$\label{eq:liouvilleaction} S_{\mathrm{L}}[\phi] = \frac{1}{4\pi} \int \rmd^2 x\sqrt{\hat{g}(x)}(\hat{g}^{ab}\partial_a\phi\partial_b\phi + Q \hat{R}\phi+ 4\pi\lambda e^{\gamma\phi}),$$ where $\hat{R}$ is the scalar curvature of $\hat{g}_{ab}$, $\lambda$ the *cosmological constant*, and $Q = 2/\gamma + \gamma/2$. Since we are only interested in the local properties of the metric $g_{ab}$ we may as well choose $\lambda=0$ and fix $\hat{g}_{ab}=\delta_{ab}$ to the standard Euclidean metric on the unit torus (using periodic coordinates $x \in {\mathbb{R}}^2/{\mathbb{Z}}^2$). In this case the action becomes that of a scalar free field, $$\label{eq:gffaction} S_{\mathrm{L}}[\phi]=S_{\mathrm{GFF}}[\phi] = \frac{1}{4\pi} \int_{{\mathbb{R}}^2/{\mathbb{Z}}^2} \rmd^2 x\, \nabla\phi\cdot\nabla\phi.$$ The random field $\phi$ sampled with (suitably regularized) probability $e^{-S_{\mathrm{GFF}}[\phi]}$ is called the *Gaussian free field* in the mathematical literature [@Sheffield2007]. The pointwise values of $\phi$ are not well-defined, but the field can be rigorously understood as a random generalized function (or distribution) living in an appropriate Sobolev space. This implies that the identification $g_{ab} = e^{\gamma \phi(x)}\hat{g}_{ab}$ cannot make literal sense as a random Riemannian metric without choosing a regularization scheme. At the level of the volume form $\sqrt{g}\rmd^2 x = e^{\gamma\phi(x)}\rmd^2x$ this can be achieved by imposing an ultraviolet cutoff $\epsilon$ on $\phi$, e.g. by setting $\phi_\epsilon(x)$ to be the average of $\phi$ over a circle of radius $\epsilon$ centered at $x$, and considering the limit $$\label{eq:liouvillemeasure} \lim_{\epsilon\to 0} \epsilon^{\gamma^2/2} e^{\gamma\phi_\epsilon(x)} \rmd^2x$$ viewed as a random measure, called the *$\gamma$-Liouville measure* [@Duplantier2011]. Determining a regularization scheme of $g_{ab} = e^{\gamma \phi(x)}\hat{g}_{ab}$ that gives rise to well-defined geodesic distances $d(x,y)$ between pairs of points is more challenging. The intuitive reason for this is that the distance between two points is realized by a curve that generically has a fractal structure (in the Euclidean background metric), meaning that its length will be quite sensitive to the way the ultraviolet cutoff is imposed. Nevertheless, there has been much progress in recent years, resulting in at least two different approaches. Liouville graph distance ------------------------ The *Liouville graph distance* $D_{\gamma,\delta}(x,y)$ between two points is defined as the fewest number of Euclidean disks of arbitrary radius, but volume at most $\delta$ as measured by the $\gamma$-Liouville measure, needed to cover a path connecting $x$ and $y$ [@Ding2018a; @Ding2019; @Ding2018]. This definition should be viewed as the analogue of the (dual) graph distance in a random planar map of size $n\approx \delta^{-1}$, where the distance is the fewest number of faces (which all have equal volume $\approx 1$) one has to traverse to get from one vertex to another. It has been shown rigorously [@Ding2018 Theorem 1.4] that $D_{\gamma,\delta}(x,y)$ for fixed $x$ and $y$ is of order $\delta^{-1/d_\gamma}$, i.e.
$$\lim_{\delta\to 0} \frac{\log D_{\gamma,\delta}(x,y)}{\log \delta} = - \frac{1}{d_\gamma}.$$ This provides one avenue to measure $d_\gamma$ numerically, as was done by Ambjørn and the second author in [@Ambjorn2014]. There a discrete $\gamma$-Liouville measure was constructed by exponentiating a discrete Gaussian free field (see Section \[sec:dlfpp\] below) on a $w\times w$ square lattice with periodic boundary conditions. Instead of finding paths of disks of volume $\delta$ connecting pairs of points, distances were obtained from a Riemannian metric with local density constructed from averaging the $\gamma$-Liouville measure over such disks of volume $\delta$, for which one expects similar behaviour. Estimates on $d_\gamma$ obtained in [@Ambjorn2014] are shown in green in Figure \[fig:bounds\]b. Liouville first passage percolation ----------------------------------- Following [@Benjamini2010; @Ding2018a; @Ding2019; @Ding2018; @Dubedat2019], the *Liouville first passage percolation* distance $D_{\xi,\epsilon}(x,y)$ between two points $x$ and $y$ is given for $\xi>0$ in terms of the regularized Gaussian free field $\phi_\epsilon$ by $$\label{eq:fpp} D_{\xi,\epsilon}(x,y) = \inf_{\Gamma:x\to y} \int_0^1 e^{\xi \phi_\epsilon(\Gamma(t))} |\Gamma'(t)|\rmd t,$$ where the infimum is over piecewise differentiable paths $\Gamma$ with $\Gamma(0)=x$ and $\Gamma(1)=y$. In the hope of constructing a metric for $\gamma$-Liouville quantum gravity one should *not* take $\xi = \gamma / 2$, which would arise from naively regularizing the Riemannian metric as $e^{\gamma \phi_\epsilon(x)} \hat{g}_{ab}$. Assuming the existence of a Hausdorff dimension $d_\gamma$, one would like an overall scaling of the volume (as measured by the $\gamma$-Liouville measure) by a factor $C$ to amount to a scaling of the geodesic distances by $C^{1/d_\gamma}$. The former is achieved by a constant shift $\phi(x) \to \phi(x) + \frac{1}{\gamma}\log C$ of the field, leading to an overall scaling of the first passage percolation distance by $C^{\xi / \gamma}$, hence suggesting the relation $$\label{eq:xi} \xi = \gamma / d_\gamma.$$ On the other hand, under a coordinate transformation $x\mapsto x' = C x$ one should transform $\phi(x)\mapsto\phi'(x') = \phi(x) - Q \log C$ with $Q=2/\gamma+\gamma/2$ in order to preserve the $\gamma$-Liouville measure [@Duplantier2011]. Accordingly, the first passage percolation distance transforms as $$\begin{aligned} D'_{\xi,\epsilon}(x',y') &= \inf_{\Gamma:x\to y} \int_0^1 e^{\xi \phi_\epsilon(C\Gamma(t))-\xi Q\log C} |C\Gamma'(t)|\rmd t\\ &= C^{1-\xi Q}\inf_{\Gamma:x\to y} \int_0^1 e^{\xi \phi_{\epsilon/C}(\Gamma(t))} |\Gamma'(t)|\rmd t = C^{1-\xi Q} D_{\xi,\epsilon/C}(x,y).\end{aligned}$$ Equality in the limit $\epsilon \to 0$ can only be achieved if $D_{\xi,\epsilon}(x,y)$ scales as $\epsilon^{1-\xi Q}$ as $\epsilon\to 0$ (see [@Ding2018 Section 2.3] for a similar heuristic). Indeed, it was proven rigorously in [@Ding2018 Theorem 1.5] that the following limit holds (in probability) $$\label{eq:lambda} \frac{\log D_{\xi,\epsilon}(x,y)}{\log \epsilon} \xrightarrow{\epsilon\to 0} \lambda(\xi),\quad \lambda(\xi) = 1-\xi Q = 1 - \frac{2}{d_\gamma} - \frac{\gamma^2}{2 d_\gamma}.$$ See [@Gwynne2019a Theorem 1.1] for results on the limit of $\epsilon^{-\lambda}D_{\xi,\epsilon}(x,y)$ as a metric space. Note that if we can determine $\lambda(\xi)$ for some value of $\xi$ satisfying $1-\lambda(\xi) > 2\xi$, then these two relations can be inverted to determine a pair of values $\gamma$ and $d_\gamma$. This provides a different route towards numerical estimates of $d_\gamma$.
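To make this inversion explicit (a small illustrative sketch rather than the analysis code used below): writing $\lambda = 1-\xi(2/\gamma+\gamma/2)$ as a quadratic equation in $\gamma$ and selecting the root with $\gamma \leq 2$ gives

```python
import numpy as np

def gamma_and_dim(xi, lam):
    """Invert xi = gamma/d_gamma and lambda = 1 - xi*(2/gamma + gamma/2).

    The quadratic (xi/2)*gamma**2 - (1-lam)*gamma + 2*xi = 0 has two roots
    whose product is 4; the smaller one is the physical branch with gamma <= 2.
    Requires 1 - lam > 2*xi, otherwise the square root below is imaginary."""
    disc = (1 - lam)**2 - 4*xi**2
    gamma = ((1 - lam) - np.sqrt(disc)) / xi
    return gamma, gamma / xi          # (gamma, d_gamma)

# e.g. xi = 0.4, lambda = 0.1632 gives gamma ~ 1.48 and d_gamma ~ 3.70,
# matching the last row of Table [tbl:lambda] below up to rounding
print(gamma_and_dim(0.4, 0.1632))
```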
Watabiki’s formula and Ding & Gwynne’s formula correspond to the particularly simple relations $$\lambda^{\mathrm{W}}(\xi) = \xi^2, \qquad \lambda^{\mathrm{DG}}(\xi) = \frac{\xi}{\sqrt{6}}.$$ Discrete Liouville first passage percolation (DLFPP) {#sec:dlfpp} ---------------------------------------------------- The Liouville first passage percolation distance has a natural discrete counterpart [@Benjamini2010; @Ding2018a; @Ang2019] that is particularly convenient for numerical simulations. Consider a $w\times w$ square lattice $\Lambda_w\coloneq({\mathbb{Z}}/w{\mathbb{Z}})^2$ with periodic boundary conditions. The *discrete Gaussian free field* $\psi:\Lambda_w \to {\mathbb{R}}$ on this lattice has probability distribution proportional to $$\exp\Big[-\frac{1}{4\pi} \sum_{x,y\in\Lambda_w} \psi(x) \Delta_{xy}\psi(y) \Big]\delta\Big(\sum_{x\in\Lambda_w}\psi(x)\Big),\quad \Delta_{xy}=\begin{cases} 4 & x=y\\ -1 & \text{$x$ and $y$ adjacent}\\ 0 & \text{otherwise,} \end{cases}$$ which is the natural discrete analogue of $e^{-S_{\mathrm{GFF}}[\phi]}$ in . The normalization of the field is such that $$\langle\psi(0)^2\rangle \stackrel{w\to\infty}{\sim} \log w.$$ The natural discretization of the first passage percolation distance is $$D_{\xi,w}(x,y) = \inf_{\Gamma : x \to y} \sum_{i=1}^{m} \tfrac{1}{2}\left(e^{\xi \psi(\Gamma(i-1))}+e^{\xi \psi(\Gamma(i))}\right),$$ where the sum is over nearest-neighbour walks $\Gamma : \{0,1,\ldots m\} \to \Lambda_w$ of arbitrary length $m$ from $\Gamma(0) = x$ to $\Gamma(m) = y$. See Figure \[fig:fieldplot\] for a few random samples. In [@Ang2019 Theorem 1.4] it is shown, in the slightly different setting of Dirichlet instead of periodic boundary conditions, that this distance approximates the continuum first passage percolation distance well. In particular, it satisfies the analogous scaling relation [@Ang2019 Theorem 1.5] $$\label{eq:dlfppscaling} \frac{\log D_{\xi,w}([w\, x],[w\, y])}{\log w} \xrightarrow{w\to \infty} 1-\lambda(\xi)$$ for $x,y\in[0,1)^2$ fixed, where $[w x]$ denotes the lattice point in $\Lambda_w$ closest to $w x$. Finite-size scaling of Liouville first passage percolation distance {#sec:fsslqg} =================================================================== ![Finite-size scaling of the DLFPP distance for $\xi = 0.1, 0.2, 0.3, 0.4$. Shown are plots of $x\mapsto k_w^{-1} \rho_{\xi,w}(k_w^{-1}x)$ with $k_w$ determined by a best fit for $w = 2^7, \ldots, 2^{12}$. \[fig:fpp\_collapse\_simple\]](images/fpp_collapse_simple){width="\linewidth"} We make a similar assumption as we did in the case of the random planar maps, namely that the probability density $\rho_{\xi,w}$ of the distance $D_{\xi,w}(x,y)$ between two points $x$ and $y$ sampled uniformly from $\Lambda_w$ satisfies a pointwise scaling limit $$\label{eq:fppscalingansatz} \lim_{w\to\infty} w^{1-\lambda} \rho_{\xi,w} (w^{1-\lambda} r) = \rho_\xi(r), \quad r>0.$$ To estimate $\rho_{\xi,w}$ we have sampled (at least) $2$ million instances of the Gaussian free field for $w$ ranging from $2^6$ to $2^{12}$ and $\xi$ from $0.01$ to $0.4$. For each field we pick an arbitrary starting point $x$ and determine the distances $D_{\xi,w}(x,y)$ to all points $y\in \Lambda_w$, which are then included in a histogram. As in the case of the planar maps, we fit $x\mapsto k_w^{-1}\rho_{\xi,w}(k_w^{-1}x)$ to the reference distribution $\rho_{\xi,w_0}(x)$, where $w_0 = 2^{12}$ is the largest lattice size considered. 
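For orientation, the generation of the distance data entering these histograms can be sketched as follows. This is a minimal illustrative sketch only (using an FFT-based sampling of the discrete Gaussian free field and Dijkstra’s algorithm from `scipy`); it is a simplified stand-in rather than the production code, which is linked in the Source code section:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def sample_dgff(w, rng):
    # Discrete Gaussian free field on the periodic w x w lattice, sampled mode
    # by mode in Fourier space; the zero mode is removed and the normalization
    # is chosen such that <psi(0)^2> grows like log(w).
    k = 2*np.pi*np.arange(w)/w
    lam = (2 - 2*np.cos(k))[:, None] + (2 - 2*np.cos(k))[None, :]
    lam[0, 0] = np.inf
    noise_hat = np.fft.fft2(rng.standard_normal((w, w)))
    return np.fft.ifft2(noise_hat * np.sqrt(2*np.pi/lam)).real

def dlfpp_distances(psi, xi, source=0):
    # Dijkstra distances from `source` to every site, with weight
    # (exp(xi*psi(a)) + exp(xi*psi(b)))/2 on each nearest-neighbour edge.
    w = psi.shape[0]
    site = np.exp(xi*psi).ravel()
    idx = np.arange(w*w).reshape(w, w)
    nbr = np.concatenate([np.roll(idx, -1, 1).ravel(), np.roll(idx, -1, 0).ravel()])
    src = np.concatenate([idx.ravel(), idx.ravel()])
    graph = coo_matrix((0.5*(site[src] + site[nbr]), (src, nbr)), shape=(w*w, w*w))
    return dijkstra(graph, directed=False, indices=source)

# raw data for one histogram: distances from one site in each of a few
# independent fields (here w = 2^8 for brevity)
rng = np.random.default_rng(0)
data = np.concatenate([dlfpp_distances(sample_dgff(256, rng), xi=0.4)
                       for _ in range(10)])
```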
As before, only the data for which $\rho_{\xi,w}(r) \geq \nu \max_{r'}\rho_{\xi,w}(r')$ with $\nu = 0.2$ is used in the fit. Figure \[fig:fpp\_collapse\_simple\] plots $k_w^{-1}\rho_{\xi,w}(k_w^{-1}x)$ for the fitted values of $k_w$ and various values of $\xi$. The quality of the collapse is good for the larger values of $\xi$, and can be further improved by introducing a constant shift as was done before. ![The plot on the left shows the estimates of $\lambda$ with systematic errors (as recorded in Table \[tbl:lambda\]) in comparison to $\lambda^{\mathrm{W}} = \xi^2$ (red), $\lambda^{\mathrm{DG}} = \xi/\sqrt{6}$ (blue) and $\lambda = \frac{\xi}{\sqrt{6}}\left(1-\frac13(1-\sqrt{6}\,\xi)^3\right)$ (orange). The right plot shows the same data normalized by $\xi/\sqrt{6}$, as well as the individual data points for the different parameters $\nu$ and $s$ (gray) and the planar map estimates of Table \[tbl:pmdimensions\] (green). \[fig:lambda\_estimate\]](images/lambda_estimate){width="\linewidth"} For small values of $\xi$, however, the approach towards the limiting distribution $\rho_\xi$ is markedly slower. In this regime the fitted values of $k_w$ depend more sensitively on the fitting procedure used and one should attribute a larger systematic uncertainty to them. To get a handle on this uncertainty we repeat the analysis with different choices of fitting parameters, namely $\nu = 0, 0.2, \ldots, 0.8$ for the range of data used and $s = 0, 2, 4$ for the constant shift (covering roughly the range of optimal shifts). For each choice of these parameters the values $k_w$ are determined and fitted to the ansatz $$\label{eq:kwansatz} k_w \approx \left(\frac{w}{w_0}\right)^{\lambda-1}\left(a + b \left(\frac{w}{w_0}\right)^{-\delta}\right)$$ analogous to the one used for the planar maps. The collection of values of $\lambda$ obtained in this way is used to establish the systematic error on our estimate, which turns out to be significantly larger than the statistical error for all $\xi \lesssim 0.35$. The results are gathered in Table \[tbl:lambda\], which also includes the corresponding central charge $c= 25- 6(1-\lambda)^2/\xi^2$ and estimates for $\gamma$ and $d_\gamma$ calculated using the relations $\xi = \gamma/d_\gamma$ and $\lambda = 1 - \xi Q$ of the previous section. We only record the error in $d_\gamma$ explicitly, because it is most significant when comparing to the formulas $d_{\gamma}^{\mathrm{DG}}$ and $d_\gamma^{\mathrm{W}}$. Figure \[fig:lambda\_estimate\] shows a plot of the estimated values of $\lambda$, while Figure \[fig:lfpp\_dimensions\] displays the corresponding estimates for the Hausdorff dimension $d_\gamma$.
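For concreteness, a fit of the above form can be set up with standard least-squares routines; the following sketch (illustrative only, with hypothetical helper names) returns $\lambda$ together with its statistical error, the systematic error being obtained by repeating the fit over the choices of $\nu$ and $s$ described above. The same construction applies to the planar-map ansatz of the previous section.

```python
import numpy as np
from scipy.optimize import curve_fit

def ansatz(w_over_w0, lam, a, b, delta):
    # k_w ~ (w/w0)^(lam-1) * (a + b*(w/w0)^(-delta))
    return w_over_w0**(lam - 1) * (a + b*w_over_w0**(-delta))

def fit_lambda(ws, k_ws, k_errs, w0=2**12):
    p0 = (0.1, 1.0, 0.01, 0.3)     # starting values for lam, a, b, delta
    popt, pcov = curve_fit(ansatz, np.asarray(ws)/w0, k_ws, p0=p0,
                           sigma=k_errs, absolute_sigma=True)
    return popt[0], np.sqrt(pcov[0, 0])   # lambda and its statistical error
```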
$\xi$ $\lambda$ $\quad c$ $\quad\gamma$ $d_\gamma$ ------- --------------------------------- ------------------------ --------------- --------------------- 0.010 $\quad 0.0033 \pm 0.0002 \quad$ $-59.6\mathrm{k}\quad$ $0.0201\quad$ $2.0070 \pm 0.0003$ 0.020 $0.0065 \pm 0.0003$ $-14.8\mathrm{k}$ 0.0402 $2.0140 \pm 0.0006$ 0.030 $0.0096 \pm 0.0004$ $-6.5\mathrm{k}$ 0.0606 $2.0213 \pm 0.0008$ 0.040 $0.0128 \pm 0.0005$ $-3.6\mathrm{k}$ 0.0812 $2.0293 \pm 0.0010$ 0.050 $0.0161 \pm 0.0005$ $-2.3\mathrm{k}$ 0.1019 $2.0380 \pm 0.0011$ 0.075 $0.0246 \pm 0.0008$ $-990.$ 0.1547 $2.0626 \pm 0.0017$ 0.100 $0.0341 \pm 0.0006$ $-534.$ 0.2093 $2.0932 \pm 0.0012$ 0.125 $0.0450 \pm 0.0004$ $-325.$ 0.2664 $2.1315 \pm 0.0009$ 0.150 $0.0555 \pm 0.0004$ $-213.$ 0.3261 $2.1738 \pm 0.0009$ 0.175 $0.0668 \pm 0.0004$ $-145.$ 0.3893 $2.2244 \pm 0.0010$ 0.200 $0.0782 \pm 0.0003$ $-102.$ 0.4565 $2.2827 \pm 0.0008$ 0.225 $0.0891 \pm 0.0003$ $-73.3$ 0.5285 $2.3489 \pm 0.0007$ 0.250 $0.0994 \pm 0.0002$ $-52.9$ 0.6062 $2.4247 \pm 0.0007$ 0.275 $0.1110 \pm 0.0001$ $-37.7$ 0.6929 $2.5196 \pm 0.0005$ 0.300 $0.1210 \pm 0.0003$ $-26.5$ 0.7888 $2.6293 \pm 0.0010$ 0.325 $0.1312 \pm 0.0003$ $-17.9$ 0.8994 $2.7675 \pm 0.0014$ 0.350 $0.1424 \pm 0.0004$ $-11.0$ 1.0346 $2.9561 \pm 0.0022$ 0.375 $0.1525 \pm 0.0003$ $-5.65$ 1.2076 $3.2201 \pm 0.0023$ 0.400 $0.1632 \pm 0.0002$ $-1.26$ 1.4787 $3.6968 \pm 0.0035$ : \[tbl:lambda\] Estimates for $\lambda$ with systematic errors obtained by fitting to . ![Left: The DLFPP estimates of $d_\gamma$ (error bars too small to display). Right: the estimates of $d_\gamma$ normalized by $d_\gamma^{\mathrm{DG}}$ with error bars as obtained from DLFPP (black) and random planar maps (green). \[fig:lfpp\_dimensions\]](images/lfpp_dimensions){width="\linewidth"} Discussion ========== The DLFPP estimates of $\lambda$ and $d_\gamma$ for $\xi = 0.35,\, 0.375,\, 0.4$ are in very good agreement with Ding & Gwynne’s prediction $\lambda^{\mathrm{DG}} = \xi / \sqrt{6}$ and consistent with the random planar map results (see the green data points in Figure \[fig:lambda\_estimate\] and Figure \[fig:lfpp\_dimensions\]). For $\xi < 0.35$ the measurements are still much closer to $\lambda^{\mathrm{DG}}$ than to Watabiki’s prediction $\lambda^{\mathrm{W}} = \xi^2$, but a significant negative deviation is visible, that is most pronounced at around $\xi = 0.1$ ($\gamma\approx 0.2$, $c \approx -2.3k$). The data is better described by the (completely ad hoc) formula $\lambda = \frac{\xi}{\sqrt{6}}\left(1-\frac13(1-\sqrt{6}\,\xi)^3\right)$. However, we are hesitant to rule out $\lambda^{\mathrm{DG}} = \xi / \sqrt{6}$ on the basis of the current data. The reason for this is that the quality of the finite-size scaling for smaller values of $\xi$ is not as good as one may have hoped, indicating that (much) larger lattice sizes might be necessary to observe accurate scaling. An explanation could be that for small $\xi$ and currently used lattice sizes, the DLFPP geodesics are too close to geodesics of the Euclidean lattice and therefore do not sufficiently display their fractal structure. Indeed, in Figure \[fig:fieldplot\] many of the DLFPP geodesics for $\xi=0.1$ are seen to contain fairly long straight segments, a phenomenon that becomes more pronounced for even smaller $\xi$. Since the DLFPP length of straight segments scales with an exponent that is different but for small $\xi$ quite close to $1-\lambda$, one may need to increase the lattice size $w$ considerably to make its subleading contribution small enough. 
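For reference, the three candidate curves shown in Figure \[fig:lambda\_estimate\] are elementary to evaluate; at $\xi=0.1$, for example, they give $\lambda^{\mathrm{W}}=0.010$, $\lambda^{\mathrm{DG}}\approx 0.041$ and $\approx 0.035$ for the ad hoc formula, to be compared with the measured value $0.0341\pm 0.0006$ of Table \[tbl:lambda\]:

```python
import numpy as np

lam_watabiki    = lambda xi: xi**2                                      # Watabiki
lam_ding_gwynne = lambda xi: xi/np.sqrt(6)                              # Ding-Gwynne
lam_ad_hoc      = lambda xi: xi/np.sqrt(6)*(1 - (1 - np.sqrt(6)*xi)**3/3)

for xi in (0.1, 0.2, 0.3, 0.4):
    print(xi, lam_watabiki(xi), lam_ding_gwynne(xi), lam_ad_hoc(xi))
```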
Source code and simulation data {#source-code-and-simulation-data .unnumbered} =============================== The source code and simulation data, on which Section \[sec:fssplanarmaps\] and Section \[sec:fsslqg\] are based, are freely available online at [@Barkley2019]. References {#references .unnumbered} ========== [10]{} url \#1[[\#1]{}]{} urlprefix \[2\]\[\][[\#2](https://arxiv.org/abs/#2)]{} Ambj[ø]{}rn J, Jurkiewicz J and Loll R 2005 The spectral dimension of the universe is scale dependent [*Physical review letters*]{} [**95**]{} 171301 Benedetti D 2009 Fractal properties of quantum spacetime [*Physical review letters*]{} [**102**]{} 111303 Reuter M and Saueressig F 2011 Fractal space-times under the microscope: a renormalization group view on monte carlo data [*Journal of High Energy Physics*]{} [**2011**]{} 12 Calcagni G, Oriti D and Th[ü]{}rigen J 2015 Dimensional flow in discrete quantum geometries [*Physical Review D*]{} [**91**]{} 084047 Carlip S 2017 Dimension and dimensional reduction in quantum gravity [ *Classical and Quantum Gravity*]{} [**34**]{} 193001 Knizhnik V G, Polyakov A M and Zamolodchikov A B 1988 Fractal structure of 2d-quantum gravity [*Modern Physics Letters A*]{} [**3**]{} 819–826 Ambj[ø]{}rn J and Watabiki Y 1995 Scaling in quantum gravity [*Nuclear Physics B*]{} [**445**]{} 129–142 Chassaing P and Schaeffer G 2004 Random planar lattices and integrated super[B]{}rownian excursion [*Probab. Theory Related Fields*]{} [**128**]{} 161–212 Angel O 2003 Growth and percolation on the uniform infinite planar triangulation [*Geom. Funct. Anal.*]{} [**13**]{} 935–974 Le Gall J F 2013 Uniqueness and universality of the [B]{}rownian map [*Ann. Probab.*]{} [**41**]{} 2880–2960 Miermont G 2013 The [B]{}rownian map is the scaling limit of uniform random plane quadrangulations [*Acta Math.*]{} [**210**]{} 319–401 Miller J and Sheffield S 2016 Liouville quantum gravity and the [Brownian]{} map [II]{}: geodesics and continuity of the embedding (*Preprint* ) Ambj[ø]{}rn J, Nielsen J L, Rolf J, Boulatov D and Watabiki Y 1998 The spectral dimension of [2D]{} quantum gravity [*Journal of High Energy Physics*]{} [ **1998**]{} 010 Rhodes R and Vargas V 2014 Spectral dimension of [Liouville]{} quantum gravity [*Annales Henri Poincar[é]{}*]{} vol 15 (Springer) pp 2281–2298 Gwynne E and Miller J 2017 Random walk on random planar maps: spectral dimension, resistance, and displacement (*Preprint* ) Watabiki Y 1993 Analytic study of fractal structure of quantized surface in two-dimensional quantum gravity [*Progress of Theoretical Physics Supplement*]{} [**114**]{} 1–17 Ambj[ø]{}rn J, Jurkiewicz J and Watabiki Y 1995 On the fractal structure of two-dimensional quantum gravity [*Nuclear Physics B*]{} [**454**]{} 313–342 Kawamoto N, Kazakov V, Saeki Y and Watabiki Y 1992 Fractal structure of two-dimensional gravity coupled to $c=-2$ matter [*Physical review letters*]{} [**68**]{} 2113 Anagnostopoulos K, Bialas P and Thorleifsson G 1999 The ising model on a quenched ensemble of $c=-5$ gravity graphs [*Journal of statistical physics*]{} [**94**]{} 321–345 Kawamoto N and Yotsuji K 2002 Numerical study for the c-dependence of fractal dimension in two-dimensional quantum gravity [*Nuclear Physics B*]{} [ **644**]{} 533–567 Ambj[ø]{}rn J and Budd T G 2012 Semi-classical dynamical triangulations [ *Physics Letters B*]{} [**718**]{} 200–204 Ambj[ø]{}rn J and Budd T 2013 The toroidal [Hausdorff]{} dimension of 2d [Euclidean]{} quantum gravity [*Physics Letters B*]{} [**724**]{} 328–332 
Ambj[ø]{}rn J and Budd T 2014 Geodesic distances in [Liouville]{} quantum gravity [*Nuclear Physics B*]{} [**889**]{} 676 – 691 Ding J and Goswami S 2018 Upper bounds on [Liouville]{} first-passage percolation and [Watabiki’s]{} prediction [*Commun. on Pure and Applied Mathematics*]{} (*Preprint* ) Ding J and Gwynne E 2018 The fractal dimension of [Liouville]{} quantum gravity: universality, monotonicity, and bounds [*Commun. Math. Phys.*]{} 1–58 (*Preprint* ) Dub[é]{}dat J, Falconet H, Gwynne E, Pfeffer J and Sun X 2019 Weak [LQG]{} metrics and [Liouville]{} first passage percolation (*Preprint* ) Gwynne E and Miller J 2019 Existence and uniqueness of the [Liouville]{} quantum gravity metric for $\gamma\in(0, 2)$ (*Preprint* ) Gwynne E and Pfeffer J 2019 Bounds for distances and geodesic dimension in [Liouville]{} first passage percolation (*Preprint* ) Ang M 2019 Comparison of discrete and continuum [Liouville]{} first passage percolation (*Preprint* ) Kenyon R, Miller J, Sheffield S and Wilson D B 2015 Bipolar orientations on planar maps and [SLE$_{12}$]{} (*Preprint* ) Gwynne E and Pfeffer J 2019 External diffusion limited aggregation on a spanning-tree-weighted random planar map (*Preprint* ) Li Y, Sun X and Watson S S 2017 Schnyder woods, [SLE$_{16}$]{}, and [Liouville]{} quantum gravity (*Preprint* ) Tutte W T 1963 A census of planar maps [*Canadian J. Math.*]{} [**15**]{} 249–271 Schaeffer G 1998 [*Conjugation d’arbres et cartes combinatoires aleatoires*]{} Ph.D. thesis Université Bordeaux I Miermont 2014 [*Aspects of Random Maps*]{} (Saint-Flour lecture notes) Mullin R C 1967 On the enumeration of tree-rooted maps [*Canadian J. Math.*]{} [**19**]{} 174–183 Sheffield S 2016 Quantum gravity and inventory accumulation [*Ann. Probab.*]{} [**44**]{} 3804–3848 Tutte W T 1973 Chromatic sums for rooted planar triangulations: the cases [$\lambda =1$]{} and [$\lambda =2$]{} [*Canadian J. Math.*]{} [**25**]{} 426–447 Bousquet-Mélou M 2011 Counting planar maps, coloured or uncoloured [ *Surveys in combinatorics 2011*]{} ([*London Math. Soc. Lecture Note Ser.*]{} vol 392) (Cambridge Univ. Press, Cambridge) pp 1–49 Gwynne E, Holden N and Sun X 2016 Joint scaling limit of a bipolar-oriented triangulation and its dual in the peanosphere sense (*Preprint* ) Bousquet-M[é]{}lou M, Fusy [É]{} and Raschel K 2019 Plane bipolar orientations and quadrant walks (*Preprint* ) Bousquet-Mélou M and Mishna M 2010 Walks with small steps in the quarter plane [*Algorithmic probability and combinatorics*]{} ([*Contemp. Math.*]{} vol 520) (Amer. Math. Soc., Providence, RI) pp 1–39 Schnyder W 1989 Planar graphs and poset dimension [*Order*]{} [**5**]{} 323–343 Bonichon N 2005 A bijection between realizers of maximal plane graphs and pairs of non-crossing [D]{}yck paths [*Discrete Math.*]{} [**298**]{} 104–114 Bernardi O and Bonichon N 2009 Intervals in [C]{}atalan lattices and realizers of triangulations [*J. Combin. Theory Ser. A*]{} [**116**]{} 55–75 Fusy E, Poulalhon D and Schaeffer G 2009 Bijective counting of plane bipolar orientations and [S]{}chnyder woods [*European J. 
Combin.*]{} [**30**]{} 1646–1658 Bouttier J, Di Francesco P and Guitter E 2003 Geodesic distance in planar graphs [*Nuclear Physics B*]{} [**663**]{} 535–567 Ferrenberg A M and Landau D 1991 Critical behavior of the three-dimensional [Ising]{} model: A high-resolution [Monte]{} [Carlo]{} study [*Physical Review B*]{} [**44**]{} 5081 Ambj[ø]{}rn J, Anagnostopoulos K, Ichihara T, Jensen L, Kawamoto N, Watabiki Y and Yotsuji K 1998 [The Quantum space-time of c = -2 gravity]{} [*Nucl. Phys.*]{} [**B511**]{} 673–710 (*Preprint* ) Gwynne E, Miller J and Sheffield S 2017 The [Tutte]{} embedding of the mated-[CRT]{} map converges to [Liouville]{} quantum gravity (*Preprint* ) David F 1988 Conformal field theories coupled to 2-d gravity in the conformal gauge [*Modern Physics Letters A*]{} [**3**]{} 1651–1656 Distler J and Kawai H 1989 Conformal field theory and 2d quantum gravity [ *Nuclear physics B*]{} [**321**]{} 509–527 Sheffield S 2007 Gaussian free fields for mathematicians [*Probab. Theory Relat. Fields*]{} [**139**]{} 521–541 Duplantier B and Sheffield S 2011 Liouville quantum gravity and [KPZ]{} [ *Invent. Math.*]{} [**185**]{} 333–393 Ding J, Zeitouni O and Zhang F 2019 Heat kernel for [Liouville]{} [Brownian]{} motion and [Liouville]{} graph distance [*Communications in Mathematical Physics*]{} (*Preprint* ) Benjamini I 2010 Random planar metrics [*Proceedings of the International Congress of Mathematicians*]{} vol 4 (World Scientific Publishing Company) pp 2177–2187 Barkley J and Budd T 2019 Simulation data and source code for Hausdorff dimension measurements in two-dimensional quantum gravity [*Zenodo*]{} doi:[10.5281/zenodo.3375454](https://doi.org/10.5281/zenodo.3375454) [^1]: In probabilistic terms, the limit can be understood as a limit in distribution as $n\to\infty$ followed by an almost sure limit as $n\to\infty$ [@Ding2018 Theorem 1.6]. [^2]: In the case of spanning-tree decorated quadrangulations [@Gwynne2019b Theorem 1.4] comes close by identifying the scaling of the diameter of ${\mathfrak{m}}_n$ with growing $n$. [^3]: The *mated-CRT maps* of [@Gwynne2017a] are a good candidate for arbitrary $\gamma\in(0,2)$.
--- abstract: 'Using direct $N$-body simulations of self-gravitating systems we study the dependence of dynamical chaos on the system size $N$. We find that the $N$-body chaos quantified in terms of the largest Lyapunov exponent $\Lambda_{\rm max}$ decreases with $N$. The values of its inverse (the so-called Lyapunov time $t_\lambda$) are found to be smaller than the two-body collisional relaxation time but larger than the typical violent relaxation time, thus suggesting the existence of another collective time scale connected to many-body chaos.' author: - 'Pierfrancesco Di Cintio$^{1,2}$' - 'Lapo Casetti$^{2,3,4}$' title: '$N$-body chaos, phase-space transport and relaxation in numerical simulations' --- Introduction ============ The dynamics of $N$-body gravitational systems, due to the long-range nature of the $1/r^2$ force, is dominated by [*mean-field*]{} effects rather than by inter-particle collisions for large $N$. Due to the extremely large number of particles it is often natural to adopt a description in the continuum ($N\to\infty$, $m\to 0$) collisionless limit in terms of the Collisionless Boltzmann Equation (CBE, see e.g. [@bt08]) for the single-particle phase-space distribution function $f(\mathbf{r},\mathbf{v},t)$ $$\label{vlasov} \partial_t f+\mathbf{v}\cdot\nabla_{\mathbf{r}}f+\nabla\Phi\cdot\nabla_{\mathbf{v}}f=0,$$ coupled to the Poisson equation $\Delta\Phi(\mathbf{r})=4\pi G\rho(\mathbf{r})$. In many real self-gravitating systems, such as elliptical galaxies, $N$ is large ($N\approx 10^{11}$), and the two-body relaxation time $t_{2b}\propto Nt_*/\log N$ associated to collisional processes [@1943ApJ....97..255C] is much larger than the age of the universe. The typical states of such systems are therefore idealized as collisionless equilibria of Eq. (\[vlasov\]), approached via the mechanism of violent relaxation first suggested by [@lyn69], on a time scale of the order of the average crossing time $t_*$.\ The question of whether the continuum limit is effectively meaningful, and how the intrinsic discrete nature (see e.g. [@1980PhR....63....1K]) of collisionless systems affects their relaxation to (meta-)equilibrium is in principle still open.\ Using arguments of differential geometry, [@gurz86] conjectured the existence of another relaxation time $t_*<\tau<t_{2b}$, linked to the inverse of the exponentiation rate of phase-space volumes, and scaling as $N^{1/3}$ with the system size (see also [@vesp92]). [@kandrup01; @kandrup02; @kandrup03; @kandrup04] studied the ensemble properties of tracer orbits in frozen $N$-body potentials (i.e., the potential generated by $N$ [*fixed*]{} particles distributed according to different densities $\rho(r)$). They found that even at large $N$ the discreteness effects are not negligible in models with both integrable and non-integrable continuum limit counterparts. Moreover they found that the exponentiation rate of the phase-space volume of initially localized clumps of particles is compatible with the mean Lyapunov exponent associated to the particles. The latter was found to be a slightly increasing function of $N$.
In parallel [@hem] studied the $N$-body chaos in self-consistent Plummer models as a function of $N$, quantified by means of the exponentiation rate of a subset of the $6N$ phase-space volume considering only the position part and finding again a slightly increasing degree of chaoticity.\ More recently, [@beraldo19], using information entropy arguments, suggested the existence of another relaxation scale, associated to discreteness effects in $N$-body models, that scales as $N^{-1/6}$.\ Here we revisit this matter further by studying the dynamics of individual tracer particle in frozen and self-consistent equilibrium models as well as the Lyapunov exponents of the full $N$-body problem. Methods ======= In both frozen and active models we consider the spherically symmetric Plummer density profile $$\label{plummer} \rho(r)=\frac{3}{4\pi}\frac{Mr_c^2}{(r_c^2+r^2)^{5/2}},$$ with total mass $M$ and core radius $r_c$. In order to generate the velocities for the active simulations, we use the standard rejection technique to sample the anisotropic equilibrium phase-space distribution function $f$ obtained from $\rho$ with the standard [@edd16] integral inversion. Throughout this work we assume units such that $G=M=r_c=1$, so that the dynamical time $t_*=\sqrt{r_c^3/GM}$ and the scale velocity $v_*=r_c/t_*$ are also equal to unity. Individual particle masses are then $m=1/N$. For the numerical simulations we use a standard fourth-order symplectic integrator with fixed time-step $\Delta t=5\times 10^{-3}$ to solve the particles’ equations of motion and the associated tangent dynamics used to evaluate the maximal Lyapunov exponent with the standard [@bgs] method, as the large $k$ limit of $$\Lambda_{\rm max}(t)=\frac{1}{k\Delta t}\sum_k\ln\left|\frac{W(k\Delta t)}{d_0}\right|.$$ In the above equation $W$ is the norm of the $6N$- or 6-dimensional tangent vector for a self-consistent $N$-body simulation and an individual particle orbit, respectively. In both cases $d_0=W(t=0)$. ![Maximal Lyapunov $\Lambda_{\rm max}$ exponent as a function of $N$ for the full $N$-body problem (upward triangles) and for a single tracer orbit in the time dependent potential of all the others (downward triangles). Average maximal Lyapunov exponent for an ensemble of non-interacting particles sampled from an isotropic Plummer model propagated in a frozen Plummer model (diamonds).[]{data-label="fig1"}](lmax.pdf){width="3in"} Results and discussion ====================== [@dcs] have computed the largest Lyapunov exponents $\Lambda_{\rm max}$ for the full $N$-body problem and for several tracer orbits for different system sizes $N$, finding that in the first case $\Lambda_{\rm max}$ is a decreasing function of $N$ with a slope between $-1/3$ and $-1/2$ (upward triangles in Fig. \[fig1\]), contrary to previous numerical estimates (see e.g. [@hem]). In the second case, when following a single particle in the potential of the others, $\Lambda_{\rm max}$ is almost constant in both active and frozen potentials for tightly bound particles, while decreases for less bound particles with different power-law trends as function of the binding energy.\ Curiously, when measuring the average largest Lyapunov exponent of a system of non-interacting particles randomly sampled from a self-consistent isotropic system and propagated in a frozen realization of the latter, $\langle\Lambda_{\rm max}\rangle$ has the same scaling with $N$ (and remarkably close values), as shown by the diamonds in Fig. 
\[fig1\].\ Here we compute the so-called emittance, a quantity defined in the context of charged-particle beams (see e.g. [@kandrup04] and references therein) associated to the “diffusion” in phase-space volume, defined by $$\epsilon=(\epsilon_x\epsilon_y\epsilon_z)^{1/3},\quad \epsilon_i=\sqrt{\langle r_i^2\rangle\langle v_i^2\rangle-\langle r_iv_i\rangle^2},$$ where $\langle...\rangle$ denotes ensemble averages. We have evaluated $\epsilon$ for the full $N$-body problem, finding that the global system’s emittance is almost conserved for large $N$, while it appears to grow with time (although the model remains in virial equilibrium) for $N\lesssim 2000$, as shown in Fig. \[fig2\] (left panels). ![Top left: evolution of the normalized collective emittance for an isotropic Plummer model with different values of $N$. Bottom left: evolution of the virial ratio $2K/|U|$. Right: evolution of the emittance of an initially localized cluster in the $N=16384$ case.[]{data-label="fig2"}](vir.pdf "fig:"){width="2.9in"} ![Top left: evolution of the normalized collective emittance for an isotropic Plummer model with different values of $N$. Bottom left: evolution of the virial ratio $2K/|U|$. Right: evolution of the emittance of an initially localized cluster in the $N=16384$ case.[]{data-label="fig2"}](emit.pdf "fig:"){width="2.2in"} When we consider instead the emittance of an initially localized cluster of tracers (see Fig. \[fig2\], right panel) in an active $N$-body system, we observe that the latter has (independently of $N$) an increasing behaviour with time, roughly proportional to $\epsilon_0\exp(\langle\Lambda_{\rm max}\rangle t)$ (dashed line), where $\epsilon_0$ is its value at $t=0$ and $\langle\Lambda_{\rm max}\rangle$ is the mean maximal Lyapunov exponent of the cluster. We have repeated the same numerical experiment for frozen $N$-body potentials and for time dependent potentials generated by non-interacting particles in a smooth Plummer model, finding the same behaviour (see again Fig. \[fig2\]). Moreover, in direct $N$-body simulations, when we compare the evolution of $\epsilon$ for a subset of active particles with that of the same cluster of non-interacting particles, we find that for large $N$, $\epsilon$ grows in the same fashion for both tracers and active particles. In turn, $\epsilon$ retains generally lower values for initially localized clusters in the frozen case, as a consequence of the conservation of energy and smaller fluctuations of the angular momentum in the latter case (see [@dcs]).\ All these results lead us to conclude that the discreteness effects and the intrinsic chaoticity of the $N$-body problem may be linked to another effective relaxation scale shorter than $t_{2b}$. Moreover, it also appears that the conclusions arising from the study of frozen $N$-body models cannot be easily extended to real self-consistent systems. The results on the effect of the discreteness chaos on collective instabilities such as the radial-orbit instability (ROI) will be published elsewhere \[[@dcs2]\]. 1976, *Phys. Rev. A*, 14, 2338 2019, *ApJ* 870, 128 2008 Galactic dynamics (2nd edition, Princeton University Press) 1943, *ApJ*, 97, 255 1949, *Reviews of Modern Physics*, 21, 383 2019 *MNRAS* (submitted, arXiv:1901.08981) 2019 *in preparation* 1916, *MNRAS* 76, 525 1986, *A& A* 160, 203 2002, *ApJ* 580, 606 1980, *Phys. Rep.*, 63, 1 2001 *Phys. Rev. E* 64, 56209 2003, *ApJ* 585, 244 2004, *Phys. Rev. St Acc. B.* 7, 14202 1969, *MNRAS* 136, 101 2002 *Phys. Rev.
E* 65, 66203 1992, *A& A* 266, 215
--- abstract: 'An investigation of quantum dynamical properties was conducted for $^{1}$H$^{+}$ / $^{2}$H$^{+}$ / $^{3}$H$^{+}$ ions as they traverse a carbon nanotube (CNT). The investigation is focused on a Fortran-based program simulating a hydrogen ion passing through a CNT. Our work included testing convergence of the simulations, understanding the dynamical behavior of hydrogen isotopes, and investigating the applied H$^{+}$ + CNT potential energy curve.' author: - 'Anthony B. Kunkel, Daniel J. Finazzo, Renat A. Sultanov, and Dennis Guster' bibliography: - 'science1.bib' title: 'Quantum-mechanical treatment of $^{1}$H$^{+}$ / $^{2}$H$^{+}$ / $^{3}$H$^{+}$ ion dynamics in carbon nanotubes' --- Introduction ============ Hydrogen is known to be a key element that can be used in alternative energy. The alternative energy field is an extremely fast-growing research area, since our current main energy source, fossil fuels, will not last forever. Hydrogen benefits the alternative energy field by being an abundant and efficient source of energy. There are, however, some hurdles to using hydrogen as a fuel source, specifically in storing it [@Sharma20151151; @Im20122749; @Orhan2015801]. Hydrogen is rarely found alone and is most often found bonded to other elements. This is a solvable problem, since it is relatively easy to dissociate hydrogen from water molecules and separate it from the oxygen [@Pletcher201115089]. Another problem subsequently arises: hydrogen is extremely dangerous to store due to its combustible properties [@Pasman20112407; @Petukhov20095924]. This problem is one of the main reasons the use of hydrogen as a fuel source has been at a standstill. It is therefore of high importance to find a medium that will store and release hydrogen at will. One possible medium, under investigation in this research and many others, is the carbon nanotube (CNT) [@Hynek1997601; @PhysRevB.67.245413; @4287144620090601; @Skouteris201318; @0953-8984-14-17-201]. If stored properly, hydrogen could be kept not only safely but also in a very small and organized space [@RafiiTabar2004235]. Another interesting use of the CNT is the potential for using it as an isotopic filter [@PhysRevA.64.022903; @PhysRevB.63.245419; @PhysRevLett.82.956; @PhysRevB.65.014503]. Previous work on hydrogen isotopes traveling through a single-walled carbon nanotube (SWNT) displayed interesting results for isotopic separation. Since hydrogen isotopes differ in mass, they travel at different velocities for a given kinetic energy and will thus exit the SWNT at different times [@doi:10.1021/nn901592x; @doi:10.1021/jp054511p; @Ghosh201547; @PhysRevLett.94.175501; @doi:10.1021/jp030601n; @doi:10.1021/ja0502573]. An interesting aspect investigated in this paper is how the separation changes when varying parameters of the CNT such as its size and/or the temperature. A larger isotopic separation would be incredibly useful in fields where hydrogen isotopes are of great importance. For instance, deuterium is sometimes found bonded to oxygen in the form of heavy water, which must be filtered separately from hydrogen. These deuterium atoms can then be used in research with tritium for tests in the nuclear field [@edsgcl.36234808920140101]. Therefore, it is critical to find inexpensive ways to separate all of these isotopes of hydrogen. CNTs might be the best solution for this low-cost isotopic separation [@doi:10.1021/jp064745o].
In this experiment a Fortran-based program called CYLWAVE is used to find the flux of outgoing hydrogen atoms from the CNT. We are focused on manipulating the CNT to represent a physical system that takes hydrogen in but does not allow it to escape. Another focus is on how the isotopic separation is affected by the variables of the CNT. We are also interested in visually simulating the potential energy curve to get an understanding of what potentials are applied to the particles inside our CNT [@Skouteris2009459]. In this paper, Sec. II describes the mechanics of the CYLWAVE program. It also explains how our variations of the program parameters were approached. Sec. III gives an overview of information obtained through the course of our research. Sec. IV reviews the information presented throughout this document. Future work and outlook are also given in that section. All units used are atomic units unless otherwise stated. Methods ======= The program, as mentioned before, utilizes a wavepacket propagation approach inside a stationary CNT. Since the problem at hand is best suited to cylindrical coordinates, it is necessary to deal with the Hamiltonian operator in cylindrical coordinates as well. The Hamiltonian then becomes as follows, with $r$ the radius, $\theta$ the azimuthal angle, and $z$ the longitudinal distance: $$\hat{H}=-\frac{\hbar^{2}}{2m}\left( \frac{\partial^{2}}{\partial z^{2}}+\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial \theta^{2}}\right) +V(r,\theta,z).$$ This equation may be further simplified by rewriting the first-order derivative in $r$: $$\hat{H}=-\frac{\hbar^{2}}{2m}\left( \frac{\partial^{2}}{\partial z^{2}}+r^{-1/2}\frac{\partial^{2}}{\partial r^{2}}r^{1/2}+\frac{1}{4r^{2}}+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial \theta^{2}}\right) +V(r,\theta,z).$$ Finally, if the time-dependent wavefunction is rescaled as $\psi = r^{1/2}\Psi$, the Hamiltonian operator acting on $\psi$ becomes [@1432]: $$\hat{H}=-\frac{\hbar^{2}}{2m}\left( \frac{\partial^{2}}{\partial z^{2}}+\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{4r^{2}}+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial \theta^{2}}\right) +V(r,\theta,z).$$ The function $V$ is a combination of three different potential functions, used to accurately model the potential energy surface. A sum of Lennard-Jones (6-12) potentials has been included to account for the interaction between the hydrogen atom and the carbon atoms. The Lennard-Jones potential is described as: $$V=4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right].$$ Here $\varepsilon$ is the potential well depth and $\sigma$ is the distance where the inter-particle potential is zero (a short numerical sketch of the summed interaction is given below). There are also two artificial potentials designed to exclude nonphysical characteristics of the wavepacket. One of these potentials is placed at the longitudinal end of the tube in order to absorb the exiting particle. The other is located at the radial edge of the CNT to allow for an exponential decay of the wavefunction outside of the nanotube, to account for possible tunneling effects. The program allowed us to change the parameters of the particle and the CNT using the input, parameter, and potential energy surface files. Our experiment was solely focused on working with the input file, in the interest of only changing specific variables and achieving convergence of our results. With the input file we were able to change the particle’s mass and longitudinal energy.
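To make the role of the summed Lennard-Jones interaction concrete, the following is a minimal Python sketch (not CYLWAVE itself) that evaluates the H$^{+}$ + CNT potential along the radial coordinate at fixed $z$ and $\theta$, in the spirit of the potential curve discussed in the Results section. The well depth, zero-crossing distance, tube dimensions, and idealized carbon lattice used here are illustrative placeholders, since the actual input values are not quoted in the text.

```python
import numpy as np

# Illustrative placeholder parameters (atomic units); not CYLWAVE's input values.
EPS   = 1.0e-4   # Lennard-Jones well depth (assumed)
SIGMA = 5.0      # Lennard-Jones zero-crossing distance (assumed)
R_CNT = 12.0     # tube radius (assumed)
L_CNT = 60.0     # tube length (assumed)
N_RING, N_Z = 20, 40   # carbons per ring and number of rings (idealized lattice)

def carbon_positions():
    """Carbon atoms placed on an idealized cylindrical lattice."""
    phis = 2.0 * np.pi * np.arange(N_RING) / N_RING
    zs = np.linspace(-L_CNT / 2.0, L_CNT / 2.0, N_Z)
    ring = np.column_stack([R_CNT * np.cos(phis), R_CNT * np.sin(phis)])
    return np.array([[x, y, z] for z in zs for x, y in ring])

def lj_sum(point, atoms):
    """Sum of 6-12 pair potentials between the ion at `point` and all carbons."""
    d = np.linalg.norm(atoms - point, axis=1)
    return np.sum(4.0 * EPS * ((SIGMA / d) ** 12 - (SIGMA / d) ** 6))

atoms = carbon_positions()
# Radial scan at fixed z = 0, theta = 0, mirroring how the potential curve is plotted.
for r in np.linspace(0.0, 0.9 * R_CNT, 10):
    v = lj_sum(np.array([r, 0.0, 0.0]), atoms)
    print(f"r = {r:6.2f}  V(r) = {v: .4e}")
```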
The input file also allowed changes to the components of the CNT, such as its $z$, $r$, and $\theta$ integration points. After understanding the mechanics of the program, it was a logical next step to find which components of the input file would be varied and which needed to be held constant. The original file was designed as a test simulation to ensure that the program was functioning properly. The test simulation was found to be working correctly, but it lacked detailed results, especially in the number of azimuthal integration points. We solved this problem by holding all parameters constant and changing the number of azimuthal points until convergence was found. This same technique was used in finding convergence in the integration points for the radial and longitudinal coordinates. The final steps in our simulation were to manipulate the longitudinal energy to determine the possibility of separation. The wavepacket approach breaks down at low energy, which limited our ability to do simulations at low temperatures, i.e., less than 50 Kelvin (K), although we are also curious about the possibility of separation at temperatures near room temperature (300 K). We also took time to investigate other isotopes of hydrogen. Deuterium and tritium were used since they might also be traveling through a particular CNT. The original mass of the particle was 1836 atomic units (a.u.), which corresponds to a hydrogen ion. The mass was doubled and tripled, respectively, to simulate deuterium and tritium. Results ======= The Lennard-Jones potential is of great importance to the transport of the hydrogen ion, and it is imperative that we take time investigating how the program simulates the potential. We were able to visualize the potential by programming the simulation to display the values of the potential and the $r$ coordinates, while keeping the $z$ and $\theta$ values static. This approach utilizes the symmetry of the potential, understanding that the potential is dependent on $r$ but remains constant for varying values of both $\theta$ and $z$. These potential values were then plotted in Figure 1 to display the potential energy curve used in the program. As mentioned before, there is a radial potential designed to restrict the particle to only be able to tunnel out of the CNT. This potential has been omitted since it is not part of the Lennard-Jones potential. It was apparent that low numbers of integration points were not sufficient for getting detailed results from the program. The azimuthal values were tested for 8 through 16 points. There were no visible changes in flux for azimuthal points 10 through 16, as shown in Figure 2, so it was concluded that convergence had been found. For the subsequent runs we held the azimuthal points at 10 to save computation time. This particular approach was applied to find convergence in the $r$ and $z$ coordinates. The initial numbers of integration points were 144 and 72 for $z$ and $r$, respectively. The $z$-points were increased to 216 and 432 while keeping all other parameters constant. The results in Figure 3 show that there is no change in flux due to the increase in integration points. Similarly, we increased the number of integration points of the $r$ coordinate to 144. The results from these simulations, shown in Figure 4, indicate that there was no change between the different numbers of $r$-points. Given these data we can see that the program is convergent for each of our chosen values of integration points, although there was a notable increase in computation time as the number of integration points was increased.
Therefore we chose to use 72 $r$-points and 216 $z$-points to shorten the time per simulation. Another interesting result comes from the flux data associated with the particles hydrogen, deuterium, and tritium. Simulations were run on the three particles using the same values for all parameters except the particle mass. Figure 5 shows that the particles vary in the amount of flux and in the time at which the maximum flux exits the tube. It is important to note that the flux is shown as negative in the results due to the convention of the flux operator in the program, which is given by the equation $$\hat{F}=\frac{i}{\hbar}\left[\hat{H},\theta(z-z_{0})\right],$$ with $\theta(z-z_{0})$ being the Heaviside step function. This means that the results actually correspond to a maximum as opposed to the appearance of a minimum. The figure shows that hydrogen and its isotopes may in fact exit the CNT at different times. The problem that arises is that the fluxes of the particles overlap. The temperature was changed to determine its effect on the amount of overlap between the particles. The temperature was varied from 50 K to 300 K, as shown in Figures 5-9. The overlap persists with the change in temperature. It does, however, show interesting results in the change in flux. As the temperature is decreased we can see that the maximum flux continues to decrease for each particle. It can also be seen that the system with the least energy has its maximum flux at a later time than in the higher-energy cases. These trends may be explained by the particles’ change in velocity as the temperature is varied. We might also assume that the flux is proportional to the velocity. Conclusion ========== In this experiment we were able to test the CYLWAVE program for convergence for the hydrogen ion inside the CNT. We tested several areas of the program, including the integration points for $\theta$, $r$, and $z$. We also investigated the change in flux corresponding to changes in the longitudinal energy. Convergence was found when using 10, 72, and 216 integration points for $\theta$, $r$, and $z$, respectively. It was also found that the CNT produced a separation between the peak fluxes for isotopes of hydrogen. The separation was seen to remain nearly constant as the energy was varied from high to low. This fact requires more inspection in future work. As this work continues it will be important to look at several more aspects of the program and to adapt the CNT itself. One area of interest will be the Lennard-Jones potential. More time is required to truly understand the potential’s effect on the CNT and on the transported particle. The potential could be modified to better suit particles other than hydrogen ions and isotopes. Another line of investigation that should be continued from this paper is isotope separation. A closer look would shed light on how the size of the CNT affects this separation, more specifically, how changes in radius and length adjust the separation between the peaks in outgoing flux. These changes may also give way to finding a way to greatly decrease the outgoing particle’s flux. If large decreases in the flux can be shown, it would greatly benefit the hydrogen storage and energy fields.
![Plot of the Lennard-Jones potential applied to the H$^{+}$+CNT.](potgraph.png){width="160mm"} ![Convergence of azimuthal points 10, 15, and 16.](ThetaConvergence.png) ![Convergence of z-points 144, 216, and 432.](zconvergence.png) ![Convergence of r-points 72 and 144.](Rconvergence.png) ![Flux output of hydrogen, deuterium, and tritium at 300 K.](300_kelvin.png) ![Flux output of hydrogen, deuterium, and tritium at 200 K.](Hydrogen_deuterium_tritium.png) ![Flux output of hydrogen, deuterium, and tritium at 150 K.](000712532561LongEnergy.png) ![Flux output of hydrogen, deuterium, and tritium at 100 K.](000475021707LongEnergy.png) ![Flux output of hydrogen, deuterium, and tritium at 50 K.](000237510854LongEnergy.png)
--- abstract: 'We calculate linear and nonlinear optical susceptibilities arising from the excitonic states of monolayer MoS$_2$ for in-plane light polarizations, using second-quantized bound and unbound exciton operators. Optical selection rules are critical for obtaining the susceptibilities. We derive the valley-chirality rule for the second-harmonic generation in monolayer MoS$_2$, and find that the third-harmonic process is efficient only for linearly polarized input light while the third-order two-photon process (optical Kerr effect) is efficient for circularly polarized light using a higher order exciton state. The absence of linear absorption due to the band gap and the unusually strong two-photon third-order nonlinearity make the monolayer MoS$_2$ excitonic structure a promising resource for coherent nonlinear photonics.' author: - 'Daniel B. S. Soh' - Christopher Rogers - 'Dodd J. Gray' - Eric Chatterjee - Hideo Mabuchi bibliography: - 'MoS2.bib' nocite: '[@*]' title: 'Optical nonlinearities of excitons in monolayer MoS$_2$' --- Introduction ============ Design bottlenecks arising from energy dissipation and heat generation in the dense on-chip interconnect of CMOS computing architectures have led to renewed interest in approaches to all-optical information processing. The preeminent challenge remains to develop materials with low loss and large optical nonlinearity, which are suitable for incorporation with integrated nanophotonic structures [@sipahigil2016integrated; @alam2016large; @sipahigil2016single; @benson2011assembly; @liu2010mid]. Looking to the future, the development of coherent nonlinear photonics may be regarded as preliminary work towards quantum photonic architectures that represent and process information utilizing non-classical states of light [@hacker2016photon; @reimer2016generation; @kockum2017deterministic; @mabuchi2012qubit]. Beyond computation per se, nonlinear optical materials are fundamental for many other integrated photonics applications including on-chip frequency comb generation [@brasch2016photonic; @del2007optical], frequency conversion [@guo2016chip] and supercontinuum generation [@carlson2017photonic; @hsieh2007supercontinuum]. Atomically thin 2D materials are promising candidates for providing optical nonlinearity in integrated photonic circuits, especially as growth techniques have advanced in recent years to enable the deposition on top of lithographically fabricated devices [@ajayan2016two]. In particular, monolayer MoS$_2$ has attracted great interest following the discovery that it is indeed a direct band gap semiconductor [@mak2010atomically] with intriguing optical properties such as valley optical selectivity [@xiao2012coupled]. The nonlinear optical properties of monolayer MoS$_2$ are now being explored; in this article we aim to characterize important contributions from its excitonic bound states. Large collective optical responses from excitonic states are well known [@haug2009quantum; @wang2017excitons]. Reduced dimensionality further increases the optical response of excitons since the most significant contribution to exciton formation comes from the band edges where density of states in 2D is much larger than that in 3D. Hence, excitonic states of the transition metal (Mo, W) dichalcogenides (S$_2$, Se$_2$) (TMDs) are expected to contribute substantial optical nonlinearity even with their atomically thin layer thickness. 
Unlike in graphene, which has a significant linear absorption everywhere in the optical spectrum, nonlinear processes utilizing the excitonic states of monolayer MoS$_2$ may preserve a sufficient level of coherence due to the band gap. This unique set of features makes monolayer MoS$_2$ an attractive material for coherent nonlinear photonics. The direct band gap around the $\pm {\boldsymbol{K}}$ points in the first Brillouin zone of monolayer MoS$_2$ is analytically best modeled by a gapped Dirac cone [@xiao2012coupled; @wang2017excitons; @wang2016radiative; @wang2015fast; @wang2012electronics]. We answer the natural question of whether monolayer MoS$_2$ has an optical nonlinearity comparable to that of graphene. Although there are numerous results available for the optical properties of monolayer MoS$_2$ [@zhang2014absorption; @hill2015observation; @wang2012electronics; @wang2016radiative; @selig2016excitonic], only a few results on its optical nonlinearities are available [@kumar2013second; @clark2014strong; @merano2016nonlinear; @pedersen2015intraband; @trolle2014theory; @gruning2014second]. All of these focused on the second-harmonic generation process, which, according to our study, turns out to be a weak perturbative effect stemming from the threefold rotational symmetry, while the third-order nonlinearity may be more significant considering the symmetries of the excitons. We calculate the optical susceptibilities of monolayer MoS$_2$ when the frequency of the output light is nearly resonant with the highly optically responsive exciton energy levels. We show that, while the optical selection rule dictates the substantially contributing channels in nonlinear processes, that of the MoS$_2$ excitonic states inherits the threefold rotational symmetry of the atomic structure. As a result, several unusual high-order transition channels can be formed in the excitonic level transitions, which appear to violate the usual valley selection rule. Although an empirical nonlinear selection rule was previously adopted [@seyler2015electrical; @xiao2015nonlinear], we explain the optical selection rules through the actual calculation of dipole moments based on the massive Dirac Hamiltonian with the perturbative contribution from the threefold rotational symmetry of the atomic system. The same reason leads to unusually efficient third-harmonic generation and Kerr nonlinearity for certain polarization configurations. We restrict our analysis to the case where the higher harmonic frequencies fall below the exciton levels, so that linear absorption at these higher harmonic frequencies can be avoided. These transitions are of particular interest for all-optical information processing as well as quantum dynamical applications. This paper is organized as follows. Section 2 contains the theory of light-matter interaction for monolayer MoS$_2$, clearly presenting the assumptions made, the Hamiltonians, and the perturbative approach. Section 3 presents the calculation of the linear and nonlinear susceptibilities for the interesting linear and nonlinear processes, resolved by the input light polarizations. Finally, a conclusion with discussion follows. The appendix presents a clear derivation of the exciton creation operator based on our defined completeness relations in the Hilbert spaces, which also clarifies the dimensions of the constants. We also rigorously derive the second-quantized operators for the unbound exciton states in the same appendix.
Interaction of monolayer MoS$_2$ with a light field ============================================================== We assume zero temperature for simplicity. We count only the radiative transitions, ignoring the coupling to phonon excitations from the radiatively excited states. Most of the practical nonidealities are collected phenomenologically in the linewidth broadening factor. Our primary interest is the linear and the nonlinear optical processes that involve the bound exciton states of the monolayer MoS$_2$. We particularly assume a low exciton density so that we address only the regime of a single exciton over the sample. Consequently, we ignore the exciton-exciton interaction. This makes the perturbative approach valid. We also ignore the trions, and focus solely on the exciton states. Unperturbed Hamiltonian ----------------------- ### Excitons The band structure of MoS$_2$ is well known [@xiao2012coupled]. Due to the mismatch between the Mo atom and the S atom, the spatial inversion symmetry is broken, and hence, the degeneracy at the $\pm {\boldsymbol{K}}$ points of the monolayer MoS$_2$ is lifted. The band structure is best described by the gapped Dirac Hamiltonian (see the details in Appendix \[sec:band-structure\]). We then proceed to the exciton description below. At zero temperature, the ground state is the Fermi sea ${\left| 0 \right\rangle}$ where all the electrons are in the valence band. A photon may be absorbed to produce an electron in the conduction band and a hole in the valence band. The Coulomb attraction between the two creates an exciton state. Considering that the exciton size in the monolayer MoS$_2$ is approximately [@zhang2014absorption] $\sim$ 1 nm, which is larger than the unit cell, we adopt the Wannier exciton Schrödinger equation [@haug2009quantum; @dresselhaus1956effective]: $$\left[ - \frac{\hbar^2 \nabla^2}{2 m_r} + V(r) \right] \psi_\nu ({\boldsymbol{r}}) = E_\nu \psi_\nu ({\boldsymbol{r}}), \label{eq:schrodinger-exciton}$$ where $\psi_\nu ({\boldsymbol{r}}) = {\left\langle {\boldsymbol{r}} | \textrm{x}_\nu \right\rangle}$, with the exciton state ket ${\left| \textrm{x}_\nu \right\rangle}$, is the wave function of an electron-hole pair with relative position ${\boldsymbol{r}} = {\boldsymbol{r}}_e - {\boldsymbol{r}}_h$ (${\boldsymbol{r}}_e$ and ${\boldsymbol{r}}_h$ being the positions of the electron and the hole, respectively), $m_r = (1/m_c + 1/|m_v|)^{-1}$ is the reduced mass, where $m_c = 0.55 m_e$ and $m_v = - 0.56 m_e$ (with $m_e$ the electron rest mass) are the effective masses of the conduction- and valence-band electrons calculated approximately from the energy dispersion relation, $V({\boldsymbol{r}})$ is the Coulomb potential between the electron and the hole, and $E_\nu$ is the energy eigenvalue with quantum number $\nu$. We note that this Schrödinger equation includes the Bloch state solutions of the electron and the hole through the renormalized particle mass $m_r$ that reflects the dispersions of the conduction and the valence bands. The monolayer MoS$_2$ is a 2D sheet. The Coulomb potential, however, is not strictly 2D due to the dielectric screening effect, and the more appropriate potential for an isolated 2D sheet is the Keldysh-type screened potential [@cudazzo2011dielectric].
The main differences between the strictly 2D Coulomb potential and the Keldysh-type screened potential are in the binding energies and the oscillator strengths [@wu2015exciton; @robert2017optical; @kylanpaa2015binding], but the corrections are relatively small (of order unity) for the isolated MoS$_2$ 2D sheet when we fit the binding energy of the lowest exciton state to empirical data (see, for example, Fig. 3 of Robert *et al.* [@robert2017optical]; when the lowest binding energies are equalized, the difference in the upper level energies is not significant). In addition, the exciton wave functions using the Keldysh-type screened potential are usually obtained through sophisticated numerical methods. Our main goal is to estimate the magnitude of the nonlinear response of the exciton states, and the simple strictly 2D Coulomb potential turns out to be sufficient for our purpose, with the advantage of easier calculation of the transition matrix elements among the exciton states, based on the well-known analytic 2D hydrogen-type wave functions. For these reasons, we adopt the simple 2D Coulomb potential to obtain the 2D solution to the Wannier Schrödinger equation, whose energy eigenvalues are, for the quantum number $\nu = (n,m)$ with $n = 0,1,2, \cdots$ and $m = -n, -(n-1), \cdots, n$ [@haug2009quantum]: $$E_\nu = - E_0 \frac{1}{(n+1/2)^2},$$ where $$E_0 = \frac{e^4 m_r}{2 ( 4 \pi \epsilon_0 \epsilon_r)^2 \hbar^2} = \left(\frac{m_r}{m_e}\right) \left(\frac{1}{\epsilon_r^2}\right) \text{Ry},$$ with the electron charge $e = -|e| = -1.6 \times 10^{-19}$ C, and the vacuum and the relative material permittivity $\epsilon_0, \epsilon_r$, respectively. Here, Ry = $13.6$ eV is the hydrogen Rydberg energy. Indeed, later in the text, our calculated results will be shown to be surprisingly close to the experimental results even with this simplified picture. The eigenvalues of the Schrödinger equation in equation  are the binding energies. Therefore, the actual exciton energy levels are given through $E_c ({\boldsymbol{0}}) + E_\nu$ where $E_c({\boldsymbol{0}})$ is the lowest energy of the conduction band (see equation ). It is also noteworthy that the band structure calculated in Appendix \[sec:band-structure\] is essential in calculating all orders of susceptibilities since it constructs the exciton creation operator (see Appendix \[sec:exciton-creation\]). The wave function is [@haug2009quantum] $$\begin{aligned} \psi_{n,m} ({\boldsymbol{r}}) =& \sqrt{\frac{1}{ \pi a_0^2 (n+1/2)^3} \frac{(n-|m|)!}{[(n+|m|)!]^3}} \nonumber \\ &\times \rho^{|m|} \text{e}^{-\rho/2} L^{2|m|}_{n+|m|} (\rho) \text{e}^{i m \phi}, \label{eq:psir}\end{aligned}$$ where $a_0 = 4 \pi \hbar^2 \epsilon_0 \epsilon_r / (e^2 m_r)$, $\rho = 2r / ((n+1/2)a_0)$, and $L^p_q (\rho)$ are the associated Laguerre polynomials defined by $$L^p_q (\rho) = \sum_{\nu = 0}^{q-p} (-1)^{\nu + p} \frac{(q!)^2 \rho^\nu}{(q - p - \nu)! (p + \nu)! \nu!}.$$ These wave functions satisfy the normalization $\delta_{\nu, \nu'} = {\left\langle \textrm{x}_{\nu} | \textrm{x}_{\nu'} \right\rangle} = \int {\text{d}}^2 r {\left\langle \textrm{x}_\nu | {\boldsymbol{r}} \right\rangle} {\left\langle {\boldsymbol{r}} | \textrm{x}_{\nu'} \right\rangle} = \int {\text{d}}^2 r \psi_{\nu}^* ({\boldsymbol{r}}) \psi_{\nu'} ({\boldsymbol{r}}) $ where we used the completeness relation $\int {\text{d}}^2 r {\left| {\boldsymbol{r}} \rangle \langle {\boldsymbol{r}} \right|} = \mathbf{1}$.
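As a quick numerical consistency check of the 2D hydrogen-like expressions above, the short Python sketch below (not part of the original analysis) evaluates $E_0$, the level energies $E_{(n,0)}$, and the length scale $a_0$ using the effective masses quoted earlier and the value $\epsilon_r = 7$ adopted in the next paragraph; it reproduces the lowest binding energy of about $-0.31$ eV and the lowest-state radius $a_0/2 \approx 6.7$ Å.

```python
# Minimal check of the 2D hydrogen-like exciton formulas, using the effective
# masses and relative permittivity quoted in the text (the other numbers are
# standard hydrogen constants).
RYDBERG_EV = 13.6        # hydrogen Rydberg energy (eV)
BOHR_ANGSTROM = 0.529    # hydrogen Bohr radius (Angstrom)

m_c, m_v = 0.55, -0.56   # conduction / valence effective masses (units of m_e)
eps_r = 7.0              # relative permittivity adopted in the text

m_r = 1.0 / (1.0 / m_c + 1.0 / abs(m_v))   # reduced mass (units of m_e)
E0 = (m_r / eps_r**2) * RYDBERG_EV         # energy scale E_0 (eV)
a0 = (eps_r / m_r) * BOHR_ANGSTROM         # length scale a_0 (Angstrom)

for n in range(3):
    print(f"binding energy E_(n={n}, m=0) = {-E0 / (n + 0.5) ** 2: .3f} eV")
print(f"lowest-state radius a_0/2 = {a0 / 2.0:.2f} Angstrom")
```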
Also one can consider the Fourier transform pair using an additional completeness relation $\sum_{{\boldsymbol{q}}} {\left| {\boldsymbol{q}} \rangle \langle {\boldsymbol{q}} \right|} = {\boldsymbol{1}}$: $$\begin{aligned} \psi_\nu ({\boldsymbol{r}}) &= \frac{1}{\sqrt{A}} \sum_{{\boldsymbol{q}}} \psi_\nu ({\boldsymbol{q}}) \text{e}^{i {\boldsymbol{q}} \cdot {\boldsymbol{r}}}, \nonumber \\ \psi_\nu ({\boldsymbol{q}}) &= \frac{1}{\sqrt{A}} \int_{A} {\text{d}}^2 r \psi_\nu ({\boldsymbol{r}}) \text{e}^{- i {\boldsymbol{q}} \cdot {\boldsymbol{r}}},\end{aligned}$$ where $A$ is the entire sample area of the monolayer MoS$_2$. As an example, for $\nu = (0,0)$, we have $\psi_{(0,0)} ({\boldsymbol{r}}) = (2 \sqrt{2/\pi}/a_0) \text{e}^{-2 r / a_0}$ and the Fourier transform is $\psi_{(0,0)} ({\boldsymbol{q}}) = \sqrt{2 \pi/A} (8 a_0 / (4 + a_0^2 q^2)^{3/2})$. The corresponding energy eigenvalue is $-4 E_0$. According to this wave function, the radius of the lowest exciton state is calculated to be ${\left\langle \psi_{(0,0)} \right|} r {\left| \psi_{(0,0)} \right\rangle} = a_0/2$. The exciton radius of the monolayer MoS$_2$ is experimentally measured as $6 \sim 10 \; \text{\AA}$ at zero temperature [@zhang2014absorption]. In addition, the binding energy of the lowest exciton state of the monolayer MoS$_2$ is estimated as $-0.5 \sim -0.3$ eV [@zhang2014absorption; @klots2014probing; @hill2015observation; @chiu2015determination]. These two lead to the value of $\epsilon_r$, and we chose $\epsilon_r$ to be 7, which implies an exciton radius of 6.7 Å and a binding energy of $-0.31$ eV. ### Second quantization of excitons The total Hamiltonian is the sum of the band Hamiltonian $H$ and the Coulomb potential $V({\boldsymbol{r}})$ such that $\mathcal{H}_0 = H + V({\boldsymbol{r}})$. When an additional light-matter interaction Hamiltonian $\mathcal{H}_I$ is present, one faces the situation where two interaction Hamiltonians, $V(r)$ and $\mathcal{H}_I$, are both present. This makes the problem complicated. One approach is to absorb the Coulomb potential into the *unperturbed* Hamiltonian and deal with $\mathcal{H}_I$ as a perturbing Hamiltonian. The Hilbert subspace of the single-particle excited states is spanned by the band basis of a pair of an electron and a hole: ${\left| {\boldsymbol{q}}, -{\boldsymbol{q}}' \right\rangle} = \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{-{\boldsymbol{q}}'} {\left| 0 \right\rangle}$ where $\alpha^\dag_{{\boldsymbol{q}}}$ and $\beta^\dag_{-{\boldsymbol{q}}'}$ are the creation operators for the electron Bloch state in the conduction and the hole Bloch state in the valence band, with the momentum $\hbar {\boldsymbol{q}}$ and $-\hbar {\boldsymbol{q}}'$, respectively. Because we know that the exciton states diagonalize the unperturbed Hamiltonian $\mathcal{H}_0$, we now represent it using the second quantized exciton creation and annihilation operators. Following the procedure in Haug *et al*.[@haug2009quantum], we first define the creation operator of a bound exciton $B^\dag_{\nu {\boldsymbol{Q}}} = {\left| \nu {\boldsymbol{Q}} \rangle \langle 0 \right|}$ where $\nu$ is the quantum number of the exciton state, and $\hbar {\boldsymbol{Q}}$ is the combined momentum of the electron-hole pair.
Then, using the completeness $\int {\text{d}}^2 r {\left| {\boldsymbol{r}} \rangle \langle {\boldsymbol{r}} \right|} = {\boldsymbol{1}}$ and $\sum_{{\boldsymbol{q}}} {\left| {\boldsymbol{q}} \rangle \langle {\boldsymbol{q}} \right|} = {\boldsymbol{1}}$, it is straightforward to show (see the Appendix \[sec:exciton-creation\] and the equation ) that $$B^\dag_{\nu {\boldsymbol{Q}}} = \sum_{{\boldsymbol{q}}} \psi_\nu \left( {\boldsymbol{q}} - \frac{{\boldsymbol{Q}}}{2}\right) \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{{\boldsymbol{Q}} - {\boldsymbol{q}}}, \label{eq:creation-bound-exciton}$$ At zero temperature, the exciton momentum $\hbar {\boldsymbol{Q}}$ must be equal to the momentum of the incoming photon since no phonon is available. Considering the negligibly small photon momentum compared to the crystal momentum $\hbar {\boldsymbol{q}}$, we can approximately set ${\boldsymbol{Q}} \approx {\boldsymbol{0}}$. Then, the bound exciton creation operator is $$B^\dag_{\nu} = \sum_{{\boldsymbol{q}}} \psi_\nu \left( {\boldsymbol{q}}\right) \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{- {\boldsymbol{q}}}. \label{eq:bound-creation}$$ Appendix \[sec:exciton-creation\] also derives the creation operator for the unbound exciton states as $C^\dag_{{\boldsymbol{q}}} = \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{- {\boldsymbol{q}}}$. Setting the energy of the ground state Fermi sea ${\left| 0 \right\rangle}$ as zero, and using the fact that the entire Hilbert subspace of the single excitation is spanned by the bound and the unbound exciton states such that the completeness relation is (Appendix \[sec:exciton-creation\]) $$\sum_{\nu} {\left| \mathrm{x}_\nu \rangle \langle \mathrm{x}_\nu \right|} + \sum_{{\boldsymbol{q}}} {\left| C_{{\boldsymbol{q}}} \rangle \langle C_{{\boldsymbol{q}}} \right|} = {\boldsymbol{1}}, \label{eq:completeness-single-excitation}$$ we finally obtain the second quantized Hamiltonian for the exciton states: $$\mathcal{H}_0 = \hbar \sum_{\nu} e_\nu B^\dag_{\nu} B_{\nu} + \hbar \sum_{{\boldsymbol{q}}} \omega_{{\boldsymbol{q}}} C^\dag_{{\boldsymbol{q}}} C_{{\boldsymbol{q}}}, \label{eq:second-quantized}$$ where the energy is given by $\hbar e_\nu = E_g + E_\nu$ for bound state excitons ($E_\nu < 0$), and $\hbar \omega_{{\boldsymbol{q}}} = E_g + \hbar^2 q^2 / (2 m_r)$ for the unbound exciton. Interaction Hamiltonian ----------------------- Let us now consider a monochromatic external field ${\boldsymbol{E}} (t) = \hat{{\boldsymbol{\varepsilon}}} \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{-i \omega_\kappa t}$ where $\hat{{\boldsymbol{\varepsilon}}}$ is the unit vector of polarization and each photon has a momentum $\hbar {\boldsymbol{\kappa}}$. The nature of the interaction between the external field and the monolayer MoS$_2$ is the dipole interaction represented by an interaction Hamiltonian [@haug2009quantum] $$\mathcal{H}_I = - \sum_{{\boldsymbol{q}}} \left[ d_{cv} ({\boldsymbol{q}}) \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{- {\boldsymbol{q}}} \mathcal{E} ({\boldsymbol{\kappa}}) \text{e}^{- i \omega_\kappa t} + \text{h.c.} \right], \label{eq:2}$$ where h.c. stands for the Hermitian conjugate. Momentum is conserved in this interaction as $\hbar {\boldsymbol{\kappa}} = \hbar {\boldsymbol{Q}} \approx \hbar {\boldsymbol{q}} + (- \hbar {\boldsymbol{q}})$ since the crystal momentum ${\boldsymbol{q}}$ is much larger than ${\boldsymbol{\kappa}}$. Hence, in principle the incoming photon can excite an electron-hole pair with any ${\boldsymbol{q}}$. 
Here, the dipole moment for the interband transition is given by $$d_{cv} ({\boldsymbol{q}}) = {\left\langle c_{{\boldsymbol{q}}} \right|} e {\boldsymbol{r}} \cdot \hat{{\boldsymbol{\varepsilon}}} {\left| v_{{\boldsymbol{q}}} \right\rangle},$$ where ${\left| c_{{\boldsymbol{q}}} \right\rangle}$ and ${\left| v_{{\boldsymbol{q}}} \right\rangle}$ are the conduction and the valence band state with a crystal momentum $\pm \hbar {\boldsymbol{q}}$, respectively. Particularly for the $\sigma_+$ circularly polarized light with $\hat{{\boldsymbol{\varepsilon}}} = \hat{{\boldsymbol{\varepsilon}}}^+ = (1/\sqrt{2}) (\hat{{\boldsymbol{x}}} + i \hat{{\boldsymbol{y}}})$, the dipole moment is $d_{cv}^+ ({\boldsymbol{q}}) = (e/\sqrt{2}) {\left\langle c_{{\boldsymbol{q}}} \right|} (r \cos \phi + i r \sin \phi) {\left| v_{{\boldsymbol{q}}} \right\rangle} = (e/\sqrt{2}) {\left\langle c_{{\boldsymbol{q}}} \right|} r \text{e}^{i \phi} {\left| v_{{\boldsymbol{q}}} \right\rangle}$ where we adopted the polar position coordinate ${\boldsymbol{r}} = (r, \phi)$. We wish to use the second quantized bound and unbound exciton operators in the interaction Hamiltonian since our basis kets are the exciton states. We then calculate the following: $$\begin{aligned} \alpha^\dagger_{{\boldsymbol{q}}} \beta^\dagger_{- {\boldsymbol{q}}} &= \sum_{{\boldsymbol{A}}} \delta_{{\boldsymbol{q}}, {\boldsymbol{A}}} \alpha^\dagger_{{\boldsymbol{A}}} \beta^\dagger_{- {\boldsymbol{A}}} \nonumber \\ &= \sum_{{\boldsymbol{A}}, \nu} \psi^*_\nu ({\boldsymbol{q}}) \psi_\nu \left({\boldsymbol{A}}\right) \alpha^\dagger_{{\boldsymbol{A}}} \beta^\dagger_{- {\boldsymbol{A}}} \nonumber \\ &~~~+ \sum_{{\boldsymbol{q}}'', {\boldsymbol{A}}} {\left\langle {\boldsymbol{q}} | C_{{\boldsymbol{q}}''} \right\rangle} {\left\langle C_{{\boldsymbol{q}}''} | {\boldsymbol{A}} \right\rangle} \alpha^\dagger_{{\boldsymbol{A}}} \beta^\dagger_{-{\boldsymbol{A}}} \nonumber \\ &= \sum_{\nu} \psi^*_\nu ({\boldsymbol{q}}) B^\dagger_{\nu} + \sum_{{\boldsymbol{q}}'', {\boldsymbol{A}}} {\left\langle {\boldsymbol{q}} | C_{{\boldsymbol{q}}''} \right\rangle} {\left\langle C_{{\boldsymbol{q}}''} | {\boldsymbol{A}} \right\rangle} \alpha^\dagger_{{\boldsymbol{A}}} \beta^\dagger_{-{\boldsymbol{A}}}, \label{eq:3}\end{aligned}$$ where the last equation follows from equation , and the second equation follows from the following, using the completeness in equation : $$\begin{aligned} \delta_{{\boldsymbol{q}}, {\boldsymbol{q}}'} &= \langle {\boldsymbol{q}} | {\boldsymbol{q}}' \rangle = \sum_{\nu} \langle {\boldsymbol{q}} | \textrm{x}_\nu \rangle \langle \textrm{x}_\nu | {\boldsymbol{q}}' \rangle + \sum_{{\boldsymbol{q}}''} {\left\langle {\boldsymbol{q}} | C_{{\boldsymbol{q}}''} \right\rangle} {\left\langle C_{{\boldsymbol{q}}''} | {\boldsymbol{q}}' \right\rangle} \nonumber \\ &= \sum_{\nu} \psi_\nu^* ({\boldsymbol{q}}) \psi_\nu ({\boldsymbol{q}}') + \sum_{{\boldsymbol{q}}''} {\left\langle {\boldsymbol{q}} | C_{{\boldsymbol{q}}''} \right\rangle} {\left\langle C_{{\boldsymbol{q}}''} | {\boldsymbol{q}}' \right\rangle}.\end{aligned}$$ Following the treatment of Haug *et al.*[@haug2009quantum], approximating the band states as the free states allows ${\left\langle {\boldsymbol{q}} | C_{{\boldsymbol{q}}''} \right\rangle} \approx \delta_{{\boldsymbol{q}}, {\boldsymbol{q}}''}$. 
We then obtain $$\begin{aligned} &\mathcal{H}_I = \nonumber \\ &- \left[ \sum_{\nu} g_\nu B^\dag_{\nu} \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{-i \omega_\kappa t} + \sum_{{\boldsymbol{q}}} d_{cv} ({\boldsymbol{q}}) C_{{\boldsymbol{q}}}^\dag \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{-i \omega_\kappa t} \right] \nonumber \\ &+ \textrm{h.c.}, \label{eq:interaction-hamiltonian}\end{aligned}$$ where we defined $$g_\nu = \sum_{{\boldsymbol{q}}} d_{cv} ({\boldsymbol{q}}) \psi^*_\nu ({\boldsymbol{q}}) = e {\left\langle \mathrm{x}_\nu \right|} \hat{{\boldsymbol{\varepsilon}}} \cdot {\boldsymbol{r}} {\left| 0 \right\rangle}. \label{eq:gnu}$$ Optical selection rules and dipole moments {#sec:selection-rule} ------------------------------------------ ### Interband transition The well-known valley selection rule for the first-order interband transition is explained as follows: the $\sigma_+$ polarized light couples only to the ${\boldsymbol{K}}$ valley whereas the $\sigma_-$ polarized light couples only to the $-{\boldsymbol{K}}$ valley. This chiral selection rule can be deduced from symmetry considerations. The monolayer MoS$_2$ at the $\pm {\boldsymbol{K}}$ points belongs to the $C_{3h}$ point symmetry group. Then, the Bloch wave functions of the valence bands transform like the states with angular momentum $\mp \hbar$ for the $\pm {\boldsymbol{K}}$ valley, respectively, whereas the conduction bands transform like the states with zero angular momentum for both valleys[@xiao2012coupled; @cao2012valley; @mak2012control; @sallen2012robust; @sallen2014erratum]. This explains the chiral optical selection rule in the angular momentum conservation scheme. We note, however, that the $\sigma_+$ photon couples to the excitation of either a spin-up or a spin-down electron at the $+{\boldsymbol{K}}$ valley, depending on the optical frequency. Therefore, a broadband $\sigma_+$ photon will see the absorption peak at both transitions separated by the spin-orbit coupling energy. It is, however, important to recognize that the symmetry argument is only for the $\pm {\boldsymbol{K}}$ points (valley bottoms). For other ${\boldsymbol{k}} \neq \pm {\boldsymbol{K}}$, the valley can interact with the opposite circularly polarized photon as we will confirm below. This is a critical difference between the interband transitions and the transitions involving the exciton states as the exciton is a collective superposition from various ${\boldsymbol{q}}$ as shown in equation . In order to calculate the dipole moment, one can use the velocity operator ${\boldsymbol{\mathrm{v}}} = (1/\hbar) {\boldsymbol{\nabla}}_{{\boldsymbol{k}}} H$, which leads to $d_{cv} ({\boldsymbol{q}}) = -(i e / \hbar \omega_{{\boldsymbol{q}}}) {\left\langle u_{{\boldsymbol{q}},c} \right|} \hat{{\boldsymbol{\varepsilon}}} \cdot {\boldsymbol{\nabla}}_{{\boldsymbol{q}}} H {\left| u_{{\boldsymbol{q}},v} \right\rangle}$, where $H$ is the band Hamiltonian for the Bloch functions. An equivalent expression is the well-known Blount formula [@blount1962formalisms]: $$\begin{aligned} &{\left\langle \psi_{{\boldsymbol{k}}, \lambda} \right|} {\boldsymbol{r}} {\left| \psi_{{\boldsymbol{k}}', \lambda'} \right\rangle} = \nonumber \\ &-i {\boldsymbol{\nabla}}_{{\boldsymbol{k}}'} {\left\langle \psi_{{\boldsymbol{k}},\lambda} | \psi_{{\boldsymbol{k}}', \lambda'} \right\rangle} + i \delta_{{\boldsymbol{k}}, {\boldsymbol{k}}'} {\left\langle u_{{\boldsymbol{k}}, \lambda} \right|} {\boldsymbol{\nabla}}_{{\boldsymbol{k}}} {\left| u_{{\boldsymbol{k}}, \lambda'} \right\rangle}.
\label{eq:blount}\end{aligned}$$ Here, ${\left| \psi_{{\boldsymbol{k}}, \lambda} \right\rangle} = \textrm{e}^{i {\boldsymbol{k}} \cdot {\boldsymbol{r}}} {\left| u_{{\boldsymbol{k}}, \lambda} \right\rangle}$ is the wave function of a Bloch state at band $\lambda$ with a periodic Bloch function $u_{{\boldsymbol{k}}, \lambda} ({\boldsymbol{r}}) = {\left\langle {\boldsymbol{r}} | u_{{\boldsymbol{k}}, \lambda} \right\rangle}$. For interband transition, the first term vanishes. Now we can calculate the dipole moment $d_{cv} ({\boldsymbol{q}})$ by diagonalizing the band Hamiltonian and finding the eigenvectors. If we use the analytical solution for the band states (derived in equation of Appendix \[sec:band-structure\]), we find the dipole moment for the $\sigma_+$ light to be $d_{cv}^+ ({\boldsymbol{q}}) = {\left\langle c_{{\boldsymbol{q}}} \right|} e {\boldsymbol{r}} \cdot \hat{{\boldsymbol{\varepsilon}}}^+ {\left| v_{{\boldsymbol{q}}} \right\rangle} = - i (\sqrt{2} e \hbar v/\Delta) (1 - 4 \hbar^2 v^2 q^2 / \Delta^2)$ for $\tau = +1$ ($+{\boldsymbol{K}}$ valley), but $d_{cv}^+ ({\boldsymbol{q}}) = - i \sqrt{2} e \hbar^3 v^3 q^2 \mathrm{e}^{i 2 \phi_q}/\Delta^3$ for $\tau = -1 $ ($-{\boldsymbol{K}}$ valley). Here, $\Delta = E_g \pm \tau E_\textrm{soc}/2$ for up or down spin subspace, respectively, with the energy band gap $E_g$ and the spin-orbit coupling energy $E_\textrm{soc}$. Also, ${\boldsymbol{q}} = (q_x, q_y) = {\boldsymbol{k}} - \tau {\boldsymbol{K}}$, and $q \mathrm{e}^{i \phi_q} = q_x + i q_y$. On the contrary, the $\sigma_-$ light produces $d_{cv}^- ({\boldsymbol{q}}) = {\left\langle c_{{\boldsymbol{q}}} \right|} e {\boldsymbol{r}} \cdot \hat{{\boldsymbol{\varepsilon}}}^- {\left| v_{{\boldsymbol{q}}} \right\rangle} = i \sqrt{2} e \hbar^3 v^3 q^2 \mathrm{e}^{-i 2 \phi_q}/\Delta^3$ for $\tau = +1$, but $d_{cv}^- ({\boldsymbol{q}}) = i (\sqrt{2} e \hbar v/\Delta) (1 - 4 \hbar^2 v^2 q^2 / \Delta^2)$ for $\tau = -1$. Here, $\hat{{\boldsymbol{\varepsilon}}}^- = (1/\sqrt{2}) (\hat{x} - i \hat{y}) = \hat{{\boldsymbol{\varepsilon}}}^{+*}$. For the $\pm {\boldsymbol{K}}$ points where $q = 0$, this dipole moment explains the valley selection rule. \ For a more accurate result, we numerically obtain the eigenvectors from the higher-order corrected band Hamiltonian (see Appendix \[sec:band-structure\]). Fig. \[fig:dcv\] shows the numerically evaluated $d_{cv\pm} ({\boldsymbol{q}})$ around $+{\boldsymbol{K}}$ point. Qualitatively the numerical solution $d_{cv+} ({\boldsymbol{q}})$ has a negligibly small real value, matching the analytical result. The maximum is also similar to the analytical solution. On the other hand, the threefold rotational symmetry is clearly shown. We note that $d_{cv-} ({\boldsymbol{q}} = {\boldsymbol{0}}) = 0$ while $d_{cv-} ({\boldsymbol{q}} \neq {\boldsymbol{0}}) \neq 0$ at $+{\boldsymbol{K}}$ point. This confirms that the chiral valley selection rule is only for $\pm {\boldsymbol{K}}$ points. Indeed, the symmetry argument breaks on the points away from $\pm {\boldsymbol{K}}$. 
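To make these analytic expressions concrete, the following small Python sketch evaluates $d_{cv}^{\pm}({\boldsymbol{q}})$ of the gapped Dirac model in the $+{\boldsymbol{K}}$ valley, with illustrative values assumed for $\hbar v$, $E_g$, and $E_\textrm{soc}$ (the actual fitted values are not quoted above). At $q = 0$ only the co-rotating component survives, which is the valley selection rule, while $|d_{cv}^{-}|$ grows as $q^2$ away from the valley bottom.

```python
import numpy as np

# Illustrative (assumed) parameters; not the fitted values of the band model.
HBAR_V = 3.5    # hbar * v, in eV*Angstrom
E_G    = 1.9    # band gap, in eV
E_SOC  = 0.15   # spin-orbit splitting, in eV

def d_cv(q, phi_q, tau=+1, spin=+1):
    """Analytic interband dipole moments (d^+, d^-) of the gapped Dirac model
    near the tau*K valley, in units of e*Angstrom, as quoted in the text."""
    delta = E_G + tau * spin * E_SOC / 2.0
    lead = np.sqrt(2.0) * HBAR_V / delta * (1.0 - 4.0 * (HBAR_V * q) ** 2 / delta**2)
    weak = np.sqrt(2.0) * HBAR_V * (HBAR_V * q) ** 2 / delta**3
    if tau == +1:
        return -1j * lead, 1j * weak * np.exp(-2j * phi_q)
    return -1j * weak * np.exp(2j * phi_q), 1j * lead

for q in (0.0, 0.02, 0.05):   # momentum measured from +K, in 1/Angstrom
    dp, dm = d_cv(q, phi_q=0.3)
    print(f"q = {q:.2f} 1/A:  |d+| = {abs(dp):.3f} e*A,  |d-| = {abs(dm):.4f} e*A")
```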
### Transition between Fermi sea ground state ${\left| 0 \right\rangle}$ and bound exciton states ${\left| \mathrm{x}_\nu \right\rangle}$

| $\nu = (n,m)$ | $g_{\nu}^+/g_{(0,0)}^+$ | $g_{\nu}^-/g_{(0,0)}^+$ |
|---|---|---|
| $(0,0)$ | $\mathbf{1}$ | $\approx 0$ |
| $(1,1)$ | $\approx 0$ | $\mathbf{0.022}$ |
| $(1,0)$ | $\mathbf{0.148}$ | $\approx 0$ |
| $(1,-1)$ | $\approx 0$ | $\approx 0$ |
| $(2,2)$ | $\approx 0$ | $\approx 0$ |
| $(2,1)$ | $\approx 0$ | $\mathbf{0.010}$ |
| $(2,0)$ | $\mathbf{0.068}$ | $\approx 0$ |
| $(2,-1)$ | $\approx 0$ | $\approx 0$ |
| $(2,-2)$ | $\approx 0$ | $\approx 0$ |
| $(3,3)$ | $\approx 0$ | $\approx 0$ |
| $(3,2)$ | $\approx 0$ | $\approx 0$ |
| $(3,1)$ | $\approx 0$ | $\approx 0$ |
| $(3,0)$ | $\mathbf{0.042}$ | $\approx 0$ |
| $(3,-1)$ | $\approx 0$ | $\approx 0$ |
| $(3,-2)$ | $\approx 0$ | $\approx 0$ |
| $(3,-3)$ | $\approx 0$ | $\approx 0$ |
| $(4,0)$ | $\approx 0$ | $\approx 0$ |
| $(5,0)$ | $\mathbf{0.053}$ | $\approx 0$ |
| $(6,0)$ | $\mathbf{0.086}$ | $\approx 0$ |
| $(7,0)$ | $\mathbf{0.055}$ | $\approx 0$ |
| $(8,0)$ | $\approx 0$ | $\approx 0$ |
| $(9,0)$ | $\mathbf{-0.034}$ | $\approx 0$ |
| $(10,0)$ | $\mathbf{-0.046}$ | $\approx 0$ |
| $(11,0)$ | $\mathbf{-0.040}$ | $\approx 0$ |
| $(12,0)$ | $\mathbf{-0.026}$ | $\approx 0$ |
| $(13,0)$ | $\mathbf{-0.011}$ | $\approx 0$ |
| $(n(>13),m)$ | $\approx 0$ | $\approx 0$ |

The dipole moment $g_\nu$ defined in equation  is often approximated as $g_\nu \approx \sum_{{\boldsymbol{q}}} d_{cv} ({\boldsymbol{0}}) \psi^*_\nu ({\boldsymbol{q}}) = \sqrt{A} d_{cv} ({\boldsymbol{0}}) \psi_\nu^* ({\boldsymbol{r}} = {\boldsymbol{0}})$ with the understanding that $\psi^*_\nu ({\boldsymbol{q}})$ is significant only for $|{\boldsymbol{q}}| \ll 1/a_0$ (Haug *et al.* [@haug2009quantum]). Notably, for a given quantum number $\nu = (n,m)$ the wave function behaves as $\psi_\nu ({\boldsymbol{r}}) \propto r^{|m|}$ near the origin, and consequently a substantial $g_\nu$ occurs only for $\nu = (n,0)$. Using these, we arrive at an approximate analytical solution for the monolayer MoS$_2$: $$g_{(n,0)} = -i \sqrt{\frac{A}{(2n + 1) \pi}} \left(\frac{4 e \hbar v}{(2n + 1) a_0 \Delta}\right). \label{eq:gnu-analytic}$$ We numerically calculated both $g_{\nu}^\pm$ for $\sigma_\pm$, respectively, based on the dipole moments shown in Fig. \[fig:dcv\]. The result is shown in Table \[tab:gnu\]. The values $g_{\nu}^-$ are generally small compared to the substantial $g_{\nu}^+$’s. Recall that $g_{\nu}^- = \sum_{{\boldsymbol{q}}} \psi_\nu^* ({\boldsymbol{q}}) d_{cv}^- ({\boldsymbol{q}})$. The envelope of $\psi_\nu ({\boldsymbol{q}})$ decays as $q$ increases. Since $d_{cv}^- ({\boldsymbol{q}} = {\boldsymbol{0}}) = 0$, $g_{\nu}^-$ must be significantly smaller than $g_\nu^+$. We find that only two transition dipoles, $g_{(1,1)}^-$ and $g_{(2,1)}^-$, are substantial. The reason for this is as follows: we recall that the analytical solution $d_{cv}^- ({\boldsymbol{q}}) \propto \mathrm{e}^{-2 i \phi_q}$ at the $+{\boldsymbol{K}}$ valley. The higher order correction, however, imposes the weak threefold rotational symmetry (see the equation in Appendix \[sec:band-structure\] and the text underneath).
In a perturbative treatment, the dipole moment can be expressed as [@simon1968second] $$\begin{aligned} &d_{cv}^- ({\boldsymbol{q}}) = \mathrm{e}^{-2 i \phi_q} \times \nonumber \\ & \left( \xi^{(0)} ({\boldsymbol{q}}) + \cos (3 \phi_q) \xi^{(1)} ({\boldsymbol{q}}) + \mathcal{O} (\cos^2 (3 \phi_q)) \right),\end{aligned}$$ where the zeroth order term does not possess the threefold rotational symmetry, but the first order term has a factor $\cos (3 \phi_q) = (1/2) (\mathrm{e}^{i 3 \phi_q} + \mathrm{e}^{-i 3 \phi_q})$. The net effect is $\delta d_{cv}^- ({\boldsymbol{q}}) = d_{cv}^- ({\boldsymbol{q}}) - \mathrm{e}^{-2 i \phi_q} \xi^{(0)} ({\boldsymbol{q}}) \propto \mathrm{e}^{+ i \phi_q}$, while we discard the faster term $\mathrm{e}^{-i 5 \phi_q}$ that will later result in zero when integrating over $\phi_q$. The Fourier transformed wave function of the exciton is $\psi_{(1,1)}^* ({\boldsymbol{q}}) = 288 i a_0^2 \sqrt{3 \pi} q \mathrm{e}^{-i \phi_q}/(4 + 9 a_0^2 q^2)^{5/2} \propto \mathrm{e}^{-i \phi_q}$. Hence, these two cooperate such that $\psi^*_{(1,1)} ({\boldsymbol{q}}) \delta d_{cv}^- ({\boldsymbol{q}})$ does not depend on $\phi_q$, resulting in a nontrivial value after integrating over $\phi_q$. This nontrivial integral over $\phi_q$ produces a substantial value for $g_{(1,1)}^-$ and $g_{(2,1)}^-$, although the amplitude of the latter is smaller due to a faster oscillation of $\psi^*_{(2,1)}$ in the radial direction than $\psi^*_{(1,1)}$. For large $n$, however, $\psi_\nu ({\boldsymbol{q}})$ oscillates quickly in the radial direction, resulting in small values of $g_{(n,1)}^-$ for large $n$. The same reason causes decreasing $g_{(n,0)}$ as $n$ increases. This weak opposite chiral valley response of the bound exciton states leads to some nontrivial optical nonlinearities in the monolayer MoS$_2$, as will be presented in the following sections. Unlike the usual chiral valley selection rule, the excitons respond to the opposite circularly polarized light since they are collective excitations including ${\boldsymbol{k}} \neq \pm {\boldsymbol{K}}$ (see equation ). Nonetheless, we note that this opposite chiral response is rather weak, as it exists only at a weak perturbative level. ### Transition between bound exciton states ${\left| \mathrm{x}_\nu \right\rangle}$ The transition dipole moment between two bound exciton states follows the usual angular momentum conservation rule, which can be deduced from the spherical symmetry of the excitons: since the Wannier Schrödinger equation in equation  is rotationally symmetric, the bound exciton states have well defined angular momenta, with the angular momentum of the ${\left| \mathrm{x}_{(n,m)} \right\rangle}$ state being $\hbar m$. Then, the optical selection rule is such that the transitions ${\left| \mathrm{x}_{n,m} \right\rangle} \rightarrow {\left| \mathrm{x}_{n',m\pm 1} \right\rangle}$ are allowed and mediated by the $\sigma_\pm$ circularly polarized photons, respectively, while all others are forbidden. Let us define the dipole moment $h^\pm_{\nu_1 \nu_2} \equiv e {\left\langle \mathrm{x}_{\nu_1} \right|} \hat{{\boldsymbol{\varepsilon}}}^\pm \cdot {\boldsymbol{r}} {\left| \mathrm{x}_{\nu_2} \right\rangle}$ between the two bound exciton states. Then, the optical selection rule is such that $$h^\pm_{(n',m')(n,m)} \left\{ \begin{array}{ll} \neq 0, & \mathrm{if}~~ m' = m \pm 1, \\ = 0, & \mathrm{otherwise.} \end{array} \right.
\label{eq:selection-rule-exciton-levels}$$ Some selected dipole moments $h^+_{(n',m')(n,m)}$ are shown in Table \[tab:hnu1nu2-twophoton\]; we will use them later for the Kerr nonlinearity calculation.

| $\nu_1$ | $h^+_{(1,1) \nu_1} / (\lvert e\rvert a_0)$ |
|---|---|
| $(0,0)$ | $0.344$ |
| $(1,0)$ | $-3.18$ |
| $(2,0)$ | $0.752$ |
| $(3,0)$ | $0.320$ |
| $(4,0)$ | $0.194$ |
| $(5,0)$ | $0.135$ |
| $(6,0)$ | $0.102$ |
| $(7,0)$ | $0.080$ |

### Transition between bound exciton states ${\left| \mathbf{x}_\nu \right\rangle}$ and the unbound exciton states ${\left| C_{\mathbf{q}} \right\rangle}$ The relevant dipole moment of this transition is defined as $f_\nu ({\boldsymbol{q}}) = e {\left\langle \mathrm{x}_\nu \right|} \hat{{\boldsymbol{\varepsilon}}} \cdot {\boldsymbol{r}} {\left| C_{{\boldsymbol{q}}} \right\rangle}$. This dipole moment turns out to be negligibly small, which is rigorously shown in Appendix \[sec:fnu\]. ### Summary of optical selection rules ![Summary of allowed optical transitions in the $+{\boldsymbol{K}}$ valley. The red and blue transitions correspond to ${\left| 0 \right\rangle} \rightarrow {\left| \mathrm{x}_\nu \right\rangle}$ and ${\left| \mathrm{x}_{\nu_1} \right\rangle} \rightarrow {\left| \mathrm{x}_{\nu_2} \right\rangle}$, respectively. The solid and dotted lines are mediated by $\sigma_+$ and $\sigma_-$ photons, respectively. The values in the circles represent the dipole moments ($g_\nu$ are in units of $g_{(0,0)}^+$ while $h_{\nu_1 \nu_2}$ are in units of $|e|a_0$.) In the $-{\boldsymbol{K}}$ valley, the roles of $\sigma_\pm$ photons are switched.[]{data-label="fig:selection-rule"}](selection-rule-summary){width="50.00000%"} The optical selection rule, quantified through the appropriate dipole moments, plays the central role in the optical susceptibility calculations. While the transitions between the bound excitons $(h_{\nu_1 \nu_2})$ follow the usual angular momentum conservation rule, the transitions from the ground state ${\left| 0 \right\rangle}$ to any bound exciton states $(g_\nu)$ are not trivial, since the corresponding dipole moments $g_\nu$ weakly inherit the threefold rotational symmetry from the band states. Fig. \[fig:selection-rule\] summarizes the optical selection rule. It also reveals the values of the dipole moments for some transitions we will use later to calculate the optical susceptibilities. Induced current density and susceptibility ------------------------------------------ For clarity, we present the following procedure to calculate the susceptibilities, which is indeed well known in the literature [@boyd2003nonlinear]. We will extensively use the calculated dipole moments to evaluate the susceptibilities in the next section. When an external field is present, an induced current is produced as a result of the dipole interaction. It is obtained as $${\boldsymbol{J}} = e N_e \langle {\boldsymbol{\mathrm{v}}} \rangle = e N_e \text{tr}[{\boldsymbol{\mathrm{v}}} \rho], \label{eq:J}$$ where $N_e$ is the free carrier density, ${\boldsymbol{\mathrm{v}}}$ is the velocity operator, and $\rho$ is the quantum mechanical density operator. The density operator follows the von Neumann equation $i \hbar \dot{\rho} = [\mathcal{H}_{0} + \mathcal{H}_{I}, \rho]$.
The solution is recursively obtained: $$\begin{aligned} &\rho (t) = - \frac{i}{\hbar} \int_{-\infty}^t {\text{d}}t' [\mathcal{H}_0 + \mathcal{H}_I, \rho(t')] \nonumber \\ &= - \frac{i}{\hbar} \int_{-\infty}^t {\text{d}}t' \left[ \mathcal{H}_0 + \mathcal{H}_I, \left( - \frac{i}{\hbar} \int_{-\infty}^{t'} {\text{d}}t'' [\mathcal{H}_0 + \mathcal{H}_I, \rho(t'')] \right) \right] \nonumber \\ &\hspace{5cm} \vdots\end{aligned}$$ Since $\mathcal{H}_I \propto \mathcal{E}({\boldsymbol{\kappa}})$, one can expand the perturbative order of $\rho$ such that $\rho(t) = \sum_{n=0}^{\infty} \rho^{(n)} (t)$ where $\rho^{(n)} (t)$ involves only $\mathcal{O} (\mathcal{E}^n(q))$ terms. We then use ${\boldsymbol{J}} = \sigma {\boldsymbol{E}} = \left( \sum_{n=0}^{\infty} \sigma^{(n)} \right) {\boldsymbol{E}}$ to resolve $\sigma^{(n)}$ order by order. Combining the relations ${\boldsymbol{J}} = \partial {\boldsymbol{P}}/ \partial t $ and ${\boldsymbol{P}} = \epsilon_0 \chi {\boldsymbol{E}}$, one obtains $$\frac{\partial}{\partial t}\left( \epsilon_0 (\chi^{(1)} + \chi^{(2)} + \cdots) {\boldsymbol{E}} (t) \right) = (\sigma^{(1)} + \sigma^{(2)} + \cdots) {\boldsymbol{E}} (t). \label{eq:chi-sigma}$$ Equating term by term, the relation between the susceptibility and the conductivity for each order is obtained, which finally resolves the optical susceptibilities for various orders. Perturbative solution --------------------- The advantage of using the second quantized exciton Hamiltonian in equation is that the exciton states already diagonalize the unperturbed Hamiltonian $\mathcal{H}_0$. Then, solving the [Schrödinger ]{}equation perturbatively becomes straightforward. To obtain the physical quantities such as the induced current, however, one must represent the operators in the exciton basis. It is our task to calculate the velocity operator ${\boldsymbol{\mathrm{v}}}$ in this exciton basis. For example, in the linear response theory where the incoming light photon energy is close to the energy of a bound exciton state ${\left| \textbf{x}_\nu \right\rangle}$, our Hilbert space is essentially two dimensional, with the basis $\{ {\left| \text{x}_\nu \right\rangle}, {\left| 0 \right\rangle} \}$. Consequently, the velocity operator and the density operator are now $2 \times 2$ matrices: $${\boldsymbol{\mathrm{v}}} = \left( \begin{array}{cc} {\boldsymbol{\mathrm{v}}}_{\text{xx}} & {\boldsymbol{\mathrm{v}}}_{\text{x0}} \\ {\boldsymbol{\mathrm{v}}}_{\text{0x}} & {\boldsymbol{\mathrm{v}}}_{\text{00}} \end{array}\right), \quad \rho = \left( \begin{array}{cc} \rho_{\text{xx}} & \rho_{\text{x0}} \\ \rho_{\text{0x}} & \rho_{\text{00}} \end{array}\right),$$ where each element is such that, for example, ${\boldsymbol{\mathrm{v}}}_{\text{x0}} = {\left\langle \text{x}_\nu \right|} {\boldsymbol{\mathrm{v}}} {\left| 0 \right\rangle}$. To obtain the matrix elements of the velocity operator, we move to the Heisenberg picture and connect to the dipole moment as follows: $$\begin{aligned} {\boldsymbol{\mathrm{v}}}_{\text{0x}} &= {\left\langle 0 \right|} \dot{{\boldsymbol{r}}} {\left| \text{x}_\nu \right\rangle} = - \frac{i}{\hbar} {\left\langle 0 \right|} [{\boldsymbol{r}}, \mathcal{H}_0 + \mathcal{H}_I] {\left| \text{x}_\nu \right\rangle} \nonumber \\ &= - \frac{i}{\hbar} {\left\langle 0 \right|} [{\boldsymbol{r}}, \mathcal{H}_0] {\left| \text{x}_\nu \right\rangle} = -i e_{\nu} {\left\langle 0 \right|} {\boldsymbol{r}} {\left| \text{x}_\nu \right\rangle}. 
\label{eq:vfe}\end{aligned}$$ Here, we used the fact that $[{\boldsymbol{r}}, \mathcal{H}_I] = 0$ since $\mathcal{H}_I \propto {\boldsymbol{r}}$, as it involves the dipole moment element. It is also noteworthy that the diagonal terms of the velocity operator ${\boldsymbol{\mathrm{v}}}$ are all zero according to the above derivation, since the commutator with $\mathcal{H}_0$ vanishes when the same state (and hence the same energy) appears on both sides. We thus need only the off-diagonal terms of the density matrix to calculate the induced current: $${\boldsymbol{J}} = e N_e ({\boldsymbol{\mathrm{v}}}_{\text{x0}} \rho_{\text{0x}} + {\boldsymbol{\mathrm{v}}}_{\text{0x}} \rho_{\text{x0}} ). \label{eq:Jnew}$$ Next, since the normalization of the polarization vectors is $\hat{{\boldsymbol{\varepsilon}}}^- \cdot \hat{{\boldsymbol{\varepsilon}}}^+ = 1$, the component of the velocity matrix element along $\hat{{\boldsymbol{\varepsilon}}}^+$ is ${\boldsymbol{\mathrm{v}}}_{0\mathrm{x}} = - i e_\nu {\left\langle 0 \right|} \hat{{\boldsymbol{\varepsilon}}}^- \cdot {\boldsymbol{r}} {\left| \mathrm{x}_\nu \right\rangle} \hat{{\boldsymbol{\varepsilon}}}^+$. We calculate $$\begin{aligned} &{\left\langle 0 \right|} \hat{{\boldsymbol{\varepsilon}}}^- \cdot {\boldsymbol{r}} {\left| \text{x}_\nu \right\rangle} = \sum_{{\boldsymbol{q}}} \psi_{\nu} ({\boldsymbol{q}}) {\left\langle 0 \right|} \hat{{\boldsymbol{\varepsilon}}}^- \cdot {\boldsymbol{r}} \alpha^\dagger_{{\boldsymbol{q}}} \beta^\dagger_{- {\boldsymbol{q}}} {\left| 0 \right\rangle} \nonumber \\ &= \sum_{{\boldsymbol{q}}} \psi_\nu ({\boldsymbol{q}}) {\left\langle v({\boldsymbol{q}}) \right|} \hat{{\boldsymbol{\varepsilon}}}^- \cdot {\boldsymbol{r}} {\left| c({\boldsymbol{q}}) \right\rangle} = \frac{g_\nu^{+*}}{e}.\end{aligned}$$ This leads to ${\boldsymbol{\mathrm{v}}}_{\text{0x}} = \hat{{\boldsymbol{\varepsilon}}}^+ (- i e_\nu g_\nu^{+*} /e) $. All that remains is to solve the von Neumann equation for $\rho$. We first note that ${\left\langle \text{x}_\nu \right|}[\mathcal{H}_0, \rho] {\left| 0 \right\rangle} = \hbar e_{\nu} \rho_{\text{x0}}$. We then establish a differential equation for $\rho_{\text{x0}}$ in the Schrödinger picture: $$\dot{\rho_{\text{x0}}} (t) = - i e_\nu \rho_{\text{x0}}(t) - \frac{i}{\hbar} {\left\langle \text{x}_\nu \right|}[\mathcal{H}_I, \rho(t)] {\left| 0 \right\rangle}.$$ From this, we carry out order-by-order bookkeeping of the differential equations for $n = 0,1,2,\cdots$: $$\begin{aligned} \dot{\rho}^{(0)}_\text{x0} (t) &= - i e_\nu \rho^{(0)}_\text{x0} (t), \nonumber \\ \dot{\rho}^{(n)}_\text{x0} (t) &= - i e_\nu \rho^{(n)}_{\text{x0}} (t) - \frac{i}{\hbar} {\left\langle \text{x}_\nu \right|}[\mathcal{H}_I, \rho^{(n-1)}] {\left| 0 \right\rangle}. \label{eq:perturbation-DE}\end{aligned}$$ Other matrix elements for $\rho^{(n)}$ can be obtained in a similar manner.

Linear and nonlinear optical susceptibilities
=============================================

In this section, we calculate the optical susceptibilities of the excitonic states of monolayer MoS$_2$. We first resolve the linear susceptibility and the resulting linear absorption and refractive index. Then, we proceed to the higher order nonlinear susceptibilities.

Linear susceptibility {#sec:linear-susceptibility}
---------------------

![\[fig:first-trans\] Schematic of excitonic energy levels and the first order radiative transition of excitonic states. The continuum is the unbound electron-hole pair states.](first-trans){width="23.00000%"}
We are interested in the case where the incoming photon energy is closely resonant with the energy of an exciton state ${\left| \textrm{x}_\nu \right\rangle}$ (see Fig. \[fig:first-trans\]). The first equation in describes the dynamics of $\rho_{\text{x0}}^{(0)}$ in the absence of any external perturbation; it is a free rotation. We then need to solve for $\rho^{(1)}_{\textrm{x0}}$ to resolve $\chi^{(1)}$. For this, we first evaluate the commutator for the case of a $\sigma_+$ photon in the $+{\boldsymbol{K}}$ valley: $$\begin{aligned} &{\left\langle \text{x}_\nu \right|}[\mathcal{H}_I, \rho^{(0)}] {\left| 0 \right\rangle} \nonumber \\ &= - {\left\langle \text{x}_\nu \right|} \left( \sum_{\nu'} g^+_{\nu'} \left( B^\dagger_{\nu'} \rho^{(0)} - \rho^{(0)} B^\dagger_{\nu'} \right) \right) {\left| 0 \right\rangle} \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{-i \omega_\kappa t} \nonumber \\ &~~~~ \quad - {\left\langle \text{x}_\nu \right|} \left( \sum_{\nu'} g^{+*}_{\nu'} \left( B_{\nu'} \rho^{(0)} - \rho^{(0)} B_{\nu'} \right) \right) {\left| 0 \right\rangle} \mathcal{E}^*({\boldsymbol{\kappa}}) \text{e}^{i \omega_\kappa t} \nonumber \\ &= -g^+_\nu (\rho_{\text{00}}^{(0)} - \rho_{\text{xx}}^{(0)}) \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{- i \omega_\kappa t} \nonumber \\ &= - g^+_\nu \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{-i \omega_\kappa t} , \label{eq:comm101}\end{aligned}$$ where we used $B^\dagger_{\nu'} = {\left| \mathrm{x}_{\nu'} \right\rangle} {\left\langle 0 \right|}$, together with $\rho_{\text{00}}^{(0)} = 1$ and $\rho_{\text{xx}}^{(0)} = 0$, since the state without the external field at zero temperature is the Fermi sea. From this, the first order differential equation is now $$\dot{\rho}_\text{x0}^{(1)} (t') = - i e_\nu \rho^{(1)}_\text{x0} (t') + \frac{i}{\hbar} g^+_\nu \mathcal{E} ({\boldsymbol{\kappa}}) \text{e}^{-i \omega_\kappa t'}.$$ Integrating over $-\infty < t' < t$ yields the following first order solution: $$\rho^{(1)}_\text{x0} (t) = \frac{g^+_\nu}{\hbar} \frac{1}{(e_\nu - \omega_\kappa) - i \epsilon} \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{- i \omega_\kappa t}, \label{eq:first-order}$$ where $\epsilon$ is a positive infinitesimal parameter regulating the integral at $t' \rightarrow -\infty$. From $\rho^{(1)}_{\mathrm{x0}} (t) = \rho^{(1)}_{\mathrm{x0}} (\omega_\kappa) \mathrm{e}^{- i \omega_\kappa t}$, we easily obtain $$\rho^{(1)}_\mathrm{0x} (\omega_\kappa) = \rho^{(1)*}_\mathrm{x0} (-\omega_\kappa) = \frac{g^+_\nu}{\hbar} \frac{1}{ (e_\nu + \omega_\kappa) + i \epsilon} \mathcal{E} (\omega_\kappa),$$ where we used $\mathcal{E}^* (- \omega_\kappa) = \mathcal{E} (\omega_\kappa)$. This is a nonresonant term, which is much smaller than the resonant term $\rho^{(1)}_{\mathrm{x0}}$. Then, using equations and , we obtain $${\boldsymbol{J}}^{(1)} = e N_e \frac{-i e_\nu g^{+*}_\nu}{e} \frac{g^+_\nu}{\hbar} \sum_{p_1 = \pm 1}\frac{1}{e_\nu +p_1( \omega_\kappa + i \epsilon)} \hat{{\boldsymbol{\varepsilon}}}\mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{- i \omega_\kappa t}.$$ From this, we obtain the linear conductivity $\sigma^{(1)}$, and then, using the relation in equation , we obtain the linear susceptibility of the exciton state: $$\chi^{(1)} (\omega_\kappa) = \frac{e_\nu |g^+_\nu|^2 N_e}{\hbar \epsilon_0 \omega_\kappa} \sum_{p_1 = \pm 1} \frac{1}{ e_\nu +p_1( \omega_\kappa + i \epsilon)}.$$ We explain how to handle the free carrier density $N_e$ below.
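Before turning to $N_e$, the Lorentzian form of the first-order solution in equation can be checked independently. The following is a minimal numerical sketch, not part of the original derivation: it integrates the driven equation of motion for $\rho_{\text{x0}}$ with a finite phenomenological damping $\gamma$ standing in for the infinitesimal $\epsilon$ (as introduced later in the text), using arbitrary illustrative parameters and $\hbar = 1$, and compares the late-time envelope with the analytic Lorentzian amplitude.

```python
import numpy as np

# Minimal sketch (hbar = 1, arbitrary illustrative units): integrate
#   d(rho_x0)/dt = -(i*e_nu + gamma/2)*rho_x0 + i*g*E*exp(-i*w*t)
# and compare the steady-state envelope with g*E / ((e_nu - w) - i*gamma/2).
e_nu, w, gamma = 2.0, 1.8, 0.05   # exciton energy, drive frequency, damping
g, E = 0.3, 1.0                   # dipole coupling and field amplitude

def rhs(t, rho):
    return -(1j * e_nu + 0.5 * gamma) * rho + 1j * g * E * np.exp(-1j * w * t)

dt, n_steps = 2e-3, 200_000       # integrate to t = 400 (many damping times)
t, rho = 0.0, 0.0 + 0.0j
for _ in range(n_steps):          # classical fourth-order Runge-Kutta steps
    k1 = rhs(t, rho)
    k2 = rhs(t + 0.5 * dt, rho + 0.5 * dt * k1)
    k3 = rhs(t + 0.5 * dt, rho + 0.5 * dt * k2)
    k4 = rhs(t + dt, rho + dt * k3)
    rho += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

numerical_envelope = rho * np.exp(1j * w * t)              # remove e^{-i w t}
analytic_envelope = g * E / ((e_nu - w) - 1j * gamma / 2)
print(numerical_envelope, analytic_envelope)               # should agree closely
```

With this check in hand, we return to the treatment of the free carrier density $N_e$.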
The value of $g^\pm_\nu$ is generally evaluated numerically. If, however, we adopt the previous approximation $g^\pm_\nu \approx \sqrt{A} d^\pm_{cv} ({\boldsymbol{q}} = {\boldsymbol{0}}) \psi_\nu^* ({\boldsymbol{r}} = 0)$, we obtain $$\begin{aligned} \chi^{(1)} (\omega_\kappa) =& \left( \frac{\hbar e_\nu}{\hbar \omega_\kappa}\right) \frac{A N_e}{\epsilon_0} |d^+_{cv} ({\boldsymbol{0}})|^2 | \psi_\nu ({\boldsymbol{r}} = {\boldsymbol{0}})|^2 \nonumber \\ & \times \sum_{p_1 = \pm 1}\frac{1}{\hbar e_\nu + p_1 ( \hbar \omega_\kappa + i \hbar \epsilon)}.\end{aligned}$$ The induced current density ${\boldsymbol{J}} = \text{tr}[e (N_e \rho) \mathbf{v}]$ in equation captures both the density of charge carriers and their motion. In particular, $N_e \rho$, with the quantum mechanical density operator $\rho$ (whose matrix elements are at most unity), represents the density of excited excitons. Since each exciton carries one excitation and thus one charge carrier, it is appropriate to replace $N_e \rightarrow 1/(A d_\text{eff})$. Here, $d_{\text{eff}} \approx 6.5 ~\text{\AA}$ [@wang2012electronics; @radisavljevic2011single] is the effective thickness of the monolayer MoS$_2$. The resulting formula exactly matches the single-spin electron result in Elliott’s seminal paper [@elliott1957intensity], as well as the formulas in Haug *et al.* [@haug2009quantum] (equation 10.103) and in Klingshirn [@klingshirn2012semiconductor] (equation 27.52). The agreement confirms that our replacement $N_e \rightarrow 1/(A d_\textrm{eff})$ is reasonable. Adding the responses from the different exciton levels gives the contribution from the bound exciton levels: $$\begin{aligned} \chi^{(1)}_B (\omega_\kappa) = \sum_\nu \sum_{p_1 = \pm 1} \frac{e_\nu |\overline{g^+_\nu}|^2}{\hbar \epsilon_0 \omega_\kappa d_\mathrm{eff}} \left( \frac{1}{ e_\nu + p_1 (\omega_\kappa + i \gamma_B/2)} \right), \label{eq:chi1_bound}\end{aligned}$$ where we used $\overline{g^+_\nu} = g^+_\nu/\sqrt{A}$, which does not depend on the sample size since $g_\nu \propto \sqrt{A}$. We also introduced the phenomenological replacement $\epsilon \rightarrow \gamma_B/2$, where $\gamma_B$ is the decay rate of the bound exciton ${\left| \mathrm{x}_\nu \right\rangle}$. Wang *et al.* [@wang2016radiative] and Selig *et al.* [@selig2016excitonic] calculated the radiative lifetime of the exciton at a temperature of 5 K to be $\sim$ 200 fs. From the radiative decay perspective, the line broadening is therefore expected to depend on $\nu$. However, other broadening mechanisms, including phonon-exciton scattering and disorder, further broaden the spectrum in real samples [@wang2016radiative; @moody2015intrinsic], and the $\nu$ dependence arising from radiative decay alone is washed out. Various values of the phenomenological linewidth, ranging from 1 meV to 50 meV, have been used in the literature [@wang2017excitons; @mak2010atomically; @molina2013effect]. We choose 10 meV, which matches our own experimentally measured data at 4 K [@rogers2017absorption], as well as the qualitative absorption spectra found in low-temperature experiments [@zhang2014absorption; @he2013experimental; @qiu2013optical; @moody2015intrinsic].
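As a concrete illustration of equation (the bound-exciton sum), the following is a minimal numerical sketch. The exciton energies and normalized dipole moments $\overline{g^+_\nu}$ used below are placeholder values chosen for illustration only (the actual values are computed from the exciton wavefunctions), while the 10 meV linewidth and $d_\mathrm{eff}$ follow the text.

```python
import numpy as np

hbar  = 1.0545718e-34         # J s
eV    = 1.602176634e-19       # J
eps0  = 8.8541878128e-12      # F/m
d_eff = 6.5e-10               # m, effective monolayer thickness

# Placeholder bound-exciton parameters (illustrative only): level energies in eV
# and size-independent dipole moments gbar_nu = g_nu / sqrt(A), in units of C.
e_nu    = np.array([1.90, 2.05, 2.10]) * eV / hbar    # angular frequencies (rad/s)
gbar    = np.array([4.0e-20, 1.5e-20, 0.8e-20])       # C (placeholder magnitudes)
gamma_B = 10e-3 * eV / hbar                           # 10 meV linewidth

def chi1_bound(omega):
    """Bound-level sum of equation (chi1_bound) over nu and p1 = +/-1."""
    chi = 0.0 + 0.0j
    for e, g in zip(e_nu, gbar):
        for p1 in (+1, -1):
            chi += (e * abs(g) ** 2 / (hbar * eps0 * omega * d_eff)
                    / (e + p1 * (omega + 1j * gamma_B / 2)))
    return chi

omega = np.linspace(1.7, 2.3, 601) * eV / hbar
chi = np.array([chi1_bound(w) for w in omega])
print("peak |chi1_B| with these placeholder dipoles:", abs(chi).max())
```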
The contribution from the unbound excitons is easily deduced as $$\begin{aligned} &\chi^{(1)}_U (\omega_\kappa) = \nonumber \\ &\int {\text{d}}^2 q \frac{\omega_{{\boldsymbol{q}}} |d^+_{cv} ({\boldsymbol{q}})|^2}{4 \pi^2 \hbar \epsilon_0 \omega_\kappa d_\mathrm{eff}} \sum_{p_1 = \pm 1 }\frac{1}{\omega_{{\boldsymbol{q}}} +p_1( \omega_\kappa + i \gamma_U/2)}, \label{eq:chi1_unbound}\end{aligned}$$ where we used the replacement $\sum_{{\boldsymbol{q}}} \rightarrow (A/(2 \pi)^2) \int {\text{d}}^2 q$. Here, $\gamma_U$ is the radiative decay rate (inverse of the radiative lifetime) of the conduction bands. Using Fermi’s golden rule, we obtain $\gamma_U = \omega^3_{{\boldsymbol{q}}}|d^+_{cv} ({\boldsymbol{q}})|^2/(2 \pi \epsilon_0 \hbar c^3)$. With the monolayer MoS$_2$ parameters, we obtain a radiative lifetime of the conduction band of approximately 4 ns. Finally, we obtain the linear susceptibility $\chi^{(1)A} (\omega_\kappa) = \chi^{(1)}_B (\omega_\kappa) + \chi^{(1)}_U (\omega_\kappa)$. For a single optical frequency $\omega_\kappa$, the contribution comes from all the bound and unbound exciton states. Note, however, that $\chi^{(1)A} (\omega_\kappa)$ is the contribution only from the spin-up electrons. The exciton states from the up spin in the $+{\boldsymbol{K}}$ valley are called the *A excitons*. One must add the contribution from the *B excitons*, which comes from the spin-down electrons. The major difference between the A and B excitons is their energy eigenvalues: the B excitons are higher in energy by $E_\mathrm{soc}$, and consequently all the exciton level energies are offset by a similar amount. Finally, we obtain the true physical linear susceptibilities as $$\begin{aligned} \chi^{(1)} (\omega_\kappa) &= \chi^{(1)A} (\omega_\kappa) + \chi^{(1)B} (\omega_\kappa)\nonumber \\ &\approx \chi^{(1)A} (\omega_\kappa) + \chi^{(1)A} (\omega_\kappa - E_\mathrm{soc}/\hbar).\end{aligned}$$ This response is for $\sigma_+$ polarized light and comes from the $+{\boldsymbol{K}}$ valley. Strictly speaking, $\sigma_-$ polarized light also sees a linear response from $+{\boldsymbol{K}}$; however, the relative strengths of $g^-_{(1,1)}$ and $g^-_{(2,1)}$ are only 2% and 1% of $g^+_{(0,0)}$, respectively, so the relative strength of this response is only $\sim 10^{-4}$ compared to that of the strong $g_{(0,0)}$. The same applies to $\sigma_+$ polarized light and the $-{\boldsymbol{K}}$ valley. Hence, the linear response to $\sigma_+$ light comes mostly from the $+{\boldsymbol{K}}$ valley. On the other hand, the contribution $\chi^{(1)}_U (\omega_\kappa)$ from $\sigma_-$ increases as $\omega_\kappa$ increases well beyond $\Delta/\hbar$, since $d^-_{cv} ({\boldsymbol{q}}) \propto q^2$.

![\[fig:absorption\] Inferred absorption from the calculated $\chi^{(1)}$. The resonance labels indicate either A or B exciton with the quantum number $(n,m)$.](absorption-refractive){width="45.00000%"}

We calculated $\chi^{(1)}$ as shown in Fig. \[fig:chi1\]. The plot shows that the contribution from the nonresonant terms alone (the sum of $p_1 = +1$ terms in equations and , dash-dot curves) is negligibly small. That from the unbound states (dotted curves, equation ) leaves a long tail in the real part only. Far below the exciton resonances, the contribution from the nonresonant terms starts to gain weight, while the absorption decays quickly. The contribution from the bound excitons dominates in the spectral range below the band edge.
Near the band edge, the higher order excitons contribute significantly. The band edge for the A excitons (spin-up electrons) occurs at 2.16 eV, while that of the B excitons occurs at 2.31 eV. The contribution from the unbound states extends into the spectrum below the band edge. Our model does not include higher conduction bands, which leads to an underestimated unbound-state contribution at the bound-state resonances. We also calculate the linear absorption and the reflectance from the excitonic states (Fig. \[fig:absorption\]). The complex refractive index is given as $n = \sqrt{1 + \chi^{(1)}}$. The imaginary part produces the absorption coefficient $\alpha = 2 \text{Im} [n] \omega_\kappa / c$. The linear absorption from the 2D sheet is given by $\alpha d_\text{eff} = 2 d_\text{eff} \text{Im}[\sqrt{1 + \chi^{(1)}}] \omega_\kappa / c$. The single-pass absorption does not depend on $d_\mathrm{eff}$ at the bound exciton resonances, due to the large value of $|\chi^{(1)}|$. Fig. \[fig:chi1\] (b) shows the calculated absorption spectrum. The calculated absorption peaks for the lowest A and B exciton resonances match the measured absorption of 10% $\sim$ 15% reasonably well, with similar broadening [@he2013experimental; @qiu2013optical; @zhang2014absorption; @moody2015intrinsic]. We note that the distortion of the curves is due to the excessively negative real value of $\chi^{(1)}$, caused by the underestimated contribution from the unbound exciton states mentioned above. As a result, the blue side of the resonance curves is exaggerated compared to the real situation. Nevertheless, both the absorption and the reflection curves match the qualitative features of the published results.

Second order susceptibility
---------------------------

![\[fig:second\] Schematic of the second-harmonic process where the second harmonic is near resonant with an exciton level.](second-harmonic-low){width="23.00000%"}

Let us consider second-harmonic generation for which the output second-harmonic frequency is nearly resonant with the exciton energy levels (see Fig. \[fig:second\]). Due to the energy gap, one can avoid the direct linear absorption for the fundamental pump light. If one also avoids the direct linear absorption for the second harmonic by slightly detuning from the resonance, one can accomplish a coherent and efficient second-harmonic process. The same applies to the degenerate optical parametric amplifier pumped at the exciton resonance, amplifying the signal at the half frequency. This second-harmonic transition involves virtual levels: one sums over all possible intermediate levels linking the initial Fermi-sea ground state ${\left| 0 \right\rangle}$ to the final exciton state ${\left| \mathrm{x}_\nu \right\rangle}$. We are particularly interested in the resonant second-harmonic frequency $2 \omega_\kappa \sim e_{0} (= e_{(0,0)})$ (the frequency of the state ${\left| \mathrm{x}_{(0,0)} \right\rangle})$, since it involves the largest dipole moment $g_{(0,0)}$. The virtual levels can be either bound or unbound exciton states.

### Bound exciton virtual states

Let us first consider the bound exciton virtual levels. The composite transition must obey the optical selection rule explained in section \[sec:selection-rule\]. Let us consider the case where the highest level is ${\left| \mathrm{x}_{(0,0)} \right\rangle}$.
For the $+{\boldsymbol{K}}$ valley, where the second-harmonic light is $\sigma_+$, the second order transition involving two $\sigma_+$ fundamental photons is not allowed, since $h^+_{(0,0)(n,0)} =0$ by the angular momentum conservation rule. This implies that the tensor element $\chi^{(2)}_{+;++} = 0$. Instead, the transition ${\left| 0 \right\rangle} \rightarrow {\left| \mathrm{x}_{(1(2),1)} \right\rangle} \rightarrow {\left| \mathrm{x}_{(0,0)} \right\rangle}$ is allowed by absorbing two $\sigma_-$ photons, because the first transition relies on the dipole moment $g^-_{(1(2),1)} (\neq 0)$, and the second transition relies on the dipole moment $h^-_{(0,0)(1(2),1)}$, which is nonzero. The transition ${\left| \mathrm{x}_{(0,0)} \right\rangle} \rightarrow {\left| 0 \right\rangle}$ emits a $\sigma_+$ photon, as explained in the previous section. This corresponds to the susceptibility tensor element $\chi^{(2)}_{+;--}$. We note that $\chi^{(2)}_{+;-+} = 0$ since the dipole element $h^+_{(0,0)(1(2),1)} = 0$. Also, $\chi^{(2)}_{+;+-} = 0$ since $h^+_{(0,0)(n,0)} = 0$. For the $-{\boldsymbol{K}}$ valley, the oppositely circularly polarized photons drive the same transitions. Since the second-harmonic output from the $-{\boldsymbol{K}}$ valley is always a $\sigma_-$ photon, as we explained in the previous section, we conclude that $\chi^{(2)}_{-;--} = \chi^{(2)}_{-;+-} = \chi^{(2)}_{-;-+}= 0$ and $\chi^{(2)}_{-;++} \neq 0$. In summary, we have only two nonzero second-order susceptibility tensor elements, $\chi^{(2)}_{-;++}$ and $ \chi^{(2)}_{+;--}$. This result is consistent with the well-known experimental results for second-harmonic generation in TMDs, where the output second-harmonic polarization has the opposite chirality relative to the input circular polarization [@seyler2015electrical; @xiao2015nonlinear].

Let us quantify the tensor element $\chi^{(2)}_{+;--}$ from the $+{\boldsymbol{K}}$ valley. For this, we solve the second order differential equation for the density matrix elements. First, the basis for the Hilbert space is $\{ {\left| 0 \right\rangle}, {\left| \mathrm{x}_{(s,1)} \right\rangle}, {\left| \mathrm{x}_{(0,0)} \right\rangle} \}$ where $s =$ 1 or 2. Since we now involve the exciton-exciton transition, we have an additional interaction Hamiltonian: $$\mathcal{H}'_I = - \sum_{s = 1,2} \left[ h^-_{(0,0)(s,1)} B^\dag_{0} B_{(s,1)} \mathcal{E} ({\boldsymbol{\kappa}}) \mathrm{e}^{- i \omega_\kappa t} + \mathrm{h.c.} \right]. \label{eq:interact3}$$ We need to calculate the matrix elements $\rho^{(2)}_{(s,1) 0} (t) = {\left\langle \mathrm{x}_{(s,1)} \right|} \rho^{(2)} (t) {\left| 0 \right\rangle}$, $\rho^{(2)}_{(0,0)(s,1)} (t) = {\left\langle \mathrm{x}_{(0,0)} \right|} \rho^{(2)} (t) {\left| \mathrm{x}_{(s,1)} \right\rangle}$, and $\rho^{(2)}_{(0,0)0} (t) = {\left\langle \mathrm{x}_{(0,0)} \right|} \rho^{(2)} (t) {\left| 0 \right\rangle}$. Using the operator properties and their action on the states, we find that the only substantial term among the three is $\rho^{(2)}_{\mathrm{x}0} (t)$, given as (see Appendix \[sec:matrix-rho\]) $$\begin{aligned} & \rho^{(2)}_{\mathrm{x}0} (t) = \nonumber \\ & \frac{\mathcal{E}^2({\boldsymbol{\kappa}}) \text{e}^{- i 2 \omega_\kappa t}}{\hbar^2} \frac{g^-_\nu h^-_{(0,0)\nu}}{\left( e_\nu -( \omega_\kappa + i \epsilon) \right) \left(e_0 -( 2 \omega_\kappa + i \epsilon') \right)} . \label{eq:rho2}\end{aligned}$$ We already calculated the velocity element ${\boldsymbol{\mathrm{v}}}_{\mathrm{0x}} = \hat{{\boldsymbol{\varepsilon}}}^+ (-i e_0 g^{+*}_{(0,0)}/e)$.
Using ${\boldsymbol{J}}^{(2)} = \sum_{\nu} e N_e ({\boldsymbol{\mathrm{v}}}_{\mathrm{0x}} \rho^{(2)}_{\mathrm{x0}} + \rho^{(2)}_{\mathrm{0x}} {\boldsymbol{\mathrm{v}}}_{\mathrm{x0}} )$ and ${\boldsymbol{J}}^{(2)} = \sigma^{(2)} \hat{{\boldsymbol{\varepsilon}}}^+ \mathcal{E}^2 ({\boldsymbol{\kappa}}) \text{e}^{-i 2 \omega_\kappa t}$, we obtain $$\begin{aligned} &\sigma^{(2)} = \nonumber \\ & \sum_{\nu} \sum_{p_1 = \pm 1} \left( \begin{array}{l} \cfrac{-i N_e g^-_\nu h^-_{(0,0)\nu} g^{+*}_{(0,0)}}{\hbar^2} \\ \times \cfrac{1}{\left( e_\nu + p_1 ( \omega_\kappa + i \epsilon) \right) \left(e_{0} +p_1( 2 \omega_\kappa + i \epsilon')\right)} \end{array} \right).\end{aligned}$$ From the equation , the second order susceptibility for the second-harmonic generation is obtained through $$\chi^{(2)} (\omega_\kappa \sim e_\nu) = \frac{\sigma^{(2)}}{- i 2 \epsilon_0 \omega_\kappa}. \label{eq:sigma2}$$ Then, we finally obtain the contribution of the bound virtual exciton states: $$\begin{aligned} &\chi^{(2)}_{B,+;--} (\omega_\kappa \sim e_0/2) = \sum_{\nu} \sum_{p_1 = \pm 1} \cfrac{e_0 g^-_\nu h^-_{(0,0)\nu} g^{+*}_{(0,0)}}{2 \omega_\kappa \hbar^2 \epsilon_0 d_\mathrm{eff}} \nonumber \\ &\times \cfrac{1}{\left( e_\nu + p_1 ( \omega_\kappa + i \gamma_B/2) \right) \left(e_{0} + p_1( 2 \omega_\kappa + i \gamma_B/2) \right)} . \label{eq:chi2B}\end{aligned}$$ Recall that $g^-_\nu$ is substantial only for $\nu = (1,1)$ and $\nu = (2,1)$. Note that this expression contains both the resonant ($p_1 = -1$) and the nonresonant ($p_1 = +1$) terms. Calculating $\chi^{(2)}_{B,-;++}$ from the $-{\boldsymbol{K}}$ valley produces the same result, since the only difference between the two valleys is the switched roles of $\sigma_\pm$.

### Unbound exciton virtual states

We now calculate the contribution from the unbound exciton virtual states. Let us first consider the case of $\sigma_+$ polarized light. The cascaded second-order transition is ${\left| 0 \right\rangle} \rightarrow {\left| C({\boldsymbol{q}}) \right\rangle} \rightarrow {\left| \mathrm{x}_{(0,0)} \right\rangle} \rightarrow {\left| 0 \right\rangle}$. In order to address the second transition, we need the following interaction Hamiltonian: $$\mathcal{H}_I'' = - \sum_{{\boldsymbol{q}}} \left[ f_{(0,0)} ({\boldsymbol{q}}) B^\dag_0 C_{{\boldsymbol{q}}} \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{-i \omega_\kappa t} + \text{h.c.} \right], \label{eq:second-interaction-Hamiltonian}$$ where the new dipole transition element $f_\nu ({\boldsymbol{q}})$ is given as $$\begin{aligned} f_\nu ({\boldsymbol{q}}) &= e {\left\langle \mathrm{x}_\nu \right|} \hat{{\boldsymbol{\varepsilon}}} \cdot {\boldsymbol{r}} {\left| C_{{\boldsymbol{q}}} \right\rangle} \nonumber \\ &= e \sum_{{\boldsymbol{q}}'} \psi^*_{\nu} ({\boldsymbol{q}}') {\left\langle C_{{\boldsymbol{q}}'} \right|} \hat{{\boldsymbol{\varepsilon}}} \cdot {\boldsymbol{r}} \alpha^\dag_{{\boldsymbol{q}}} \beta^\dagger_{-{\boldsymbol{q}}} {\left| 0 \right\rangle} \nonumber \\ &= e \sum_{{\boldsymbol{q}}'} \psi^*_\nu ({\boldsymbol{q}}') {\left\langle C_{{\boldsymbol{q}}'} \right|} \hat{{\boldsymbol{\varepsilon}}} \cdot {\boldsymbol{r}} {\left| C_{{\boldsymbol{q}}} \right\rangle}. \label{eq:fnu}\end{aligned}$$ The physical intuition is that this dipole moment is a superposition of all intraband dipole moments weighted by the (Fourier-transformed) exciton wave function.
We can easily deduce $\chi^{(2)}$ from this channel based on equation : $$\begin{aligned} &\chi^{(2)}_U (\omega_\kappa \sim e_0/2) = \int {\text{d}}^2 q \cfrac{e_\nu d_{cv} ({\boldsymbol{q}}) f_{\nu} ({\boldsymbol{q}}) g_\nu^*}{8 \pi^2 \omega_\kappa \epsilon_0 \hbar^2 d_\textrm{eff}} \times \nonumber \\ & \sum_{p_1 = \pm 1} \cfrac{1}{(\omega_{{\boldsymbol{q}}} +p_1( \omega_\kappa + i \gamma_U/2))(e_\nu + p_1( 2 \omega_\kappa + i \gamma_B/2))} .\end{aligned}$$ Appendix \[sec:fnu\] derives $f^\pm_\nu ({\boldsymbol{q}})$ and concludes that it vanishes due to the symmetry. Hence, the virtual transition through the unbound excitons onto a bound exciton state is negligible. This allows us to ignore, in what follows, any virtual channel involving the unbound exciton states.

![Numerically evaluated $\chi^{(2)}_{+;--} (\omega_\kappa \sim e_0/2)$ based on the higher-order corrected gapped Dirac Hamiltonian. Also shown is the second-harmonic absorption at $2 \omega$ in black.[]{data-label="fig:chi2"}](chi2){width="45.00000%"}

### Overall second-order susceptibility

We showed the opposite chirality rule between the fundamental light and the second-harmonic light for second-harmonic generation. Since the virtual channels from the unbound excitons can be ignored, the second-order susceptibility is $\chi^{(2)} (\omega_\kappa \sim e_0/2) = \chi^{(2)}_B (\omega_\kappa \sim e_0/2)$. Fig. \[fig:chi2\] shows the calculated $\chi^{(2)}$ for a single polarization of the second-harmonic output from a linearly polarized pump light. The intensity of the second-harmonic light depends on the absolute value $|\chi^{(2)}|$, whereas the phase of $\chi^{(2)}$ determines the phase delay of the second-harmonic light [@boyd2003nonlinear]. The maximum value of the calculated $|\chi^{(2)}|$ at frequency $e_0/2$ is $6.6 \times 10^{-10}$ m/V. Fig. \[fig:chi2\] also shows the linear absorption at the second harmonic $2 \omega$. In order to avoid it, one may want to operate at a slight red detuning from the resonance. The figure also shows the contribution from the nonresonant term ($p_1 = +1$ in equation ), which is negligibly small in both the real and imaginary parts. This is expected, since the second-order susceptibility is concentrated near resonance, where the resonant term in equation is strongly enhanced by its small denominator factors. Several experimental studies have quantified the second-order susceptibility of monolayer MoS$_2$ second-harmonic generation: Malard *et al.* [@malard2013observation] reported a sheet susceptibility of $8 \times 10^{-20}$ m$^2$/V, equivalent to a bulk $\chi^{(2)}$ of $1.2 \times 10^{-10} $ m/V; Clark *et al.* [@clark2014strong] experimentally obtained $2 \times 10^{-9}$ m/V; and Woodward *et al.* [@woodward2016characterization] reported $3 \times 10^{-11}$ m/V, all with the second harmonic at the A exciton resonance of 1.9 eV. These match our result within an order of magnitude. Trolle *et al.* theoretically calculated $\chi^{(2)}$ from tight-binding band structures and obtained $4 \times 10^{-9}$ m/V [@trolle2014theory], which also agrees with our result to within roughly an order of magnitude, although the approach is different. Compared to the typical $\chi^{(2)}$ value of $2 \times 10^{-11}$ m/V for lithium niobate, which is a common material for second-harmonic generation, the single-pass second-order effect in the monolayer MoS$_2$ is equivalent to that of only a nanometer-scale thickness of lithium niobate. Hence, the monolayer MoS$_2$ does not appear to be a strong second-harmonic nonlinear material.
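As a quick consistency check of the unit conversion used in the comparison above (a sheet susceptibility corresponds to a bulk value multiplied by the layer thickness), the following minimal sketch uses only the numbers quoted in the text.

```python
# Sheet vs. bulk second-order susceptibility: chi2_sheet = chi2_bulk * d_eff.
d_eff = 6.5e-10                     # m, effective monolayer thickness (from the text)
chi2_sheet_malard = 8e-20           # m^2/V, sheet value reported by Malard et al.
chi2_bulk_equiv = chi2_sheet_malard / d_eff
print(f"equivalent bulk chi2 = {chi2_bulk_equiv:.2e} m/V")   # ~1.2e-10 m/V, as quoted

# Order-of-magnitude comparison of reported values with our calculated maximum.
ours = 6.6e-10                      # m/V, calculated |chi2| at e_0/2
reported = {"Malard (bulk equiv.)": chi2_bulk_equiv,
            "Clark": 2e-9, "Woodward": 3e-11, "Trolle (theory)": 4e-9}
for name, value in reported.items():
    print(f"{name}: ratio to our value = {value / ours:.2f}")
```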
Third order susceptibility
--------------------------

The third order processes that can avoid the direct linear absorption are third-harmonic generation and the two-photon process (i.e., the Kerr effect and two-photon absorption), as shown in Fig. \[fig:low-third\].

### Third-harmonic generation

We first consider the third-harmonic generation process where $\omega_\kappa \sim e_0/3$ (see Fig. \[fig:low-third\] (a)). This process involves two virtual levels between ${\left| 0 \right\rangle}$ and ${\left| \mathrm{x}_{(0,0)} \right\rangle}$. As we have seen from the previous calculation for $\chi^{(2)}$, the virtual contribution from the unbound excitons is negligible. We therefore count only the virtual levels from the bound exciton states. This requires a modification of the second interaction Hamiltonian in equation as $$\mathcal{H}'_I = - \sum_{\nu_1, \nu_2} \left[ h_{\nu_1 \nu_2} B^\dag_{\nu_1} B_{\nu_2} \mathcal{E} ({\boldsymbol{\kappa}}) \mathrm{e}^{- i \omega_\kappa t} + \mathrm{h.c.} \right].$$ This third-harmonic generation process involves the four states ${\left| 0 \right\rangle}, {\left| \mathrm{x}_{\nu_1} \right\rangle}, {\left| \mathrm{x}_{\nu_2} \right\rangle}, {\left| \mathrm{x}_{(0,0)} \right\rangle}$ with the successive transition ${\left| 0 \right\rangle} \rightarrow {\left| \mathrm{x}_{\nu_1} \right\rangle} \rightarrow {\left| \mathrm{x}_{\nu_2} \right\rangle} \rightarrow {\left| \mathrm{x}_{(0,0)} \right\rangle} \rightarrow {\left| 0 \right\rangle}$. The optical selection rule, whereby only ${\left| \mathrm{x}_{(n,m)} \right\rangle} \rightarrow {\left| \mathrm{x}_{(n,m\pm1)} \right\rangle}$ is allowed for the polarizations $\sigma_\pm$, respectively, applies here as well for efficient virtual transitions. For $\sigma_+$ input light alone, there are no cascaded transitions that arrive at ${\left| \mathrm{x}_{(0,0)} \right\rangle}$ through the two virtual bound exciton states. The same applies to $\sigma_-$. This forces the tensor elements $\chi^{(3)}_{TH, \pm; +++} = \chi^{(3)}_{TH, \pm;---} = 0$. On the other hand, if both $\sigma_\pm$ photons are present, they can cooperate to drive the following transition: ${\left| 0 \right\rangle} \rightarrow {\left| \mathrm{x}_{(s,0)} \right\rangle} \rightarrow {\left| \mathrm{x}_{(s',-1)} \right\rangle} \rightarrow {\left| \mathrm{x}_{(0,0)} \right\rangle} \rightarrow {\left| 0 \right\rangle}$ with $s = 0,1,2, \cdots$ and $s' = 1,2, \cdots$. The sequential transitions are mediated by $\sigma_+$, $\sigma_-$, $\sigma_+$, $\sigma_+$ for the $+{\boldsymbol{K}}$ valley, involving the dipole moments $g^+_{(s,0)}, h^-_{(s',-1)(s,0)}, h^+_{(0,0)(s',-1)}, g^{+*}_{(0,0)}$, respectively, leaving the third-harmonic output in $\sigma_+$ polarization. The opposite polarization sequence applies to the $-{\boldsymbol{K}}$ valley, leaving the output third-harmonic light in $\sigma_-$.

Let us consider the tensor element $\chi^{(3)}_{TH, +;+-+} (= \chi^{(3)}_{TH, +;++-} = \chi^{(3)}_{TH, +;-++})$ from the $+{\boldsymbol{K}}$ valley.
The detailed calculations reveal that the only nonzero matrix elements in the density matrix $\rho^{(3)}$ are $\rho^{(3)}_{(0,0)0} = {\left\langle \mathrm{x}_{(0,0)} \right|} \rho^{(3)} {\left| 0 \right\rangle}$ and $\rho^{(3)}_{(s',-1)(0,0)} = {\left\langle \mathrm{x}_{(s',-1)} \right|} \rho^{(3)} {\left| \mathrm{x}_{(0,0)} \right\rangle}$ (see Appendix \[sec:matrix-rho\]): $$\begin{aligned} &\rho^{(3)+-+ }_{(0,0)0} = \sum_{s', s} \frac{h^+_{(0,0)(s',-1)}h^-_{(s',-1)(s,0)}g^+_{(s,0)} }{\hbar^3} \times \nonumber \\ & \frac{\varepsilon^3(\kappa) \mathrm{e}^{- 3 i \omega_\kappa t}}{(e_s - \omega_\kappa - i \epsilon)(e_{s'} - 2 \omega_\kappa - i \epsilon')(e_0 - 3 \omega_\kappa - i \epsilon'')},\end{aligned}$$ and $$\begin{aligned} &\rho^{(3)+-+ }_{(s',-1)(0,0)} = -\sum_{s', s} \frac{g^{+*}_{(0,0)}h^-_{(s',-1)(s,0)}g^+_{(s,0)} }{\hbar^3} \times \nonumber \\ & \frac{\varepsilon^3(\kappa) \mathrm{e}^{- 3 i \omega_\kappa t}}{(e_s - \omega_\kappa - i \epsilon)(e_{s'} - 2 \omega_\kappa - i \epsilon')(e_{s'} - e_0 + \omega_\kappa + i \epsilon'')}.\end{aligned}$$ We then calculate the induced current for the third-harmonic generation: $ {\boldsymbol{J}}^{(3)} = \sum_{\nu_1} e N_e ( \mathbf{v}_{(0,0)0} \rho^{(3)}_{0(0,0)} + \rho^{(3)}_{(0,0)0} \mathbf{v}_{0(0,0)} + \mathbf{v}_{(s',-1)(0,0)} \rho^{(3)}_{(0,0)(s',-1)} + \rho^{(3)}_{(s',-1)(0,0)} \mathbf{v}_{(0,0)(s',-1)} )$. After resolving the velocity matrix elements in a similar way to equations and , we use ${\boldsymbol{J}}^{(3)} = \sigma^{(3)} \hat{{\boldsymbol{\varepsilon}}}^+ \mathcal{E}^3 ({\boldsymbol{\kappa}}) \text{e}^{- i 3 \omega_\kappa t}$ with the following relation: $$\frac{\partial}{\partial t} \epsilon_0 \chi^{(3)}_{TH} (\omega_\kappa \sim e_\nu) \mathcal{E}^3 ({\boldsymbol{\kappa}}) \text{e}^{- i 3 \omega_\kappa t} = \sigma^{(3)} \mathcal{E}^3 ({\boldsymbol{\kappa}}) \text{e}^{- i 3 \omega_\kappa t},$$ which leads to $\chi^{(3)}_{TH} (\omega_\kappa \sim e_\nu) = \sigma^{(3)}/(-i 3 \epsilon_0 \omega_\kappa)$. We finally obtain the third-order susceptibility for the third-harmonic generation as $$\begin{aligned} \chi^{(3)}_{TH, B, +;+-+} & (\omega_\kappa \sim e_0/3) = \sum_{s,s'} \cfrac{g^{+*}_{(0,0)} h^+_{(0,0)(s',-1)} h^-_{(s',-1)(s,0)} g^+_{(s,0)}}{3 \omega_\kappa \epsilon_0 \hbar^3 d_\mathrm{eff}} \times \nonumber \\ & \left( \begin{array}{l} \sum_{p_1 = \pm 1} \cfrac{e_0}{\left( e_s + p_1( \omega_\kappa + i \gamma_{B}/2) \right)\left( e_{s'} + p_1( 2 \omega_\kappa + i \gamma_{B}/2) \right)\left( e_0 + p_1 ( 3\omega_\kappa + i \gamma_{B}/2) \right)} \\ - \sum_{p_2 = \pm 1} \cfrac{e_{s'}-e_0}{\left( e_s + p_2( \omega_\kappa + i \gamma_{B}/2) \right)\left( e_{s'} + p_2( 2 \omega_\kappa + i \gamma_{B}/2) \right)\left( e_{s'} - e_0 - p_2 ( \omega_\kappa + i \gamma_{B}/2) \right)} \end{array} \right). \label{eq:chi3TH}\end{aligned}$$ Here, $s = 0,1, \cdots$ and $s' = 1,2, \cdots$. There are four terms in the above for a given $s,s'$ pair. The first term, with $p_1 = - 1$, is the resonant term, with all frequency-difference denominator factors, while the other three terms are nonresonant terms with at least one frequency sum in the denominator. This is the response from the $+{\boldsymbol{K}}$ valley only. Since we ignore the virtual channel through the unbound exciton states, we obtain $\chi^{(3)}_{TH, +;++-} (\omega_\kappa \sim e_0/3) = \chi^{(3)}_{TH, B, +;++-} (\omega_\kappa \sim e_0/3)$. The response from the other valley is identical since the $\sigma_\pm$ polarizations switch roles.
Hence, we obtain the tensor elements $$\begin{aligned} &\chi^{(3)}_{TH, \pm;\pm \pm \mp} (\omega_\kappa \sim e_0/3) = \chi^{(3)}_{TH, \pm;\mp \pm \pm} (\omega_\kappa \sim e_0/3) \nonumber \\ &= \chi^{(3)}_{TH, \pm;\pm \mp \pm} (\omega_\kappa \sim e_0/3),\end{aligned}$$ all having the same result as in equation . All the other tensor elements are negligible.

![Numerically evaluated $\chi^{(3)}_{TH,+;+-+} (\omega_\kappa \sim e_0/3)$ based on the higher-order corrected gapped Dirac Hamiltonian. The real value (blue solid), the imaginary value (red solid), and the absolute value (green solid) of the total $\chi^{(3)}$ (sum of resonant and nonresonant terms) are shown. The separate contributions from the nonresonant terms (blue and red dotted) are also shown and are negligibly small.[]{data-label="fig:chi3TH"}](chi3TH){width="45.00000%"}

We evaluated this susceptibility tensor element numerically (see Fig. \[fig:chi3TH\]). Just as for second-harmonic generation, what matters for the third-harmonic generation efficiency is the amplitude $|\chi^{(3)}_{TH}|$, while the phase of $\chi^{(3)}_{TH}$ determines the phase of the third-harmonic output light. The maximum $|\chi^{(3)}_{TH}|$ of the monolayer MoS$_2$ is $1.5 \times 10^{-17}$ m$^2$/V$^2$, which compares favorably to the typical third order susceptibility of nonlinear bulk crystals[@boyd2003nonlinear], $\sim 10^{-24}$ m$^2$/V$^2$. The linear absorption at the third harmonic (dotted black) shows significant absorption near resonance. Hence, for efficient third-harmonic generation, one would operate at a slight red detuning. The figure also shows the contribution from the nonresonant terms (dotted in red and blue). Both the real and the imaginary values from the nonresonant terms are negligible. The reason is as follows: the biggest contribution among the nonresonant terms is from the second term in equation with $p_2 = - 1$. However, the magnitudes of the real parts of its denominator factors, $|e_s - \omega_\kappa|$ and $|e_{s'} - 2 \omega_\kappa|$, are still quite large since $\omega_\kappa \sim e_0/3$. In addition, the third-harmonic generation susceptibility is concentrated near resonance.

### Two-photon process

Next, let us turn to the two-photon transition shown in Fig. \[fig:low-third\] (b). We consider the case where the input light frequency is such that $\omega_\kappa \sim e_0/2$. This process involves two virtual levels, one mediating the upward transition and the other the downward transition, corresponding to ${\left| 0 \right\rangle} \rightarrow {\left| \mathrm{x}_{\nu_1} \right\rangle} \rightarrow {\left| \mathrm{x}_{(0,0)} \right\rangle} \rightarrow {\left| \mathrm{x}_{\nu_2} \right\rangle} \rightarrow {\left| 0 \right\rangle}$. For the $+{\boldsymbol{K}}$ valley, the circularly polarized input light $\sigma_-$ alone can drive the second order (two-step) upward transition, since the virtual levels can be $\nu_1, \nu_2 = (1(2),1)$. The sequential transitions then involve the corresponding dipole moments $g^-_{(1(2),1)}$, $h^-_{(0,0)(1(2),1)}$, $h^{-*}_{(0,0)(1(2),1)}$, $g^{-*}_{(1(2),1)}$, respectively, leaving the output photon in $\sigma_-$ polarization from the $+{\boldsymbol{K}}$ valley. For the $-{\boldsymbol{K}}$ valley, the $\sigma_\pm$ polarizations switch roles, accepting $\sigma_+$ photons and leaving the output in $\sigma_+$.
This sequence of transitions, however, is not the most efficient two-photon pathway: the transition dipole moment for ${\left| 0 \right\rangle} \leftrightarrow {\left| \mathrm{x}_{(1(2),1)} \right\rangle}$ is indeed small (see Table \[tab:gnu\]). When evaluated numerically, the maximum value of $| \chi^{(3)}_{TP} (\omega_\kappa = e_0/2)|$ is only $1.6 \times 10^{-21}$ m$^2$/V$^2$. Rather, involving an intermediate level whose dipole moments to and from the ground state are large is much more efficient. This is accomplished if the upper state is ${\left| \mathrm{x}_{(1,1)} \right\rangle}$, through the circularly polarized input light $\sigma_+$ in the $+{\boldsymbol{K}}$ valley. As before, we ignore the virtual channels involving the unbound exciton states. The following two-photon transition is plausible: ${\left| 0 \right\rangle} \rightarrow {\left| \mathrm{x}_{(s,0)} \right\rangle} \rightarrow {\left| \mathrm{x}_{(1,1)} \right\rangle} \rightarrow {\left| \mathrm{x}_{(s',0)} \right\rangle} \rightarrow {\left| 0 \right\rangle}$ where $s,s' = 0,1,2, \cdots$. These transitions involve the dipole moments $g^+_{(s,0)}, h^+_{(1,1)(s,0)}, h^{+*}_{(1,1)(s',0)}, g^{+*}_{(s',0)}$, respectively, where all the dipole moments are indeed substantial. The values of $h_{(1,1)(n,0)}$ are listed in Table \[tab:hnu1nu2-twophoton\]. We then need to calculate the matrix elements of $\rho^{(3)}$ from the $+{\boldsymbol{K}}$ valley in the basis $\{ {\left| 0 \right\rangle}, {\left| \mathrm{x}_{(s,0)} \right\rangle}, {\left| \mathrm{x}_{(1,1)} \right\rangle} \}$. The only nonzero elements of $\rho^{(3)}$ are (see the derivation in Appendix \[sec:matrix-rho\]): $$\begin{aligned} &\rho^{(3)+++ }_{(s'',0)0} = \sum_{ s} \frac{h^{+*}_{(1,1)(s'',0)} h^+_{(1,1)(s,0)} g^+_{(s,0)} }{\hbar^3} \times \nonumber \\ & \frac{|\varepsilon(\kappa)|^2 \varepsilon(\kappa) \mathrm{e}^{- i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)(e_{1} - 2 \omega_\kappa - i \epsilon')(e_{s''} + \omega_\kappa + i \epsilon'')}.\end{aligned}$$ and $$\begin{aligned} &\rho^{(3)+++ }_{(1,1)(s'',0)} = -\sum_{ s} \frac{g^{+*}_{(s'',0)} h^+_{(1,1)(s,0)} g^+_{(s,0)} }{\hbar^3} \times \nonumber \\ & \frac{|\varepsilon(\kappa)|^2 \varepsilon(\kappa) \mathrm{e}^{- i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)(e_{1} - 2 \omega_\kappa - i \epsilon')(e_{1} - e_{s''} - \omega_\kappa - i \epsilon'')}.\end{aligned}$$ The two-photon induced current is $ {\boldsymbol{J}}^{(3)} = \sum_{\nu_1} e N_e ( \mathbf{v}_{(s'',0)(1,1)} \rho^{(3)}_{(s'',0)(1,1)} + \rho^{(3)}_{(1,1)(s'',0)} \mathbf{v}_{(s'',0)(1,1)} + \mathbf{v}_{(s'',0)0} \rho^{(3)}_{(s'',0)0} + \rho^{(3)}_{0(s'',0)} \mathbf{v}_{(s'',0)0} )$. We then need to resolve the following velocity matrix element: $$\begin{aligned} \mathbf{v}_{\nu_1 (1,1) } &= {\left\langle \mathrm{x}_{\nu_1} \right|} \dot{{\boldsymbol{r}}} {\left| \mathrm{x}_{(1,1)} \right\rangle} = - \frac{i}{\hbar} {\left\langle \mathrm{x}_{\nu_1} \right|} [{\boldsymbol{r}}, \mathcal{H}_0] {\left| \mathrm{x}_{(1,1)} \right\rangle} \nonumber \\ &= - i (e_{1} - e_{\nu_1}) {\left\langle \mathrm{x}_{\nu_1} \right|} {\boldsymbol{r}} {\left| \mathrm{x}_{(1,1)} \right\rangle}.
\label{eq:velement}\end{aligned}$$ Hence, we obtain the component parallel to $\hat{{\boldsymbol{\varepsilon}}}^+$ as ${\boldsymbol{\mathrm{v}}}_{(1,1)\nu_1} = -i (e_{1} - e_{\nu_1}) {\left\langle \mathrm{x}_{\nu_1} \right|} \hat{{\boldsymbol{\varepsilon}}}^- \cdot {\boldsymbol{r}} {\left| \mathrm{x}_{(1,1)} \right\rangle} \hat{{\boldsymbol{\varepsilon}}}^+ = -i (e_{1} - e_{\nu_1}) (h^{+*}_{(1,1)\nu_1}/e) \hat{{\boldsymbol{\varepsilon}}}^+$. We then use ${\boldsymbol{J}}^{(3)}_{TP} = \sigma^{(3)}_{TP} |\mathcal{E} ({\boldsymbol{\kappa}})|^2 \hat{{\boldsymbol{\varepsilon}}}^+ \mathcal{E} ({\boldsymbol{\kappa}}) \text{e}^{- i \omega_\kappa t}$. We also use the fact that the two-photon susceptibility is obtained through $$\begin{aligned} \frac{\partial}{\partial t} \epsilon_0 \chi_{TP}^{(3)} (\omega_\kappa \sim e_\nu) &| \mathcal{E} ({\boldsymbol{\kappa}})|^2 \mathcal{E}({\boldsymbol{\kappa}}) \text{e}^{- i \omega_\kappa t} \nonumber \\ &= \sigma^{(3)}_{TP} | \mathcal{E}({\boldsymbol{\kappa}})|^2 \mathcal{E} ({\boldsymbol{\kappa}}) \text{e}^{- i \omega_\kappa t},\end{aligned}$$ which leads to $\chi^{(3)}_{TP} (\omega_\kappa) = \sigma^{(3)}_{TP}/(- i \epsilon_0 \omega_\kappa)$. From all these, we finally obtain the two-photon susceptibility tensor element $$\begin{aligned} \chi^{(3)}_{TP, B, +;+++} & (\omega_\kappa \sim e_1/2) = \sum_{s,s'} \cfrac{\overline{g^{+*}_{(s',0)}} h^{+*}_{(s',0)(1,1)} h^{+}_{(1,1)(s,0)} \overline{g^+_{(s,0)}}}{\omega_\kappa \epsilon_0 \hbar^3 d_\mathrm{eff}} \times \nonumber \\ & \left( \begin{array}{l} -\sum_{p_1 = \pm 1} \cfrac{(e_1 - e_{s'})}{\left( e_s + p_1( \omega_\kappa + i \gamma_{B}/2) \right)\left( e_{1} + p_1( 2 \omega_\kappa + i \gamma_{B}/2) \right)\left( e_1 - e_{s'} + p_1 ( \omega_\kappa + i \gamma_{B}/2) \right)} \\ + \sum_{p_2 = \pm 1} \cfrac{e_{s'}}{\left( e_s + p_2( \omega_\kappa + i \gamma_{B}/2) \right)\left( e_1 + p_2( 2 \omega_\kappa + i \gamma_{B}/2) \right)\left( e_{s'} - p_2 ( \omega_\kappa + i \gamma_{B}/2) \right)} \end{array} \right). \label{eq:TP-low1} \end{aligned}$$ The above contains four terms: one resonant term with $p_1 = - 1$ from the first sum, and three nonresonant terms $(p_1 = +1, p_2 = \pm 1)$. Here, $\overline{g^+_{\nu}} = g^+_{\nu}/\sqrt{A}$, which does not depend on the sample area $A$ (see Table \[tab:gnu\]). This is the response from the $+{\boldsymbol{K}}$ valley with both the input and output light in $\sigma_+$ polarization. As before, we ignore the virtual channels through the unbound excitons, and hence we obtain the two-photon response $\chi^{(3)}_{TP} = \chi^{(3)}_{TP,B}$. The response from the $-{\boldsymbol{K}}$ valley is identical to this since the $\sigma_\pm$ polarizations switch roles, and both the input and the output from the $-{\boldsymbol{K}}$ valley are in $\sigma_-$ polarization. All the tensor elements other than $\chi^{(3)}_{TP, \pm ; \pm \pm \pm}$ are negligible.

![\[fig:chi3TPcirc\] Calculated $\chi^{(3)}_{TP,+;+++} (\omega_\kappa \sim e_1/2)$ for the $\sigma_+$ input light polarization. We plot the ratio $\mathrm{Re}[\chi^{(3)}_{TP}]/\mathrm{Im}[\chi^{(3)}_{TP}]$ (cyan dotted), as well as the linear absorption (black dotted). Also shown are the contributions from the nonresonant terms only (red and blue dots).](chi3TP){width="50.00000%"}

Fig. \[fig:chi3TPcirc\] shows the calculated values of $\chi^{(3)}_{TP} (\omega_\kappa \sim e_1 / 2)$.
The imaginary part of the two-photon third order susceptibility is related to actual two-photon absorption, i.e., the loss of the incoming light in pairs of photons. The real part of the two-photon third order susceptibility is related to the Kerr nonlinearity, where the refractive index varies in proportion to the incoming light intensity. This is best seen by the relation [@boyd2003nonlinear]: $$\chi_\textrm{eff} = \chi^{(1)} + 3 \chi^{(3)}_{TP} | \mathcal{E} (\omega_\kappa)|^2.$$ The negative sign in equation is physically significant, since it produces a positive imaginary part for $\chi^{(3)}_{TP}$, implying real two-photon absorption. The maximum of the real part of $\chi^{(3)}_{TP} (\omega_\kappa \sim e_{1}/2)$ is $8.5 \times 10^{-19}$ m$^2$/V$^2$ around $e_1/2$ of the A excitons. This value is approximately six orders of magnitude larger than that of typical bulk materials. The figure shows the influence of the same transition for the B excitons (spin-down electrons) on the blue side. Additionally, it also shows the linear absorption, which comes from the off-resonant contribution of the nearest exciton state ${\left| \mathrm{x}_{(0,0)} \right\rangle}$. The linear absorption is only of order $\sim 10^{-5}$, which is sufficiently small. The optical Kerr effect is a valuable resource for coherent optical switching. Hence, avoiding the incoherent two-photon absorption is important. We plotted the figure of merit $|\mathrm{Re}[\chi^{(3)}_{TP}]/\mathrm{Im}[\chi^{(3)}_{TP}]|$ in Fig. \[fig:chi3TPcirc\]. Let us compare the two-photon process results of the monolayer MoS$_2$ with those of graphene [@soh2016comprehensive]. Graphene exhibits $\chi^{(3)}_{TP} (e_1/2) \sim 4.8 \times 10^{-17}$ m$^2$/V$^2$, which is larger than that of the monolayer MoS$_2$. The ratio $|\mathrm{Re}[\chi^{(3)}_{TP}]/\mathrm{Im}[\chi^{(3)}_{TP}]|$ of graphene at the same frequency, however, is only 0.06, whereas the monolayer MoS$_2$ has quite a favorable ratio, much larger than unity over broad ranges in certain frequency regions. This is because the MoS$_2$ exciton responses are narrow-band resonances, whereas the graphene response consists of broadband interband transitions. In addition, graphene also suffers from a broadband linear absorption of 2.3% for the pump photon[@soh2016comprehensive], while such linear absorption is completely absent in the monolayer MoS$_2$ thanks to the band gap. This makes the monolayer MoS$_2$ a superior material for the coherent Kerr optical nonlinearity. It is noteworthy that the contribution from the nonresonant terms to the two-photon third-order susceptibility is much larger here than in the other cases (dotted lines in Fig. \[fig:chi3TPcirc\]). The reason is as follows: the biggest contribution comes from the $p_2 = - 1$ term in the second sum of equation . The magnitudes of the real parts of its denominator factors, $|e_s - \omega_\kappa|$ and $|e_1 - 2 \omega_\kappa|$, are relatively small since $\omega_\kappa \sim e_1/2$. Hence, the contribution from the nonresonant terms to the two-photon susceptibility is significantly larger than in the other cases. Nevertheless, it is fair to say that the major contribution still comes from the resonant term.

Conclusion and discussions
==========================

We calculated the linear and nonlinear optical susceptibilities of excitonic states in monolayer MoS$_2$, based on the second-order corrected Dirac Hamiltonian around $\pm {\boldsymbol{K}}$ points in the first Brillouin zone.
We derived and utilized the second quantized bound and unbound exciton operators and efficiently calculated the perturbative solutions of the density matrix. These were connected to the induced current, the optical conductivity, and eventually the optical susceptibilities, order by order. We showed that the simple higher-order corrected gapped Dirac Hamiltonian produces linear and second-order susceptibilities that reasonably match experimental results; an alternative route would be a detailed, computationally heavy DFT-based calculation. The reasonable agreement of our theoretical results with experimental data may be somewhat surprising, considering that we have approximated the physical system as completely two dimensional, whereas the detailed atomic positions are in fact three dimensional, and hence the detailed electron density distribution might have played an important role. However, the exciton is a collective excitation spanning the entire sample area, and atomic details may be blurred over the large exciton size (several times the unit cell). It is thus plausible to consider our physical system as being approximately circularly symmetric, and the angular-momentum-based optical selection rules of our bound exciton solution play a vital role. We emphasize that such an averaging effect is indeed characteristic of large Wannier excitons. The second-harmonic process of the exciton states in the monolayer MoS$_2$, on the other hand, is expected to be small, since the exciton states are approximately centrosymmetric; only a minor centrosymmetry-breaking feature is provided by the weak threefold rotational symmetry, which connects the Fermi sea and a couple of the higher order excitons. We also note that we quantitatively resolved the previously known opposite chirality rule for second-harmonic generation in monolayer TMD materials by directly calculating the dipole moments and the susceptibilities.

The obtained third-order nonlinear optical susceptibility of monolayer MoS$_2$ merits further investigation for potential photonics applications. The excitonic states of this material are promising for device designs utilizing coherent nonlinear optical processes, such as coherent Kerr-type optical operation in an extremely small, strong cavity [@mabuchi2012qubit], since one can avoid incoherent linear loss while a strong optical response is provided by collecting the broadband response of the bands into a narrow-band exciton resonance. It is worth mentioning that, while the center frequency of the lowest exciton state in our result is based on the empirically measured binding energy, those of the higher exciton states may need to be adjusted slightly according to either the more accurate Keldysh-type exciton binding energies or actual experimental results, although the difference is small, as mentioned above.

Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. This work was supported by the National Science Foundation under award PHY-1648807 and by a seed grant from the Precourt Institute for Energy at Stanford University.
Electronic band structure of MoS$_2$ {#sec:band-structure} ==================================== For the band structure of the monolayer MoS$_2$, we assume a gapped Dirac cone model that was adopted in many of the theoretical works of the TMD material calculations [@xiao2012coupled; @kormanyos2013monolayer; @ridolfi2015tight; @fang2015ab; @rasmussen2015computational; @kormanyos2015k; @selig2016excitonic; @wang2015fast; @wang2016radiative]. This approach assumes the tight binding approximation, where the Bloch wave is $\psi_{{\boldsymbol{k}}, \lambda} ({\boldsymbol{r}}) = \textrm{e}^{i {\boldsymbol{k}} \cdot {\boldsymbol{r}}} u_{{\boldsymbol{k}}, \lambda} ({\boldsymbol{r}})$ with the band index $\lambda = c,v$ for the conduction and the valence bands, respectively. Here, ${\boldsymbol{k}}$ is a Bloch wave vector, and the Bloch function is represented as $u_{{\boldsymbol{k}}, \lambda} ({\boldsymbol{r}}) = (1/\sqrt{N}) \sum_{m} \text{e}^{i {\boldsymbol{k}} \cdot ({\boldsymbol{R}}_m - {\boldsymbol{r}})} \phi_{{\boldsymbol{k}}, \lambda} ({\boldsymbol{r}} - {\boldsymbol{R}}_m)$, where $N$ is the total number of atoms in the sample, ${\boldsymbol{R}}_m$ is the lattice site position, and $\phi_\lambda({\boldsymbol{r}})$ is the atomic orbital. At the $\pm {\boldsymbol{K}}$ points, it is conventional [@xiao2012coupled; @kormanyos2013monolayer; @ridolfi2015tight; @fang2015ab; @rasmussen2015computational; @kormanyos2015k; @selig2016excitonic; @wang2015fast; @wang2016radiative] to approximate $\phi_{\tau {\boldsymbol{K}}, c} ({\boldsymbol{r}}) = {\left\langle {\boldsymbol{r}} | d_{z^2} \right\rangle}$ and $\phi_{\tau {\boldsymbol{K}}, v} ({\boldsymbol{r}}) = (1/\sqrt{2}) \left( {\left\langle {\boldsymbol{r}} | d_{x^2 - y^2} \right\rangle} + i \tau {\left\langle {\boldsymbol{r}} | d_{xy} \right\rangle} \right)$ where ${\left| d_{z^2} \right\rangle}$, ${\left| d_{x^2 - y^2} \right\rangle}$, ${\left| d_{xy} \right\rangle}$ are the 4d shell atomic orbitals of the Mo atom. Here, $\tau = \pm 1$ is the valley index corresponding to $\pm {\boldsymbol{K}}$ points, respectively. In fact, the conduction and the valence bands at $\pm {\boldsymbol{K}}$ points consist of both the $d$ orbitals of Mo atoms and the $p$ orbitals of S atoms. The relative contributions of Mo atom $d$ orbitals are 92% in the conduction band and 84% in the valence band [@ridolfi2015tight]. Let us consider the Bloch waves around either of $\pm {\boldsymbol{K}}$ points. Adopting the basis $\{ {\left| u_{{\boldsymbol{0}}, c} \right\rangle}, {\left| u_{{\boldsymbol{0}}, v} \right\rangle} \}$, and considering only the subspace of either up or down electron spin, the Hamiltonian is given as [@xiao2012coupled] $$\begin{aligned} H_0 = \left( \begin{array}{cc} \Delta/2 & \hbar v (\tau q_x - i q_y) \\ \hbar v (\tau q_x + i q_y) & - \Delta/2 \end{array} \right),\end{aligned}$$ where $\Delta = E_g \pm \tau E_\textrm{soc}/2$ for up or down spin subspace, respectively, with the energy band gap $E_g$ and the spin-orbit coupling energy $E_\textrm{soc}$. Here ${\boldsymbol{q}} = (q_x, q_y) = {\boldsymbol{k}} - \tau {\boldsymbol{K}}$. This is a Hamiltonian for a gapped Dirac cone. The values we use are the results of the detailed DFT calculations [@kormanyos2013monolayer; @ridolfi2015tight], namely, $\hbar v = 3.82$ eV Å$~$ ($v = 5.8 \times 10^5$ m/s), $E_{g} = 2.23$ eV (DFT-HSE06) (and experimentally measured[@zhang2014direct] as 2.15 eV), and $E_{\text{soc}} = 146$ meV. 
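The following is a minimal numerical sketch, not part of the original text, that constructs this $2\times 2$ gapped Dirac Hamiltonian with the parameters quoted above and checks the band gaps of the two spin subspaces at the $+{\boldsymbol{K}}$ point; the labeling of the subspaces via $\Delta = E_g + s\,\tau E_\textrm{soc}/2$ with $s = \pm 1$ simply encodes the $\pm$ convention stated above.

```python
import numpy as np

hbar_v = 3.82     # eV * Angstrom (quoted above)
E_g    = 2.23     # eV
E_soc  = 0.146    # eV

def H0(qx, qy, tau=+1, s=+1):
    """Gapped Dirac Hamiltonian in the {|u_0,c>, |u_0,v>} basis (q in 1/Angstrom).
    s = +1/-1 labels the two spin subspaces via Delta = E_g + s*tau*E_soc/2."""
    Delta = E_g + s * tau * E_soc / 2
    off = hbar_v * (tau * qx - 1j * qy)
    return np.array([[Delta / 2, off], [np.conj(off), -Delta / 2]])

gaps = []
for s in (+1, -1):
    E = np.linalg.eigvalsh(H0(0.0, 0.0, tau=+1, s=s))   # eigenvalues +/- Delta/2
    gaps.append(E[1] - E[0])
    print(f"spin subspace s={s:+d}: gap at +K = {E[1] - E[0]:.3f} eV")
print(f"spin splitting of the gap = {abs(gaps[0] - gaps[1]):.3f} eV (= E_soc)")
```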
If we expand the solution up to the second order with respect to ${\boldsymbol{q}}$, we obtain an analytical formula for the *uncorrected* band Hamiltonian $H_0$: $$\begin{aligned} E_c (q) &= \frac{\Delta}{2} + \frac{\hbar^2 v^2 q^2}{\Delta}, \quad E_v (q) = -\left(\frac{\Delta}{2} + \frac{\hbar^2 v^2 q^2}{\Delta} \right), \nonumber \\ {\left| u_{{\boldsymbol{q}}, c} \right\rangle} &= \left( 1 - \frac{\hbar^2 v^2 q^2}{\Delta^2} \right) {\left| u_{{\boldsymbol{0}},c} \right\rangle} + \frac{\hbar v q \tau}{\Delta} \text{e}^{i \tau \phi_q} {\left| u_{{\boldsymbol{0}},v} \right\rangle}, \nonumber \\ {\left| u_{{\boldsymbol{q}}, v} \right\rangle} &= - \frac{\hbar v q \tau}{\Delta} \text{e}^{- i \tau \phi_q} {\left| u_{{\boldsymbol{0}}, c} \right\rangle} + \left( 1 - \frac{\hbar^2 v^2 q^2}{\Delta^2} \right) {\left| u_{{\boldsymbol{0}}, v} \right\rangle}, \label{eq:uqv}\end{aligned}$$ where $\phi_q = \arccos(q_x/q)$. The Dirac cone approximation inevitably produces the same effective mass for the conduction band electron and the valence band hole. For a more accurate calculation, one may adopt the higher order correction[@wang2015fast; @wang2016radiative] such that $H = H_0 + H_\text{C}$ with $$\begin{aligned} &H_{C} = \nonumber \\ &\left( \begin{array}{cc} \alpha q^2 & \kappa q^2 \mathrm{e}^{2 i \tau \phi_q} - \frac{\eta}{2} q^3 \mathrm{e}^{-i \tau\phi_q} \\*[3mm] \kappa q^2 \mathrm{e}^{-2i \tau \phi_q} - \frac{\eta}{2} q^3 \mathrm{e}^{i \tau \phi_q} & \beta q^2 \end{array} \right), \label{eq:HOC}\end{aligned}$$ where the numerical values of the parameters, based on the DFT calculations, are $\alpha = 1.72$ eV Å$^2$, $\beta = - 0.13$ eV Å$^2$, $\kappa = - 1.02$ eV Å$^2$, and $\eta = 8.52$ eV Å$^3$. The energy eigenvalues of the band Hamiltonian $H = H_0 + H_C$ are analytically solved as follows: $$\begin{aligned} &E_\lambda = \nonumber \\ &\frac{1}{2} (\alpha + \beta) q^2 +\lambda \frac{1}{2} \sqrt{ \begin{array}{l} 4 \hbar^2 v^2 q^2 + 2 q^2 (\alpha - \beta) \Delta \\ + \Delta^2 - 4 \hbar v q^4 \eta + q^6 \eta^2 \\ + q^4 ((\alpha-\beta)^2 + 4 \kappa^2) \\ + 4 q^3 (2 \hbar v - q^2 \eta) \kappa \cos(3 \phi_q) \end{array}}, \label{eq:threefold-rotational-sym}\end{aligned}$$ where $\lambda = \pm 1$ for the conduction and the valence band, respectively. The higher order correction not only produces different effective masses for the conduction and the valence bands, but also gives rise to the well-known threefold rotational symmetry through the dependence on $\cos(3 \phi_q)$. This threefold rotational symmetry of the energy dispersion is common in hexagonal 2D materials.

![\[fig:effective\_mass\] Band structure of monolayer MoS$_2$ near $\pm {\boldsymbol{K}}$ points (i.e., ${\boldsymbol{q}} = {\boldsymbol{0}})$. Only the lowest conduction and highest valence bands are shown. The conduction band (red solid) and the valence band (blue solid) are fitted with quadratic curves (dotted lines) for extracting the effective masses of each band. Also shown are the exciton wavefunctions for states ${\left| \mathrm{x}_{(0,0)} \right\rangle}$ (cyan) and ${\left| \mathrm{x}_{(1,0)} \right\rangle}$ (orange).](effective_mass){width="45.00000%"}

It is noteworthy that one extracts the effective masses for the conduction and the valence band from the energy dispersion (equation ), and uses them to solve the Wannier exciton equation in equation . Although the actual dispersion is not completely parabolic, one often approximates the band dispersion quadratically.
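The following is a minimal numerical sketch, not the full calculation of the paper, that evaluates the corrected dispersion of equation (threefold-rotational-sym) with the parameters quoted above and extracts effective masses from a quadratic fit near the valley bottom (the fit range is an arbitrary illustrative choice); it also exposes the $\cos(3\phi_q)$ warping at larger $q$.

```python
import numpy as np

hbar_v, Delta = 3.82, 2.23          # eV*A and eV (spin-orbit splitting ignored here)
alpha, beta, kappa, eta = 1.72, -0.13, -1.02, 8.52   # eV*A^2, eV*A^2, eV*A^2, eV*A^3
hbar2_over_2me = 3.81               # eV*A^2, i.e. hbar^2/(2 m_e)

def E_band(q, phi, lam):
    """Corrected dispersion of equation (threefold-rotational-sym); q in 1/A."""
    root = (4 * hbar_v**2 * q**2 + 2 * q**2 * (alpha - beta) * Delta + Delta**2
            - 4 * hbar_v * q**4 * eta + q**6 * eta**2
            + q**4 * ((alpha - beta)**2 + 4 * kappa**2)
            + 4 * q**3 * (2 * hbar_v - q**2 * eta) * kappa * np.cos(3 * phi))
    return 0.5 * (alpha + beta) * q**2 + lam * 0.5 * np.sqrt(root)

q = np.linspace(0.0, 0.05, 51)      # small q near the valley bottom (illustrative range)
for lam, name in ((+1, "conduction"), (-1, "valence")):
    coeff = np.polyfit(q**2, E_band(q, 0.0, lam), 1)[0]   # E ~ E0 + coeff * q^2
    m_eff = hbar2_over_2me / abs(coeff)                   # effective mass in units of m_e
    print(f"{name} band effective mass ~ {m_eff:.2f} m_e")

# Threefold warping: energy difference between phi = 0 and phi = pi/3 at finite q.
q0 = 0.3
print("warping at q = 0.3 1/A:", E_band(q0, 0.0, +1) - E_band(q0, np.pi / 3, +1), "eV")
```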
This approximation is particularly valid for the exciton, for which the superposition weight $\psi_{\nu} ({\boldsymbol{q}})$ in equation is heavily concentrated near the valley bottoms. Fig. \[fig:effective\_mass\] shows the quadratic fits to the conduction and valence bands. Also shown are the exciton wavefunctions, which serve as the weights used to construct an exciton state. We note that the higher-lying states have wavefunctions that are more concentrated near the valley bottom. The quadratic fit remains reasonably good over the extent of the exciton wavefunctions, even for the lowest exciton level. This concretely shows that the effective mass approach is valid for the monolayer MoS$_2$ excitons, which is also consistent with literature [@selig2016excitonic; @cheiwchanchamnangij2012quasiparticle; @berkelbach2013theory; @ramasubramaniam2012large; @qiu2013optical].

Exciton creation operator {#sec:exciton-creation}
=========================

We derive the creation operators for both the bound and the unbound exciton states in terms of the band state basis. We first consider the bound exciton states, starting with the definition $B^\dag_{\nu, {\boldsymbol{Q}}} = {\left| \nu {\boldsymbol{Q}} \right\rangle} {\left\langle 0 \right|}$. The exciton state ${\left| \nu {\boldsymbol{Q}} \right\rangle}$ is a two-particle state containing an electron-hole pair. Let us recall that the band pair state is given by ${\left| {\boldsymbol{q}}, -{\boldsymbol{q}}' \right\rangle} = \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{-{\boldsymbol{q}}'} {\left| 0 \right\rangle}$. This is a composite state of an electron Bloch state in the conduction band and a hole Bloch state in the valence band, having the momenta $\hbar {\boldsymbol{q}}$ and $-\hbar {\boldsymbol{q}}'$, respectively. Any such electron-hole pair state lives in a Hilbert subspace that is spanned by the basis $\{ {\left| {\boldsymbol{q}}, -{\boldsymbol{q}}' \right\rangle} \}$. In this subspace, the completeness relation is $$\sum_{{\boldsymbol{q}}, {\boldsymbol{q}}'} {\left| {\boldsymbol{q}}, -{\boldsymbol{q}}' \right\rangle} {\left\langle {\boldsymbol{q}}, -{\boldsymbol{q}}' \right|} = \mathbf{1}.$$ Then, we obtain $$\begin{aligned} B^\dag_{\nu{\boldsymbol{Q}}} &= \sum_{{\boldsymbol{q}}, {\boldsymbol{q}}'} {\left| {\boldsymbol{q}}, -{\boldsymbol{q}}' \right\rangle} {\left\langle {\boldsymbol{q}}, -{\boldsymbol{q}}' | \nu {\boldsymbol{Q}} \right\rangle} {\left\langle 0 \right|} \nonumber \\ &= \sum_{{\boldsymbol{q}}, {\boldsymbol{q}}'} {\left\langle {\boldsymbol{q}}, -{\boldsymbol{q}}' | \nu {\boldsymbol{Q}} \right\rangle} \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{-{\boldsymbol{q}}'}. \end{aligned}$$ Following the treatment of Haug *et al.* [@haug2009quantum], we then approximate the band Bloch states by free particle states such that ${\left\langle {\boldsymbol{r}}, {\boldsymbol{r}}' | {\boldsymbol{q}}, -{\boldsymbol{q}}' \right\rangle} \approx (1/A) \text{e}^{i {\boldsymbol{q}} \cdot {\boldsymbol{r}} + i {\boldsymbol{q}}' \cdot {\boldsymbol{r}}'}$.
Then, we calculate the following using the completeness $\int {\text{d}}^2 r {\text{d}}^2 r' {\left| {\boldsymbol{r}}, {\boldsymbol{r}}' \rangle \langle {\boldsymbol{r}}, {\boldsymbol{r}}' \right|} = \mathbf{1}$: $$\begin{aligned} &{\left\langle {\boldsymbol{q}}, -{\boldsymbol{q}}' | \nu {\boldsymbol{Q}} \right\rangle} = \int_A {\text{d}}^2 r {\text{d}}^2 r' {\left\langle {\boldsymbol{q}}, -{\boldsymbol{q}}' | {\boldsymbol{r}}, {\boldsymbol{r}}' \right\rangle} {\left\langle {\boldsymbol{r}}, {\boldsymbol{r}}' | \nu {\boldsymbol{Q}} \right\rangle} \nonumber \\ &= \int {\text{d}}^2 r {\text{d}}^2 r' \frac{1}{A} \text{e}^{-i {\boldsymbol{q}} \cdot {\boldsymbol{r}}} \text{e}^{i {\boldsymbol{q}}' \cdot {\boldsymbol{r}}'} \psi_{\nu} ( {\boldsymbol{r}} - {\boldsymbol{r}}') \frac{1}{\sqrt{A}}\text{e}^{i {\boldsymbol{Q}} \cdot \frac{{\boldsymbol{r}} + {\boldsymbol{r}}'}{2}},\end{aligned}$$ where $\psi_{\nu} ({\boldsymbol{r}}'')$ is the solution to the exciton Schrödinger equation in equation . Then, we Fourier-transform $\psi_\nu ({\boldsymbol{r}}'')$ to obtain $$\begin{aligned} &{\left\langle {\boldsymbol{q}}, -{\boldsymbol{q}}' | \nu {\boldsymbol{Q}} \right\rangle} \nonumber \\ &= \frac{1}{A^2}\sum_{{\boldsymbol{q}}''} \int {\text{d}}^2 r {\text{d}}^2 r' \exp \left[ i \left( {\boldsymbol{Q}} \cdot \frac{{\boldsymbol{r}} + {\boldsymbol{r}}'}{2} - {\boldsymbol{q}} \cdot {\boldsymbol{r}} + {\boldsymbol{q}}' \cdot {\boldsymbol{r}}' + {\boldsymbol{q}}'' \cdot ({\boldsymbol{r}} - {\boldsymbol{r}}')\right) \right] \psi_{ \nu } ({\boldsymbol{q}}'') \nonumber \\ &= \frac{1}{A^2}\sum_{{\boldsymbol{q}}''}\int {\text{d}}^2 r {\text{d}}^2 r' \exp \left[ i \left( {\boldsymbol{r}} \cdot \left( \frac{{\boldsymbol{Q}}}{2} - {\boldsymbol{q}} + {\boldsymbol{q}}''\right) + {\boldsymbol{r}}' \cdot \left( \frac{{\boldsymbol{Q}}}{2} + {\boldsymbol{q}}' - {\boldsymbol{q}}''\right) \right) \right] \psi_{ \nu } ({\boldsymbol{q}}'') \nonumber \\ &= \sum_{{\boldsymbol{q}}''} \psi_\nu ({\boldsymbol{q}}'') \delta_{\frac{{\boldsymbol{Q}}}{2}, {\boldsymbol{q}} - {\boldsymbol{q}}''} \delta_{\frac{{\boldsymbol{Q}}}{2}, {\boldsymbol{q}}'' - {\boldsymbol{q}}'},\end{aligned}$$ where we used $(1/A)\int {\text{d}}^2 r \text{e}^{i ({\boldsymbol{q}} - {\boldsymbol{q}}') \cdot {\boldsymbol{r}}} = \delta_{{\boldsymbol{q}}, {\boldsymbol{q}}'}$. This leads to $$\begin{aligned} {\left\langle {\boldsymbol{q}}, -{\boldsymbol{q}}' | \nu {\boldsymbol{Q}} \right\rangle}= \delta_{{\boldsymbol{Q}}, ({\boldsymbol{q}} - {\boldsymbol{q}}')} \psi_{\nu} \left( \frac{{\boldsymbol{q}} + {\boldsymbol{q}}'}{2}\right).\end{aligned}$$ Hence, we finally obtain $$\begin{aligned} B^\dag_{\nu {\boldsymbol{Q}}} &= \sum_{{\boldsymbol{q}}, {\boldsymbol{q}}'}\delta_{{\boldsymbol{Q}}, ({\boldsymbol{q}} - {\boldsymbol{q}}')} \psi_{\nu} \left( \frac{{\boldsymbol{q}} + {\boldsymbol{q}}'}{2}\right) \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{-{\boldsymbol{q}}'} \nonumber \\ &= \sum_{{\boldsymbol{q}}} \psi_{\nu} \left( {\boldsymbol{q}} - \frac{{\boldsymbol{Q}}}{2}\right) \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{{\boldsymbol{Q}} - {\boldsymbol{q}}}. \label{eq:A-3}\end{aligned}$$ We also mention that this result matches other references [@wang2016radiative; @wang2015fast]. Approximating ${\boldsymbol{Q}} \approx {\boldsymbol{0}}$, the exciton creation operator is $B^\dag_{\nu} \equiv B^\dag_{\nu {\boldsymbol{0}}}$. Next, we consider the unbound exciton states. 
In the same line of thought as for the bound exciton, we seek the creation operator for the unbound exciton as a linear combination in the band basis: $$C^\dag_{\tilde{{\boldsymbol{q}}}} = \sum_{{\boldsymbol{q}}} \phi_{\tilde{{\boldsymbol{q}}}} ({\boldsymbol{q}}) \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{-{\boldsymbol{q}}},$$ where $\tilde{{\boldsymbol{q}}}$ is the canonical conjugate momentum to the relative coordinate ${\boldsymbol{r}} = {\boldsymbol{r}}_e - {\boldsymbol{r}}_h$. Here, $\phi_{\tilde{{\boldsymbol{q}}}}({\boldsymbol{q}})$ is the weight to be determined. We require two conditions: orthogonality with the bound states, ${\left\langle C_{\tilde{{\boldsymbol{q}}}} | \mathrm{x}_\nu \right\rangle} = 0$ for all $\tilde{{\boldsymbol{q}}}$ and $\nu$, where ${\left| C_{\tilde{{\boldsymbol{q}}}} \right\rangle} = C^\dag_{\tilde{{\boldsymbol{q}}}} {\left| 0 \right\rangle}$ and ${\left| \mathrm{x}_\nu \right\rangle} = B^\dag_{\nu} {\left| 0 \right\rangle}$, and normalization, ${\left\langle C_{\tilde{{\boldsymbol{q}}}} | C_{{\boldsymbol{\tilde{q}}}'} \right\rangle} = \delta(\tilde{{\boldsymbol{q}}} - {\boldsymbol{\tilde{q}}}')$, where $\tilde{{\boldsymbol{q}}},{\boldsymbol{\tilde{q}}}'$ are continuous variables because ${\left| C_{\tilde{{\boldsymbol{q}}}} \right\rangle}$ is an unbound state. The energy eigenvalue of this unbound exciton state must be $$E_{\tilde{{\boldsymbol{q}}}} = E_g + \frac{\hbar^2 \tilde{q}^2}{2 m_r}.$$ We note that this is quite similar to the energy of a band pair state of an electron and a hole: $E_{{\boldsymbol{q}}} = E_g + \frac{\hbar^2 q^2}{2 m_r}$. Although $\tilde{{\boldsymbol{q}}}$ is not directly related to the crystal momentum ${\boldsymbol{q}}$, we suggest the replacement $\tilde{{\boldsymbol{q}}} \rightarrow {\boldsymbol{q}}$ and $\phi_{\tilde{{\boldsymbol{q}}}} ({\boldsymbol{q}}) = \delta_{\tilde{{\boldsymbol{q}}}, {\boldsymbol{q}}}$ such that $$C^\dag_{{\boldsymbol{q}}} = \alpha^\dag_{{\boldsymbol{q}}} \beta^\dag_{-{\boldsymbol{q}}}. \label{eq:unbound-creation}$$ We then propose to approximate the unbound exciton state ${\left| C_{{\boldsymbol{q}}} \right\rangle}$ with the band pair state ${\left| {\boldsymbol{q}}, -{\boldsymbol{q}} \right\rangle}$ such that ${\left| C_{{\boldsymbol{q}}} \right\rangle} \approx {\left| {\boldsymbol{q}}, -{\boldsymbol{q}} \right\rangle}$. The orthogonality with the bound states is then $${\left\langle C_{{\boldsymbol{q}}} | \mathrm{x}_\nu \right\rangle} = \sum_{{\boldsymbol{q}}'} \psi_\nu ({\boldsymbol{q}}') {\left\langle 0 \right|} \alpha_{{\boldsymbol{q}}} \beta_{{\boldsymbol{q}}} \alpha^\dag_{{\boldsymbol{q}}'} \beta^\dag_{{\boldsymbol{q}}'} {\left| 0 \right\rangle} = \psi_\nu ({\boldsymbol{q}}),$$ where we used the usual anticommutation rules for $\alpha_{{\boldsymbol{q}}}$ and $\beta_{{\boldsymbol{q}}}$, and the notation ${\left| C_{{\boldsymbol{q}}} \right\rangle} = C^\dag_{{\boldsymbol{q}}} {\left| 0 \right\rangle}$, ${\left| \mathrm{x}_\nu \right\rangle} = B^\dag_{\nu} {\left| 0 \right\rangle}$. We note that $\psi_\nu ({\boldsymbol{q}}) \sim a_0/\sqrt{A} \sim 1/\sqrt{N}$, where $N$ is the number of unit cells in the sample. Hence, for a sufficiently large sample, we obtain the approximate orthogonality ${\left\langle C_{{\boldsymbol{q}}} | \mathrm{x}_\nu \right\rangle} \sim 1 / \sqrt{N} \rightarrow 0$.
The normalization is also easily obtained as $${\left\langle C_{{\boldsymbol{q}}} | C_{{\boldsymbol{q}}'} \right\rangle} = {\left\langle 0 \right|} \alpha_{{\boldsymbol{q}}} \beta_{{\boldsymbol{q}}} \alpha^\dag_{{\boldsymbol{q}}'} \beta^\dag_{{\boldsymbol{q}}'} {\left| 0 \right\rangle} = \delta_{{\boldsymbol{q}}, {\boldsymbol{q}}'}.$$ In addition, the energy is the same with the replacement $\tilde{{\boldsymbol{q}}} \rightarrow {\boldsymbol{q}}$. Hence, we conclude that, for a sufficiently large sample size $A$, the creation operator in equation is approximately correct. The operator $C^\dag_{{\boldsymbol{q}}}$ excites an electron from the valence band to the conduction band. Hence, we can interpret it as $C^\dag_{{\boldsymbol{q}}} = \left(\otimes_{{\boldsymbol{q}}' \neq {\boldsymbol{q}}} I_{{\boldsymbol{q}}'} \right) \otimes {\left| c_{{\boldsymbol{q}}} \rangle \langle v_{{\boldsymbol{q}}} \right|}$, where ${\left| c_{{\boldsymbol{q}}} \right\rangle}, {\left| v_{{\boldsymbol{q}}} \right\rangle}$ are the single electron Bloch states in the conduction and the valence band, respectively, with momentum $\hbar {\boldsymbol{q}}$, and $I_{{\boldsymbol{q}}'} = {\left| c_{{\boldsymbol{q}}'} \rangle \langle c_{{\boldsymbol{q}}'} \right|} + {\left| v_{{\boldsymbol{q}}'} \rangle \langle v_{{\boldsymbol{q}}'} \right|}$. Using this representation, the anti-commutation rules for the bound and the unbound exciton creation and annihilation operators are easily obtained: $\{C_{{\boldsymbol{q}}}, C_{{\boldsymbol{q}}'}^\dag \} = \delta_{{\boldsymbol{q}}, {\boldsymbol{q}}'}$, $\{B_{\nu}, B_{\nu'}^\dag \} = \delta_{\nu, \nu'}$, $\{ C^\dag_{{\boldsymbol{q}}}, B_\nu \} \sim 1/ \sqrt{N} \rightarrow 0$, while all other anti-commutators are zero. It is also noteworthy that the Hilbert subspace for a single excitation is spanned by the bound and the unbound exciton states $\{ {\left| \mathrm{x}_\nu \right\rangle}, {\left| C_{{\boldsymbol{q}}} \right\rangle} \}$ with all possible $\nu$ and ${\boldsymbol{q}}$, and thus the completeness relation in this single-excitation subspace is $$\sum_{\nu} {\left| \mathrm{x}_\nu \rangle \langle \mathrm{x}_\nu \right|} + \sum_{{\boldsymbol{q}}} {\left| C_{{\boldsymbol{q}}} \rangle \langle C_{{\boldsymbol{q}}} \right|} = {\boldsymbol{1}}.$$

Density operator matrix elements {#sec:matrix-rho}
================================

In this section, we present derivations of the density matrix elements that are used in the main text.

Second-harmonic generation
--------------------------

We resolve the matrix elements $\rho^{(2)}_{(s,1)0}, \rho^{(2)}_{(0,0)(s,1)},$ and $\rho^{(2)}_{(0,0)0}$ with $s = 1$ or $2$, for the polarization configuration $\sigma^+;\sigma^-\sigma^-$, corresponding to the frequencies $2 \omega_\kappa$, $\omega_\kappa$, $\omega_\kappa$, respectively. Recall that the interaction Hamiltonian is given as $\mathcal{H}_I + \mathcal{H}'_I$, where $\mathcal{H}_I$ is given in equation with an interaction coefficient $g^-_{(s,1)}$ and $\mathcal{H}'_I$ is given in equation , respectively.
We first resolve $\rho^{(2)}_{(s,1)0}$ by considering $$\begin{aligned} &\dot{\rho}^{(2)}_{(s,1) 0} (t) = \nonumber \\ &- i e_{(s,1)} \rho^{(2)}_{(s,1) 0} (t) - \frac{i}{\hbar} {\left\langle \text{x}_{(s,1)} \right|}[\mathcal{H}_I + \mathcal{H}_I', \rho^{(1)}] {\left| 0 \right\rangle}.\end{aligned}$$ with the useful fact that the only nonzero matrix elements in $\rho^{(1)}$ in this situation are $$\begin{aligned} \rho^{(1)}_{(s,1)0} &= {\left\langle \mathrm{x}_{(s,1)} \right|} \rho^{(1)} {\left| 0 \right\rangle} =\frac{g^-_{(s,1)}}{\hbar} \frac{\varepsilon (\kappa) \mathrm{e}^{-i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)} \label{eq:rho1s1}\end{aligned}$$ and its complex conjugate, where $s = 1,2$. Using this, we insert the completeness relation and obtain $$\begin{aligned} {\left\langle \text{x}_{(s,1)} \right|} [\mathcal{H}_I &+ \mathcal{H}'_I, \rho^{(1)}] {\left| 0 \right\rangle} = \nonumber \\ & {\left\langle \mathrm{x}_{(s,1)} \right|} \mathcal{H}_I + \mathcal{H}'_I {\left| \mathrm{x}_{(s,1)} \rangle \langle \mathrm{x}_{(s,1)} \right|} \rho^{(1)} {\left| 0 \right\rangle} \nonumber \\ &- {\left\langle \mathrm{x}_{(s,1)} \right|} \rho^{(1)} {\left| 0 \rangle \langle 0 \right|} \mathcal{H}_I + \mathcal{H}'_I{\left| 0 \right\rangle}.\end{aligned}$$ Both terms are zero since the diagonal matrix elements for $\mathcal{H}_I$ and $\mathcal{H}'_I$ are zero. This implies that $\rho^{(2)}_{(s,1)0} (t) = 0$ since $\rho^{(2)}_{(s,1)0} (-\infty) = 0$. Next, let us consider the matrix element $\rho^{(2)}_{(0,0)(s,1)}$. To calculate this, we evaluate the following commutator, using again that the only nonzero matrix elements of $\rho^{(1)}$ are $\rho^{(1)}_{(s,1)0}$ and its conjugate: $$\begin{aligned} {\left\langle \text{x}_{(0,0)} \right|} [\mathcal{H}_I &+ \mathcal{H}'_I, \rho^{(1)}] {\left| \mathrm{x}_{(s,1)} \right\rangle} = \nonumber \\ & {\left\langle \mathrm{x}_{(0,0)} \right|} \mathcal{H}_I + \mathcal{H}'_I {\left| 0 \rangle \langle 0 \right|} \rho^{(1)} {\left| \mathrm{x}_{(s,1)} \right\rangle} \nonumber \\ &- {\left\langle \mathrm{x}_{(0,0)} \right|} \rho^{(1)}(\mathcal{H}_I + \mathcal{H}'_I) {\left| \mathrm{x}_{(s,1)} \right\rangle}.\end{aligned}$$ The first term is zero since the only nonzero matrix elements for $\mathcal{H}_I$ are ${\left\langle \mathrm{x}_{(s,1)} \right|} \mathcal{H}_I {\left| 0 \right\rangle}$ and its complex conjugate, and the only nonzero matrix elements for $\mathcal{H}'_I$ are ${\left\langle \mathrm{x}_{(s,-1)} \right|} \mathcal{H}'_I {\left| \mathrm{x}_{(0,0)} \right\rangle}$ where $s = 1, 2, \cdots$ and its complex conjugate. The second term is zero since the bra ${\left\langle \mathrm{x}_{(0,0)} \right|}$ eliminates $\rho^{(1)}$. This implies that the matrix element $\rho^{(2)}_{(0,0)(s,1)}$ is zero. Finally, we resolve the matrix element $\rho^{(2)}_{(0,0)0}$ by solving $$\begin{aligned} &\dot{\rho}^{(2)}_{(0,0) 0} (t) = \nonumber \\ &- i e_{(0,0)} \rho^{(2)}_{(0,0) 0} (t) - \frac{i}{\hbar} {\left\langle \text{x}_{(0,0)} \right|}[\mathcal{H}_I + \mathcal{H}_I', \rho^{(1)}] {\left| 0 \right\rangle}. 
\label{eq:rhox01}\end{aligned}$$ We calculate the following $$\begin{aligned} {\left\langle \text{x}_{(0,0)} \right|} [\mathcal{H}_I &+ \mathcal{H}'_I, \rho^{(1)}] {\left| 0 \right\rangle} = \nonumber \\ & {\left\langle \mathrm{x}_{(0,0)} \right|} \mathcal{H}_I + \mathcal{H}'_I {\left| \mathrm{x}_{(s,1)} \rangle \langle \mathrm{x}_{(s,1)} \right|} \rho^{(1)} {\left| 0 \right\rangle} \nonumber \\ &- {\left\langle \mathrm{x}_{(0,0)} \right|} \rho^{(1)}(\mathcal{H}_I + \mathcal{H}'_I) {\left| 0 \right\rangle}.\end{aligned}$$ The second term is zero since the bra ${\left\langle \mathrm{x}_{(0,0)} \right|}$ eliminates $\rho^{(1)}$. We calculate the following: $$\begin{aligned} &{\left\langle \mathrm{x}_{(0,0)} \right|} \mathcal{H}_I + \mathcal{H}'_I {\left| \mathrm{x}_{(s,1)} \right\rangle} = {\left\langle \mathrm{x}_{(0,0)} \right|} \mathcal{H}'_I {\left| \mathrm{x}_{(s,1)} \right\rangle} \nonumber \\ &= -\sum_{s'=1,2} h^-_{(0,0)(s,1)} \varepsilon(\kappa) \mathrm{e}^{- i \omega_\kappa t} \rho^{(1)}_{(s,1)0}. \end{aligned}$$ Integrating the equation using above and equation , we obtain the final result in equation . Third-harmonic generation ------------------------- We resolve the matrix elements of $\rho^{(3)}$ in the third-harmonic generation with degenerate fundamental frequencies, but with polarization configuration of $\sigma^+;\sigma^+\sigma^-\sigma^+$, corresponding to the frequencies $3 \omega_\kappa, \omega_\kappa, \omega_\kappa, \omega_\kappa$, respectively. The relevant basis for the matrix elements is $\{ {\left| 0 \right\rangle}, {\left| (s, \pm 1) \right\rangle}, {\left| (s', 0) \right\rangle} \}$ where $s = 1,2, \cdots$ and $s' = 0,1, \cdots$. We will intensively use the selection rules (dipole moments) given in Table \[tab:gnu\] and equation . We first resolve the matrix elements for $\rho^{(1)}$. For $\sigma^+$ polarization, we find the only nonzero matrix element for $\rho^{(1)}$ to be $$\rho^{(1)+}_{(s,0)0} = {\left\langle \mathrm{x}_{(s,0)} \right|} \rho^{(1)} {\left| 0 \right\rangle} =\frac{g^+_{(s,0)}}{\hbar} \frac{\varepsilon (\kappa) \mathrm{e}^{-i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)}, \label{eq:rho1plus}$$ where $s = 0,1,2, \cdots$. Next, we will resolve the matrix elements of $\rho^{(2)}$. For $\sigma^-\sigma^+$ polarization sequence, the first order process landed on ${\left| \mathrm{x}_{(s,0)} \right\rangle}$. Then, the second driving from $\sigma^-$ light will bring the state to ${\left| \mathrm{x}_{(s',-1)} \right\rangle}$ with $s'=1,2, \cdots$. Hence, the only nonzero matrix element is $\rho^{(2)-+}_{(s',-1)0}$, which is obtained by calculating the following commutator: $$\begin{aligned} &{\left\langle \mathrm{x}_{(s',-1)} \right|} [\mathcal{H}_I^- + \mathcal{H}^{'-}_I, \rho^{(1)+}] {\left| 0 \right\rangle} \nonumber \\ &= {\left\langle \mathrm{x}_{(s',-1)} \right|} (\mathcal{H}_I^- + \mathcal{H}^{'-}_I) {\left| \mathrm{x}_{(s,0)} \rangle \langle \mathrm{x}_{(s,0)} \right|} \rho^{(1)+} {\left| 0 \right\rangle} \nonumber \\ &~~~~~- {\left\langle \mathrm{x}_{(s',-1)} \right|} \rho^{(1)+}(\mathcal{H}_I^- + \mathcal{H}^{'+}_I){\left| 0 \right\rangle} \nonumber \\ &= {\left\langle \mathrm{x}_{(s',-1)} \right|} \mathcal{H}^{'-}_I {\left| \mathrm{x}_{(s,0)} \right\rangle} \rho^{(1)+}_{(s,0)0} \nonumber \\ &= - h^-_{(s',-1)(s,0)} \rho^{(1)+}_{(s,0)0} \mathcal{E} (\kappa) \mathrm{e}^{- i \omega_\kappa t},\end{aligned}$$ where the second term in the first equation is zero since ${\left\langle \mathrm{x}_{(s',-1)} \right|}$ eliminates $\rho^{(1)+}$. 
Here, we clarified that the interaction Hamiltonian is due to $\sigma^-$ light. In the second equation, we used the fact that $\mathcal{H}^{'+}_I$ connects ${\left| \mathrm{x}_{(s,0)} \right\rangle}$ and ${\left\langle \mathrm{x}_{(s',-1)} \right|}$. Then, after integration we obtain the matrix element $$\begin{aligned} &\rho^{(2)-+ }_{(s',-1)0} = \nonumber \\ &\frac{h^-_{(s',-1)(s,0)}g^+_{(s,0)} }{\hbar^2} \frac{\varepsilon^2(\kappa) \mathrm{e}^{- 2 i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)(e_{s'} - 2 \omega_\kappa - i \epsilon')}.\end{aligned}$$ Next, we resolve the matrix elements for $\rho^{(3)+-+}$. Because we are solving for the third-harmonic generation, we look for the matrix elements proportional to $\mathrm{e}^{-i 3 \omega_\kappa t}$. It is obvious to see that one nonzero matrix element for $\rho^{(3)+-+}$ is $\rho^{(3)+-+}_{(0,0)0}$ which is given by $$\begin{aligned} &\rho^{(3)+-+ }_{(0,0)0} = \sum_{s', s} \frac{h^+_{(0,0)(s',-1)}h^-_{(s',-1)(s,0)}g^+_{(s,0)} }{\hbar^3} \times \nonumber \\ & \frac{\varepsilon^3(\kappa) \mathrm{e}^{- 3 i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)(e_{s'} - 2 \omega_\kappa - i \epsilon')(e_{0} - 3 \omega_\kappa - i \epsilon'')}.\end{aligned}$$ To calculate another nonzero matrix element, we consider the light with frequency $\omega'_\kappa$, which we will set later $\omega'_\kappa = - \omega_\kappa$. Consider the commutator for the matrix element $\rho^{(3)+-+}_{(s',-1)(0,0)}$: $$\begin{aligned} &{\left\langle \mathrm{x}_{(s,-1)} \right|} [\mathcal{H}_I^+ + \mathcal{H}^{'+}_I, \rho^{(2)-+}] {\left| \mathrm{x}_{(0,0)} \right\rangle} \nonumber \\ &= {\left\langle \mathrm{x}_{(s,-1)} \right|} (\mathcal{H}_I^+ + \mathcal{H}^{'+}_I) \rho^{(2)-+} {\left| \mathrm{x}_{(0,0)} \right\rangle} \nonumber \\ &~~~ - {\left\langle \mathrm{x}_{(s,-1)} \right|} \rho^{(2)-+} {\left| 0 \rangle \langle 0 \right|}(\mathcal{H}_I^+ + \mathcal{H}^{'+}_I){\left| \mathrm{x}_{(0,0)} \right\rangle} \nonumber \\ &= - \rho^{(2)-+}_{(s,-1)0} {\left\langle 0 \right|} \mathcal{H}^{+}_I {\left| \mathrm{x}_{(0,0)} \right\rangle} \nonumber \\ &= g^{+*}_{(0,0)} \rho^{(2)-+}_{(s,-1)0} \mathcal{E}^* (\kappa') \mathrm{e}^{i \omega'_\kappa t}.\end{aligned}$$ We first integrate the differential equation. Then, setting $\omega'_\kappa = -\omega_\kappa$, we obtain a result proportional to $\mathrm{e}^{-i 3 \omega_\kappa t}$: $$\begin{aligned} &\rho^{(3)+-+ }_{(s',-1)(0,0)} = -\sum_{s', s} \frac{g^*_{(0,0)}h^-_{(s',-1)(s,0)}g^+_{(s,0)} }{\hbar^3} \times \nonumber \\ & \frac{\varepsilon^3(\kappa) \mathrm{e}^{- 3 i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)(e_{s'} - 2 \omega_\kappa - i \epsilon')(e_{s'} - e_0 + \omega_\kappa + i \epsilon'')}.\end{aligned}$$ We note that this is a nonresonant contribution due to the factor $e_0 + \omega_\kappa + i \epsilon''$ in the denominator. Finally, we calculate the matrix element $\rho^{(3)+-+}_{(s',-1)0}$. 
For this, we calculate the following commutator: $$\begin{aligned} &{\left\langle \mathrm{x}_{(s,-1)} \right|} [\mathcal{H}_I^+ + \mathcal{H}^{'+}_I, \rho^{(2)-+}] {\left| 0 \right\rangle} \nonumber \\ &= {\left\langle \mathrm{x}_{(s,-1)} \right|} (\mathcal{H}_I^+ + \mathcal{H}^{'+}_I) {\left| \mathrm{x}_{(s,-1)} \rangle \langle \mathrm{x}_{(s,-1)} \right|} \rho^{(2)-+} {\left| 0 \right\rangle} \nonumber \\ &~~~ - {\left\langle \mathrm{x}_{(s,-1)} \right|} \rho^{(2)-+} {\left| 0 \rangle \langle 0 \right|}(\mathcal{H}_I^+ + \mathcal{H}^{'+}_I){\left| 0 \right\rangle}.\end{aligned}$$ Both terms are zero since $\mathcal{H}_I^+$ and $\mathcal{H}_I^{'+}$ have nonzero elements only on off-diagonal. This implies that $\rho^{(3)+-+}_{(s',-1)0}(t) = 0$. Two-photon transition --------------------- We resolve the matrix elements of $\rho^{(3)}$ for the two-photon transition with degenerate fundamental frequencies, with polarization configuration of $\sigma^+;\sigma^+\sigma^+\sigma^+$, corresponding to $2\omega_\kappa, \omega_\kappa, \omega_\kappa, - \omega_\kappa$, respectively. We present the result for the case where $2 \omega_\kappa \sim e_{(1,1)}$. The relevant basis for the matrix elements is $\{ {\left| 0 \right\rangle}, {\left| \mathrm{x}_{(s,0)} \right\rangle}, {\left| \mathrm{x}_{(1,1)} \right\rangle} \}$, where $s = 0,1,2, \cdots$. The only nonzero matrix element for $\rho^{(1)}$ is given in equation . We now resolve the matrix elements of $\rho^{(2)}$. The first-order process landed on the state ${\left| \mathrm{x}_{(s,0)} \right\rangle}$ with $s = 0,1,2, \cdots$. According to the selection rule, the second-order process with the polarization sequence $\sigma^+\sigma^+$ needs to land on ${\left| \mathrm{x}_{(s',1)} \right\rangle}$ with $s' = 1,2, \cdots$ via $\mathrm{e}^{-i \omega_\kappa t}$ term in $\mathcal{H}'_I$, or on ${\left| 0 \right\rangle}$ via $\mathrm{e}^{i \omega_\kappa t}$ term in $\mathcal{H}_I$. Let us consider $\rho^{(2) ++}_{(s',1)0}$ first. For this, let us calculate the commutator: $$\begin{aligned} &{\left\langle \mathrm{x}_{(s',1)} \right|} [\mathcal{H}_I^+ + \mathcal{H}^{'+}_I, \rho^{(1)+}] {\left| 0 \right\rangle} \nonumber \\ &= {\left\langle \mathrm{x}_{(s',1)} \right|} (\mathcal{H}_I^+ + \mathcal{H}^{'+}_I) {\left| \mathrm{x}_{(s,0)} \rangle \langle \mathrm{x}_{(s,0)} \right|} \rho^{(1)+} {\left| 0 \right\rangle} \nonumber \\ &~~~~~- {\left\langle \mathrm{x}_{(s',1)} \right|} \rho^{(1)+}(\mathcal{H}_I^+ + \mathcal{H}^{'+}_I){\left| 0 \right\rangle} \nonumber \\ &= {\left\langle \mathrm{x}_{(s',1)} \right|} \mathcal{H}^{'+}_I {\left| \mathrm{x}_{(s,0)} \right\rangle} \rho^{(1)+}_{(s,0)0} \nonumber \\ &= - h^+_{(s',1)(s,0)} \rho^{(1)+}_{(s,0)0} \mathcal{E} (\kappa) \mathrm{e}^{- i \omega_\kappa t}.\end{aligned}$$ From this, it easily follows that $$\begin{aligned} &\rho^{(2)++ }_{(1,1)0} = \nonumber \\ &\frac{h^+_{(1,1)(s,0)}g^+_{(s,0)} }{\hbar^2} \frac{\varepsilon^2(\kappa) \mathrm{e}^{- 2 i \omega_\kappa t}}{(e_s - \omega_\kappa - i \epsilon)(e_1 - 2 \omega_\kappa - i \epsilon')}.\end{aligned}$$ We then consider $\rho^{(2)}_{00}$. 
For this, let us calculate the commutator: $$\begin{aligned} &{\left\langle 0 \right|} [\mathcal{H}_I^+ + \mathcal{H}^{'+}_I, \rho^{(1)+}] {\left| 0 \right\rangle} \nonumber \\ &= {\left\langle 0 \right|} (\mathcal{H}_I^+ + \mathcal{H}^{'+}_I) {\left| \mathrm{x}_{(s,0)} \rangle \langle \mathrm{x}_{(s,0)} \right|} \rho^{(1)+} {\left| 0 \right\rangle} \nonumber \\ &~~~~~- {\left\langle 0 \right|} \rho^{(1)+} {\left| \mathrm{x}_{(s,0)} \rangle \langle \mathrm{x}_{(s,0)} \right|} (\mathcal{H}_I^+ + \mathcal{H}^{'+}_I){\left| 0 \right\rangle} \nonumber \\ &= {\left\langle 0 \right|} \mathcal{H}^{'+}_I {\left| \mathrm{x}_{(s,0)} \right\rangle} \rho^{(1)+}_{(s,0)0} - \mathrm{h.c.} \nonumber \\ &= - g^{+*}_{(s,0)} \rho^{(1)+}_{(s,0)0} \mathcal{E}^* (\kappa) \mathrm{e}^{i \omega_\kappa t} - \mathrm{h.c.}\end{aligned}$$ These terms are DC drives, which is proportional to an infinitesimal constant $\epsilon$, and hence, is negligible mathematically. Hence, the only significant nonzero elements of $\rho^{(2)}$ are $\rho^{(2)++}_{(1,1)0}$ and its complex conjugate. Next, we resolve the matrix elements for $\rho^{(3)+++}$. Our task is to find the matrix elements proportional to $\mathrm{e}^{- i \omega_\kappa t}$. The last frequency is negative: $-\omega_\kappa$, i.e., moving downward in energy. Since the second-order process landed on the state ${\left| \mathrm{x}_{(1,1)} \right\rangle}$, the third-order process must land on a state ${\left| \mathrm{x}_{(s'',0)} \right\rangle}$. One nonzero matrix element is thus given by $$\begin{aligned} &\rho^{(3)+++ }_{(s'',0)0} = \sum_{ s} \frac{h^{+*}_{(1,1)(s'',0)} h^+_{(1,1)(s,0)} g^+_{(s,0)} }{\hbar^3} \times \nonumber \\ & \frac{|\varepsilon(\kappa)|^2 \varepsilon(\kappa) \mathrm{e}^{- i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)(e_{1} - 2 \omega_\kappa - i \epsilon')(e_{s''} + \omega_\kappa + i \epsilon'')}.\end{aligned}$$ This is a nonresonant term, due to the last factor in the denominator. We can find another nonzero matrix element $\rho^{(3)+++}_{(1,1)(s'',0)}$ as follows. Let us consider the commutator: $$\begin{aligned} &{\left\langle \mathrm{x}_{(1,1)} \right|} [\mathcal{H}_I^+ + \mathcal{H}^{'+}_I, \rho^{(2)++}] {\left| \mathrm{x}_{(s'',0)} \right\rangle} \nonumber \\ &= {\left\langle \mathrm{x}_{(1,1)} \right|} (\mathcal{H}_I^+ + \mathcal{H}^{'+}_I) \rho^{(2)-+} {\left| \mathrm{x}_{(s'',0)} \right\rangle} \nonumber \\ &~~~ - {\left\langle \mathrm{x}_{(1,1)} \right|} \rho^{(2)++} {\left| 0 \rangle \langle 0 \right|}(\mathcal{H}_I^+ + \mathcal{H}^{'+}_I){\left| \mathrm{x}_{(s'',0)} \right\rangle} \nonumber \\ &= -\rho^{(2)++}_{(1,1)0} {\left\langle 0 \right|} \mathcal{H}^{+}_I {\left| \mathrm{x}_{(s'',0)} \right\rangle} \nonumber \\ &= g^{+*}_{(s'',0)} \rho^{(2)++}_{(1,1)0} \mathcal{E}^* (\kappa) \mathrm{e}^{i \omega_\kappa t}.\end{aligned}$$ After integrating the differential equation, we obtain $$\begin{aligned} &\rho^{(3)+++ }_{(1,1)(s'',0)} = -\sum_{ s} \frac{g^{+*}_{(s'',0)} h^+_{(1,1)(s,0)} g^+_{(s,0)} }{\hbar^3} \times \nonumber \\ & \frac{|\varepsilon(\kappa)|^2 \varepsilon(\kappa) \mathrm{e}^{- i \omega_\kappa t}}{(e_{s} - \omega_\kappa - i \epsilon)(e_{1} - 2 \omega_\kappa - i \epsilon')(e_{1} - e_{s''} - \omega_\kappa - i \epsilon'')}.\end{aligned}$$ This is a resonant term. Finally, let us consider the matrix element $\rho^{(3)+++}_{(1,1)0}$. 
Let us consider the commutator $$\begin{aligned} &{\left\langle \mathrm{x}_{(1,1)} \right|} [\mathcal{H}_I^+ + \mathcal{H}^{'+}_I, \rho^{(2)++}] {\left| 0 \right\rangle} \nonumber \\ &= {\left\langle \mathrm{x}_{(1,1)} \right|} (\mathcal{H}_I^+ + \mathcal{H}^{'+}_I) {\left| \mathrm{x}_{(1,1)} \rangle \langle \mathrm{x}_{(1,1)} \right|} \rho^{(2)-+} {\left| 0 \right\rangle} \nonumber \\ &~~~ - {\left\langle \mathrm{x}_{(1,1)} \right|} \rho^{(2)++} {\left| 0 \rangle \langle 0 \right|}(\mathcal{H}_I^+ + \mathcal{H}^{'+}_I){\left| 0 \right\rangle}.\end{aligned}$$ Both terms are zero since $\mathcal{H}_I^+$ and $\mathcal{H}_I^{'+}$ have nonzero elements only on off-diagonal. This implies that $\rho^{(3)+++}_{(1,1)0} (t) = 0$. Calculation of the dipole moment $f^\pm_{\nu} (\mathbf{q}) = e {\left\langle \mathrm{x}_\nu \right|} \mathbf{r} \cdot \hat{\mathbf{\varepsilon}}^\pm {\left| C_{\mathbf{q}} \right\rangle}$ {#sec:fnu} =============================================================================================================================================================================================== Let us calculate the dipole moment $f^+_\nu ({\boldsymbol{q}})$ using the Blount formula in equation and the analytical solution in equation : $$\begin{aligned} f^+_\nu({\boldsymbol{q}}) =& - \hat{{\boldsymbol{\varepsilon}}}^+ \cdot i e \sum_{{\boldsymbol{q}}'} \psi^*_\nu ({\boldsymbol{q}}') {\boldsymbol{\nabla}}_{{\boldsymbol{q}}}{\left\langle c_{{\boldsymbol{q}}'} | c_{{\boldsymbol{q}}} \right\rangle}\nonumber \\ & - i e \sum_{{\boldsymbol{q}}}\psi^*_\nu ({\boldsymbol{q}}) (1 + \tau) \frac{\hbar^2 v^2}{\sqrt{2}\Delta^2} q \mathrm{e}^{i \tau \phi_q}. \label{eq:fnu1}\end{aligned}$$ The contribution coming from the second term on the right hand side is negligible due to the angular integral, if $\nu = (n,0)$. We then calculate the contribution from the first term. To evaluate this, let us use the following integration by parts. $$\begin{aligned} &\int_A {\text{d}}^2 q \sum_{{\boldsymbol{q}}'} \left( r({\boldsymbol{q}}'){\boldsymbol{\nabla}}_{{\boldsymbol{q}}} {\left\langle \psi_{{\boldsymbol{q}}', \lambda} | \psi_{{\boldsymbol{q}}, \lambda} \right\rangle} \right) s({\boldsymbol{q}}) \nonumber \\ &= \int_{\partial A} {\text{d}}{\boldsymbol{n}} r({\boldsymbol{q}}) s({\boldsymbol{q}}) - \int_A {\text{d}}^2 q r({\boldsymbol{q}}){\boldsymbol{\nabla}}_{{\boldsymbol{q}}} s({\boldsymbol{q}}).\end{aligned}$$ The first term is the boundary line integral. The contribution from the second term above vanishes due to the angular integral over $\phi_q$. This leads to $$\begin{aligned} &\chi^{(2)}_U (\omega_\kappa \sim e_0/2) = \nonumber \\ &-\hat{{\boldsymbol{\varepsilon}}}^+ \cdot \int_{\partial \textrm{FBZ}} {\text{d}}{\boldsymbol{n}} \left( \begin{array}{l} \psi^{*}_\nu ({\boldsymbol{q}})ie \frac{e_\nu d^+_{cv} ({\boldsymbol{q}}) g_\nu^*}{8 \pi^2 \omega_\kappa \epsilon_0 \hbar^2 d_\textrm{eff}} \\ \times \frac{1}{(\omega_{{\boldsymbol{q}}}- \omega_\kappa - i (\gamma_U/2))(e_\nu - 2 \omega_\kappa - i (\gamma_B/2))} \end{array} \right)\end{aligned}$$ Performing the boundary line integral involves multiplying the factor $\mathrm{e}^{i \phi_q}$ since $\hat{{\boldsymbol{\varepsilon}}}^+ \cdot \hat{{\boldsymbol{n}}} = \mathrm{e}^{i \phi_q}$. Recall that the threefold rotational symmetry is perturbatively treated. The zeroth order that does not have the threefold rotational symmetry integrates to zero over $\phi_q$. 
The higher-order perturbative terms involving the threefold rotational symmetry also vanish, as follows. Recall that the energy $\hbar \omega_{{\boldsymbol{q}}}$ also has the threefold rotational symmetry. Hence, the higher order terms in the integrand have a threefold rotational symmetry. We note that $$\int_0^{2\pi} {\text{d}}\phi \text{e}^{\pm i \phi} f(\cos(3 \phi)) = 0,$$ where $f$ is any analytic function. This is easily seen by writing $\cos(3 \phi) = (1/2) (\mathrm{e}^{i 3\phi} + \mathrm{e}^{-i 3 \phi})$: each Taylor-series term $\cos^n (3 \phi)$ involves only harmonics $\mathrm{e}^{\pm i 3m \phi}$ with integer $m$, so the integral of $f(\cos(3 \phi))$ over $\phi$, after multiplying by $\mathrm{e}^{\pm i \phi}$, vanishes. This leads to the conclusion that, regardless of the polarization, this boundary integral term must be zero. Therefore, $\chi^{(2)}_U (\omega_\kappa \sim e_0/2)$ vanishes. For $\sigma_-$ input polarization, the second term in equation vanishes, and the boundary line integral gives the same result; hence, the contribution from the unbound exciton also vanishes for $\sigma_-$ light. Overall, we conclude that the unbound exciton does not efficiently couple back to the bound exciton states. This allows us to ignore any channel in which unbound exciton virtual states land on a bound exciton state.
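The angular-integral identity used above is also easy to verify numerically; the short check below uses an arbitrary analytic test function $f$ and is purely illustrative.

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 20001)

def f(x):
    # an arbitrary analytic test function of cos(3*phi)
    return np.exp(0.7 * x) + 0.3 * x**4

integrand = np.exp(1j * phi) * f(np.cos(3.0 * phi))
integral = np.sum(integrand[:-1]) * (phi[1] - phi[0])   # simple Riemann sum over one period
print(abs(integral))   # zero to numerical precision
```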
---
abstract: 'We have performed ground-based transmission spectroscopy of the hot Jupiter orbiting the cool dwarf WASP-80 using the ACAM instrument on the William Herschel Telescope (WHT) as part of the LRG-BEASTS programme. This is the third paper in a ground-based transmission spectroscopy survey of hot Jupiters using low-resolution grism spectrographs. We observed two transits of the planet and have constructed transmission spectra spanning a wavelength range of 4640 – 8840Å. Our transmission spectrum is inconsistent with a previously claimed detection of potassium in WASP-80b’s atmosphere, and is instead most consistent with a haze. We also do not see evidence for sodium absorption at a resolution of 100Å.'
author:
- |
    J. Kirk,$^{1}$[^1] P. J. Wheatley[^2],$^{1}$ T. Louden,$^{1}$ I. Skillen,$^{2}$ G. W. King,$^{1}$ J. McCormac$^{1}$, and P. G. J. Irwin$^{3}$\
    $^1$Department of Physics, University of Warwick, Coventry, CV4 7AL, UK\
    $^{2}$Isaac Newton Group of Telescopes, Apartado de Correos 321, 38700 Santa Cruz de Palma, Spain\
    $^{3}$Atmospheric, Oceanic, and Planetary Physics, Clarendon Laboratory, University of Oxford, Parks Road, Oxford OX1 3PU, UK\
bibliography:
- 'wasp80\_bib.bib'
date: 'Accepted XXX. Received YYY; in original form ZZZ'
title: 'LRG-BEASTS III: Ground-based transmission spectrum of the gas giant orbiting the cool dwarf WASP-80'
---

\[firstpage\]

methods: observational - techniques: spectroscopic - planets and satellites: individual: WASP-80b - planets and satellites: atmospheres

Introduction
============

The Low Resolution Ground-Based Exoplanet Atmosphere Survey using Transmission Spectroscopy (LRG-BEASTS, ‘large beasts’) is a programme to characterise hot Jupiter atmospheres homogeneously. This is the third planet in our survey, following the detection of a haze-induced Rayleigh scattering slope in the atmosphere of HAT-P-18b [@Kirk2017] and clouds in the atmosphere of WASP-52b [@Louden2017]. Transmission spectroscopy is revealing a diverse array of exoplanet atmospheres, from clear (e.g. WASP-39b: @Fischer2016; @Nikolov2016; @Sing2016) to hazy (e.g. HAT-P-18b: @Kirk2017) to cloudy (e.g. WASP-52b: @Kirk2016; @Chen2017_w52; @Louden2017). As yet, no clear correlation has emerged between fundamental planetary parameters and the broad atmospheric characteristics (e.g. @Sing2016), although there is tentative evidence that hotter exoplanets are more likely to be cloud free [@Heng2016]. This makes additional studies necessary to shed light on such correlations if they exist, and LRG-BEASTS, along with other ground-based surveys such as the Gran Telescopio Canarias (GTC) exoplanet transit spectroscopy survey (e.g. @Parviainen2016, @Chen2017_w52), ACCESS [@Rackham2017], the Gemini/GMOS Transmission Spectral Survey [@Huitson2017], and the VLT/FORS2 survey (e.g. @Nikolov2016; @Gibson2017), will expand the sample of studied planets. While low-resolution transmission spectroscopy can probe the deeper, pressure-broadened features in the atmosphere and reveal clouds and hazes, high-resolution transmission spectroscopy can reveal narrow line features at lower pressures. Recent high-resolution results include measurements of the wind speed on HD189733b [@Louden2015; @Brogi2016], abnormal variability in the stellar line profiles of H$_\alpha$ near the transit of HD189733b [@Cauley2017], detections of exoplanetary sodium (e.g. @Wyttenbach2015; @Khalafinejad2017; @Wyttenbach2017), and analyses of the effect of centre-to-limb variation on transmission spectra [@Yan2017].
WASP-80b is a gas giant, with a radius of 0.986R$_{\textrm{J}}$, orbiting a cool dwarf, with a radius of 0.593R$_\odot$ [@Mancini2014_w80]. This puts it in a rare class of objects, and its large transit depth of 2.9% makes it a good candidate for transmission spectroscopy. For this reason, there have been previous atmospheric studies of WASP-80b. Transit photometry of WASP-80b has suggested a hazy atmosphere, with no large variation in the measured planetary radius with wavelength (@Fukui2014; @Mancini2014_w80; @Triaud2015). However, [@Sedaghati2017], using the VLT/FORS2, reported a detection of pressure-broadened potassium absorption. This suggests a clear and low-metallicity atmosphere, as clouds or hazes would act to mask the wings of this feature. In contrast, [@Parviainen2017] recently published a transmission spectrum of WASP-80b, using the GTC, that was best represented by a flat line and showed no evidence for potassium absorption. In this paper, we present a low resolution transmission spectrum of WASP-80b with ACAM, as part of our LRG-BEASTS programme, which is inconsistent with the claimed potassium feature.

Observations
============

Two transits of WASP-80b were observed on the nights of 2016 August 18 and 2016 August 21 using the ACAM instrument [@Benn2008] on the William Herschel Telescope (WHT). This is the same instrument and setup we used in [@Kirk2017]. The observations were taken in fast readout mode with a smaller-than-standard window to reduce the overheads to 10s, with exposure times of 50s. For these transits we used a 40 arcsec wide slit, as the 27 arcsec slit used in our study of HAT-P-18b was broken. We chose to use this wide slit to avoid differential slit losses between the target and comparison star. Due to the relatively sparse field, this wide slit did not cause problems with contaminating stars (Fig. \[fig:extraction\_frame\], top panel). As with [@Kirk2017], we chose not to use an order blocking filter due to concerns that this may introduce unwanted systematics. Biases, flat fields and arc spectra were taken at the start and end of both nights. On the first night we observed the target from airmass 1.41 to 1.16 to 1.30, with a moon illumination of 99% at 33 degrees from the target. On the second night we observed the target from airmass 1.20 to 1.16 to 1.70, with a moon illumination of 83% at 73 degrees from the target. The second night was affected by clouds passing overhead before and during ingress but these cleared by the time of mid-transit (Fig. \[fig:ancillary\_plots\]). To perform differential spectroscopy, we simultaneously observed a comparison star with a similar magnitude to that of WASP-80. The colour difference between the two stars was larger than desirable as we were limited by the length of the ACAM slit, which is 7.6 arcmin, and the fact that WASP-80 has a very close companion. The slit needed to be oriented such that the light from this companion was not blended with that of the target (Fig. \[fig:extraction\_frame\]). The chosen comparison star is 4.3 arcmin from the target and has a $V$ magnitude of 12.6 and $B-V$ colour of 0.97. This compares with WASP-80’s $V$ magnitude of 11.9 and $B-V$ colour of 1.38.

Data Reduction
==============

To reduce the data, we used the same custom <span style="font-variant:small-caps;">python</span> scripts as introduced in [@Kirk2017]. For the first night, 102 bias frames were median combined to create a master bias and for the second night, 111 bias frames were median combined.
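The master bias construction described above is a standard median combination. A minimal sketch is given below; the file names are placeholders for illustration and are not those used in the actual reduction.

```python
import glob
import numpy as np
from astropy.io import fits

# Median-combine the bias frames from one night into a master bias.
bias_files = sorted(glob.glob("bias_night1_*.fits"))        # placeholder file pattern
bias_stack = np.stack([fits.getdata(f).astype(float) for f in bias_files])
master_bias = np.median(bias_stack, axis=0)
fits.writeto("master_bias_night1.fits", master_bias, overwrite=True)
```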
To flat-field our science images, we used a spectral sky flat normalised with a running median, taken at twilight with the same 40 arcsec wide slit that was used for our science images. To create the sky flat, 66 flat frames were median combined for night 1 and 86 flat frames for night 2. To extract the spectra of both stars, we followed the process as described in [@Kirk2017]. This involved fitting a polynomial to the locations of the traces, fitting the sky background across two regions either side of each trace, and performing normal extraction with a fixed aperture (Fig. \[fig:extraction\_frame\]). We experimented with the choice of extraction aperture width, background offset, background width and the order of the polynomial fitted across the background. In each case, we fitted an analytic transit light curve with a cubic polynomial to the resulting white light curve and used the extraction parameters that produced the lowest RMS in the residuals. We found that a 25 pixel-wide aperture produced the lowest scatter. The pixel scale of ACAM is 0.25 arcsec pixel$^{-1}$. The background was estimated by fitting a quadratic polynomial across two regions either side of the target and comparison traces after masking contaminating stars from these regions (Fig. \[fig:extraction\_frame\]). For the target these regions were 100 pixels wide, and offset by 20 pixels from the target aperture. For the comparison these were again offset by 20 pixels from the comparison aperture but 150 pixels wide. The extra width was needed for the comparison as several contaminating stars had to be masked from these regions, while for the target a narrower region was used to avoid getting too close to the left hand edge of the chip (Fig. \[fig:extraction\_frame\]). We found that the combination of these background widths with the quadratic polynomial modelled the background variation well (Fig. \[fig:extraction\_frame\]). Cosmic rays were removed from the background regions by masking pixels that deviated by greater than $3\sigma$ from the median. Cosmic rays falling within the target aperture were removed in the same way as described in [@Kirk2017], by dividing each spectrum by a frame clean of cosmics and removing deviant points. These reductions resulted in 230 spectra for night 1 and 237 spectra for night 2. ![Top panel: a through-slit image of the field, taken with a 40 arcsec wide slit. The target is the star intersected by the left vertical line and the comparison is the star intersected by the right vertical line. The image has been cropped in the vertical direction for clarity. Middle panel: Example science frame following bias and flat field corrections, with blue wavelengths at the bottom of the CCD and red wavelengths at the top. The extraction apertures are shown by the solid blue lines for the target (left) and comparison (right). The dashed lines indicate the regions in which the sky background was estimated. Contaminating stars falling within these regions were masked from the background fit. The close companion to the target as noted in the text can be seen as the spectral trace immediately to the right of the target trace. This was masked from the background estimation in this region. Bottom panel: A cut along the spatial direction at a y-pixel of 740. The background polynomial fits to each star are shown by the green lines. 
The y-axis has been cropped for clarity.[]{data-label="fig:extraction_frame"}](extraction_frame_with_slit_and_slice.pdf) Diagnostics of the extraction of the data for night 1 and night 2 are shown in Fig. \[fig:ancillary\_plots\]. The clouds in night 2 are clearly seen as drops in transmission in the raw light curves of the target and comparison (Fig. \[fig:ancillary\_plots\], right-hand column, fifth panel). Using the raw light curves, we removed frames corresponding to drops in the transmission before further analysis. Fig. \[fig:wl\_fits\] shows the white light curve for night 2 once the cloud-affected frames had been removed, leaving 174 frames. ![image](ancillary_plots_night1.pdf) ![image](ancillary_plots_night2.pdf) With spectra extracted for each frame, we aligned the spectra in pixel space and wavelength calibrated them following the procedure of [@Kirk2017]. The pixel alignment was done by cross-correlating strong features in each spectral frame with a reference frame individually for the target and comparison. For each frame this resulted in pixel shift as a function of position on the CCD for both the target and comparison, which we fitted with a third-order polynomial. Using this polynomial, we then resampled the target and comparison spectra onto the grid of the reference frame, using <span style="font-variant:small-caps;">pysynphot</span>[^3] which conserves flux. To wavelength calibrate, we constructed an arc solution for both the target and comparison. Although the arc frames were taken with a different slit to the science images, and therefore could not be used for absolute wavelength calibration, they did provide a useful starting point. The final wavelength calibration was performed using stellar and telluric lines in the spectra of the target and comparison. Following the wavelength calibration, we divided the spectra into 35 wavelength bins (Fig. \[fig:bin\_locations\]). The bluest four bins were made wider than the rest to increase the signal to noise ratio in these bins, given the red spectrum of the star and red sensitivity of ACAM. This led to two bins 300Å wide, two bins 200Å wide and 31 bins 100Å wide. This width was chosen as a compromise between signal to noise and resolution. Similarly to [@Gibson2017], we used a Tukey window to smooth the edges of the bins to avoid problems arising from sharp bin edges (Fig. \[fig:bin\_locations\]). We chose to ignore the region containing the strong telluric oxygen feature at $\sim7600$Å due to increased noise in this binned light curve. While there were other telluric features present in the spectra, these were weaker and the light curves containing these features were not significantly worse than neighbouring bins. ![Example extracted spectra of the target (blue) and comparison (red), normalised such that $F$(7000Å) = 1.0. The lower filled boxes indicate the passbands used to create our wavelength-binned light curves. The grey region was excluded from our analysis.[]{data-label="fig:bin_locations"}](bin_locations_tukey.pdf) Data Analysis ============= Fitting the white light curves {#sec:wl_fits} ------------------------------ Following extraction of the spectra, we created normalised, differential, white light curves for both nights by integrating the spectra for each frame. We then fitted the white light curves from both nights together with analytic transit light curves [@MandelAgol] with a quadratic limb darkening law. 
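A minimal sketch of the white light curve model is given below. It combines an analytic [@MandelAgol] transit light curve, here generated with the batman package (an illustrative choice; the fits in this work use the custom pipeline described above), with the Matérn 3/2 Gaussian process described in the following paragraphs via the <span style="font-variant:small-caps;">george</span> package. The numerical values are illustrative, and the orbital period in particular is a placeholder rather than a value quoted in this paper.

```python
import numpy as np
import batman                  # analytic Mandel & Agol (2002) transit light curves
import george
from george import kernels

# --- analytic transit model with quadratic limb darkening (illustrative values) ---
params = batman.TransitParams()
params.t0 = 0.0                # time of mid-transit (days, relative)
params.per = 3.068             # orbital period in days -- placeholder, not from this paper
params.rp = 0.170              # Rp/R*
params.a = 12.66               # a/R*
params.inc = 89.1              # inclination in degrees
params.ecc = 0.0
params.w = 90.0
params.u = [0.47, 0.23]        # quadratic limb darkening coefficients u1, u2
params.limb_dark = "quadratic"

t = np.linspace(-0.08, 0.08, 230)                  # times in days
transit = batman.TransitModel(params, t).light_curve(params)

# --- Matern 3/2 GP plus white noise for the correlated systematics ---
amp, tau, white = 1e-3, 1e-2, 5e-4                 # illustrative hyperparameters
kernel = amp**2 * kernels.Matern32Kernel(tau**2)
gp = george.GP(kernel, white_noise=np.log(white**2), fit_white_noise=True)

def log_likelihood(flux, flux_err):
    """GP marginal likelihood of the residuals about the transit model."""
    gp.compute(t, flux_err)
    return gp.log_likelihood(flux - transit)
```

A sampler such as <span style="font-variant:small-caps;">emcee</span> can then explore the transit parameters and GP hyperparameters jointly, as described in the following paragraphs.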
As with [@Kirk2017], we chose to include a Gaussian process (GP) to model the red noise in the data and obtain robust errors for each of our fitted parameters. This was performed using the <span style="font-variant:small-caps;">george</span> <span style="font-variant:small-caps;">python</span> package [@george]. GPs have been used many times in the literature and have been shown to be powerful in modelling correlated noise in data (e.g. @Gibson2012; @Evans2015; @Kirk2017; @Louden2017). To model the red noise in our light curves, we chose to use a Matérn 3/2 kernel, defined by the length scale $\tau$ and the amplitude $a$. We also included a white noise kernel, defined by the variance $\sigma$. The parameters defining the model fitted to both white light curves were the inclination $i$, the ratio of semi-major axis to stellar radius $a/R_*$, the time of mid-transit $T_c$, the planet to star radius ratio $R_P/R_*$, the quadratic limb darkening coefficients $u1$ and $u2$, and the three parameters defining the GP, $a$, $\tau$ and $\sigma$. $i$ and $a/R_*$ were shared between both nights whereas the remaining parameters were not. This resulted in a total of 16 free parameters in the white light curve fits. A prior was placed on the limb-darkening coefficients such that $u1 + u2 \leq 1$, with no priors placed on the other transit parameters. Loose, uniform priors were placed on the GP hyperparameters to encourage convergence. To generate the starting values for the limb darkening coefficients, we made use of the limb darkening toolkit (<span style="font-variant:small-caps;">ldtk</span>; @LDTK), which uses <span style="font-variant:small-caps;">phoenix</span> [@Husser2013] models to derive limb darkening coefficients and errors. For the stellar parameters, we used the values of [@Triaud2015]. For the starting values of the GP hyperparameters, we optimised these to the out of transit data, using a Nelder-Mead algorithm [@NelderMead]. With these starting values, we ran a Markov chain Monte Carlo (MCMC) to the white light curves of both nights together, using the <span style="font-variant:small-caps;">emcee</span> <span style="font-variant:small-caps;">python</span> package [@emcee]. An initial run was performed with 2000 steps and 160 walkers. Following this the walkers were resampled around the result from the first run and run again for 2000 steps with 160 walkers, with the first 1000 steps discarded as burn in. The resulting fit is shown in Fig. \[fig:wl\_fits\] with the parameter values in Table \[tab:wl\_fit\_results\]. ![Combined fit to the white light curves of nights 1 and 2 using a Gaussian process. Top panel: the fitted model to nights 1 and 2 is shown by the red line, with an offset in y of -0.01 applied for clarity. The gaps in night 2’s light curve indicate the points that were clipped due to clouds. Bottom panel: The residuals to the fits in the top panel, offset by -0.008 for clarity. 
The dark grey and light grey regions indicate the 1 and 3$\sigma$ confidence intervals of the GP, respectively.[]{data-label="fig:wl_fits"}](wl_fit.pdf)

[|l|c|c|c|c|]{}
Parameter & Night 1 & Night 2 & @Sedaghati2017 & @Triaud2013\
$a/R_*$ & $12.66^{+0.12}_{-0.11}$ & - & $12.0647 \pm 0.0099$ & $12.99 \pm 0.03$\
$i$ ($^\circ$) & $89.10^{+0.31}_{-0.22}$ & - & $88.90 \pm 0.06$ & $88.92^{+0.07}_{-0.12}$\
$T_c$ (BJD) & $2457619.461954^{+0.000096}_{-0.000102}$ & $2457622.529914^{+0.000135}_{-0.000131}$ & $2456459.809578 \pm 0.000073$ & $2456125.417512^{+0.000067}_{-0.000052}$\
$R_P/R_*$ & $0.16975^{+0.00179}_{-0.00178}$ & $0.17302^{+0.00198}_{-0.00235}$ & $0.17386 \pm 0.00030$ & $0.17126^{+0.00017}_{-0.00026}$\
$u1$ & $0.47^{+0.09}_{-0.10}$ & $0.48 \pm 0.09$ & $0.491 \pm 0.238^a$ & -\
$u2$ & $0.23 \pm 0.17$ & $0.21^{+0.18}_{-0.17}$ & $0.485 \pm 0.198^a$ & -\

Wavelength-binned light curve fitting
-------------------------------------

The fitting of the white light curves allowed us to derive a common set of system parameters and an independent common-mode noise model for each night’s light curve. With these we were able to fit wavelength-binned light curves for each night, after removing the common mode systematics model from each light curve and holding the system parameters ($a/R_*$ and $i$) and the time of mid transit $T_C$ fixed to the results from the white light curves. We again used a GP defined by a Matérn 3/2 kernel and a white noise kernel in order to account for the noise in the data. The remaining parameters defining the wavelength dependent models were the same as for the white light curve, resulting in 6 free parameters per light curve ($R_P/R_*$, $u1$, $u2$, $a$, $\tau$ and $\sigma$). For the limb darkening parameters, we used the log-likelihood evaluation of <span style="font-variant:small-caps;">ldtk</span> [@LDTK] in our fitting, and recovered values that were consistent with those that we measured from the white light curve. We began by fitting an analytic transit light curve model with a cubic-in-time polynomial to remove any points that deviated by greater than $4\sigma$ from the fit. This typically clipped at most 1 to 2 points per light curve. As with the white light curves, we then optimised the GP hyperparameters to the out of transit data using a Nelder-Mead algorithm. Following these steps, we marginalised over all 6 free parameters with an MCMC. This was run for each of the light curves, with 2000 steps and 60 walkers. A second run was then performed for each, with a small perturbation added to the values resulting from the first run and again for 2000 steps with 60 walkers. The fitted light curves are shown in Fig. \[fig:wb\_fits\_night1\] and Fig. \[fig:wb\_fits\_night2\] with the resulting $R_P/R_*$ values listed in Table \[tab:bin\_fits\] and the transmission spectrum plotted in Fig. \[fig:trans\_spec1\]. In Table \[tab:bin\_fits\] we also include the error in the combined $R_P/R_*$ in terms of the atmospheric scale height $H$, which we calculate to be 206km, given an equilibrium temperature of 825K [@Triaud2015], a planetary surface gravity of 14.34ms$^{-2}$ [@Mancini2014_w80] and assuming the mean molecular mass to be 2.3 times the mass of a proton. Our results have a mean error across the transmission spectrum of 2.5 scale heights (0.00125$R_P/R_*$), which represents a good precision given the relatively small scale height of WASP-80b.
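The scale height quoted above, and the conversion of the transmission-spectrum uncertainties into scale heights, follow from $H = k_{B} T_{\textrm{eq}} / (\mu g)$. The short check below uses the values quoted in the text, with SI constants, and the stellar radius of 0.593R$_\odot$ given in the introduction:

```python
k_B = 1.380649e-23      # J / K
m_p = 1.6726219e-27     # kg
R_sun = 6.957e8         # m

T_eq = 825.0            # equilibrium temperature in K
g = 14.34               # surface gravity in m s^-2
mu = 2.3                # mean molecular mass in proton masses
R_star = 0.593 * R_sun  # stellar radius in m

H = k_B * T_eq / (mu * m_p * g)
print(f"scale height H = {H / 1e3:.0f} km")                              # ~206 km

mean_err_rprs = 0.00125  # mean uncertainty in Rp/R* across the spectrum
print(f"mean error = {mean_err_rprs * R_star / H:.1f} scale heights")    # ~2.5
```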
-------------- --------------------------------- --------------------------------- ----------------------- ------------------
Wavelength     Night 1                           Night 2                           Combined                Error
bin            $R_P/R_*$                         $R_P/R_*$                         $R_P/R_*$               in scale heights
4640 – 4940Å   $0.17509^{+0.00084}_{-0.00085}$   $0.17119^{+0.00332}_{-0.00227}$   $0.17474 \pm 0.00081$   1.6
4940 – 5240Å   $0.17126^{+0.00439}_{-0.00475}$   $0.17162^{+0.00228}_{-0.00195}$   $0.17147 \pm 0.00192$   3.7
5240 – 5440Å   $0.17354^{+0.00399}_{-0.00496}$   $0.17324^{+0.00295}_{-0.00280}$   $0.17339 \pm 0.00242$   4.7
5440 – 5640Å   $0.17082^{+0.00694}_{-0.00716}$   $0.17326^{+0.00289}_{-0.00325}$   $0.17300 \pm 0.00282$   5.5
5640 – 5740Å   $0.17594^{+0.00470}_{-0.00570}$   $0.17267^{+0.00244}_{-0.00238}$   $0.17329 \pm 0.00219$   4.3
5740 – 5840Å   $0.17606^{+0.00245}_{-0.00219}$   $0.17328^{+0.00208}_{-0.00197}$   $0.17441 \pm 0.00153$   3.0
5840 – 5940Å   $0.17062^{+0.00397}_{-0.00287}$   $0.17144^{+0.00295}_{-0.00323}$   $0.17093 \pm 0.00230$   4.5
5940 – 6040Å   $0.17726^{+0.00320}_{-0.00346}$   $0.17249^{+0.00279}_{-0.00278}$   $0.17450 \pm 0.00213$   4.1
6040 – 6140Å   $0.17050^{+0.00325}_{-0.00312}$   $0.17315^{+0.00165}_{-0.00155}$   $0.17257 \pm 0.00143$   2.8
6140 – 6240Å   $0.16903^{+0.00340}_{-0.00334}$   $0.17382^{+0.00133}_{-0.00122}$   $0.17318 \pm 0.00119$   2.3
6240 – 6340Å   $0.16675^{+0.00516}_{-0.00484}$   $0.17074^{+0.00219}_{-0.00180}$   $0.17004 \pm 0.00186$   3.6
6340 – 6440Å   $0.16993^{+0.00199}_{-0.00244}$   $0.17290^{+0.00173}_{-0.00180}$   $0.17183 \pm 0.00138$   2.7
6440 – 6540Å   $0.17216^{+0.00165}_{-0.00129}$   $0.17167^{+0.00248}_{-0.00192}$   $0.17184 \pm 0.00122$   2.4
6540 – 6640Å   $0.17171^{+0.00108}_{-0.00091}$   $0.17150^{+0.00208}_{-0.00280}$   $0.17167 \pm 0.00092$   1.8
6640 – 6740Å   $0.16922^{+0.00178}_{-0.00226}$   $0.17267^{+0.00236}_{-0.00217}$   $0.17082 \pm 0.00151$   2.9
6740 – 6840Å   $0.17020^{+0.00154}_{-0.00156}$   $0.17261^{+0.00185}_{-0.00178}$   $0.17121 \pm 0.00118$   2.3
6840 – 6940Å   $0.17209^{+0.00149}_{-0.00152}$   $0.17206^{+0.00213}_{-0.00181}$   $0.17204 \pm 0.00120$   2.3
6940 – 7040Å   $0.17241^{+0.00106}_{-0.00115}$   $0.17003^{+0.00163}_{-0.00197}$   $0.17182 \pm 0.00094$   1.8
7040 – 7140Å   $0.17110^{+0.00097}_{-0.00189}$   $0.17059^{+0.00057}_{-0.00051}$   $0.17067 \pm 0.00051$   1.0
7140 – 7240Å   $0.17496^{+0.00165}_{-0.00146}$   $0.17119^{+0.00244}_{-0.00281}$   $0.17396 \pm 0.00134$   2.6
7240 – 7340Å   $0.17356^{+0.00143}_{-0.00155}$   $0.17016^{+0.00151}_{-0.00134}$   $0.17177 \pm 0.00103$   2.0
7340 – 7440Å   $0.17186^{+0.00136}_{-0.00147}$   $0.16880^{+0.00246}_{-0.00122}$   $0.17060 \pm 0.00113$   2.2
7440 – 7540Å   $0.17210^{+0.00168}_{-0.00114}$   $0.16949^{+0.00150}_{-0.00127}$   $0.17061 \pm 0.00099$   1.9
7740 – 7840Å   $0.17179^{+0.00153}_{-0.00163}$   $0.17007^{+0.00056}_{-0.00051}$   $0.17023 \pm 0.00051$   1.0
7840 – 7940Å   $0.17365^{+0.00133}_{-0.00150}$   $0.16950^{+0.00067}_{-0.00060}$   $0.17018 \pm 0.00058$   1.1
7940 – 8040Å   $0.17140^{+0.00146}_{-0.00140}$   $0.16984^{+0.00060}_{-0.00054}$   $0.17003 \pm 0.00053$   1.0
8040 – 8140Å   $0.17171^{+0.00149}_{-0.00155}$   $0.17107^{+0.00063}_{-0.00058}$   $0.17114 \pm 0.00056$   1.1
8140 – 8240Å   $0.17087^{+0.00208}_{-0.00234}$   $0.16923^{+0.00101}_{-0.00091}$   $0.16947 \pm 0.00088$   1.7
8240 – 8340Å   $0.17223^{+0.00152}_{-0.00160}$   $0.17070^{+0.00093}_{-0.00082}$   $0.17104 \pm 0.00076$   1.5
8340 – 8440Å   $0.17239^{+0.00179}_{-0.00156}$   $0.17058^{+0.00088}_{-0.00088}$   $0.17095 \pm 0.00078$   1.5
8440 – 8540Å   $0.17130^{+0.00152}_{-0.00161}$   $0.17460^{+0.00113}_{-0.00091}$   $0.17356 \pm 0.00086$   1.7
8540 – 8640Å   $0.17451^{+0.00188}_{-0.00196}$   $0.17353^{+0.00175}_{-0.00152}$   $0.17390 \pm 0.00124$   2.4
8640 – 8740Å   $0.17040^{+0.00154}_{-0.00156}$   $0.17326^{+0.00127}_{-0.00155}$   $0.17202 \pm 0.00104$   2.0
8740 – 8840Å   $0.17094^{+0.00123}_{-0.00134}$   $0.17488^{+0.00129}_{-0.00145}$   $0.17284 \pm 0.00094$   1.8
-------------- --------------------------------- --------------------------------- ----------------------- ------------------
$0.17040^{+0.00154}_{-0.00156}$ $0.17326^{+0.00127}_{-0.00155}$ $0.17202 \pm 0.00104$ 2.0 8740 – 8840Å $0.17094^{+0.00123}_{-0.00134}$ $0.17488^{+0.00129}_{-0.00145}$ $0.17284 \pm 0.00094$ 1.8 -------------- --------------------------------- --------------------------------- ----------------------- ------------------ Noise sources in the wavelength binned light curves --------------------------------------------------- We looked for correlations between different diagnostics and noise features in our light curves. Aside from an airmass correlation, we did not see any correlations between trace position, FWHM or sky background with the wavelength-binned light curves whose systematics varied from bin to bin (Figs. \[fig:wb\_fits\_night1\] and \[fig:wb\_fits\_night2\]). We also considered that noise may have been introduced by our post-reduction processing. We tested our resampling method by constructing light curves from unresampled data and tested our wavelength solution by binning in pixels prior to assigning a wavelength, neither of which changed the dominant noise characteristics. We also experimented with a non-linearity correction from the ACAM instrument pages[^4] and re-ran our reduction pipeline but also found this had no significant effect. The effect of flat-fielding was also tested by running a reduction with a flat field using a tungsten lamp and a reduction without a flat field at all and we found that neither of these had a significant effect. Transmission spectrum {#sec:trans_spec_results} --------------------- Following the fitting of the wavelength binned light curves of both nights, we constructed individual transmission spectra for each night and a combined transmission spectrum from the weighted mean (Fig. \[fig:trans\_spec1\]). We have renormalised the transmission spectrum for the first night, given the difference in $R_P/R_*$ from the two nights. We note that the results for night 1 in Table \[tab:bin\_fits\] are with the offset applied. In Fig. \[fig:trans\_spec1\], we also plot our combined transmission spectrum along with a clear atmosphere model, a Rayleigh scattering slope, and a flat line indicating a grey opacity source such as clouds. The clear atmosphere model was generated using <span style="font-variant:small-caps;">exo-transmit</span> [@Kempton2017] and assuming an isothermal temperature pressure profile of 800K, with a metallicity of $0.1 \times$ solar. ![image](trans_plot1_no_spot_correction.pdf) ------------------------------ ---------------------------- --------------------------- -- 4640 – 8840Å 7440 – 8840Å $ \chi^2$ ($\chi^2_{\nu}$) $\chi^2$ ($\chi^2_{\nu}$) Flat line 74.9 (2.27) **32.1 (2.92)** Rayleigh (T$_{\textrm{eq}}$) **68.5 (2.08)** 39.5 (3.59) $0.1 \times$ solar, clear 81.6 (2.47) 50.6 (4.60) $1 \times$ solar, clear 71.6 (2.17) 40.4 (3.67) ------------------------------ ---------------------------- --------------------------- -- : Goodness of fit of different model atmospheres to both our whole transmission spectrum (4640 – 8840Å, with 33 degrees of freedom) and the region overlapping with the study of [@Sedaghati2017] (7440 – 8840Å, with 11 degrees of freedom). The 95% confidence limit for a $\chi^2$ distribution with 33 degrees of freedom is 47.4 and for 11 degrees of freedom is 19.7. The models preferred in each region are shown in boldface.[]{data-label="tab:atmos_stats"} Table \[tab:atmos\_stats\] shows the goodness of fit of the model atmospheres considered. 
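The 95% confidence limits quoted in the caption of Table \[tab:atmos\_stats\] follow directly from the $\chi^2$ distribution. A short check (here using `scipy`, which is not necessarily what was used for the original calculation):

```python
# Reproduce the 95% confidence limits quoted in Table [tab:atmos_stats]
# for chi^2 distributions with 33 and 11 degrees of freedom.
from scipy.stats import chi2

for dof in (33, 11):
    print(f"dof = {dof}: chi2(95%) = {chi2.ppf(0.95, dof):.1f}")
# dof = 33: chi2(95%) = 47.4
# dof = 11: chi2(95%) = 19.7
```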
We find that while none of these models provide a satisfactory fit to the entire data set, a Rayleigh scattering slope at the equilibrium temperature of the planet (825K) is preferred. In Fig. \[fig:trans\_spec2\] we plot our combined transmission spectrum against the spectra of both [@Sedaghati2017] and [@Parviainen2017]. Our results clearly favour a transmission spectrum without broad potassium absorption and show no evidence of sodium at a resolution of 100Å. When considering just the region that overlaps with the study of [@Sedaghati2017], our transmission spectrum strongly favours a flat line over all the other models when comparing the Bayesian Information Criterion (which, for models with the same number of degrees of freedom, reduces to a comparison of $\chi^2$) for this region. However, the model atmospheres considered do not produce satisfactory fits in this region (Table \[tab:atmos\_stats\]).

![Our two night combined transmission spectrum (black error bars) along with the transmission spectra of [@Sedaghati2017] (magenta squares) and [@Parviainen2017] (blue circles).[]{data-label="fig:trans_spec2"}](trans_plot2_no_spot_correction.pdf)

Fig. \[fig:trans\_spec2\] also provides a favourable comparison of the precision of our results, obtained with a 4-metre telescope, with those from 8- and 10-metre telescopes (@Sedaghati2017; @Parviainen2017). It is clear that the precision of ground-based studies is limited by systematics and not photon noise.

Discussion {#sec:discussion}
==========

Transmission spectrum {#sec:trans\_spec\_discussion}
---------------------

The transmission spectrum presented here is best represented by a Rayleigh scattering slope with no broad alkali metal absorption, suggesting the presence of a high altitude haze (Fig. \[fig:trans\_spec1\], Table \[tab:atmos\_stats\]). It is inconsistent with the previously claimed detection of broad potassium absorption by [@Sedaghati2017]. In the study of [@Sedaghati2017], the authors interpreted their transmission spectrum as evidence for the pressure-broadened wings of the potassium feature. This would suggest a clear atmosphere such as in WASP-39b (@Fischer2016; @Sing2016; @Nikolov2016), as clouds and hazes have the effect of masking the wings of the alkali features (e.g. @Nikolov2015; @Mallonn2015; @Kirk2017; @Louden2017). In this case, we might expect to see the broad wings of the sodium feature, but we see no evidence for these either, nor do we detect the core at a resolution of 100Å (Fig. \[fig:trans\_spec1\]). Alkali chlorides are expected to condense at around the temperature of WASP-80b (825K, @Mancini2014_w80), which reduces the sodium and potassium in the upper atmosphere and gives rise to Rayleigh scattering [@Wakeford2015]. This scenario is consistent with our transmission spectrum as we do not detect broad sodium or potassium absorption, and find an atmosphere best represented by Rayleigh scattering (Fig. \[fig:trans\_spec1\], Table \[tab:atmos\_stats\]). However, we might expect chlorides to produce a steeper slope than we observe [@Wakeford2015]. It is interesting to note that, for smaller planets, hazes appear more prominent at temperatures $\lesssim900$K (@Morley2015; @Crossfield2017), which is the temperature range that both HAT-P-18b, for which we detected a Rayleigh scattering haze [@Kirk2017], and WASP-80b fall within, although they probe a different parameter space and a different regime of atmospheric chemistry.
Our results for these two relatively-cool hot Jupiters are also in line with [@Heng2016]’s tentative evidence that cooler planets are more likely to be cloudy. Despite the presence of a haze, it is still possible to detect the narrow line core arising from higher altitudes in the planetary atmosphere such as has been done for WASP-52b [@Chen2017_w52], HD 189733b (@Pont2008; @Sing2011; @Huitson2012; @Pont2013; @Louden2015) and HD 209458b (@Charbonneau2002; @Sing2008a [@Sing2008b]; @Snellen2008; @Langland-Shula2009; @Vidal-Madjar2011; @Deming2013), however these detections are often made in much narrower bins than the 100Å bins used here. For this reason, we cannot rule out the presence of the narrow core of the sodium feature despite the absence of the broad wings. While we cannot be certain what the cause of the discrepancy is between our results and those of [@Sedaghati2017], it is perhaps plausible that there were residual systematics in the VLT/FORS data of [@Sedaghati2017] associated with the Longitudinal Atmospheric Dispersion Corrector (LADC; @Avila1997) despite the authors’ thorough attempts to account for these. The observations of [@Sedaghati2017] were taken before the replacement of the anti-reflective coating of the LADC which was known to cause systematic errors [@Boffin2015]. We note that more recent observations by these authors, also using VLT/FORS, resulted in a ground-breaking detection of TiO in the atmosphere of WASP-19b but with the LADC left in park mode [@Sedaghati2017_wasp19]. We are also encouraged that ACAM is optically simpler than FORS with no atmospheric dispersion corrector. In addition to possible systematics in the VLT/FORS data of [@Sedaghati2017], we consider the effects of stellar activity in the next section. Stellar activity ---------------- Despite the lack of rotational modulation in its light curve, WASP-80 has a $\log R'_{\textrm{HK}}$ of -4.495, suggesting the star has a relatively high level of chromospheric activity [@Triaud2013; @Mancini2014_w80]. However, WASP-80 has a spectral type between K7V and M0V [@Triaud2013] and when comparing this value to a sample of K stars with measured $\log R'_{\textrm{HK}}$ values, we see that WASP-80’s $\log R'_{\textrm{HK}}$ is fairly typical (Fig. \[fig:activity\], values from @MartinezArnaiz2010). Given WASP-80’s chromospheric activity, it is puzzling that no star spot crossings have been observed in any of the studies of WASP-80 to date. This would suggest that either WASP-80 is spot free or that there are a large number of spots continuously on its surface, at latitudes not occulted by the planet (as postulated by @Mancini2014_w80). ![A plot of $\log R'_{\textrm{HK}}$ values of K dwarfs displaying chromospheric features from the sample of [@MartinezArnaiz2010], who gathered chromospheric activity measurements of 371 F to K stars within 25pc. The red line indicates WASP-80’s $\log R'_{\textrm{HK}}$.[]{data-label="fig:activity"}](Kstar_activity_plot.pdf) The orbit of WASP-80b is aligned with the stellar spin axis [@Triaud2015] and therefore crosses the same latitudes of the host during each transit. We calculate WASP-80b to have an impact parameter of 0.2, therefore it is plausible that there are spots at higher latitudes that are not occulted during transit. High latitude spots have been observed in a number of stars (e.g. @Strassmeier2009), although early M dwarfs have been observed to show more spots uniformly distributed in longitude and latitude (e.g. 
@Barnes2001 [@Barnes2004]), making the prospect of numerous unocculted spots less likely. Unocculted spots can induce blueward slopes in transmission spectra [@McCullough2014], mimicking a Rayleigh scattering signature. Unocculted spots cannot be strongly affecting our transmission spectrum as we do not detect a clear scattering signature, and since we do not know whether the star has evenly distributed spots, or has no spots, we have chosen not to adjust the transmission spectra presented here. However, we consider whether they could cause the discrepancy between the transmission spectrum presented here and that of [@Sedaghati2017]. Following the formalism of [@McCullough2014], who considered the effects of unocculted spots on the transmission spectrum of HD189733b, it can be shown that the apparent depth $(\tilde{R_P}/\tilde{R_*})^2$, can be related to the measured depth $(R_P/R_*)^2$, the filling factor of spots on the projected stellar surface $f$, and the fluxes of the star and a spot at a particular wavelength, $F_{\lambda}(spot)$ and $F_{\lambda}(star)$, through $$\left(\frac{\tilde{R_P}}{\tilde{R_*}}\right)^2 = \frac{(R_P/R_*)^2}{1 - f(1-F_{\lambda}(spot)/F_{\lambda}(star))},$$ which can be rearranged to find the filling factor, $f$. Using <span style="font-variant:small-caps;">atlas9</span> models of a star with $\mathrm{T_{eff}} = 4000$K, \[Fe/H\] = 0.0, and $\log g = 4.5$, and a spot with a temperature of 3500K (as consistent with observations of spots on stars of similar spectral type, @Berdyugina2005), we find that the difference in transit depth between our shortest and longest wavelength observations would require a filling factor of 2.6%. The slope found by [@Sedaghati2017] would require unocculted spots with a filling factor of 16%. If these transmission spectra were due to unocculted spots, then the filling factor of such spots would have to change by at least 13%. Since photometric modulations of this order are ruled out by the WASP data we conclude that stellar activity is very unlikely to be the cause of the discrepancy between our results and those of [@Sedaghati2017]. Conclusions =========== We have presented a ground-based optical transmission spectrum of the hot Jupiter WASP-80b. Our transmission spectrum is best represented by a Rayleigh scattering slope, suggesting the presence of a high altitude haze in the atmosphere of WASP-80b. We see no evidence for the broad wings of the potassium feature as claimed previously by [@Sedaghati2017] nor sodium at a resolution of 100Å. Instead, our transmission spectrum is in better agreement with those of [@Fukui2014], [@Mancini2014_w80], [@Triaud2015] and [@Parviainen2017]. Stellar activity is very unlikely to be the cause of the discrepancy between our results and those of [@Sedaghati2017] due to WASP-80’s lack of photometric modulation [@Triaud2013]. Instead, it is possible that there were residual systematics in the VLT/FORS data of [@Sedaghati2017], perhaps related to the LADC, despite the authors’ thorough attempts to account for these. This is the third paper in the LRG-BEASTS programme following the detection of a Rayleigh scattering haze in HAT-P-18b [@Kirk2017] and clouds in WASP-52b [@Louden2017]. LRG-BEASTS is demonstrating that 4-metre class telescopes can provide transmission spectra with precisions comparable to 8- and 10-metre class telescopes, and are capable of detecting and ruling out model atmospheres. This work also highlights the importance of independent and repeat studies of hot Jupiters. 
These studies as part of our larger survey will help to shed light on the prevalence, and physical origins, of clouds and hazes in hot Jupiter atmospheres by increasing the sample of studied hot Jupiters. Acknowledgements {#acknowledgements .unnumbered} ================ We thank the anonymous referee for their helpful suggestions and comments which improved the discussion of the manuscript. We also thank Hannu Parviainen and Tom Evans for useful discussions during the preparation of this manuscript. J.K. is supported by a Science and Technology Facilities Council (STFC) studentship. P.W. is supported by an STFC consolidated grant (ST/P000495/1). The reduced light curves presented in this work will be made available at the CDS (http://cdsarc.u-strasbg.fr/). This work made use of the <span style="font-variant:small-caps;">astropy</span> [@astropy], <span style="font-variant:small-caps;">numpy</span> [@numpy] and <span style="font-variant:small-caps;">matplotlib</span> [@matplotlib] <span style="font-variant:small-caps;">python</span> packages in addition to those cited within the body of the paper. The William Herschel Telescope is operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias. The ACAM spectroscopy was obtained as part of W/2016B/28. ![image](fitted_model_n1_1.pdf) ![image](fitted_model_n1_2.pdf) ![image](fitted_model_n1_3.pdf) ![image](fitted_model_n1_4.pdf) ![image](fitted_model_n2_1.pdf) ![image](fitted_model_n2_2.pdf) ![image](fitted_model_n2_3.pdf) ![image](fitted_model_n2_4.pdf) \[lastpage\] [^1]: E-mail: [email protected] [^2]: E-mail: [email protected] [^3]: https://pysynphot.readthedocs.io/en/latest/ [^4]: http://www.ing.iac.es/engineering/detectors/auxcam\_fast.jpg
--- abstract: | We show that the small quantum product of the generalized flag manifold $G/B$ is a product operation on $H^*(G/B)\otimes \bR[q_1,\ldots, q_l]$ uniquely determined by the fact that it is a deformation of the cup product on $H^*(G/B)$, it is commutative, associative, graded with respect to $\deg(q_i)=4$, it satisfies a certain relation (of degree two), and the corresponding Dubrovin connection is flat. We deduce that it is again the flatness of the Dubrovin connection which characterizes essentially the solutions of the “quantum Giambelli problem" for $G/B$. This result gives new proofs of the quantum Chevalley formula (see D. Peterson \[Pe\] and Fulton and Woodward \[Fu-Wo\]), and of Fomin, Gelfand and Postnikov’s description of the quantization map for $Fl_n$ (see \[Fo-Ge-Po\]). 2000 [*Mathematics Subject Classification.*]{} 14M15, 14N35 address: | Department of Mathematics and Statistics\ University of Regina\ College West 307.14\ Regina SK, Canada S4S 0A2 author: - 'Augustin-Liviu Mare' title: 'A characterization of the quantum cohomology ring of $G/B$ and applications' --- Introduction ============ Let us consider the complex flag manifold $G/B$, where $G$ is a connected, simply connected, simple, complex Lie group and $B\subset G$ a Borel subgroup. Let $\t$ be the Lie algebra of a maximal torus of a compact real form of $G$ and $\Phi \subset \t^*$ the corresponding set of roots. Consider an arbitrary $W$-invariant inner product $\langle \ , \ \rangle$ on $\t$. To any root $\alpha$ corresponds the coroot $$\alpha^{\vee}:= \frac{2\alpha}{\langle \alpha, \alpha\rangle}$$ which is an element of $\t$, by using the identification of $\t$ and $\t^*$ induced by $\langle \ , \ \rangle$. If $\{\alpha_1, \ldots ,\alpha_l\}$ is a system of simple roots then $\{\alpha_1^{\vee},\ldots, \alpha_l^{\vee}\}$ is a system of simple coroots. Consider $\{\lambda_1 ,\ldots , \lambda_l\} \subset \t^*$ the corresponding system of fundamental weights, which are defined by $\lambda_i(\alpha_j^{\vee})=\delta_{ij}$. The Weyl group $W$ is the subgroup of $O(\t, \langle \ , \ \rangle )$ generated by the reflections about the hyperplanes $\ker \alpha $, $\alpha\in \Phi^+$. It can be shown that $W$ is in fact generated by the [*simple reflections*]{} $s_1=s_{\alpha_1},\ldots, s_l=s_{\alpha_l}$ about the hyperplanes $\ker\alpha_1, \ldots, \ker\alpha_l$. The [*length*]{} $l(w)$ of $w$ is the minimal number of factors in a decomposition of $w$ as a product of simple reflections. We denote by $w_0$ the longest element of $W$. Let $B^-\subset G$ denote the Borel subgroup opposite to $B$. To each $w\in W$ we assign the [*Schubert variety*]{} $X_w=\overline{B^-.w}$. The Poincaré dual of $[X_w]$ is an element of $H^{2l(w)}(G/B)$, which is called the [*Schubert class*]{}. The set $\{\sigma_w~|~w\in W\}$ is a basis of $H^*(G/B)=H^*(G/B,\bR)$, hence $\{\sigma_{s_1},\ldots, \sigma_{s_l}\}$ is a basis of $H^2(G/B)$. A theorem of Borel \[Bo\] says that the map $$\label{borel} H^*(G/B)\to S(\t^*)/S(\t^*)^W=\bR[\{\lambda_i\}]/I_W$$ described by $ \sigma_{s_i} \mapsto [\lambda_i]$, $1\le i\le l$, is a ring isomorphism (we are denoting by $S(\t^*)^W=I_W$ the ideal of $S(\t^*)=\bR[\{\lambda_i\}]$ generated by the non-constant $W$-invariant polynomials). We will frequently identify $H^*(G/B)$ with the quotient ring from above. To any $l$-tuple $d=(d_1,\ldots ,d_l)$ with $d_i\in \bZ$, $d_i\ge 0$ corresponds a [*Gromov-Witten invariant*]{}. 
This assigns to any three Schubert classes $\sigma_u,\sigma_v, \sigma_w$ the number denoted by $\langle \sigma_u|\sigma_v|\sigma_{w}\rangle_d$, which counts the holomorphic curves $\varphi :\bC P^1 \to G/B$ such that $\varphi_*([\bC P^1])= d$ in $H_2(G/B)$ and $\varphi(0)$, $\varphi(1)$ and $\varphi(\infty)$ lie in general translates of the Schubert varieties dual to $\sigma_u$, $\sigma_v$ and $\sigma_{w}$, respectively.

Let us consider the variables $q_1,\ldots, q_l$. The [*quantum cohomology ring*]{} of $G/B$ is the space $H^*(G/B)\otimes \bR[\{q_i\}]$ equipped with the product $\circ$ which is $\bR[\{q_i\}]$-linear and such that for any two Schubert classes $\sigma_u, \sigma_v$, $u,v\in W$, we have $$\sigma_u \circ \sigma_v =\sum_{d=(d_1,\ldots ,d_l)\geq 0} q^d\, (\sigma_u \circ \sigma_v)_d .$$ Here $q^d$ denotes $q_1^{d_1}\ldots q_l^{d_l}$ and the cohomology class $(\sigma_u \circ \sigma_v)_d$ is determined by $$\label{gw}\langle (\sigma_u \circ \sigma_v)_d,\sigma_w\rangle = \langle \sigma_u|\sigma_v|\sigma_{w}\rangle_d,$$ for any $w\in W$. It turns out that the product $\circ$ is commutative, associative and it is a deformation of the cup product (by which we mean that if we formally set $q_1=\ldots =q_l=0$, then $\circ$ becomes the same as the cup product). If we assign $$\deg q_i=4,\quad 1\le i \le l,$$ then we also have the grading condition $$\deg (a\circ b)=\deg a +\deg b,$$ for any two homogeneous elements $a, b$ of $ H^*(G/B)\otimes \bR[\{q_i\}] $. For more details about quantum cohomology we refer the reader to Fulton and Pandharipande \[Fu-Pa\].

The [*Dubrovin connection*]{} attached to the quantum product defined above is a connection[^1] $\nabla ^{\hbar}$ on the trivial vector bundle $H^*(G/B)\times H^2(G/B)\to H^2(G/B)$ defined as follows: Denote by $t_1,\ldots, t_l$ the coordinates on $H^2(G/B)$ induced by the basis $\sigma_{s_1}, \ldots, \sigma_{s_l}$. Consider the 1-form $\omega$ on $H^2(G/B)$ with values in ${\rm End}(H^*(G/B))$ given by $$\omega_t(X,Y) = X\circ Y,$$ for $t=(t_1,\ldots, t_l) \in H^2(G/B)$, $X\in H^2(G/B)$ and $Y\in H^*(G/B)$, where the convention $$q_i=e^{t_i}, \quad 1\le i\le l$$ is in force. Finally set $$\nabla^\h = d +\frac{1}{\h} \omega.$$ Note that the 1-form $\omega$ can be expressed as $$\omega = \sum_{i=1}^l \omega_i dt_i,$$ where $\omega_i$ denotes the matrix of the operator $\sigma_{s_i}\circ$ on $H^*(G/B)$ with respect to the basis consisting of the Schubert classes.

The following result is well-known (cf. \[Du\]): The Dubrovin connection $\nabla^\h$ is flat for any $\h\in \bR\setminus \{0\}$, i.e. we have $$\label{zero}d\omega = \omega\wedge \omega = 0.$$ The fact that $d\omega=0$ follows from $$\frac{\partial}{\partial t_i} \omega_j =\frac{\partial}{\partial t_j} \omega_i ,$$ which is equivalent to $$d_i(\sigma_{s_j}\circ\sigma_w)_d = d_j (\sigma_{s_i}\circ\sigma_w)_d$$ for any $w\in W$ and any $d=(d_1,\ldots, d_l)$, hence, by (\[gw\]), to $$d_i\langle \sigma_{s_j}|\sigma_w |\sigma_v\rangle_d= d_j\langle \sigma_{s_i}|\sigma_w |\sigma_v\rangle_d.$$ The latter equality follows from the “divisor property” (see \[Fu-Pa, equation (40)\] for a more general version of this formula): $$\langle \sigma_{s_j}|\sigma_w |\sigma_v\rangle_d = d_j \langle \sigma_w |\sigma_v\rangle_d.$$ The equality $\omega\wedge \omega=0$, i.e. $\omega_i\omega_j = \omega_j\omega_i$, $1\le i,j\le l$, follows from the fact that the product $\circ$ is commutative and associative.
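For completeness, the last step can be spelled out: for any $a\in H^*(G/B)$ and any $1\le i,j\le l$, commutativity and associativity of $\circ$ give $$\omega_i\omega_j(a)=\sigma_{s_i}\circ(\sigma_{s_j}\circ a)=(\sigma_{s_i}\circ\sigma_{s_j})\circ a=(\sigma_{s_j}\circ\sigma_{s_i})\circ a=\sigma_{s_j}\circ(\sigma_{s_i}\circ a)=\omega_j\omega_i(a),$$ so the endomorphisms $\omega_i$ commute pairwise and $\omega\wedge\omega=\sum_{i<j}[\omega_i,\omega_j]\,dt_i\wedge dt_j=0$.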
Another important property of the quantum product which is of interest for us is that we have the relation: $$\label{relation}\sum_{i,j=1}^l\langle \alpha_i^{\vee}, \alpha_j^{\vee}\rangle \sigma_{s_i}\circ\sigma_{s_j} = \sum_{i=1}^l\langle \alpha_i^{\vee}, \alpha_i^{\vee}\rangle q_i.$$ In order to prove this we take into account that: - we have (see \[Kim\] or \[Ma1, Lemma 3.2\]) $$\sigma_{s_i}\circ \sigma_{s_j} = \sigma_{s_i}\sigma_{s_j} +\delta_{ij}q_j$$ - the polynomial $\sum_{i,j=1}^l\langle \alpha_i^{\vee}, \alpha_j^{\vee}\rangle \lambda_{i}\lambda_{j}\in S(\t^*)$ is $W$-invariant (being just the squared norm on $\t$); hence, according to (\[borel\]), the following relation holds in $H^*(G/B)$: $$\sum_{i,j=1}^l\langle \alpha_i^{\vee}, \alpha_j^{\vee}\rangle \sigma_{s_i}\sigma_{s_j}=0.$$ The goal of this paper is to show that the quantum product for $G/B$ is essentially determined by the equations (\[zero\]) (the flatness of the Dubrovin connection) and (\[relation\]) (the degree two relation). More precisely, we will prove that: \[main\] Let $\star$ be a product on the space $H^*(G/B)\otimes \bR[\{q_i\}]$ which is commutative, associative, is a deformation of the cup product (in the sense defined above), satisfies the condition $\deg(a\star b)=\deg a +\deg b$, for $a,b$ homogeneous elements of $H^*(G/B)\otimes \bR[\{q_i\}]$, with respect to the grading $\deg q_i=4$, and - the Dubrovin connection $\nabla ^\h = d+\frac{1}{\h} \omega$, with $\omega(X,Y)= X\star Y$ is flat. In other words, if $\omega_k$ is the matrix of the $ \bR[\{q_i\}]$-linear endomorphism $\sigma_{s_k}\star$ of $H^*(G/B)\otimes \bR[\{q_i\}]$ with respect to the Schubert basis, then we have $$\frac{\partial}{\partial t_i} \omega_j =\frac{\partial}{\partial t_j} \omega_i$$ for all $1\le i,j\le l$ (the convention $q_i=e^{t_i}$ is in force). - we have $$\sum_{i,j=1}^l\langle \alpha_i^{\vee}, \alpha_j^{\vee}\rangle \sigma_{s_i}\star\sigma_{s_j} = \sum_{i=1}^l\langle \alpha_i^{\vee}, \alpha_i^{\vee}\rangle q_i.$$ Then $\star$ is the quantum product $\circ$. The proof will be done in section 2. There are two corollaries we would like to deduce from this theorem. The first one is a characterization of the quantum Giambelli polynomials in terms of the flatness of the Dubrovin connection. More precisely, let us denote by $QH^*(G/B)$ the quotient ring $\bR[\{\lambda_i\}, \{q_i\}]/\langle R_1,\ldots, R_l \rangle$, where $R_1, \ldots, R_l$ are the quantum deformations in the quantum cohomology ring $(H^*(G/B)\otimes \bR[\{q_i\}], \circ)$ of the fundamental homogeneous generators of $S(\t^*)^W$ ($R_1,\ldots, R_l$ have been determined explicitly by B. Kim in \[Kim\]; we will present in section 2 a few more details about that). For any $c\in \bR[\{\lambda_i\},\{q_i\}]$ we denote by $[c]_q$ the coset of $c$ in $QH^*(G/B)$. The map $\sigma_{s_i}\mapsto [\lambda_i]_q$ induces a tautological isomorphism $$\label{tautological}(H^*(G/B)\otimes \bR[\{q_i\}],\circ)\simeq QH^*(G/B).$$ Finding for each $w\in W$ a polynomial $\hat{c}_w\in \bR[\{\lambda_i\},\{q_i\}]$ whose coset in $QH^*(G/B)$ is the image of $\sigma_w$ — in other words, solving the quantum Giambelli problem — would lead to a complete knowledge of the quantum cohomology of $G/B$. We are looking for conditions which determine the polynomials $\hat{c}_w$. First of all, let us consider for each $w\in W$ a polynomial[^2] $c_w\in \bR[\{\lambda_i\}]$ whose coset corresponds to $\sigma_w$ via the isomorphism (\[borel\]). 
There are two natural conditions that we impose on the polynomials $\hat{c}_w$: $$\label{deg}\deg \hat{c}_w =\deg c_w$$ with respect to the grading $\deg \lambda_i=2$, $\deg q_i=4$, and $$\label{hat}\hat{c}_w|_{({\rm all} \ q_i \ =0)}=c_w.$$ Whenever the conditions (\[deg\]) and (\[hat\]) are satisfied, the cosets $[\hat{c}_w]_q$, $w\in W$, are a basis of $QH^*(G/B)$ over $\bR[\{q_i\}]$. Consider the 1-form $$\omega=\sum_{i=1}^l\omega_i dt_i,$$ where $\omega_i$ is the matrix of multiplication of $QH^*(G/B)$ by $[\lambda_i]_q$ with respect to the latter basis. We can prove that:

\[proper\] Let $\hat{c}_w$, $w\in W$, be polynomials in $\bR[\{\lambda_i\}, \{q_i\}]$ which satisfy the properties (\[deg\]) and (\[hat\]). Then the image of $\sigma_w$ under the isomorphism (\[tautological\]) is $[\hat{c}_w]_q$ for all $w\in W$ if and only if the connection $$\nabla^{\hbar} =d+\frac{1}{\hbar}\omega$$ is flat for all $\hbar\in \bR\setminus\{0\}$. The latter condition reads $$\frac{\partial}{\partial t_i} \omega_j =\frac{\partial}{\partial t_j} \omega_i,$$ for all $1\le i,j\le l$.

Consider the $\bR[\{q_i\}]$-linear isomorphism[^3] $$\delta : QH^*(G/B)\to H^*(G/B)\otimes \bR[\{q_i\}]=\bR[\{\lambda_i\}, \{q_i\}]/(I_W\otimes \bR[ \{q_i\}])$$ determined by $$\label{delta}\delta[\hat{c}_w]_q = [c_w],$$ for all $w\in W$. Define the product $\star$ on $H^*(G/B)\otimes \bR[\{q_i\}]$ by $$x\star y = \delta(\delta^{-1}(x) \delta^{-1}(y)),$$ $x,y \in H^*(G/B)\otimes \bR[\{q_i\}]$. The product is commutative, associative, it is a deformation of the cup product on $H^*(G/B)$, and it satisfies $\deg(a\star b)=\deg a +\deg b$, where $a,b\in H^*(G/B)\otimes \bR[\{q_i\}]$ are homogeneous elements. The map $\delta$ is obviously a ring isomorphism between $QH^*(G/B)$ and $(H^*(G/B)\otimes \bR[\{q_i\}],\star)$. In particular, the following degree two relation holds: $$\sum_{i,j=1}^l\langle \alpha_i^{\vee}, \alpha_j^{\vee}\rangle [\lambda_i]\star[\lambda_j] = \sum_{i=1}^l\langle \alpha_i^{\vee}, \alpha_i^{\vee}\rangle q_i.$$ Moreover, the matrix of $[\lambda_i]\star$ on $H^*(G/B)\otimes \bR[q_1,\ldots, q_l]$ with respect to the Schubert basis $\{[c_w]: w\in W\}$ is just $\omega_i$. So if the connection $\nabla^{\hbar}$ is flat for all $\hbar$, then, by Theorem \[main\], the products $\star$ and $\circ$ are the same. This implies that $\delta$ is just the isomorphism (\[tautological\]). The conclusion follows from the definition (\[delta\]) of $\delta$.

Corollary \[proper\] will be used in section \[last\] in order to recover the “quantization via standard monomials” theorem of Fomin, Gelfand, and Postnikov (see \[Fo-Ge-Po, Theorem 1.1\]).

Our second application of Theorem \[main\] concerns the combinatorial quantum product $\star$ on $H^*(G/B)\otimes \bR[\{q_i\}]$, which has been constructed in \[Ma4\]. By definition, this product satisfies the following [*quantum Chevalley formula*]{}: $$\sigma_{s_i}\star \sigma_w = \sigma_{s_i}\sigma_w + \sum_{\alpha} \lambda_i(\alpha^{\vee})\, q^{\alpha^{\vee}}\sigma_{ws_{\alpha}},$$ for $1\le i\le l$, $w\in W$. Here the sum runs over all positive roots $\alpha$ with the property that $l(ws_{\alpha})=l(w)-2{\rm height}(\alpha^{\vee}) +1$, where we consider the expansion $\alpha^{\vee}=m_1\alpha_1^{\vee}+\ldots + m_l\alpha_l^{\vee}$, $m_j\in \bZ$, $m_j\ge 0$, and denote $${\rm height}(\alpha^{\vee})=m_1+\ldots +m_l ,\quad q^{\alpha^{\vee}}=q_1^{m_1}\ldots q_l^{m_l}.$$ We have also shown in \[Ma4\] that $\star$ satisfies all hypotheses of Theorem \[main\].
We deduce: The combinatorial and actual quantum products coincide. Consequently, the quantum product $\circ$ satisfies the quantum Chevalley formula: $$\label{chevalley}\sigma_{s_i}\circ \sigma_w = \sigma_{s_i}\sigma_w + \sum_{l(ws_{\alpha})=l(w)-2{\rm height}(\alpha^{\vee}) +1} \lambda_i(\alpha^{\vee})\, q^{\alpha^{\vee}}\sigma_{ws_{\alpha}},$$ for $1\le i\le l$, $w\in W$.

[**Remarks.**]{} 1. The formula (\[chevalley\]) plays a crucial role in the study of the quantum cohomology algebra of $G/B$, as this is generated over $\bR[q_1,\ldots, q_l]$ by the degree 2 Schubert classes $\sigma_{s_1},\ldots, \sigma_{s_l}$. The formula was announced by D. Peterson in \[Pe\]. A rigorous intersection-theoretic proof has been given by W. Fulton and C. Woodward in \[Fu-Wo\]. It is one of the aims of our paper to give an alternative, conceptually new, proof of this formula.

2\. The tool we will be using in the proof of Theorem \[main\] is the notion of $\D$-module, in the spirit of Guest \[Gu\], Amarzaya and Guest \[Am-Gu\], and Iritani \[Ir\]. More precisely, we will show that the $\D$-modules associated in Iritani’s manner to the products $\circ$ and $\star$ are isomorphic, and then we conclude by using a certain uniqueness argument of Amarzaya and Guest \[Am-Gu\] (for more details, see the next section).

[**Acknowledgements.**]{} I would like to thank Jost Eschenburg and Martin Guest for discussions on the topics contained in this paper.

$\D$-modules and quantum cohomology
===================================

The goal of this section is to give a proof of Theorem \[main\]. We denote by $\D$ the Heisenberg algebra, by which we mean the associative $\bR[\h]$-algebra generated by $Q_1,\ldots, Q_l$, $P_1,\ldots, P_l$, subject to the relations $$\label{comm}[Q_i,Q_j]=[P_i,P_j]=0, \quad [P_i,Q_j] =\delta_{ij} \h Q_j,$$ $1\le i,j\le l$. It becomes a graded algebra with respect to the assignments $$\label{degree}\deg Q_i=4, \quad \deg P_i=\deg \h=2.$$ Note that any element $D$ of $\D$ can be written uniquely as an $\bR[\h]$-linear combination of monomials of type $Q^IP^J$. A concrete realization of $\D$ can be obtained by putting $Q_i=e^{t_i}$ and $P_i=\h\frac{\partial}{\partial t_i}$, $1\le i\le l$.

We will be interested in certain elements of $\D$ which arise in connection with the Hamiltonian system of Toda lattice type corresponding to the coroots of $G$, namely the first quantum integrals of motion of this system. These are homogeneous elements $D_k=D_k(\{Q_i\},\{P_i\},\h)$ of $\D$, $1\le k\le l$, which commute with $$D_1=\sum_{i,j=1}^l\langle \alpha_i^{\vee}, \alpha_j^{\vee}\rangle P_iP_j - \sum_{i=1}^l\langle \alpha_i^{\vee}, \alpha_i^{\vee}\rangle Q_i$$ and also satisfy the property that $D_k(\{0\}, \{\lambda_i\},0)$, $1\le k\le l$, are just the fundamental homogeneous $W$-invariant polynomials (for more details concerning the differential operators $D_1,\ldots, D_l$ we refer the reader to \[Ma3\]). We will denote by $\I$ the left sided ideal of $\D$ generated by $D_1,\ldots, D_l$.

Let $\star$ be a product on $H^*(G/B)\otimes \bR[\{q_i\}]$ which satisfies the hypotheses of Theorem \[main\]. Let us denote by $E$ the $\D$-module (i.e. vector space with an action of the algebra $\D$) $H^*(G/B)\otimes \bR[\{q_i\},\h]$ defined by $$Q_i.a= q_i a,\quad P_i.a = \sigma_{s_i}\star a +\h q_i\frac{\partial}{\partial q_i} a,$$ $1\le i\le l$, $a\in H^*(G/B)\otimes \bR[\{q_i\},\h]$.
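As a small sanity check of the realization $Q_i=e^{t_i}$, $P_i=\h\,\partial/\partial t_i$ mentioned above, the Heisenberg relation $[P_i,Q_i]=\h Q_i$ can be verified by letting both operator orderings act on a test function. The following is only an illustrative sketch in `sympy` (the helper names are ours, not part of the paper):

```python
# Verify [P, Q] = hbar * Q for the realization Q = multiplication by e^t,
# P = hbar * d/dt, by applying both orderings to a test function f(t).
import sympy as sp

t, hbar = sp.symbols('t hbar')
f = sp.Function('f')(t)

Q = lambda g: sp.exp(t) * g          # Q acts by multiplication by e^t
P = lambda g: hbar * sp.diff(g, t)   # P acts as hbar * d/dt

commutator = sp.simplify(P(Q(f)) - Q(P(f)))
print(commutator)                          # hbar*exp(t)*f(t), i.e. [P, Q].f = hbar*(Q.f)
assert sp.simplify(commutator - hbar * Q(f)) == 0
```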
The isomorphism type of the $\D$-module $E$ corresponding to $\star$ is uniquely determined by the hypotheses of Theorem \[main\], as the following proposition shows: \[isomorphism\] If $\star$ is a product with the properties stated in Theorem \[main\], then the map $\phi:\D\to H^*(G/B)\otimes \bR [\{q_i\}, \h]$ given by $$f(\{Q_i\},\{P_i\},\h) \stackrel{\phi}{\mapsto} f(\{Q_i\}, \{P_i\}, \h).1= f(\{q_i\},\{\sigma_{s_i}\star +\h q_i\frac{\partial}{\partial q_i}\},\h).1$$ is surjective and induces an isomorphism of $\D$-modules $$\label{isom}\D/\I \simeq E,$$ where $\I$ is the left sided ideal of $\D$ generated by the quantum integrals of motion of the Toda lattice (see above). We will use the grading on $H^*(G/B)\otimes \bR[\{q_i\}, \h]$ induced by the usual grading on $H^*(G/B)$, $\deg q_i=4$ and $\deg \h=2$. Combined with the grading defined by (\[degree\]), this makes $\phi$ into a degree preserving map (more precisely, it maps a homogeneous element of $\D$ to a homogeneous element of the same degree in $H^*(G/B)\otimes \bR[\{q_i\}, \h]$). Let us prove first the surjectivity stated in our theorem. It is sufficient to show that any homogeneous element $a\in H^*(G/B)\otimes \bR[\{q_i\}, \h]$ can be written as $f(\{Q_i\}, \{P_i\}, \h).1$. We proceed by induction on $\deg a$. If $\deg a=0$, everything is clear. Now consider $a\in H^*(G/B)\otimes \bR[\{q_i\}, \h]$ a homogeneous element of degree at least 2. By a result of Siebert and Tian \[Si-Ti\], we can express $$a=g(\{q_i\},\{\sigma_{s_i}\star\}, \h)$$ for a certain polynomial $g$. We have $$a-g(\{Q_i\},\{ P_i\}, \h).1=a-g(\{q_i\},\{\sigma_{s_i}\star + \h q_i\frac{\partial}{\partial q_i}\} , \h).1 =\h b,$$ where $b\in H^*(G/B)\otimes \bR[\{q_i\}, \h]$ is homogeneous of degree $\deg a-2$ or it is zero. We use the induction hypothesis for $b$. We proved in \[Ma3\] (see the proof of Lemma 4.5) that the generators $D_k=D_k(\{Q_i\},\{P_i\},\h)$, $1\le k\le l$, of the ideal $\I$ satisfy $$\label{homog} D_k(\{Q_i\}, \{P_i\},\h).1=0.$$ If we let $\h$ approach $0$ in (\[homog\]) we obtain the relations $$\label{dek}D_k(\{q_i\}, \{\sigma_{s_i}\star\},0)=0,$$ $1\le k\le l$. They generate the whole ideal of relations in the ring $(H^*(G/B)\otimes \bR[\{q_i\}],\star)$. We need to show that if $D$ is an element of $\D$ with the property that $$\label{de}D(\{Q_i\},\{P_i\},\h).1=0$$ then $D\in\I$. Because the map $\phi$ is degree preserving, we may assume that $D$ is homogeneous and proceed by induction on $\deg D$. If $\deg D=0$, i.e. $D$ is constant, then (\[de\]) implies $D=0$, hence $D\in \I$. It now follows the induction step. From $$D.1=D(\{q_i\},\{\sigma_{s_i}\star + \h q_i\frac{\partial}{\partial q_i}\}, \h).1=0,$$ for all $\h$, we deduce the relation $D(\{q_i\},\{\sigma_{s_i}\star\}, 0)=0$ in the ring $(H^*(G/B)\otimes \bR[\{q_i\}],\star)$. Consequently we have the following polynomial identity $$D(\{q_i\},\{\lambda_i\},0) = \sum_kf_k(\{q_i\},\{\lambda_i\})D_k( \{q_i\}, \{\lambda_i\}, 0),$$ for certain polynomials $f_k$. By using the commutation relations (\[comm\]), we obtain the following identity in $\D$: $$\begin{aligned} D(\{Q_i\},\{P_i\},0) &\equiv \sum_k f_k(\{Q_i\},\{P_i\})D_k(\{Q_i\},\{P_i\},0) \ {\rm mod} \ \h \nonumber \\ {} & \equiv \sum_k f_k(\{Q_i\},\{P_i\})D_k(\{Q_i\},\{P_i\},\h) \ {\rm mod} \ \h.\nonumber \end{aligned}$$ In other words, $$D(\{Q_i\}, \{P_i\}, \h)= \sum_k f_k(\{Q_i\},\{P_i\})D_k(\{Q_i\},\{P_i\},\h) +\h D'(\{Q_i\},\{P_i\}, \h),$$ for a certain $D'\in\D$, with $\deg D'<\deg D$. 
From (\[dek\]) and (\[de\]) we deduce that $$D'(\{Q_i\},\{P_i\},\h).1=0$$ Since $\deg D'<\deg D$, we only have to use the induction hypothesis for $D'$ and get to the desired conclusion. Note that (\[isom\]) is also an isomorphism of $\bR[\{Q_i\},\h]$-modules. Since the actual quantum product $\circ$ satisfies the hypotheses of Theorem \[main\], we deduce that the dimension of $\D/\I$ as an $\bR[\{Q_i\},\h]$-module equals $|W|$. Let us consider the “standard monomial basis" $\{[C_w] : w\in W\}$ of $\D/\I$ over $\bR[\{Q_i\},\h]$ with respect to a choice of a Gröbner basis of the ideal $\I$ (for more details, see Guest \[Gu, section 1\] and the references therein). Any $C_w$ is a monomial in $P_1,\ldots,P_l$ and the cosets of the monomials $$c_w=C_w(\lambda_1,\ldots,\lambda_l),\quad w\in W$$ in $H^*(G/B)=S(\t^*)/S(\t^*)^W =\bR[\{\lambda_i\}]/I_W$ are a basis. We will need the following result (our proof relies on an idea of Amarzaya and Guest \[Am-Gu\]): \[unique\] There exists a unique basis $\{[\bar{C}_w]:w\in W\}$ of $\D/\I$ over $\bR[\{Q_i\},\h]$ with the following properties: - for all $w\in W$ the element $\bar{C}_w=\bar{C}_w(\{Q_i\},\{P_i\},\h)$ of $\D$ is homogeneous of degree $2\deg c_w$ with respect to the grading defined by (\[degree\]) - - for all $w\in W$ we have $$\bar{C}_w(\{0\},\{\lambda_i\},\h) \equiv c_w\ {\rm mod}\ I_W;$$ in particular $\bar{C}_w(\{0\},\{\lambda_i\},\h) {\rm mod}\ I_W$ is independent of $\hbar$ - - the elements $(\bar{\Omega}^i_{vw})_{v,w\in W}^{1\le i\le l}$ of $\bR[Q_1,\ldots, Q_l,\h]$ determined by $$P_i[\bar{C}_w]=\sum_{v\in W}\bar{\Omega}^i_{vw}[\bar{C}_v],$$ are independent of $\h$. In order to show that such a basis exists, we consider the isomorphism $$\phi:\D/\I \to H^*(G/B)\otimes \bR[\{q_i\},\h]$$ induced by the actual quantum product $\circ$ via Proposition \[isomorphism\]. The basis $\{[c_w]:w\in W\}$ of the right hand side induces the basis $\{[\bar{C}_w]=\phi^{-1}([c_w]): w\in W\}$ of $\D/\I$ over $\bR[\{Q_i\}, \h]$. It is obvious that the latter basis satisfies (i) and (iii). In order to show that it also satisfies (ii), we consider the following commutative diagram: $$\D/\I \stackrel{\phi}{\longrightarrow} H^*(G/B)\otimes \bR[\{q_i\}, \h]$$ $$\psi_1 \searrow \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \swarrow \psi_2 \ \ \ \ \ \ \ \ {}$$ $$H^*(G/B) \otimes \bR[\h]\ \ \ \ \ \ \ \ \ {}$$ where $\psi_2$ is the canonical projection and $\psi_1:\D/\I \to H^*(G/B) \otimes \bR[\h] =(\bR[\{\lambda_i\}]/I_W) \otimes \bR[\h]$ is given by $$[D(\{Q_i\},\{P_i\}, \h)] \mapsto [D(\{0\}, \{\lambda_i\}, \h)].$$ Note that $\psi_1$ is well defined, as for any $k=1,2,\ldots ,l$, the polynomial $D_k(\{0\}, \{\lambda_i\}, \h)$ is independent of $\h$, being equal to $u_k$, the $k$-th fundamental $W$-invariant polynomial (see \[Ma 2, section 3\]). We observe that $$[\bar{C}_w(\{0\}, \{\lambda_i\}, \h)]=\psi_1 [\bar{C}_w]=\psi_2[c_w]=[c_w],$$ hence condition (ii) is satisfied. In order to show that there exists at most one such basis, we will use a construction of Amarzaya and Guest \[Am-Gu\]. Let $\{[\bar{C}_w]: w\in W\}$ be a basis of $\D/\I$ with the properties (i), (ii) and (iii). We can write $$\label{bar}\bar{C}_w \equiv \sum_{v\in W} U^{vw}C_v \ {\rm mod} \ \I$$ with $U^{vw}\in\bR[\{Q_i\},\h]$. Decompose the matrix $U=(U^{vw})_{v,w\in W}$ as $$U=U_0+\h U_1 +\ldots +\h^k U_k,$$ where $U_0,\ldots, U_k$ have entries in $\bR[\{Q_i\}]$. 
Let us apply $\psi_1$ to both sides of equation (\[bar\]) and deduce that in $H^*(G/B)\otimes \bR[\h] =(\bR[\{\lambda_i\}]/I_W) \otimes \bR[\h]$ we have that $$[c_w] = \sum_{v\in W} U^{vw}|_{({\rm all} \ Q_i =0)}[c_v],$$ for all $w\in W$. This implies $$U^{vw}|_{({\rm all} \ Q_i =0)}=\delta_{vw},$$ where $\delta_{vw}$ is the Kroenecker delta. On the other hand, because any $\bar{C}_w$, $C_v$, $v,w\in W$, as well as any generator $D_i$ of $\I$ is homogeneous, we deduce that each $U^{vw}$ is homogeneous. We are led to the following property of the matrices $U_j$: - besides the diagonal of $U_0$, which is $I$, the entries of $U_0, U_1, \ldots, U_k$ are homogeneous polynomials with no degree zero term in $Q_1,\ldots, Q_l$ Let us choose an ordering of $W$ which is increasing with respect to $\deg c_w$. In this way, the set $\{[c_w]:w\in W\}$ is a basis of $H^*(G/B)$ consisting of $s_0=1$ elements of degree $0$, followed by $s_1$ elements of degree $2$, $\ldots$ , followed by $s_m$ elements of degree $2m=\dim G/B$. All matrices involved here appear as block matrices of the type $A=(A_{\alpha\beta})_{1\le\alpha,\beta \le m}$. We will say that a block matrix $A=(A_{\alpha\beta})_{1\le\alpha,\beta \le m}$ is $r$-[*triangular*]{} if $A_{\alpha\beta}=0$ for all $\alpha,\beta$ with $\beta -\alpha <r$. From the homogeneity of $U^{vw}$ mentioned above and the fact that $\deg Q_1=\ldots =\deg Q_l=4$, we deduce: - the block matrix $U_0-I$ is 2-triangular - for any $1\le j\le k$, the block matrix $U_j$ is $(j+2)$-triangular. In particular we can assume that $k=m-2$, hence $$\label{u}U=U_0+\h U_1 +\ldots +\h^{m-2} U_{m-2},$$ Consider the matrix $\Omega^i=(\Omega^i_{vw})_{v,w\in W}$ determined by $$\label{pi}P_i[C_w]=\sum_{v\in W}\Omega^i_{vw}[C_v].$$ As before, each $\Omega^i_{vw}$ is an element of $\bR[Q_1,\ldots, Q_l, \h]$ which is homogeneous with respect to the grading given by (\[degree\]). Also, if we apply $\psi_1$ on both sides of the equation (\[pi\]), we deduce that in $H^*(G/B)\otimes \bR[\h] =(\bR[\{\lambda_i\}]/I_W) \otimes \bR[\h]$ we have $$[\lambda_i][c_w]=\sum_v \Omega^i_{vw}|_{({\rm all} \ Q_i =0)}[c_v].$$ This shows that $$\label{omega} \Omega^i_{vw}|_{({\rm all} \ Q_j =0)} \ {\rm is \ independent \ of \ } \h, {\rm for \ all} \ v,w\in W, 1\le i\le l$$ From here on, it will be more convenient to work with the realization of $\D$ given by $Q_i=e^{t_i}$ and $P_i=\h\frac{\partial}{\partial t_i}$, $1\le i\le l$. Then $\Omega^i$ become matrices whose coefficients are homogeneous polynomials in $e^{t_1},\ldots, e^{t_l}$, and $\hbar$. Let us consider the 1-form $$\label{omega0}\Omega =\sum_{i=1}^l \Omega_i dt_i.$$ We decompose it as $$\Omega= \omega +\h\theta^{(1)}+\ldots + \h^p\theta^{(p)}.$$ From the homogeneity of the entries of $\Omega_i$, as well as from (\[omega0\]) we deduce that: - the block matrix $\omega$ is $(-1)$-triangular - the block matrix $\theta^{(j)}$ is $(j+1)$-triangular, for any $1\le j \le p$. 
In particular we can assume that $p=m-1$, hence $$\label{omega}\Omega= \omega +\h\theta^{(1)}+\ldots + \h^{m-2}\theta^{(m-2)}.$$ Now consider the matrix $\bar{\Omega}^i=(\bar{\Omega}^i_{vw})_{v,w\in W}$ determined by $$P_i[\bar{C}_w]=\sum_{v\in W}\bar{\Omega}^i_{vw}[\bar{C}_v].$$ Note that if $p\in \D$ is a polynomial $p(e^{t_1}, \ldots, e^{t_l})$, then we have $$\h\frac{\partial}{\partial t_i} \cdot p = p\cdot \h\frac{\partial}{\partial t_i} +\h\frac{\partial}{\partial t_i}(p).$$ By using this, we can easily deduce from (\[bar\]) that $$\bar{\Omega}^i= U^{-1}\Omega^i U+\h U^{-1}\frac{\partial}{\partial t_i} U.$$ Thus the 1-form $\bar{\Omega} =\sum_{i=1}^l \bar{\Omega}_i dt_i$ is given by $$\bar{\Omega} = U^{-1}\Omega U +\h U^{-1}dU.$$ Condition (iii) reads $\bar{\Omega}$ is independent of $\h$. From (\[u\]) and (\[omega\]) we can see that this is equivalent to $$U^{-1}\Omega U +\h U^{-1}dU =U_0^{-1}\omega U_0$$ and further to $$\label{diffeq}\Omega U +\h dU =UU_0^{-1}\omega U_0.$$ We will prove the following claim [*Claim.*]{} For a given $\Omega$ of the type (\[omega\]) with the properties (d) and (e), the system (\[diffeq\]) has at most one solution $U$ of the type (\[u\]) with $U_j$ satisfying (a) and (b). It is obvious that the claim implies that there exists at most one basis $\{[\bar{C}_w]:w\in W\}$ with the properties (i), (ii) and (iii), and the proof is complete. In order to prove the claim, let us write $$U=(I +\h V_1 + \h^2 V_2 +\ldots +\h^{m-2}V_{m-2})V_0$$ where $V_0=U_0$, $V_1=U_1U_0^{-1},\ldots , V_{m-2}=U_{m-2}U_0^{-1}$. Note that (a), (b) and (c) from above imply: - $V_0$ is a block matrix whose diagonal is $I$, such that $V_0-I$ is $2$-triangular, and all entries of $V_0$ which are not on the diagonal are polynomials with no degree zero term in $e^{t_1},\ldots, e^{t_l}$, - $V_0^{-1}$ is an upper triangular matrix, its diagonal is $I$, and all entries of $V_0^{-1}$ which are not on the diagonal are polynomials with no degree zero term in $e^{t_1},\ldots, e^{t_l}$, - for any $1\le j\le m-2$, the block matrix $V_j$ is $(j+2)$-triangular and its entries are polynomials with no degree zero term in $e^{t_1},\ldots, e^{t_l}$. By identifying the coefficients of powers of $\h$, the equation (\[diffeq\]) is equivalent to the system consisting of: $$\begin{aligned} \label{sys1}d(V_0)V_0^{-1} &= - \theta^{(1)}+[V_1, \omega]\end{aligned}$$ and $$\begin{aligned} \label{sys2} dV_1&=-\theta^{(2)}+[V_1,\theta^{(1)}]+[V_2,\omega]-V_1[V_1,\omega]\\ dV_i&=-\theta^{(i+1)}-\theta^{(i)}V_1-\ldots -\theta^{(2)}V_{i+1}+[V_i,\theta^{(1)}]+[V_{i+1},\omega]-V_i[V_1,\omega]\nonumber\end{aligned}$$ for $i\ge 2$. It is convenient to write a block matrix $A=(A_{\alpha\beta})_{1\le \alpha, \beta\le m}$ as $$A=A^{[-m]} +\ldots +A^{[-1]}+A^{[0]}+A^{[1]} +\ldots +A^{[m]}$$ where each block matrix $A^{[j]}$ is $j$-diagonal (i.e. $A_{\alpha\beta}^{[j]}=0$ whenever $\beta-\alpha \ne j$). 
Then for any two block matrices $A$ and $B$ we have: $$(AB)^{[j]} = \sum_k A^{[k]}B^{[j-k]}, \quad [A,B]^{[j]} =\sum_k[A^{[k]},B^{[j-k]}].$$ By (b), (c), (d) and (e) we can write: $$\begin{aligned} V_0=&I+V_0^{[2]}+V_0^{[3]}+\ldots + V_0^{[m]}\\ V_i=&V_i^{[i+2]} +V_i^{[i+3]}+\ldots + V_i^{[m]} \ \ (1\le i\le m-2)\\ \omega=&\omega^{[-1]} +\omega^{[0]} +\omega^{[1]}+\ldots +\omega^{[m]}\\ \theta^{(i)}=&\theta^{(i),[i+1]}+\theta^{(i),[i+2]}+\ldots +\theta^{(i),[m]} \ \ (1\le i\le m-1)\end{aligned}$$ In this way, the system (\[sys2\]) is equivalent to: $$\begin{aligned} dV_1^{[j]} = -\theta^{(2),[j]}& +\sum_{3\le k\le j-2}[V_1^{[k]}, \theta^{(1),[j-k]}]\\ {} & +\sum_{4\le k\le j+1} [V_2^{[k]},\omega^{[j-k]}]\\ {}& -\sum_{2\le k\le j-3}\sum_{3\le l\le k+1}V_1^{[j-k]}[V_1^{[l]},\omega^{[k-l]}] \\ {}\\ dV_i^{[j]} = -\theta^{(i+1),[j]} &-\sum_{3\le k\le j-i-1}\theta^{(i),[j-k]}V_1^{[k]}\\ {} &-\sum_{i+3\le k\le j-3}\theta^{(2),[j-k]}V_{i+1}^{[k]}\\ {} & +\sum_{i+2\le k\le j-2}[V_i^{[k]},\theta^{(1),[j-k]}]\\ {} & +\sum_{i+3\le k\le j+1}[V_{i+1}^{[k]},\omega^{[j-k]}]\\ {} & -\sum_{2\le k\le j-i-2}\sum_{3\le l\le k+1} V_i^{[j-k]} [V_1^{[l]},\omega^{[k-l]}],\end{aligned}$$ where $i\ge 2$. Define the total order on the matrices $V_i^{[j]}$, $j\ge i+2$ as follows: $V_{i_1}^{[j_1]} < V_{i_2}^{[j_2]}$ if and only if $j_1-i_1 <j_2-i_2$ or $j_1-i_1 =j_2-i_2$ and $j_1 <j_2$. We note that the system from above is of the form: $$dV_i^{[j]} = {\rm \ expression \ involving }\ V_{i'}^{[j']} > V_i^{[j]},$$ for $i\ge 1$ and $j\ge i+2$. Because all coefficients of the matrices $V_i^{[j]}$ are polynomials with no degree zero term in $e^{t_1}, \ldots, e^{t_l}$, we deduce inductively — starting with $V_{m-2}^{[m]}$ — that there exists at most one solution $V_i^{[j]}$, $i\ge 1$, $j\ge i+2$, of the system. It remains to show that there exists at most one $V_0$ which satisfies both the condition (f) and the equation (\[sys1\]). If $V_0'$ is another solution, then a simple calculation shows that $$d(V^{-1}_0V_0' )=0.$$ By condition (f), the matrix $V^{-1}_0V_0'$ has the diagonal $I$ and any entry of it which is not on the diagonal is a polynomial with no degree zero term in $e^{t_1}, \ldots, e^{t_l}$. So $V^{-1}_0V_0'=I$, which means $V_0=V_0'$. Now we can prove our main result: [*Proof of Theorem \[main\]*]{} Let $\star$ be a product with the properties stated in Theorem \[main\]. Consider the isomorphism of $\D$-modules $$\phi:\D/\I\to H^*(G/B)\otimes \bR[\{q_i\},\h]$$ given by Proposition \[isomorphism\]. The basis $\{[c_w]:w\in W\}$ of the right hand side induces the basis $\{[\bar{C}_w]=\phi^{-1}([c_w]): w\in W\}$ of $\D/\I$ over $\bR[\{Q_i\}, h]$. It is obvious that the latter satisfies the hypotheses (i) and (iii) of Proposition \[unique\]. We show that it also satisfies (ii) by using the argument already employed in the first part of the proof of Proposition \[unique\]. Now from Proposition \[unique\], we deduce that $$[\bar{C}_w]=[\hat{C}_w],$$ for $w\in W$, where the basis $\{[\hat{C}_w]:w\in W\}$ is induced by the actual quantum product $\circ$. Now, since $\phi$ is an isomorphism of $\D$-modules, $\phi([\bar{C}_w])=[c_w]$ and $\phi(P_i)=[\lambda_i]$, we deduce that the matrix of $[\lambda_i]\star$ with respect to the basis $\{[c_w]:w\in W\}$ is the same as the matrix of $P_i$ with respect to the basis $\{[\bar{C}_w]:w\in W\}$. Consequently we have $$[\lambda_i]\star a=[\lambda_i] \circ a,$$ for all $a\in H^*(G/B) \otimes \bR[q_1,\ldots, q_l].$ Hence the products $\star$ and $\circ$ are the same. 
Quantization map for $Fl_n$ {#last}
===========================

In the case $G=SL(n,\bC)$, the resulting flag manifold is $Fl_n$, which is the space of all complete flags in $\bC^n$. Borel’s presentation (see eq. (\[borel\])) in this case reads $$H^*(Fl_n) = \bR[\lambda_1,\ldots, \lambda_{n-1}]/(I_n)_{\ge 2},$$ where $(I_n)_{\ge 2}$ denotes the ideal generated by the nonconstant symmetric polynomials of degree at least 2 in the variables $$x_1:=-\lambda_1, x_2:=\lambda_1-\lambda_2,\ldots, x_{n-1}:=\lambda_{n-2}-\lambda_{n-1}, x_n:=\lambda_{n-1}.$$ Equivalently, we have $$H^*(Fl_n)=\bR[x_1,\ldots, x_n]/I_n$$ where $I_n$ denotes the ideal generated by the nonconstant symmetric polynomials of degree at least 1 in the variables $x_1, \ldots, x_n.$

For any $k\in \{0,1,\ldots, n\}$ we consider the polynomials $e^k_0,\ldots, e^k_k$ in the variables $x_1,\ldots, x_k$, which can be described by $$\det \left[ \left(\begin{array}{ccccccc} x_1 & 0 & \ldots & 0\\ 0 & x_2 & \ldots & 0 \\ \ldots & \ldots & \ldots & \ldots \\ 0 & \ldots & 0 & x_{k} \\ \end{array}\right) +\mu I_k \right] = \sum_{i=0}^k e_i^k\mu^{k-i} .$$ For $i_1,\ldots, i_{n-1}\in \bZ$ such that $0\le i_j \le j$, we define $$e_{i_1 \ldots i_{n-1}}=e_{i_1}^1\ldots e_{i_{n-1}}^{n-1}.$$ These are called the [*standard elementary monomials.*]{} It is known (see e.g. \[Fo-Ge-Po, Proposition 3.4\]) that the set $\{[e_{i_1 \ldots i_{n-1}}] \ : \ 0\le i_j \le j\}$ is a basis of $H^*(Fl_n)$.

We also consider the polynomials[^4] $\hat{e}^k_0,\ldots, \hat{e}^k_k$ in the variables $x_1,\ldots, x_k, q_1, \ldots, q_{k-1}$, which are described by $$\det \left[ \left(\begin{array}{ccccccc} x_1 & q_1 & 0 & \ldots & 0\\ -1 & x_2 & q_2 &\ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ 0 &\ldots & -1 & x_{k-1} & q_{k-1} \\ 0 & \ldots & 0 & -1 & x_{k} \\ \end{array}\right) +\mu I_k \right] = \sum_{i=0}^k \hat{e}_i^k\mu^{k-i} .$$ For $i_1,\ldots, i_{n-1}$ such that $0\le i_j \le j$, we define the [*quantum standard elementary monomials*]{} $$\hat{e}_{i_1 \ldots i_{n-1}}=\hat{e}_{i_1}^1\ldots \hat{e}_{i_{n-1}}^{n-1}.$$ By a theorem of Ciocan-Fontanine \[Ci\] (in fact Kim’s theorem for $G=SL(n,\bC)$, see section 1), we have the following isomorphism of $\bR[q_1,\ldots, q_{n-1}]$-algebras $$\label{quantizationmap}(H^*(Fl_n)\otimes\bR[q_1,\ldots, q_{n-1}],\circ)\simeq QH^*(Fl_n) := \bR[x_1,\ldots, x_n,q_1,\ldots, q_{n-1}]/\langle \hat{e}_1^n,\ldots ,\hat{e}_{n}^{n}\rangle,$$ which is canonical, in the sense that $[x_i]$ is mapped to $[x_i]_q$. According to \[Fo-Ge-Po\], we will call this the [*quantization map*]{}. Since the conditions (\[deg\]) and (\[hat\]) are satisfied, we deduce that $\{[\hat{e}_{i_1 \ldots i_{n-1}}]_q \ : \ 0\le i_j \le j\}$ is a basis of $QH^*(Fl_n)$ over $\bR[q_1,\ldots, q_{n-1}]$. We also point out the obvious fact that $\{[{e}_{i_1 \ldots i_{n-1}}] \ : \ 0\le i_j \le j\}$ is a basis of $H^*(Fl_n) \otimes \bR[q_1,\ldots, q_{n-1}]$ over $\bR[q_1,\ldots, q_{n-1}]$.

The goal of this section is to give a different proof of the following theorem of Fomin, Gelfand, and Postnikov.

\[main3\][(see \[Fo-Ge-Po, Theorem 1.1\])]{}. The quantization map described by equation (\[quantizationmap\]) sends $[e_{i_1\ldots i_{n-1}}]$ to $[\hat{e}_{i_1\ldots i_{n-1}}]_q.$

The main instrument of our proof is the $\D$-module $\D/\I$ defined in section 2. In this case (i.e. $G=SL(n,\bC)$) we can describe it explicitly, as follows: $\D$ is the (noncommutative) Heisenberg algebra defined at the beginning of section 2, where $l=n-1$.
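Before passing to the noncommutative generators of $\I$, here is a small symbolic sketch (in `sympy`, purely illustrative and not part of the proof) expanding the determinant that defines the quantum elementary polynomials $\hat{e}_i^k$ above, for $k=3$:

```python
# Expand det(M + mu*I) for k = 3 and read off the quantum elementary
# polynomials \hat{e}_i^3 as the coefficients of mu^{3-i}.
import sympy as sp

mu, q1, q2 = sp.symbols('mu q1 q2')
x1, x2, x3 = sp.symbols('x1 x2 x3')

M = sp.Matrix([[x1, q1, 0],
               [-1, x2, q2],
               [0, -1, x3]])
charpoly = sp.expand((M + mu * sp.eye(3)).det())

e_hat = [charpoly.coeff(mu, 3 - i) for i in range(4)]
for i, e in enumerate(e_hat):
    print(f"e_hat_{i}^3 =", sp.expand(e))
# Expected (up to term ordering):
#   e_hat_0^3 = 1
#   e_hat_1^3 = x1 + x2 + x3
#   e_hat_2^3 = x1*x2 + x1*x3 + x2*x3 + q1 + q2
#   e_hat_3^3 = x1*x2*x3 + q1*x3 + q2*x1
```

Setting $q_1=q_2=0$ recovers the ordinary elementary symmetric polynomials $e_i^3$, in line with the conditions (\[deg\]) and (\[hat\]).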
The left ideal $\I$ of $\D$ is generated by $\E^n_1,\ldots, \E^n_{n-1}$, where $$\det \left[ \left(\begin{array}{ccccccc} -P_1 & Q_1 & 0 & \ldots & 0\\ -1 & P_1 -P_2 & Q_2 &\ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ 0 &\ldots & -1 & P_{n-2} -P_{n-1} & Q_{n-1} \\ 0 & \ldots & 0 & -1 & P_{n-1} \\ \end{array}\right) +\mu I_n \right] = \sum_{i=0}^n \E_i^n\mu^{n-i} .$$ In fact we will need more general elements of $\D$, namely, for each $k\in \{1,\ldots, n-1\}$, we consider the elements $\E_i^k$ of $\D$, with $0\le i\le k$, given by $$\det \left[ \left(\begin{array}{ccccccc} -P_1 & Q_1 & 0 & \ldots & 0\\ -1 & P_1 -P_2 & Q_2 &\ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ 0 &\ldots & -1 & P_{k-2} -P_{k-1} & Q_{k-1} \\ 0 &\ldots & 0 & -1 & P_{k-1} - P_k \\ \end{array}\right) +\mu I_k \right] = \sum_{i=0}^k \E_i^k\mu^{k-i}.$$ One can easily see that when we expand the determinant in the left hand side of the last equation, we will have no occurrence of $P_jQ_j$ or $Q_jP_j$, $1\le j\le k-1$. This means that the lack of commutativity of $Q_j$ and $P_j$ creates no ambiguity in the definition of $\E^n_1,\ldots, \E^n_{n-1}$. We can also deduce that each of $\E^k_1,\ldots, \E^k_{k}$ is a linear combination of monomials in the variables $\{P_1,\ldots, P_{k}, Q_1,\ldots, Q_{k-1}\}$, with no ocurrence of $P_jQ_j$ or $Q_jP_j$ (i.e. the order of factors in each monomial is not important). As a consequence, the following recurrence formula \[Fo-Ge-Po, equation (3.5)\] still holds: $$\label{fgp}\E_{i}^k = \E_i^{k-1} + X_k \E_{i-1}^{k-1}+Q_{k-1}\E_{i-2}^{k-2},$$ where $X_k$ stands for $P_{k-1}-P_k$ and, by convention, $\E_j^k=0$, unless $0\le j \le k$. It is worth mentioning the following commutation relations, which will be used later: $$\label{commutation}[X_k, \E_j^{l}]=0,\quad [Q_k, \E_j^{l}]=0,$$ whenever $l\le k-1$. We also note that $\E_0^k=1$ and $\E_1^k = -P_k$ (where $P_n$ is by convention equal to 0). We will prove the following result. The elements $\E^k_1,\ldots, \E^k_{k-1}$ of $\D$ commute with each other. Consider the coordinates $s_0,\ldots, s_{k-1}$ on $\bR^k$. Following \[Kim-Joe\], we consider the differential operators $D_j(\h\frac{\partial }{\partial s_0}, ,\ldots,\h\frac{\partial }{\partial s_{k-1}}, e^{s_1-s_0}, \ldots, e^{s_{k-1}-s_{k-2}})$ given by $$\det \left[ \left(\begin{array}{ccccccc} \h \frac{\partial }{\partial s_0} & e^{s_1-s_0} & 0 & \ldots & 0\\ -1 & \h \frac{\partial }{\partial s_1} & e^{s_2-s_1} &\ldots & 0 \\ \ldots & \ldots & \ldots & \ldots & \ldots \\ 0 &\ldots & -1 & \h \frac{\partial }{\partial s_{k-2}} & e^{s_{k-1}-s_{k-2}} \\ 0 &\ldots & 0 & -1 & \h \frac{\partial }{\partial s_{k-1}} \\ \end{array}\right) +\mu I_k \right] = \sum_{i=0}^k D_i^k\mu^{k-i}.$$ By \[Kim-Joe, Proposition 1\], we have $[D_i^k, D_j^k]=0$ for all $0\le i,j\le k$. In order to prove our lemma, it is sufficient to note that if we make the change of coordinates $$s_1-s_0=t_1, \ldots , s_{k-1}-s_{k-2} = t_{k-1}, -s_{k-1}=t_k,$$ we obtain $$\h \frac{\partial }{\partial s_0} = - \h \frac{\partial }{\partial t_1} =-P_1, \h \frac{\partial }{\partial s_1} = \h \frac{\partial }{\partial t_1} - \h \frac{\partial }{\partial t_2} =P_1-P_2, \ldots, \h \frac{\partial }{\partial s_{k-1}} = \h \frac{\partial }{\partial t_{k-1}} - \h \frac{\partial }{\partial t_{k}} =P_{k-1}-P_{k},$$ where we have used the presentation of $\D$ given by $P_i=\h \frac{\partial }{\partial t_{i}}, Q_i=e^{t_i}$, $1\le i \le n-1$. The following technical result will be needed later. 
\[commq\] We have $$\label{firste} [\E_{j+1}^{k+1}, \E_i^k] = [\E_{i+1}^{k+1}, \E_j^k].$$ We prove this by induction on $k \ge 0$. For $k=0$, the equation is obvious (by the convention made above, we have $\E_0^j=0$). It follows the induction step. We use the recurrence formula (\[fgp\]). This gives $$[ \E_{j+1}^{k+1}, \E_i^k] = [ \E_{j+1}^{k} + X_{k+1} \E_{j}^{k}+Q_{k} \E_{j-1}^{k-1}, \E_i^k]= [Q_{k} \E_{j-1}^{k-1}, \E_i^k].$$ We continue by using again equation (\[fgp\]) and obtain $$\begin{aligned} {} & [Q_{k} \E_{j-1}^{k-1}, \E_i^{k-1} + X_k \E_{i-1}^{k-1}+Q_{k-1}\E_{i-2}^{k-2}] \\{} & = [Q_k, X_k] \E_{i-1}^{k-1}\E_{j-1}^{k-1} + [Q_k\E_{j-1}^{k-1}, Q_{k-1} \E_{i-2}^{k-2}] \\{} & = [Q_k, X_k] \E_{i-1}^{k-1}\E_{j-1}^{k-1} + Q_k[\E_{j-1}^{k-1}, Q_{k-1} \E_{i-2}^{k-2}] \\ {} & = [Q_k, X_k] \E_{i-1}^{k-1}\E_{j-1}^{k-1} + Q_k[\E_{j-1}^{k-1},\E_{i}^k - \E_i^{k-1} - X_k \E_{i-1}^{k-1}] \\{} & =[Q_k, X_k] \E_{i-1}^{k-1}\E_{j-1}^{k-1} +Q_k([\E_{j-1}^{k-1}, \E_{i}^k] - [\E_{j-1}^{k-1}, X_k \E_{i-1}^{k-1}])\\ {} & = [Q_k, X_k] \E_{i-1}^{k-1}\E_{j-1}^{k-1} +Q_k[\E_{j-1}^{k-1}, \E_{i}^k]\end{aligned}$$ Here we have used the commutation relations (\[commutation\]) several times. Similarly, we obtain $$[\E_{i+1}^{k+1}, \E_j^k] = [Q_k, X_k] \E_{j-1}^{k-1}\E_{i-1}^{k-1} +Q_k [\E_{i-1}^{k-1}, \E_{j}^{k}].$$ We use the induction hypothesis to finish the proof. Now we are ready to prove Theorem \[main3\]. [*Proof of Theorem \[main3\].*]{} Let $\omega_k$ denote the matrix of multiplication by $[y_k]_q$ with respect to the basis $\{[\hat{e}_{i_1 \ldots i_{n-1}}]_q \ : \ 0\le i_j \le j\}$ of $QH^*(Fl_n)$ (see equation (\[quantizationmap\])). More precisely, the entries of $\omega_i$ are polynomials in $q_1,\ldots, q_{n-1}$, determined by $$\label{coefficients} [y_k]_q [ \hat{e}_{i_1 \ldots i_{n-1}}]_q = \sum_{l_1,\ldots, l_{n-1}} \omega_k^{i_1 \ldots i_{n-1}, l_1 \ldots l_{n-1}}[\hat{e}_{l_1 \ldots l_{n-1}}]_q.$$ According to Corollary \[proper\], it is sufficient to show that $$\label{partiali}\frac{\partial}{\partial t_i}\omega_j =\frac{\partial}{\partial t_j}\omega_i,$$ for $1\le i,j\le n-1$, where as usually, we use the convention $q_i=e^{t_i}$. For $i_1,\ldots, i_{n-1}$ such that $0\le i_j \le j$, we consider $$\E_{i_1 \ldots i_{n-1}}:=\E_{i_1}^1 \E_{i_2}^2 \ldots \E_{i_{n-1}}^{n-1}.$$ In order to prove equation (\[partiali\]), it is sufficient to prove the following claim. [*Claim.*]{} In $\D/\I$ we have $$\label{claim3}[P_k] [ \E_{i_1 \ldots i_{n-1}}] =\sum_{l_1,\ldots, l_{n-1}} \Omega_k^{i_1 \ldots i_{n-1}, l_1 \ldots l_{n-1}}[\E_{l_1 \ldots l_{n-1}}],$$ where each $\Omega_k^{i_1 \ldots i_{n-1}, l_1 \ldots l_{n-1}}$ is obtained from $\omega_k^{i_1 \ldots i_{n-1}, l_1 \ldots l_{n-1}}$ by the modification $Q_i\mapsto q_i$. Indeed, if we make the usual identifications $P_k=\h\frac{\partial}{\partial t_k}$, $Q_k=e^{t_k}$, $1\le k\le n-1$, then (\[claim3\]) implies that the connection $$d+ \sum_{k=1}^{n-1}\frac{1}{\h}\Omega_kdt_k$$ is flat (see e.g. \[Gu, Proposition 1.1\]) for all values of $\h$, which implies (\[partiali\]). The proof of the claim relies on a noncommutative version of the quantum straightening algorithm of Fomin, Gelfand, and Postnikov \[Fo-Ge-Po\]. The key equation is the following. $$\label{straightening}\E_i^k\E_{j+1}^{k+1} + \E_{i+1}^k \E_j^k +Q_k\E_{i-1}^{k-1}\E_j^k = \E_j^k \E^{k+1}_{i+1} +\E_{j+1}^k\E_i^k +Q_k\E_{j-1}^{k-1}\E_i^k.$$ We note that this is the same as equation (3.6) in \[Fo-Ge-Po\]. 
The difference is that here we work in the algebra $\D$, which is not commutative, so it is not [*a priori*]{} clear that (\[straightening\]) still holds. In order to prove it, we use equation (\[fgp\]) twice and obtain: $$(\E_{j+1}^{k+1}-\E_{j+1}^k)\E_i^k = (X_{k+1}\E_j^k +Q_k\E_{j-1}^{k-1})\E_i^k,$$ and $$(\E_{i+1}^{k+1}-\E_{i+1}^k)\E_j^k = (X_{k+1}\E_i^k +Q_k\E_{i-1}^{k-1})\E_j^k.$$ If we subtract the first equation from the second one, we obtain: $$\E_{i+1}^{k+1}\E_j^k - \E_{j+1}^{k+1}\E_i^k= \E_{i+1}^k\E_j^k - \E_{j+1}^k\E_i^k +Q_k(\E_{i-1}^{k-1}\E_j^k - \E_{j-1}^{k-1}\E_i^k).$$ Now the left hand side can be written as $$\E_j^k\E_{i+1}^{k+1} - \E_i^k\E_{j+1}^{k+1} + [\E_{i+1}^{k+1},\E_j^k] - [\E_{j+1}^{k+1},\E_i^k] = \E_j^k\E_{i+1}^{k+1} - \E_i^k\E_{j+1}^{k+1} ,$$ where we have used Lemma \[commq\]. Equation (\[straightening\]) has been proved. Now we can use it exactly as in the commutative situation, described in \[Fo-Ge-Po\], in order to obtain the expansion of the product of $P_k= -\E_1^k$ and $\E_{i_1 \ldots i_{n-1}} = \E_{i_1}^1 \ldots \E_{i_{n-1}}^{n-1}$. More precisely, we begin with $$P_k \E_{i_1 \ldots i_{n-1}} = \E_{i_1}^1 \ldots \E_{i_{k-1}}^{k-1} P_k \E_{i_k}^k \E_{i_{k+1}}^{k+1} \ldots \E_{i_{n-1}}^{n-1} = -\E_{i_1}^1 \ldots \E_{i_{k-1}}^{k-1} \E_1^k \E_{i_k}^k \E_{i_{k+1}}^{k+1} \ldots \E_{i_{n-1}}^{n-1} ,$$ and then we use (\[straightening\]) repeatedly. The resulting coefficients in the final expansion will be the same as in the commutative situation. This finishes the proof of the claim, and also of Theorem \[main3\]. $\square$
A. Amarzaya and M. A. Guest, [*Gromov-Witten invariants of flag manifolds, via D-modules*]{}, Jour. London Math. Soc. (2) [**72**]{} (2005), 121–136
I. N. Bernstein, I. M. Gelfand and S. I. Gelfand, [*Schubert cells and cohomology of the space $G/P$*]{}, Russian Math. Surveys [**28**]{} (1973), 1–26
A. Borel, [*Sur la cohomologie des espaces fibrés principaux et des espaces homogènes des groupes de Lie compacts*]{}, Ann. of Math. [**57**]{} No. 2 (1953), 115–207
I. Ciocan-Fontanine, [*The quantum cohomology ring of flag varieties*]{}, Trans. Amer. Math. Soc. [**351**]{} (1999), no. 7, 2695–2729
B. Dubrovin, [*The geometry of 2D topological field theories*]{}, Integrable Systems and Quantum Groups, Lecture Notes in Mathematics, Vol. 1620, Springer-Verlag, New York, 1996, 120–348
S. Fomin, S. Gelfand, and A. Postnikov, [*Quantum Schubert polynomials*]{}, J. Amer. Math. Soc. [**10**]{} (1997), 565–596
W. Fulton and R. Pandharipande, [*Notes on stable maps and quantum cohomology*]{}, Algebraic geometry—Santa Cruz 1995, Proc. Sympos. Pure Math., 62, Part 2, editors J. Kollar, R. Lazarsfeld and D.R. Morrison, 1997, 45–96
W. Fulton and C. Woodward, [*On the quantum product of Schubert classes*]{}, J. Algebraic Geom. [**13**]{} (2004), 641–661
M. A. Guest, [*Quantum cohomology via D-modules*]{}, Topology [**44**]{} (2005), 263–281
H. Iritani, [*Quantum D-module and equivariant Floer theory for free loop spaces*]{}, preprint [math.DG/0410487]{}
B. Kim, [*Quantum cohomology of flag manifolds $G/B$ and quantum Toda lattices*]{}, Ann. of Math. [**149**]{} (1999), 129–148
B. Kim and D. Joe, [*Equivariant mirrors and the Virasoro conjecture for flag manifolds*]{}, Int. Math. Res. Not. [**15**]{} (2003), 859–882
A.-L. Mare, [*On the theorem of Kim concerning $QH^*(G/B)$*]{}, Integrable systems, topology and physics, editors M. Guest, R. Miyaoka and Y. Ohnita, Contemp. Math. 309, Amer. Math. Soc. (2002), 151–163
A.-L. Mare, [*Polynomial representatives of Schubert classes in $QH^*(G/B)$*]{}, Math. Res. Lett. [**9**]{} (2002), 757–770
A.-L. Mare, [*Relations in the quantum cohomology ring of $G/B$*]{}, Math. Res. Lett. [**11**]{} (2004), 35–48
A.-L. Mare, [*The combinatorial quantum cohomology ring of $G/B$*]{}, Jour. Alg. Comb. [**21**]{} (2005), 331–349
D. Peterson, [*Lectures on quantum cohomology of $G/P$*]{}, M.I.T. 1996
B. Siebert and G. Tian, [*On quantum cohomology rings of Fano manifolds and a formula of Vafa and Intriligator*]{}, Asian J. Math. [**1**]{} (1997), 679–695
[^1]: More precisely, a family of connections depending on the parameter $\h\in \bR\setminus \{0\}$.
[^2]: These are solutions of the classical Giambelli problem for $G/B$. Such polynomials have been constructed for instance by Bernstein, I. M. Gelfand and S. I. Gelfand in \[Be-Ge-Ge\].
[^3]: This is what Amarzaya and Guest \[Am-Gu\] call a “quantum evaluation map”.
[^4]: These are the polynomials $E_i^k$ of \[Fo-Ge-Po\].
--- abstract: 'This paper proposes a semi-structural approach to verify the *nonblockingness* of a Petri net. We provide an algorithm to construct a novel structure, called *minimax basis reachability graph* (minimax-BRG): it provides an abstract description of the reachability set of a net while preserving all information needed to test if the net is *blocking*. We prove that a bounded deadlock-free Petri net is *nonblocking* if and only if its minimax-BRG is *unobstructed*, which can be verified by solving a set of *integer linear programming problems* (ILPPs). For Petri nets that are not deadlock-free, one needs to determine the set of deadlock markings. This can be done with an efficient approach based on the computation of *maximal implicit firing sequences* enabled by the markings in the minimax-BRG. The approach we developed does not require exhaustive exploration of the state space and therefore achieves significant practical efficiency, as shown by means of numerical simulations.' address: - 'SEME, Xidian University, Xi’an, China' - 'ISE, Macau University of Science and Technology, Taipa, Macau' - 'DIEE, University of Cagliari, Cagliari, Italy' author: - Chao Gu - Ziyue Ma - Zhiwu Li - Alessandro Giua bibliography: - 'automatica2019.bib' title: 'Verification of Nonblockingness in Bounded Petri Nets: A Novel Semi-Structural Approach' --- , , , Petri net, basis reachability graph, nonblockingness. Introduction {#Section1} ============ As discrete event models, Petri nets are commonly used in the framework of *supervisory control theory* (SCT) [@RW; @RW2; @SL]. From the point of view of computational efficiency, Petri nets have several advantages over simpler models such as automata [@c1; @c27; @Murata]: since states in Petri nets are not explicitly represented in the model in many cases, and structural analysis and linear algebraic approaches can be used without exhaustively enumerating the state space of a system. A suite of supervisory control approaches in discrete event systems focuses on an essential property, namely *nonblockingness* [@RW; @RW2; @NBLKS; @c11]. As defined in [@RW], nonblockingness is a property prescribing that all reachable states should be *co-reachable* to a set of *final states* representing the completions of pre-specified tasks. Consequently, to verify and ensure the nonblockingness of a system is a problem of primary importance in many applications and should be addressed with state-of-the-art techniques. The verification of nonblockingness in automata can be solved in a relatively straightforward manner. The authors in [@clin] address several sufficient conditions for nonblockingness verification. However, they are not very suitable for systems that contain complex feedback paths; in [@leduc2000hierarchical; @leduc2005hierarchical], a method called *hierarchical interface-based supervisory control*, i.e., to break up a plant into two subsystems and restrict the interaction between them, is developed to verify if a system is nonblocking; based on the *state tree structure*, the work in [@c24] studies an efficient algorithm for nonblocking supervisory control design in reasonable time and memory cost. In Petri net models, the works in [@c1; @c11] study the nonblockingness verification and enforcement from the aspect of *Petri net languages*. 
However, these methods rely on the construction and analysis of the *reachability graph*, which is practically inefficient; based on the concept of the *theory of regions* [@uzam2002optimal], a compact maximally permissive controller is investigated in [@ghaffari2003design] to ensure the nonblockingness of a system. However, it still requires an exhaustive enumeration of the state space; for a class of Petri nets called *G-systems*, the work in [@zhao2013iterative] reports a deadlock prevention policy that can usually lead to a nonblocking supervisor with high computational efficiency but cannot guarantee maximally permissive behavior. As is known, the difficulty of enforcing nonblockingness lies in the fact that the optimal nonblocking supervisory control problem is *NP-hard* [@gohari2000complexity]. Moreover, the problem of efficiently verifying nonblockingness of a Petri net without constructing its reachability graph remains open to date. Motivated by this, in this paper we aim to develop a computationally efficient method for nonblockingness verification in Petri nets. A state-space abstraction technique in Petri nets, called the *basis reachability graph* (BRG) approach, was recently proposed in [@c8; @Basis]. In these approaches, only a subset of the reachable markings, called *basis markings*, is enumerated. This method can be used to solve *marking reachability* [@c3], *diagnosis* [@c8; @Basis] and *opacity* problems [@c10] efficiently. Thanks to the BRG, the state explosion problem can be mitigated and the related control problems can be solved efficiently. The BRG-based methods are *semi-structural* since only basis markings are explicitly enumerated in the BRG while all other reachable markings are abstracted by linear algebraic equations. On the other hand, in our previous work [@Gu] we show that the standard BRG-based approach may not be directly used for the nonblockingness verification due to the possible presence of *livelocks* and *deadlocks*. In particular, livelocks describe an undesirable non-dead strongly ergodic behavior such that the system continuously evolves without ever reaching its pre-specified task. Thus, a Petri net is blocking if a livelock that contains no final markings is reachable. However, the set of markings that form a livelock is usually hard to characterize and is not encoded in the classical BRG of the system. In our preliminary work in [@Gu], we proved that for a deadlock-free Petri net, nonblockingness verification can be done by constructing a structure, namely the *expanded-BRG*, and checking the nonblockingness of each node it contains. However, the efficiency of this approach needs to be further improved. For a system that is not deadlock-free, a dead marking in the state space characterizes a terminal node, from which the system cannot further advance [@c18; @c20]. If there exists a dead marking that is not final (referred to as a non-final deadlock), the system is verified to be blocking. Inspired by the classical BRG-based methodology, in this paper, we develop a novel semi-structural approach to verify the nonblockingness of a Petri net. The contribution of this paper consists of three aspects: - First, we define a new structure called a *minimax basis reachability graph* (minimax-BRG) and introduce a property called *unobstructiveness*. In plain words, a minimax-BRG is *unobstructed* if all nodes it contains are nonblocking.
Analogously to the BRG, the advantage of this method is that only part of the state space, namely the *minimax basis markings*, is constructed, while all other markings can be characterized as the integer solutions of a linear constraint set. - Second, owing to properties of the minimax-BRG, when a plant net is known to be deadlock-free, we propose a sufficient and necessary condition for nonblockingness verification, that is, a deadlock-free Petri net is nonblocking if and only if its minimax-BRG is unobstructed. - Finally, we provide for acyclic nets a characterization of deadlock. This allows us to address the problem of deadlock analysis with the same technique we use to compute the minimax-BRG. Specifically, for a system that may contain deadlocks, the set of non-final dead markings can be computed and analyzed based on the markings in the corresponding minimax-BRG. Hence, the nonblockingness verification of nets that are not deadlock-free can be done by first determining the non-final deadlocks and then checking the unobstructiveness of the minimax-BRG. The approach we developed does not require exhaustive exploration of the state space and therefore achieves significant practical efficiency. The rest of the paper is organized as follows. Section \[sec3/2\] recalls some basic concepts and formalisms used in the paper. Section \[sec2\] dissects the nonblockingness verification problem. Section \[NewSection\] develops a novel structure named the minimax-BRG and exposes a sufficient and necessary condition for nonblockingness verification of a deadlock-free system. In Section \[secVD\], we generalize the above results to systems that are not deadlock-free. Numerical analyses are given in Section \[sec2.3\]. The last section draws conclusions and discusses future work. Preliminaries {#sec3/2} ============= In this section, we recall the main notions related to automata [@RW3], Petri nets [@Murata], and basis markings [@c3; @c8; @Basis] used in the paper. Automata -------- An automaton is a five-tuple $A=(X,\Sigma,\eta,x_0,X_m)$, where $X$ is a set of *states*, $\Sigma$ is an alphabet of *events*, $\eta : X \times \Sigma\rightarrow X$ is a *state transition function*, $x_0\in X$ is an *initial state* and $X_m\subseteq X$ is a set of *final states* (also called *marker states* in [@RW]). $\eta$ can be extended to a function $\eta : X \times \Sigma^{*}\rightarrow X$. A state $x\in X$ is *reachable* if $x = \eta(x_0, s)$ for some $s\in \Sigma^*;$ it is *co-reachable* if there exists $s^{\prime}\in \Sigma^*$ such that $\eta(x, s^{\prime})\in X_m$. An automaton is said to be *nonblocking* if any reachable state is co-reachable. Petri Nets {#PNbasic} ---------- A Petri net is a four-tuple $N=(P,T,Pre,Post)$, where $P$ is a set of $m$ *places* (graphically represented by circles) and $T$ is a set of $n$ *transitions* (graphically represented by bars). $Pre: P\times T\rightarrow \mathbb{N}$ and $Post: P\times T\rightarrow \mathbb{N}$ ($\mathbb{N}=\{0, 1, 2, \cdots\}$) are the *pre*- and *post*- *incidence functions* that specify the *arcs* directed from places to transitions, and vice versa in the net, respectively. The *incidence matrix* of $N$ is defined by $C=Post-Pre$. A Petri net is *acyclic* if there are no oriented cycles in its structure.
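For readers who wish to experiment with these notions, the short sketch below (ours, not part of the paper) stores a net by its $Pre$/$Post$ matrices, forms the incidence matrix $C=Post-Pre$, and tests whether the oriented place/transition graph contains a cycle; all function names and the two-place toy net are our own choices.

```python
# Minimal sketch (not from the paper): a Petri net as Pre/Post matrices
# (places x transitions), its incidence matrix, and an acyclicity test.
import numpy as np

def incidence(Pre, Post):
    """Incidence matrix C = Post - Pre."""
    return Post - Pre

def is_acyclic(Pre, Post):
    """True iff the oriented place/transition graph has no cycle (DFS colouring)."""
    m, n = Pre.shape
    succ = {v: set() for v in range(m + n)}   # 0..m-1 places, m..m+n-1 transitions
    for p in range(m):
        for t in range(n):
            if Pre[p, t] > 0:
                succ[p].add(m + t)             # arc p -> t
            if Post[p, t] > 0:
                succ[m + t].add(p)             # arc t -> p
    WHITE, GREY, BLACK = 0, 1, 2
    color = {v: WHITE for v in succ}

    def dfs(v):
        color[v] = GREY
        for w in succ[v]:
            if color[w] == GREY:               # back edge: oriented cycle
                return False
            if color[w] == WHITE and not dfs(w):
                return False
        color[v] = BLACK
        return True

    return all(dfs(v) for v in succ if color[v] == WHITE)

# toy example (ours): t1 moves a token from p1 to p2, t2 moves it back
Pre  = np.array([[1, 0], [0, 1]])
Post = np.array([[0, 1], [1, 0]])
print(incidence(Pre, Post))
print(is_acyclic(Pre, Post))   # False: p1 -> t1 -> p2 -> t2 -> p1 is a cycle
```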
Given a Petri net $N=(P,T,Pre,Post)$ and a set of transitions $T_x\subseteq T$, the *$T_x$-induced sub-net* of $N$ is the net obtained by removing all transitions in $T\setminus T_x$ and the corresponding arcs from $N$, denoted as $N_x=(P,T_x,Pre_x,Post_x)$ where $T_x\subseteq T$ and $Pre_x$ ($Post_x$) is the restriction of $Pre$ ($Post$) to $P$ and $T_x$. The incidence matrix of $N_x$ is denoted by $C_x = Post_x-Pre_x$. A *marking* $M$ of a Petri net $N$ is a mapping $M: P\to\mathbb{N}$ that assigns to each place of a Petri net a non-negative integer number of *tokens*. The number of tokens in a place $p$ at a marking $M$ is denoted by $M(p)$. A Petri net $N$ with an initial marking $M_0$ is called a *net system*, denoted by $\langle N, M_0\rangle$. For a place $p\in P$, the *set of its input transitions* is defined by $^{\bullet}p=\{t\in T\mid Post(p,t)>0\}$ and the *set of its output transitions* is defined by $p^{\bullet}=\{t\in T\mid Pre(p,t)>0\}$. The notions for $^{\bullet}t$ and $t^{\bullet}$ are analogously defined. A transition $t\in T$ is *enabled* at a marking $M$ if $M\geq Pre(\cdot, t)$, denoted by $M[t\rangle$. If $t$ is enabled at $M$, the *firing* of $t$ yields marking $M^{\prime}=M+C(\cdot, t)$, which is denoted as $M[t\rangle M^{\prime}$. A marking $M$ is *dead* if for all $t\in T$, $M\ngeqslant Pre(\cdot, t)$. Marking $M^{\prime}$ is *reachable* from $M_{1}$ if there exist a feasible firing sequence of transitions $\sigma=t_{1}t_{2}\cdots t_{n}$ and markings $M_{2},\cdots, M_{n}$ such that $M_{1}[t_{1}\rangle M_{2}[t_{2}\rangle\cdots M_{n}[t_{n}\rangle M^{\prime}$ holds. Given a transition sequence $\sigma\in T^{*}$, $\varphi: T^{*}\rightarrow \mathbb{N}^{n}$ is a function that associates to $\sigma$ a vector $\textbf{y}=\varphi(\sigma)\in \mathbb{N}^{n}$, called the *firing vector* of $\sigma$. Let $\varphi^{-1}: \mathbb{N}^{n}\rightarrow T^{*}$ be the inverse function of $\varphi$, namely for $\textbf{y}\in \mathbb{N}^{n}$, $\varphi^{-1}(\textbf{y}):=\{\sigma\in T^{*}| \varphi(\sigma)=\textbf{y}\}$. The set of markings reachable from $M_{0}$ is called the *reachability set* of $\langle N, M_0\rangle$, denoted by $R(N, M_{0})$. A net system $\langle N, M_0\rangle$ is said to be *bounded* if there exists an integer $k\in \mathbb{N}$ such that for all $M\in R(N, M_0)$ and for all $p\in P$, $M(p)\leq k$ holds. The following well-known result shows that in acyclic nets, reachability can be characterized (necessary and sufficient condition) in simpler algebraic terms. [[@c8; @Murata]]{}\[ProX\] [Given a net system $\langle N, M_0\rangle$ where $N$ is acyclic, $M\in R(N, M_0)$, $M^{\prime}\in R(N, M_0)$ and a firing vector $\textbf{y}\in \mathbb{N}^n$, the following holds: $$M^{\prime}=M+C\cdot \textbf{y}\geq \textbf{0}\Leftrightarrow (\exists \sigma\in \varphi^{-1}(\textbf{y}))\ M[\sigma\rangle M^{\prime}.$$]{} $\hfill\blacksquare$ Let $G=(N, M_0, \mathcal{M_F})$ denote a Petri net system with initial marking $M_0$ and a set of final markings $\mathcal{M_F}$. $\mathcal{M_F}$ can be either given by explicitly listing all its members, or characterized by a *generalized mutual exclusion constraint* (GMEC) [@c25]. A GMEC is a pair $(\textbf{w}, k)$, where $\textbf{w}\in \mathbb{N}^m$ and $k\in \mathbb{N}$, that defines a set of markings $\mathcal{L}_{(\textbf{w},k)}=\{M\in\mathbb{N}^m| \textbf{w}^T\cdot M \leq k\}.$ Hereinafter, we adopt the GMEC-based representation to characterize $\mathcal{M_F}$ in $G$, i.e., let $\mathcal{M_F}=\mathcal{L}_{(\textbf{w},k)}$.
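The token-game rules above translate almost literally into code. The following is a minimal sketch (ours), with enabling, firing, a dead-marking test and GMEC membership for $\mathcal{M_F}=\mathcal{L}_{(\textbf{w},k)}$; markings, $Pre$/$Post$ and $\textbf{w}$ are assumed to be NumPy integer arrays.

```python
# Sketch (ours) of the basic token-game primitives defined above.
import numpy as np

def enabled(M, Pre, t):
    """t is enabled at M iff M >= Pre(., t) componentwise."""
    return bool(np.all(M >= Pre[:, t]))

def fire(M, Pre, Post, t):
    """Firing an enabled t yields M' = M + C(., t) with C = Post - Pre."""
    assert enabled(M, Pre, t)
    return M + (Post[:, t] - Pre[:, t])

def is_dead(M, Pre):
    """M is dead iff no transition is enabled at M."""
    return not any(enabled(M, Pre, t) for t in range(Pre.shape[1]))

def is_final(M, w, k):
    """Membership in the GMEC-defined final set: w^T M <= k."""
    return int(np.dot(w, M)) <= k
```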
\[NB\] [A marking $M\in R(N, M_0)$ of a Petri net system $G=(N, M_0, \mathcal{M_F})$ is said to be *blocking* if no final marking is reachable from it, i.e., $R(N, M)\cap \mathcal{M_F} =\emptyset$; otherwise $M$ is said to be *nonblocking*. System $G$ is *nonblocking* if no reachable marking is blocking; otherwise $G$ is *blocking*.$\hfill\blacksquare$]{} Basis Marking and Basis Reachability Graph (BRG) ------------------------------------------------ [Given a Petri net $N = (P, T, Pre, Post)$, transition set $T$ can be partitioned into $T=T_E\cup T_I$, where the disjoint sets $T_E$ and $T_I$ are called the *explicit* transition set and the *implicit* transition set, respectively. A pair $\pi=(T_E, T_I)$ is called a *basis partition* of $T$ if the $T_I$-induced sub-net of $N$ is acyclic. We denote $|T_E|=n_E$ and $|T_I|=n_I$. Let $C_I$ be the incidence matrix of the $T_I$-induced sub-net of $N$. $\hfill\blacksquare$]{} Note that the notion of BRG [@c8; @Basis] is first proposed in the context of events (transitions) classified as being “*observable*” and “*unobservable*”. However, a generalized version of this concept based on “explicit” and “implicit” transitions is presented in [@c3]. Given a Petri net $N = (P, T, Pre, Post)$, a basis partition $\pi=(T_E, T_I)$, a marking $M$, and a transition $t\in T_E$, we define $\Sigma(M, t)=\{\sigma\in T_{I}^{\ast}| M[\sigma\rangle M^{\prime}, M^{\prime}\geq Pre (\cdot, t)\}$ as the set of *explanations* of $t$ at $M$, and we define $Y(M, t)=\{\varphi(\sigma)\in \mathbb{N}^{n_I}| \sigma\in \Sigma(M, t)\}$ as the set of *explanation vectors*; meanwhile we define $\Sigma_{{\rm min}}(M, t)=\{\sigma\in \Sigma(M, t)| \nexists \sigma^{\prime}\in \Sigma(M, t): \varphi(\sigma^{\prime})\lneq \varphi(\sigma)\}$ as the set of *minimal explanations* of $t$ at $M$, and we define $Y_{{\rm min}}(M, t)=\{\varphi(\sigma)\in \mathbb{N}^{n_{I}}| \sigma\in \Sigma_{{\rm min}}(M, t)\}$ as the corresponding set of *minimal explanation vectors*.$\hfill\blacksquare$ \[DefX\] Given a net system $(N, M_0)$ and a basis partition $\pi=(T_E, T_I)$, its *basis marking set* $\mathcal{M_B}$ is defined as follows: - $M_0\in \mathcal{M_B}$; - If $M\in \mathcal{M_B}$, then for all $t\in T_E$, for all $\textbf{y}\in Y_{\rm min}(M, t)$, $M^{\prime}=M+C_I\cdot \textbf{y}+C(\cdot, t)\Rightarrow M^{\prime}\in \mathcal{M_B}$.$\hfill\blacksquare$ A marking $M$ in $\mathcal{M_B}$ is called a *basis marking* of $(N, M_0)$ with respect to $\pi=(T_E, T_I)$. Given a bounded net $N=(P, T, Pre, Post)$ with an initial marking $M_0$ and a basis partition $\pi=(T_E, T_I)$, its *basis reachability graph* is a non-deterministic finite state automaton $\mathcal{B}$ output by Algorithm 2 in [@c3]. The BRG $\mathcal{B}$ is a quadruple $(\mathcal{M_B}, {\rm Tr}, \Delta, M_0)$, where - the state set $\mathcal{M_B}$ is the set of basis markings; - the event set ${\rm Tr}$ is the set of pairs $(t, \textbf{y})\in T_E\times \mathbb{N}^{n_{I}}$; - the transition relation $\Delta=\{(M_1, (t, \textbf{y}), M_2)| t\in T_E, \textbf{y}\in Y_{\rm min}(M_1, t), M_2=M_1+C_I\cdot \textbf{y}+C(\cdot, t)\}$; - the initial state is the initial marking $M_0$.$\hfill\blacksquare$ We extend in the usual way the definition of transition relation to consider a sequence of pairs $\sigma\in {\rm Tr}^*$ and write $(M_1,\sigma,M_2)\in\Delta$ to denote that from $M_1$ sequence $\sigma$ yields $M_2$. 
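The works cited above compute minimal explanations with a dedicated matrix-based algorithm; for small bounded nets one can also enumerate explanation vectors by brute force, directly from the definitions. The sketch below is ours (it is not the algorithm of [@c3]) and assumes that the $T_I$-induced sub-net is acyclic and the net is bounded, so that the search terminates; the same filter with the inequality reversed yields the maximal explanation vectors used later in the paper.

```python
# Brute-force sketch (ours): enumerate all implicit firing vectors y such that
# firing some sigma in phi^{-1}(y) from M yields a marking enabling t, then
# filter the minimal (or maximal) elements of that set.
import numpy as np

def explanation_vectors(M, Pre, Post, T_I, t):
    """All explanation vectors of t at M, as tuples indexed like the list T_I."""
    C = Post - Pre
    start = (tuple(int(x) for x in M), tuple(0 for _ in T_I))
    seen, stack, result = set(), [start], set()
    while stack:
        Mc, y = stack.pop()
        if (Mc, y) in seen:
            continue
        seen.add((Mc, y))
        Mv = np.array(Mc)
        if np.all(Mv >= Pre[:, t]):            # t is enabled after firing y
            result.add(y)
        for i, ti in enumerate(T_I):           # extend by one implicit firing
            if np.all(Mv >= Pre[:, ti]):
                yn = list(y); yn[i] += 1
                stack.append((tuple(int(x) for x in Mv + C[:, ti]), tuple(yn)))
    return result

def minimal(vectors):
    """Minimal elements of a set of integer tuples (componentwise order)."""
    vs = list(vectors)
    return {v for v in vs
            if not any(w != v and all(a <= b for a, b in zip(w, v)) for w in vs)}

def maximal(vectors):
    """Maximal elements of a set of integer tuples (componentwise order)."""
    vs = list(vectors)
    return {v for v in vs
            if not any(w != v and all(a >= b for a, b in zip(w, v)) for w in vs)}
```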
\[DefIR\] Given a net $N = (P, T, Pre, Post)$, a basis partition $\pi=(T_E, T_I)$, and a basis marking $M_b\in \mathcal{M_B}$, we define $R_I(M_b)=\{M\in \mathbb{N}^m|(\exists \sigma\in T_I^\ast)\; M_b[\sigma\rangle M\}$ as the *implicit reach* of $M_b$.$\hfill\blacksquare$ Since the $T_I$-induced sub-net is acyclic, by Proposition \[ProX\], it holds that: $$R_I(M_b)=\{M\in\mathbb{N}^m| (\exists \textbf{y}_I\in\mathbb{N}^{n_I})\; M=M_b+C_I\cdot \textbf{y}_I\}.$$ BRG and Nonblockingness Verification {#sec2} ==================================== The efficient verification of nonblockingness in Petri nets without an exhaustive enumeration of the state space remains an open issue. As a first attempt to solve the nonblockingness verification problem using the BRG-based method, in [@Gu] we define the set of *i-coreachable markings* and introduce the notion of *unobstructiveness* of a BRG. \[NewDef0.2\] [Consider a bounded Petri net system $G=(N, M_0, \mathcal{M_F})$ with the set of basis markings $\mathcal{M_B}$. The set of *i-coreachable markings* is defined as $\mathcal{M_{\rm i_{co}}}=\{M_b\in \mathcal{M_B}| R_I(M_b)\cap \mathcal{M_F}\neq \emptyset\}.$$\hfill\blacksquare$]{} \[NewDef2\] [Consider a BRG $\mathcal{B}=(\mathcal{M_B}, Tr, \Delta, M_0)$ and a set of i-coreachable markings $\mathcal{M_{\rm i_{co}}}$. $\mathcal{B}$ is said to be *unobstructed* if for all $M_b\in \mathcal{M_B}$ there exist $M_b^{\prime}\in\mathcal{M_{\rm i_{co}}}$ in $\mathcal{B}$ and $\sigma \in {\rm Tr}^*$ such that $(M_b, \sigma, M_b^{\prime})\in\Delta$. Otherwise it is obstructed.$\hfill\blacksquare$]{} Proposition \[CDCBRG\] exposes how this property can be verified. Such a property is similar to the nonblockingness of Petri nets. For $\pi=(T_E, T_I)$ with $T_E=T$ and $T_I=\emptyset$, the unobstructiveness of a BRG is equivalent to nonblockingness of the corresponding Petri net, since in this case the BRG and reachability graph are isomorphic. \[CDCBRG\] [Given a Petri net system $(N, M_0, \mathcal{M_F})$, its BRG is unobstructed if and only if all basis markings are nonblocking, i.e., for all $M_b\in \mathcal{M_B}, R(N, M_b)\cap \mathcal{M_F}\neq\emptyset$.$\hfill\blacksquare$]{} A sufficient condition is further proved in [@Gu], as shown in Corollary \[CDCCor1\], to determine the nonblockingness of a Petri net. \[CDCCor1\] [A Petri net system $G=(N, M_0, \mathcal{M_F})$ is blocking if its BRG $\mathcal{B}$ is obstructed. $\hfill\blacksquare$]{} From another perspective, the BRG $\mathcal{B}$ of $G$ is unobstructed if $G$ is nonblocking. However, the converse is not true, i.e., the fact that a BRG of a net is unobstructed does not necessarily imply that the net is nonblocking. To clarify this, an example is provided in the following. \[E1\] [Consider a Petri net system $(N, M_0, \mathcal{M_F})$ in Fig. \[Fig1\] with $M_0=[2\ 0\ 1]^{\rm T}$ and $\mathcal{M_F}=\{M_0\}$. In this net, $Pre(p_2, t_3)=\alpha$ is set to be a parameter ($\alpha\in \mathbb{N}$). Assuming $T_E = \{t_1\}$, the BRG of this net (regardless of the value of $\alpha$) is also shown in the same figure, where $\textbf{y}=[1\ 0]^{\rm T}$ is the minimal explanation vector of $t_1$ at basis marking $[0\ 2\ 1]^{\rm T}$. The reachability graphs for $\alpha=1$ and $\alpha=2$ are shown in Fig.
\[Fig2\].]{} ![A parameterized Petri net with $T_E=\{t_1\}$ marked with shadow (left) and BRG $\mathcal{B}$ for all values of parameter $\alpha$ (right).[]{data-label="Fig1"}](new1107-eps-converted-to.pdf){width="4.4cm"} ![A parameterized Petri net with $T_E=\{t_1\}$ marked with shadow (left) and BRG $\mathcal{B}$ for all values of parameter $\alpha$ (right).[]{data-label="Fig1"}](new1107002-eps-converted-to.pdf){width="3.3cm"} ![Reachability graph of the net in Fig. \[Fig1\] with $\alpha=1$ (left) and $\alpha=2$ (right).[]{data-label="Fig2"}](new1107003-eps-converted-to.pdf){width="4cm"} ![Reachability graph of the net in Fig. \[Fig1\] with $\alpha=1$ (left) and $\alpha=2$ (right).[]{data-label="Fig2"}](new1107004-eps-converted-to.pdf){width="4cm"} Since all three basis markings, i.e., $M_{b0}$, $M_{b1}$ and $M_{b2}$, are nonblocking in these two cases, according to Proposition \[CDCBRG\], the BRG of the net is unobstructed regardless of the value of $\alpha$. $G$ is deadlock-free if $\alpha=1$ and not deadlock-free if $\alpha=2$. When $\alpha=1$ the net is blocking due to the livelock composed of the two markings $[1\ 0\ 0]^{\rm T}$ and $[0\ 1\ 0]^{\rm T}$. When $\alpha=2$ the net is also blocking because of the non-final deadlock $[0\ 0\ 0]^{\rm T}$.$\hfill\blacksquare$ Example \[E1\] shows that the fact that all basis markings are nonblocking does not imply that all reachable markings are nonblocking, i.e., the unobstructiveness of a BRG does not necessarily imply the nonblockingness of the corresponding Petri net system. Specifically, as we mentioned in Section \[Section1\], two types of blocking behavior should be treated with particular care to conclude nonblockingness correctly: 1. dead but non-final markings; 2. livelocks, i.e., ergodic strongly-connected components of non-dead markings. Notice that the occurrence of such livelock and deadlock problems stems from the abstraction of information inherent in the basis marking approach, and the unobstructiveness of a BRG may not completely characterize the nonblockingness of the Petri net. Therefore, the classical structure of BRGs needs to be revised to encode additional information for checking nonblockingness. As a countermeasure, preliminary results are presented in [@Gu] to show how it is possible to modify the BRG to detect livelocks. In more detail, a structure named the *expanded BRG* is proposed. It expands the BRG such that all markings in $R(N, M_0)$ reached by firing a sequence of transitions ending with an explicit transition are included. The set of markings in an expanded BRG is denoted as the *expanded basis marking* set $\mathcal{M_{B_E}}=\{M_0\}\cup\{M| \exists t\in T_E, \exists M^{\prime}\in R(N, M_0): M^{\prime}[t\rangle M\}.$ Although the expanded-BRG-based approach can be used to verify nonblockingness of a deadlock-free net, its efficiency needs to be further improved, since enumerating all explanations at all basis and expanded basis markings is still quite exhaustive. Meanwhile, the deadlock problem is not addressed. In the rest of this paper, an efficient approach based on a more compact structure, namely the *minimax-BRG*, is proposed to solve the nonblockingness verification problem. For better organization, these two potential problems are separately treated in the following sections. In Section \[NewSection\], we focus on the detection of livelocks that cause blocking, under the assumption that the plant net is known to be deadlock-free.
The proposed method is then generalized to nets that are not necessarily deadlock-free in Section \[secVD\]. Verifying Nonblockingness of Deadlock-Free Petri Nets Using Minimax-BRGs {#NewSection} ======================================================================== Maximal Explanations and Minimax Basis Markings {#SRGchapter} ----------------------------------------------- We first define *maximal explanations* and *maximal explanation vectors* as follows. \[DEFMAX\] Given a Petri net $N = (P, T, Pre, Post)$, a basis partition $\pi=(T_E, T_I)$, a marking $M$, and a transition $t\in T_E$, we define $\Sigma_{{\rm max}}(M, t)=\{\sigma\in \Sigma(M, t)| \nexists \sigma^{\prime}\in \Sigma(M, t): \varphi(\sigma^{\prime})\gneq \varphi(\sigma)\}$ as the set of maximal explanations of $t$ at $M$, and $Y_{{\rm max}}(M, t)=\{\varphi(\sigma)\in \mathbb{N}^{n_{I}}| \sigma\in \Sigma_{{\rm max}}(M, t)\}$ as the corresponding set of maximal explanation vectors.$\hfill\blacksquare$ From the standpoint of *partial order set* (poset), the set of maximal explanation vectors $Y_{\rm max}(M,t)$ is the set of maximal elements in the corresponding poset $Y(M,t)$. Note that, as is the case for the set of minimal explanations $\Sigma_{{\rm min}}(M, t)$[@c3; @c8; @Basis], $\Sigma_{{\rm max}}(M, t)$ may not be a singleton. In fact, there may exist multiple maximal firing sequences $\sigma_I\in T_I^{*}$ that enable an explicit transition $t$. However, similar to a result in [@CF], $|Y_{\rm max}(M,t)|\leq 1$ holds if the implicit sub-net of the system belongs to the class of *conflict-free Petri nets*. [A Petri net $N=(P, T, Pre, Post)$ is conflict-free if for all $p\in P$, $|p^{\bullet}|\leq 1$.$\hfill\blacksquare$]{} \[UniqueMax\] [Consider a net system $\langle N, M_0\rangle$ with a basis partition $\pi=(T_E, T_I)$, whose implicit sub-net is conflict-free. For all $M\in R(N, M_0)$ and $t\in T_E$, $|Y_{\rm max}(M, t)|\leq1$.]{} The thread of this proof simply follows the proof of Theorem 4 in [@CF], considering $Y_{\rm min}(M, t)$ as $Y_{\rm max}(M, t)$ and the implicit sub-net being *backward-conflict-free* (for all $p\in P$, $|^{\bullet}p|\leq 1$) as conflict-free. A net system $\langle N, M_0\rangle$, a basis partition $\pi= (T_E, T_I )$, a marking $M\in R(N, M_0)$, and $t\in T_E$ $Y_{\rm max}(M,t)$ Let $\Gamma=\left[\begin{array}{c|c} C_I^{\rm T} & I_{n_{I}\times n_{I}} \\ \hline A & B\\\end{array}\right]$ where $A=(M-Pre(\cdot, t))^{\rm T}$ and $B=\textbf{0}_{n_I}^{\rm T}$; Choose an element $A(i^{*}, j^{*})<0$; Let $\mathcal{I}^{+}=\{i|C_I^{\rm T} (i, j^{*})>0\}$; delete $\left[\begin{array}{c|c} A(i^{*}, \cdot) & B(i^{*}, \cdot) \\ \end{array} \right]$ from $\left[ \begin{array}{c|c} A & B\\ \end{array} \right]$, go to Step 2; add to $\left[ \begin{array}{c|c} A & B\\ \end{array} \right]$ a new row $\left[ \begin{array}{c|c} A(i^{*}, \cdot) & B(i^{*}, \cdot)+\Gamma(i, \cdot)\\ \end{array} \right];$ Let $\alpha={\rm row\_size}(\Gamma)$ and $\alpha_{\rm old}=0$; Let $\alpha_{\rm old}=\alpha$; Let $R:=[\Gamma(l, \cdot)+\Gamma(k, \cdot)]$; add row $R$ to $\left[ \begin{array}{c|c} A & B\\ \end{array} \right]$, derive $\Gamma_{\rm new}$; Let $\alpha={\rm row\_size}(\Gamma_{\rm new})$ and $\Gamma=\Gamma_{\rm new}$; Let $Y(M, t)$ be the set of row vectors in the updated sub-matrix $B=\Gamma((n_I+1): \alpha, (m+1): (n_I+m))$; Let $Y_{\rm max}(M,t)$ be the set of maximal elements in $Y(M, t)$. 
\[AlgoMax\] Algorithm \[AlgoMax\] can be used to compute $Y_{\rm max}(M,t)$ for a given marking $M$ and an explicit transition $t$. It consists of two stages, namely lines 1$\--$12 (stage 1) and lines 13$\--$29 (stage 2). Stage 1 follows the procedure of lines 1$\--$12 in Algorithm 1 in [@c3]. As a *breadth-first-search* technique, this part of the algorithm iteratively enumerates a set of firing vectors $\textbf{y}\in \mathbb{N}^{n_I}$ such that $\sigma\in\varphi^{-1}(\textbf{y})$ is an explanation of $t$, i.e., $M[\sigma\rangle M^{\prime}[t\rangle$. However, sub-matrix $B$ may not contain all explanation vectors at the end of stage 1, and hence we cannot obtain $Y_{\rm max}(M, t)$ by directly collecting all the maximal rows in $B$. In stage 2, we set $\alpha_{\rm old}$ equal to the number of rows of $\Gamma$ and add each of the rows in $\left[ \begin{array}{c|c} C_I^{\rm T} & I_{n_I\times n_I}\\ \end{array} \right]$ to each of the rows in $\left[ \begin{array}{c|c} A & B\\ \end{array} \right]$. If an obtained new row is nonnegative and does not equal any of the rows in $\left[ \begin{array}{c|c} A & B\\ \end{array} \right]$, it is then recorded in $\left[ \begin{array}{c|c} A & B\\ \end{array} \right]$ and $\Gamma$ will be updated. In fact, a new explanation vector $\textbf{y}^{\prime}\in \mathbb{N}^{n_I}$ of $t$ at $M$ can be collected based on $R$, since there exists a firing sequence $\sigma^{\prime}\in\varphi^{-1}(\textbf{y}^{\prime})$ such that $M[\sigma^{\prime}\rangle M^{\prime\prime}[t\rangle$. Stage 2 ends when $\alpha$ equals $\alpha_{\rm old}$, meaning that sub-matrix $\left[ \begin{array}{c|c} A & B\\ \end{array} \right]$ reaches a fixed point. Finally, the set of maximal explanation vectors is obtained by collecting all the maximal rows in sub-matrix $B$. Now we define *minimax basis markings* in an iterative way as follows. \[Def1\] Given a net system $\langle N, M_0\rangle$ with a basis partition $\pi=(T_E, T_I)$, its minimax basis marking set $\mathcal{M_{B_M}}$ is recursively defined as follows: 1. $M_0\in \mathcal{M_{B_M}}$; 2. $M\in \mathcal{M_{B_M}}$, $t\in T_E, \textbf{y}\in Y_{\rm min}(M, t)\cup Y_{\rm max}(M, t)$, $M^{\prime}=M+C_I\cdot \textbf{y}+C(\cdot, t)\Rightarrow M^{\prime}\in\mathcal{M_{B_M}}$. A marking in $\mathcal{M_{B_M}}$ is called a minimax basis marking of the net system with $\pi=(T_E, T_I)$.$\hfill\blacksquare$ In practice, the set of minimax basis markings is a smaller subset of reachable markings that contains the initial marking and is closed under reachability through a sequence that contains an explicit transition and one of its maximal or minimal explanations. \[RemX\] [The set of minimax basis markings $\mathcal{M_{B_M}}$ is a superset of the set of basis markings $\mathcal{M_B}$ defined in Definition \[DefX\], i.e., $\mathcal{M_{B_M}}\supseteq\mathcal{M_B}$.
In fact, $\mathcal{M_B}$ can be recursively computed as in Definition \[Def1\] but assuming that in condition (b) $\textbf{y}\in Y_{\rm min}(M, t)$ holds, i.e., only minimal explanations are considered.]{} Minimax Basis Reachability Graph -------------------------------- \[Def2\] Given a bounded net system $\langle N, M_0\rangle$ and a basis partition $\pi=(T_E, T_I)$, its minimax-BRG is a non-deterministic finite state automaton $\mathcal{B_M}=(\mathcal{M_{B_M}}, {\rm Tr_{\mathcal{M}}}, \Delta_{\mathcal{M}}, M_0)$ computed by Algorithm \[Algo3\], where - $\mathcal{M_{B_M}}$ is the set of minimax basis markings; - ${\rm Tr}_\mathcal{M}$ is the set of pairs $(t, \textbf{y})\in T_E\times \mathbb{N}^{n_{I}}$; - $\Delta_{\mathcal{M}}$ is the transition relation $\{(M_1, (t, \textbf{y}), M_2)| t\in T_E; \textbf{y}\in (Y_{\rm min}(M_1, t)\cup Y_{\rm max}(M_1, t)), M_2=M_1+C_I\cdot \textbf{y}+C(\cdot, t)\}$; - $M_0$ is the initial marking.$\hfill\blacksquare$ We extend the definition of transition relation $\Delta_{\mathcal{M}}$ for sequences of pairs $\sigma^{+}=(t_1, \textbf{y}_1), (t_2, \textbf{y}_2), \cdots, (t_k, \textbf{y}_k)\in {\rm Tr}_\mathcal{M}^*$ and write $(M_1, \sigma^{+}, M_2)\in\Delta_{\mathcal{M}}$ to denote that from $M_1$ sequence $\sigma^{+}$ yields $M_2$ in $\mathcal{B_M}$. A net system $\langle N, M_0\rangle$ with $\pi=(T_E ,T_I)$ A minimax-BRG $\mathcal{B_M}=(\mathcal{M_{B_M}}, {\rm Tr_{\mathcal{M}}}, \Delta_{\mathcal{M}}, M_0)$ $\mathcal{M_{B_M}}:=\{M_0\}, {\rm Tr_{\mathcal{M}}}:=\emptyset$ and $\Delta_{\mathcal{M}}:=\emptyset$, assign no tag to $M_0$; select a state $M\in \mathcal{M_{B_M}}$ with no tag; $M^{\prime}:=M+C_I\cdot \textbf{y}+C(\cdot, t)$; $\mathcal{M_{B_M}}:=\mathcal{M_{B_M}}\cup \{M^{\prime}\}$; ${\rm Tr_{\mathcal{M}}}:={\rm Tr_{\mathcal{M}}}\cup \{(t,\textbf{y})\}$; $\Delta_{\mathcal{M}}:=\Delta_{\mathcal{M}}\cup \{(M, (t,\textbf{y}), M^{\prime})\}$; tag node $M$ ${\rm ``old"}$; Remove all tags. \[Algo3\] Algorithm \[Algo3\] computes a minimax-BRG. The set $\mathcal{M_{B_M}}$ is initialized at $\{M_0\}$. At the end of the procedure, it contains the set of minimax basis markings. For all untested markings $M\in \mathcal{M_{B_M}}$, i.e., those with no tag, and for all explicit transitions $t\in T_E$, we check whether there exist explanation vectors $\textbf{y}\in Y_{\rm min}(M,t)$ or $\textbf{y}\in Y_{\rm max}(M,t)$. If such explanation vectors exist, we compute all minimax basis markings (i.e., $M^{\prime}=M+C_I\cdot \textbf{y}+C(\cdot, t)$) and store them in $\mathcal{M_{B_M}}$. Moreover, the set of pairs $(t,\textbf{y})$ and transition relations between $M$ and $M^{\prime}$ are stored in ${\rm Tr_{\mathcal{M}}}$ and $\Delta_\mathcal{M}$, respectively. Algorithm \[Algo3\] stops when there is no unchecked marking in $\mathcal{M_{B_M}}$. Comparing with the construction of the BRG, where one needs to compute the minimal explanation vectors[@c8; @Basis], Algorithm \[Algo3\] requires to compute all markings that are reachable from the initial marking by firing not only all minimal explanation vectors but also all maximal ones. Note that for a bounded net system, $\mathcal{M_B}\subseteq\mathcal{M_{B_M}}\subseteq \mathcal{M_{B_E}}\subseteq R(N, M_0)$ holds. As for the complexity of Algorithm \[Algo3\], we point out that the minimax-BRG of a net system may be isomorphic to its reachability graph in the worst case, e.g., when $T_I=\emptyset$ and $T_E=T$. 
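A rough functional analogue of Algorithm \[Algo3\] (ours, not the authors' pseudocode) can be written as a plain breadth-first expansion. In the sketch below, `Y_minmax(M, t)` stands for any routine returning $Y_{\rm min}(M,t)\cup Y_{\rm max}(M,t)$ as a set of integer tuples (for instance the brute-force filters sketched earlier), `T_E` is a list of column indices of the explicit transitions in $C$, and the columns of $C_I$ are ordered consistently with the entries of the explanation vectors; markings are stored as tuples so that they can be hashed.

```python
# Sketch (ours) of the minimax-BRG construction: breadth-first expansion of the
# minimax basis markings per Definition [Def1]/[Def2].
from collections import deque
import numpy as np

def build_minimax_brg(M0, C, C_I, T_E, Y_minmax):
    """Return (nodes, edges): nodes = set of minimax basis markings (tuples),
    edges = set of (M1, (t, y), M2) triples labelling the transition relation."""
    nodes = {tuple(int(x) for x in M0)}
    edges = set()
    frontier = deque(nodes)
    while frontier:
        Mt = frontier.popleft()
        M = np.array(Mt)
        for t in T_E:
            for y in Y_minmax(M, t):
                M2 = M + C_I @ np.array(y) + C[:, t]
                M2t = tuple(int(x) for x in M2)
                edges.add((Mt, (t, tuple(y)), M2t))
                if M2t not in nodes:          # new minimax basis marking
                    nodes.add(M2t)
                    frontier.append(M2t)
    return nodes, edges
```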
However, numerical results (e.g., see Section \[sec2.3\]) shows that in many practical cases $|\mathcal{M_{B_M}}|\ll|R(N, M_0)|$ holds and therefore achieves practical efficiency. \[EXPSRG\] [Consider the Petri net $\langle N, M_0\rangle$ in Fig. \[SO1\] with $M_0 =[3\ 0\ 1\ 1]^{\rm T}$ and $T_E=\{t_2, t_4\}$. Its minimax-BRG is depicted in Fig. \[MBRG\]. The RG of Petri net is shown in Fig. \[SO2\], where all minimax basis markings are marked in solid boxes.]{} ![A net system $\langle N, M_0\rangle$ with $T_E=\{t_2, t_4\}$ (marked with shadow).[]{data-label="SO1"}](new1107005-eps-converted-to.pdf){width="5.4cm"} ![The minimax-BRG $\mathcal{B_M}$ of $\langle N, M_0\rangle$ in Fig. \[SO1\].[]{data-label="MBRG"}](20200229-eps-converted-to.pdf){width="7.15cm"} ![Reachability graph of $\langle N, M_0\rangle$ in Fig. \[SO1\].[]{data-label="SO2"}](RG0902-eps-converted-to.pdf){width="7cm"} In the minimax-BRG, at $M_{b0}$, there are two explanation vectors for $t_2$: $\textbf{y}_1=[1\ 0]^{\rm T}$ (minimal) and $\textbf{y}_{2}=[3\ 1]^{\rm T}$ (maximal). There are two explanation vectors for $t_4$: $[0\ 0]^{\rm T}$ (minimal) and $\textbf{y}_3=[2\ 1]^{\rm T}$ (maximal). At $M_{b1}$, there is only one explanation vector for $t_2$: $[0\ 0]^{\rm T}$. At $M_{b2}$, there are two explanation vectors for $t_2$: $[0\ 0]^{\rm T}$ (minimal) and $\textbf{y}_{4}=[1\ 0]^{\rm T}$ (maximal). There is one explanation vector for $t_4$: $[0\ 0]^{\rm T}$. At $M_{b3}$, there are two explanation vectors for $t_2$: $\textbf{y}_5=[1\ 0]^{\rm T}$ (minimal) and $\textbf{y}_{6}=[2\ 1]^{\rm T}$ (maximal). At $M_{b4}$, there are two explanation vectors for $t_2$: $\textbf{y}_7=[1\ 0]^{\rm T}$ (minimal) and $\textbf{y}_{8}=[2\ 0]^{\rm T}$ (maximal); there are two explanation vectors for $t_4$: $[0\ 0]^T$ (minimal) and $\textbf{y}_{9}=[1\ 0]^{\rm T}$ (maximal). At $M_{b5}$, there is only one explanation vector for $t_2$: $\textbf{y}_{10}=[1\ 0]^{\rm T}$.$\hfill\blacksquare$ In the following, we show that the minimax-BRG preserves the reachability information and other non-minimax-basis markings can be algebraically characterized by linear equations. We first recall a property of BRG presented in [@c8] shown as follows. \[SRGnew\] [Given a net system $\langle N, M_0\rangle$ with a basis partition $\pi=(T_E, T_I)$ and a marking $M\in\mathbb{N}^m$, $M\in R(N, M_0)$ if and only if there exists a minimax basis marking $M_b\in \mathcal{M_{B_M}}$ such that $M\in R_I(M_b)$, where $\mathcal{M_{B_M}}$ is the set of the minimax basis markings in minimax-BRG of $\langle N, M_0\rangle$.]{} (only if) It is shown in [@c8] that such a property holds for the set of basis markings $\mathcal{M_B}$. As discussed in Remark \[RemX\], the set of minimax basis markings $\mathcal{M_{B_M}}$ is a superset of $\mathcal{M_B}$, hence the result follows. (if) Since $M\in R_I(M_b)$, according to Definition \[DefIR\], there exists a firing sequence $\sigma\in T_I^{*}$ such that $M_b[\sigma\rangle M$. On the other hand, there exists another firing sequence $\sigma^{\prime}\in T^*$ such that $M_0[\sigma^{\prime}\rangle M_b$, which implies that $M_0[\sigma^{\prime}\sigma\rangle M$ and concludes the proof. In summary, a marking $M$ is reachable from $M_0$ if and only if it belongs to the implicit reach of a minimax basis marking $M_b$ and thus $M$ can be characterized by a linear equation, i.e., $M=M_b+C_I\cdot \textbf{y}_I$, where $\textbf{y}_I=\varphi(\sigma_I)$, $\sigma_I\in T_I^{*}$ and $M_b[\sigma_I\rangle M$. 
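Proposition \[SRGnew\] suggests a direct, if naive, reachability test: enumerate the implicit reach of each minimax basis marking and look for $M$ there. The sketch below (ours, viable only for small bounded nets) does exactly that; the same enumeration also gives a brute-force substitute for the ILPP-based i-coreachability test of the next subsection, by checking whether any marking in $R_I(M_b)$ satisfies $\textbf{w}^{\rm T}M\le k$.

```python
# Sketch (ours) of the reachability characterization behind Proposition [SRGnew]:
# M is reachable iff it belongs to the implicit reach of some minimax basis marking.
import numpy as np

def implicit_reach(Mb, Pre, Post, T_I):
    """All markings reachable from Mb by firing implicit transitions only."""
    C = Post - Pre
    seen, stack = set(), [tuple(int(x) for x in Mb)]
    while stack:
        Mt = stack.pop()
        if Mt in seen:
            continue
        seen.add(Mt)
        Mv = np.array(Mt)
        for t in T_I:
            if np.all(Mv >= Pre[:, t]):
                stack.append(tuple(int(x) for x in Mv + C[:, t]))
    return seen

def is_reachable(M, minimax_basis_markings, Pre, Post, T_I):
    Mt = tuple(int(x) for x in M)
    return any(Mt in implicit_reach(np.array(Mb), Pre, Post, T_I)
               for Mb in minimax_basis_markings)
```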
Unobstructiveness of Minimax-BRGs {#sec2.1} --------------------------------- This subsection generalizes the notion of unobstructiveness that is given in [@Gu] for a BRG to a minimax-BRG. Such a property is essential to establish our method since it is strongly related to the nonblockingness of a Petri net and can be efficiently determined by solving a set of ILPPs. First, we define the set of *i-coreachable minimax basis markings*, denoted by $\mathcal{M_{\rm i_{co}}}$, from which at least one of the final markings in $\mathcal{M_F}$ is reachable by firing implicit transitions only. \[DefRe\] [Consider a bounded Petri net system $G=(N, M_0, \mathcal{M_F})$ with the set of minimax basis markings $\mathcal{M_{B_M}}$ in its minimax-BRG. The set of i-coreachable minimax basis markings of $\mathcal{M_{B_M}}$ is defined as $\mathcal{M_{\rm i_{co}}}=\{M_b\in \mathcal{M_{B_M}}| R_I(M_b)\cap \mathcal{M_F}\neq \emptyset\}.$$\hfill\blacksquare$]{} [Given a set of final markings defined by a single GMEC $\mathcal{L}_{(\textbf{w},k)}$ and a minimax basis marking $M_b$, $M_b$ belongs to $\mathcal{M_{\rm i_{co}}}$ if and only if the following set of integer constraints is feasible. $$\label{Equre0.16} \left\{ \begin{array}{lr} M_b+C_I\cdot \textbf{y}_I=M;\\ \textbf{w}^T\cdot M \leq k;\\ \textbf{y}_I\in \mathbb{N}^{n_I};\\ M\in \mathbb{N}^{m}. & \end{array} \right.$$]{} (only if) Since $M_b\in \mathcal{M_{\rm i_{co}}}$, according to Definition \[DefRe\], $R_I(M_b)\cap \mathcal{M_F}\neq \emptyset$. Therefore, ILPP (\[Equre0.16\]) meets feasible solution $\textbf{y}_I$. (if) The state equation $M_b+C_I\cdot \textbf{y}_I=M$ provides necessary and sufficient conditions for reachability since the implicit subnet is acyclic (see Proposition \[ProX\]). Moreover, $M\in\mathcal{L}_{(\textbf{w},k)}$ is a final marking. Therefore, the statement holds. The notion of unobstructiveness in a minimax-BRG is given in Definition \[ReDef2\]. In the following, we show how the unobstructiveness of a minimax-BRG is related to the nonblockingness of the corresponding Petri net. \[ReDef2\] [Given a minimax-BRG $\mathcal{B_M}=(\mathcal{M_{B_M}},\\ {\rm Tr_{\mathcal{M}}}, \Delta_{\mathcal{M}}, M_0)$ and a set of i-coreachable minimax basis markings $\mathcal{M_{\rm i_{co}}}\subseteq\mathcal{M_{B_M}}$, $\mathcal{B_M}$ is said to be *unobstructed* if for all $M_b\in \mathcal{M_{B_M}}$ there exist a marking $M_b^{\prime}\in\mathcal{M_{\rm i_{co}}}$ in $\mathcal{B_M}$ and a firing sequence $\sigma^{+}\in{\rm Tr}_\mathcal{M}^*$ such that $(M_b, \sigma^{+}, {M_b}^{\prime})\in\Delta_\mathcal{M}$. Otherwise it is *obstructed*.$\hfill\blacksquare$]{} \[Pro001\] [Given a Petri net system $(N, M_0, \mathcal{M_F})$, its minimax-BRG is unobstructed if and only if all minimax basis markings are nonblocking.]{} (only if) If a minimax-BRG $\mathcal{B_M}$ is unobstructed, then for all $M_b\in \mathcal{M_{B_M}}$ there exist a marking $M_b^{\prime}\in\mathcal{M_{\rm i_{co}}}$ in $\mathcal{B_M}$ and a sequence of pairs $\sigma^{+}=(t_1, \textbf{y}_1), (t_2, \textbf{y}_2), \cdots, (t_k, \textbf{y}_k)\ (\sigma^{+}\in{\rm Tr}_\mathcal{M}^*)$ such that $(M_b, \sigma^{+}, {M_b}^{\prime})\in\Delta_\mathcal{M}$. By Definition \[Def1\], this means that the net admits an evolution: $M_b[\sigma_1 t_1 \sigma_2 t_2 \cdots \sigma_k t_k\rangle {M_b}^{\prime}$, where $\sigma_i\in\varphi^{-1}(\textbf{y}_i)\ (i\in\{1, 2, \cdots, k\})$. 
Since $M_b^{\prime}\in\mathcal{M_{\rm i_{co}}}$, there exists an implicit firing sequence $\sigma_I$ such that $M_b^{\prime}[\sigma_I\rangle M_f$, where $M_f\in \mathcal{M_F}$. Thus it holds that $M_b[\sigma_1 t_1 \sigma_2 t_2 \cdots \sigma_k t_k\rangle {M_b}^{\prime}[\sigma_I\rangle M_f$, implying that $M_b$ is nonblocking. (if) The sufficient part can be proved by contradiction. Suppose that the minimax-BRG of the net $\mathcal{B_M}$ is not unobstructed. Let $\mathcal{M_{\rm i_{co}}}$ be the set of i-coreachable minimax basis markings of $\mathcal{M_{B_M}}$. Since $\mathcal{B_M}$ is obstructed, according to Definition \[ReDef2\], there exists a minimax basis marking $M_b^{\prime}\in \mathcal{M_{B_M}}$, from which there do not exist a marking $M_b^{\prime\prime}\in\mathcal{M_{\rm i_{co}}}$ and a sequence of pairs $\sigma^{+}\in {\rm Tr}_\mathcal{M}^*$ such that $(M_b^{\prime}, \sigma^{+}, {M_b}^{\prime\prime})\in\Delta_\mathcal{M}$. Therefore, there exists an implicit firing sequence $\sigma_I\in T_I^{*}$ such that $M_b^{\prime}[\sigma_I\rangle M_f$, where $M_f\in \mathcal{M_F}$. Based on Definition \[DefRe\], it holds that $M_b^{\prime}\in\mathcal{M_{\rm i_{co}}}$. Thus there exists an i-coreachable minimax basis marking $M_b^{\prime}$ and $\sigma^{+}=\varepsilon$ such that $(M_b^{\prime}, \sigma^{+}, {M_b}^{\prime})\in\Delta_\mathcal{M}$, which is a contradiction. According to Proposition \[Pro001\], to determine the unobstructiveness of minimax-BRG $\mathcal{B_M}$, we need to check if all minimax basis markings in $\mathcal{M_{B_M}}$ are nonblocking only, which can be verified by checking if all minimax basis markings are co-reachable to some i-coreachable minimax basis markings by analyzing the minimax BRG. An example is illustrated in the following. \[Example3\] Consider again the net system $\langle N, M_0\rangle$ shown in Fig. \[SO1\] and discussed in Example \[EXPSRG\] with $M_0=[3\ 0\ 1\ 1]^{\rm T}$ and $T_E=\{t_2, t_4\}$. Assuming that the set of final markings is $\mathcal{M_F}=\mathcal{L}_{(\textbf{w},k)}$ where $\textbf{w}=[1\ 1\ 0\ 0]^{\rm T}$ and $k=1$, we want to verify the unobstructiveness of its minimax-BRG $\mathcal{B_M}$ shown in Fig. \[MBRG\]. First we need to determine the set of i-coreachable minimax basis markings of this system by solving ILPP (\[Equre0.16\]): we conclude that $\mathcal{M_{\rm i_{co}}}=\{[0\ 1\ 0\ 0]^{\rm T}, [1\ 0\ 0\ 0]^{\rm T}\}$. Since all minimax basis markings are co-reachable to a marking in $\mathcal{M_{\rm i_{co}}}$ in $\mathcal{B_M}$, the minimax-BRG is unobstructed .$\hfill\blacksquare$ Minimax-BRG for Verification of Nonblockingness ----------------------------------------------- In this section, we investigate how minimax-BRG can be applied to the nonblockingness verification of the corresponding plant net. An intermediate result is proposed in Proposition \[CoreTheorem\]. \[CoreTheorem\] [Given a bounded net system $\langle N, M_0\rangle$ with basis partition $\pi=(T_E, T_I)$, for all $M\in R(N, M_0)$, for all $t\in T_E$, for all $\sigma\in \Sigma(M, t)$ with $M[\sigma t\rangle M^{\prime}$, the following implication holds: $$\begin{aligned} (\forall \sigma^{\prime}\in\Sigma(M, t))\ \varphi(\sigma)-\varphi(\sigma^{\prime})&=\tilde{\textbf{y}}\geq \textbf{0} \Rightarrow\\ (\exists \sigma^{\prime\prime}\in&\varphi^{-1}(\tilde{\textbf{y}}))\ M[\sigma^{\prime}t \sigma^{\prime\prime}\rangle M^{\prime} \\ \end{aligned}$$]{} Let $M^{\prime\prime}\in \mathbb{N}^{m}$ such that $M[\sigma^{\prime} t\rangle M^{\prime\prime}$. 
Then it holds that: $$\label{EQUNEW1} \left\{ \begin{array}{lr} M^{\prime}=M+C_I\cdot \varphi(\sigma)+C(\cdot, t) \\ M^{\prime\prime}=M+C_I\cdot \varphi(\sigma^{\prime})+C(\cdot, t) \end{array} \right.$$ From Equation (\[EQUNEW1\]) we conclude that $M^{\prime}-M^{\prime\prime}=C_I(\varphi(\sigma)-\varphi(\sigma^{\prime}))$, which implies $M^{\prime}=M^{\prime\prime}+C_I\cdot \tilde{\textbf{y}}$ and $\tilde{\textbf{y}}\in \mathbb{N}^{n_I}$. This indicates: $$\begin{aligned} \exists\sigma^{\prime\prime}\in \varphi^{-1}(\tilde{\textbf{y}}): M^{\prime\prime}[\sigma^{\prime\prime}\rangle M^{\prime} \end{aligned}$$ and thus $M[\sigma^{\prime}t\rangle M^{\prime\prime}[\sigma^{\prime\prime}\rangle M^{\prime}$, which concludes the proof. Proposition \[CoreTheorem\] shows the connection between two markings $M^{\prime}$ and $M^{\prime\prime}$ reachable from $M\in R(N, M_0)$, i.e., $M[\sigma t\rangle M^{\prime}$ and $M[\sigma^{\prime} t\rangle M^{\prime\prime}$, where $t\in T_E$, $\sigma\in\Sigma(M, t)$, $\sigma^{\prime}\in\Sigma(M, t)$, and $\varphi(\sigma)-\varphi(\sigma^{\prime})=\tilde{\textbf{y}}\geq 0$. If $M^{\prime}$ is nonblocking, then $M^{\prime\prime}$ is nonblocking as well, since there exists a firing sequence $\sigma^{\prime\prime}\in \varphi^{-1}(\tilde{\textbf{y}})$ such that $M^{\prime\prime}[\sigma^{\prime\prime}\rangle M^{\prime}$. According to this proposition, we next show that the unobstructiveness of the minimax-BRG is a necessary and sufficient condition for nonblockingness of the considered class of nets. \[NewLemma\] [Consider a bounded deadlock-free net system $\langle N, M_0\rangle$ with a basis partition $\pi=(T_E, T_I)$. For all markings $M\in R(N, M_0)$, there exists a firing sequence $\sigma t$, where $\sigma \in T_I^{*}$ and $t\in T_E$, such that $M[\sigma t\rangle$ holds.]{} We prove this statement by contradiction. Assume, by contradiction, that there exists a marking $M$ from which no explicit transition can become enabled by firing implicit transitions only. Since the implicit sub-net of the system is bounded and acyclic, the maximal length of sequences enabled at $M$ and composed of implicit transitions only is finite. Hence, from $M$, after the firing of such a maximal sequence of implicit transitions, the net reaches a deadlock, which is a contradiction. The result in Lemma \[NewLemma\] can be applied to both BRG and minimax-BRG. However, it does not imply that the marking reached after the firing of the explicit transition is a basis marking, as we have shown in Example \[E1\]. Hence, it does not rule out the presence of livelocks in the BRG. \[NewLemma2\] Consider a bounded deadlock-free net system $\langle N, M_0\rangle$ with a basis partition $\pi=(T_E, T_I)$. For all markings $M\in R(N, M_0)$, for all explicit transitions $t\in T_E$, the following holds: $\sigma\in\Sigma(M,t)\Rightarrow(\exists\sigma^{\prime}\in \Sigma_{\rm max}(M,t))\ \varphi(\sigma^{\prime})\geq\varphi(\sigma).$ If $\sigma\notin\Sigma_{\rm max}(M,t)$, according to Definition \[DEFMAX\], there exists an explanation $\sigma^{\prime}\in \Sigma_{\rm max}(M,t)$ such that $\varphi(\sigma^{\prime})>\varphi(\sigma)$; otherwise, $\sigma$ itself is maximal and we can take $\sigma^{\prime}=\sigma$, so that $\varphi(\sigma^{\prime})=\varphi(\sigma)$. Hence the result holds. \[NewLemma3\] [Given a bounded deadlock-free net system $\langle N, M_0\rangle$ with a basis partition $\pi=(T_E, T_I)$, the set of minimax basis markings of the system is $\mathcal{M_{B_M}}$.
For all markings $M\in R(N, M_0)$, there exists $M_b\in \mathcal{M_{B_M}}$ such that $M_b\in R(N, M)$.]{} Due to Lemma \[NewLemma\], there exists a firing sequence $\sigma t$, where $\sigma \in T_I^{*}$ and $t\in T_E$, such that $M[\sigma t\rangle$, which implies that $\sigma\in \Sigma(M, t)$. By Lemma \[NewLemma2\], there exists a maximal explanation $\sigma^{\prime}\in \Sigma_{\rm max}(M,t)$ such that $\varphi(\sigma^{\prime})\geq\varphi(\sigma)$. Let $\varphi(\sigma^{\prime})-\varphi(\sigma)=\textbf{y}$ and $M[\sigma^{\prime} t\rangle M^{\prime}$, by Definition \[Def1\], $M^{\prime}\in\mathcal{M_{B_M}}$. According to Proposition \[CoreTheorem\], there exists a firing sequence $\sigma^{\prime\prime}\in\varphi^{-1}(\textbf{y})$ such that $M[\sigma t\sigma^{\prime\prime}\rangle M^{\prime}$, which implies that $M^{\prime}\in R(N,M)$. \[Theoremminimax\] [A bounded deadlock-free Petri net system $G=(N, M_0, \mathcal{M_F})$ is nonblocking if and only if its minimax-BRG $\mathcal{B_M}$ is unobstructed.]{} (only if) Since the net is nonblocking, all reachable markings, including all minimax basis markings, are nonblocking. By Proposition \[Pro001\], its minimax-BRG $\mathcal{B_M}$ is unobstructed. (if) Consider an arbitrary marking $M\in R(N, M_0)$. By Lemma \[NewLemma3\], there exists a minimax basis marking $M_b\in \mathcal{M_{B_M}}$ such that $M_b\in R(N, M)$, i.e., there exists a firing sequence $\sigma\in T^*$ such that $M[\sigma\rangle M_b$. Since the minimax BRG $\mathcal{B_M}$ is unobstructed, according to Proposition \[Pro001\], all minimax basis markings including $M_b$ are nonblocking, which implies that marking $M$ is co-reachable to a nonblocking marking. Hence, $G$ is nonblocking. By Theorem \[Theoremminimax\], for a deadlock-free net, one can use an arbitrary basis partition to construct the minimax-BRG to verify its nonblockingness. Since the existence of a livelock component that contains all blocking markings implies the existence of at least a blocking minimax basis marking $M_b$ in $\mathcal{B_M}$, the potential livelock problem mentioned in Section \[sec2\] is avoided. Verifying Nonblockingness of Petri nets with Deadlocks {#secVD} ====================================================== In this section, we generalize the above results to systems that are not deadlock-free. Notice that a dead marking $M\in R(N, M_0)$ can either be non-final (i.e., $M\notin \mathcal{M_F}$) or final (i.e., $M\in\mathcal{M_F}$). If there do not exist non-final dead markings, the following theorem applies. \[0904Th\] [Consider a bounded Petri net system $G=(N, M_0, \mathcal{M_F})$. $G$ is nonblocking if and only if its minimax-BRG $\mathcal{B_{M}}$ is unobstructed and all its dead markings are final.]{} (only if) Since all reachable markings are nonblocking, all dead markings (if any exists) and all minimax basis markings are also nonblocking. Hence, all dead markings are final and by Proposition \[Pro001\], the minimax-BRG $\mathcal{B_M}$ is unobstructed. (if) If the minimax-BRG $\mathcal{B_{M}}$ is unobstructed, all minimax basis markings are nonblocking, by Proposition \[Pro001\]. Consider an arbitrary marking $M\in R(N, M_0)$. By Proposition \[SRGnew\], there exist a minimax basis marking $M_b\in\mathcal{M_{B_M}}$ in the minimax-BRG of the system and an implicit firing sequence $\sigma_I\in T_I^{*}$ such that $M_b[\sigma_I\rangle M$. We prove that marking $M$ is nonblocking by contradiction. 
In fact, if we assume that $M$ is blocking, since all dead markings are final, $M$ is neither dead nor co-reachable to a deadlock in the system. Suppose that from $M$ no explicit transition can eventually fire, following the argument of the proof of Lemma \[NewLemma\], a dead marking will be reached, leading to a contradiction. Thus, there exist $\sigma_I^{\prime}\in T_I^*$ and $t\in T_E$ such that $M[\sigma_I^{\prime}t\rangle$. Further, we derive that $M_b[\sigma_I\sigma_I^{\prime}t\rangle$. Also, there exists a maximal explanation $\sigma^{\prime}\in\Sigma_{\rm max}(M_b, t)$ such that $\varphi(\sigma^{\prime})\geq\varphi(\sigma_I\sigma_I^{\prime})$. According to Proposition \[CoreTheorem\], it is concluded that $M$ is co-reachable to a minimax basis marking, which implies that $M$ is nonblocking, another contradiction, which concludes the proof. According to Theorem \[0904Th\], determining the nonblockingness of a plant $G$ can be done by two steps: (1) determine if there exists a reachable non-final dead marking; if not, then (2) determine the unobstructiveness of a minimax-BRG of it. Since step (2) has already been discussed in the previous section, we only need to study step (1). First, in Section \[NewSub0208\], we present an elementary characterization of dead markings in acyclic nets. Then we show how to determine the existence of non-final dead markings by using the minimax-BRG in Section \[Sub5.1\]. Characterization of Deadlocks in Acyclic Nets {#NewSub0208} --------------------------------------------- \[Prop0208\] Consider a net system $\langle N, M_0\rangle$ where $N$ is acyclic and $M_0[\sigma\rangle M$ where $\sigma\in T^*$. $M$ is dead if and only if the following holds: $\nexists\sigma^{\prime}\in T^*: (M_0[\sigma^{\prime}\rangle)\wedge (\varphi(\sigma^{\prime})\gneqq \varphi(\sigma)).$ (if) We prove this by contradiction. Assume that $M$ is not dead. Hence, there exists a transition $t\in T$ such that $M[t\rangle$. This implies that $M_0[\sigma t\rangle$ with $\varphi(\sigma t)\gneq\varphi(\sigma)$, a contradiction. Thus $M$ is dead. (only if) Suppose that there exists a firing sequence $\sigma^{\prime}\in L(M_0)$ such that $\varphi(\sigma^{\prime})\gneqq \varphi(\sigma)$. Therefore, the following equation holds: $$M_0+C\cdot\varphi(\sigma)=M_0+C\cdot\varphi(\sigma)+C\cdot(\varphi(\sigma^{\prime})-\varphi(\sigma)).$$ Since the net is acyclic, from Proposition \[ProX\], it follows that there exists a firing sequence $\sigma^{\prime\prime}$ with $\varphi(\sigma^{\prime\prime})=\varphi(\sigma^{\prime})-\varphi(\sigma)\gneqq \textbf{0}$ such that $M[\sigma^{\prime\prime}\rangle$, a contradiction. According to Proposition \[Prop0208\], for acyclic nets, dead markings can be characterized only by incidence matrix analysis. Based on that, in the following we show how to compute the set of non-final dead markings through minimax-BRG. Determination of Non-Final Dead Markings {#Sub5.1} ---------------------------------------- We first define the set of non-final dead markings, *maximal implicit firing sequences* and the corresponding set of vectors as follows. \[Defnmbd\] Given a Petri net system $(N, M_0, \mathcal{M_F})$ with basis partition $\pi=(T_E, T_I)$ and $\mathcal{M_F}=\mathcal{L}_{(\textbf{w},k)}$, let $\mathcal{M_{B_M}}$ be its set of minimax basis markings. 
The set of non-final dead markings is defined as $\mathcal{D}_{\rm nf}=\{M\in R(N, M_0)|(\forall t\in T, M\ngeqslant Pre(\cdot, t))\wedge(M\notin\mathcal{M_{F}})\}.\hfill\blacksquare$ \[1220new\] Given a bounded net system $\langle N, M_0\rangle$ with basis partition $\pi=(T_E, T_I)$ and a marking $M\in R(N, M_0)$, we define $\Sigma_{\rm I, max}(M)=\{\sigma\in T_I^*| (M[\sigma\rangle)\wedge(\nexists \sigma^{\prime}\in T_I^*: M[\sigma^{\prime}\rangle, \varphi(\sigma^{\prime})\gneqq \varphi(\sigma))\}$ as the set of maximal implicit firing sequences at $M$, and $Y_{\rm I, max}(M)=\{\varphi(\sigma)\in \mathbb{N}^{n_{I}}| \sigma\in \Sigma_{\rm I, max}(M)\}$ as the corresponding set of maximal implicit firing vectors.$\hfill\blacksquare$ \[Lemma0207\] Consider a bounded net system $\langle N, M_0\rangle$ with basis partition $\pi=(T_E, T_I)$ where $N$ is acyclic. A marking $M\in R(N, M_0)$ is dead if and only if the following holds: $\exists\sigma\in\Sigma_{\rm I, max}(M_0), \forall t\in T_E, \sigma\notin \Sigma_{{\rm max}}(M_0, t): M_0[\sigma\rangle M.$ (if) Since $\sigma\in \Sigma_{\rm I, max}(M_0)$, there does not exist an implicit transition $t_I\in T_I$ such that $M[t_I\rangle$. On the other hand, since for all $t\in T_E, \sigma\notin \Sigma_{{\rm max}}(M_0, t)$, there does not exist an explicit transition $t^{\prime}\in T_E$ such that $M[t^{\prime}\rangle$, which implies that $M$ is dead. (only if) Since $M$ is dead, then there does not exist $t\in T$ such that $M[t\rangle$. Therefore, $\sigma\in\Sigma_{\rm I, max}(M_0)$ and for all $t\in T_E, \sigma\notin \Sigma_{{\rm max}}(M_0, t)$. Proposition \[Lemma0207\] shows the relation between dead markings and minimax basis markings in a bounded and acyclic system. Next, we generalize this result to a system that may not be acyclic in Corollary \[Prop0121\]. \[Prop0121\] Given a bounded net system $\langle N, M_0\rangle$ with basis partition $\pi=(T_E, T_I)$, let $\mathcal{M_{B_M}}$ be its minimax basis marking set. Marking $M\in R(N, M_0)\setminus\mathcal{M_{B_M}}$ is dead if and only if the following holds: $\exists M_b\in\mathcal{M_{B_M}}, \exists\sigma\in\Sigma_{\rm I, max}(M_b), \forall t\in T_E, \sigma\notin \Sigma_{{\rm max}}(M_b, t): M_b[\sigma\rangle M.$ This statement follows from Propositions \[SRGnew\] and \[Lemma0207\]. By Corollary \[Prop0121\], all reachable dead markings that are non-minimax-basis can be obtained by firing a maximal implicit firing sequence $\sigma$ from a minimax basis marking $M_b$ where for all $t\in T_E$, $\sigma$ is not a maximal explanation of $t$. However, the set of dead markings that are minimax-basis may be omitted: suppose that $M_b^{\prime}\in\mathcal{M_{B_M}}$ is dead in a bounded system, then $\Sigma_{\rm I, max}(M_b^{\prime})=\emptyset$ and for all $t\in T_E, \Sigma_{{\rm max}}(M_b^{\prime}, t)=\emptyset$. To expose such a set of dead markings, we introduce Algorithm \[Algonew\] to compute the set of non-final dead markings $\mathcal{D}_{\rm nf}$ in a Petri net system. 
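The procedure that Algorithm \[Algonew\] formalizes, and that is explained in detail after the listing below, can be summarized in a short sketch. This is only an illustration: the helper `max_implicit_vectors(M_b)` stands in for the computation of $Y_{\rm I, max}(M_b)$ through the maximal explanations of the dummy transition $t_0$ (Algorithm \[AlgoMax\]), and the array-based layout assumed for $Pre$ and $C_I$ is our own choice, not part of the paper.

```python
import numpy as np

def nonfinal_dead_markings(minimax_basis, pre_E, C_I, is_final, max_implicit_vectors):
    """Sketch of Algorithm [Algonew]: collect the set of non-final dead markings D_nf.

    minimax_basis           -- iterable of minimax basis markings (1-D arrays over P)
    pre_E                   -- Pre matrix restricted to the explicit transitions T_E
    C_I                     -- incidence matrix restricted to the implicit transitions T_I
    is_final(M)             -- membership test for the final-marking set M_F
    max_implicit_vectors(M) -- returns Y_I,max(M), e.g. via the maximal explanations
                               of the zero-pre/zero-post dummy transition t_0
    """
    D_nf = []
    for M_b in minimax_basis:
        for y in max_implicit_vectors(M_b):
            M_prime = M_b + C_I @ y          # marking reached by a maximal implicit sequence
            # No implicit transition is enabled at M_prime (y is maximal); M_prime is dead
            # iff, in addition, every explicit transition is disabled.
            explicit_disabled = all(np.any(M_prime < pre_E[:, j])
                                    for j in range(pre_E.shape[1]))
            if explicit_disabled and not is_final(M_prime):
                D_nf.append(M_prime)
    # A dead minimax basis marking M_b yields y = 0 and is therefore collected as well.
    return D_nf
```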
A bounded Petri net system $(N, M_0, \mathcal{M_F})$ with basis partition $\pi=(T_E ,T_I)$ and its minimax basis marking set $\mathcal{M_{B_M}}$ The set of non-final dead markings $\mathcal{D}_{\rm nf}$ $\mathcal{D}_{\rm nf}:=\emptyset$; Let $T^{\prime}=T\cup\{t_0\}$ and $T_E^{\prime}=T_E\cup\{t_0\}$; Let $Pre(\cdot, t_0)=Post(\cdot, t_0)=\textbf{0}$, $Pre^{\prime}=[Pre(\cdot, t_0);Pre]$, and $Post^{\prime}=[Post(\cdot, t_0);Post]$; Let $N^{\prime}=(P, T^{\prime}, Pre^{\prime}, Post^{\prime})$ and $\pi^{\prime}=(T_E^{\prime} ,T_I)$; Obtain a bounded Petri net system $(N^{\prime}, M_0, \mathcal{M_F})$ with basis partition $\pi^{\prime}$; $M^{\prime}:=M+C_I\cdot \textbf{y}$; $\mathcal{D}_{\rm nf}:=\mathcal{D}_{\rm nf}\cup \{M^{\prime}\}$; \[Algonew\] In Algorithm \[Algonew\], first, from lines 2$\--$5, we add an explicit transition $t_0$ to $N$ with $Pre(\cdot, t_0)=Post(\cdot, t_0)=\textbf{0}$ and derive a new Petri net system $(N^{\prime}, M_0, \mathcal{M_F})$. Obviously, the firing of $t_0$ will be unconditional and it holds that $R(N, M_0)=R(N^{\prime}, M_0)$. Hence, for all $M_b\in\mathcal{M_{B_M}}$, we conclude that $Y_{\rm I, max}(M_b)=Y_{\rm max}(M_b, t_0)$, i.e., the set of maximal implicit firing vectors at $M_b$ can be determined by computing maximal explanation of $t_0$ at $M_b$ based on Algorithm \[AlgoMax\]. Then, we determine if, for all $t\in T_E$, the obtained firing vector $\textbf{y}\in Y_{\rm I, max}(M_b)$ is not an explanation of $t$ at $M_b$. This can be technically transformed by checking if, for all $t\in T_E$, $t$ is disabled at marking $M^{\prime}=M_b+C_I\cdot\textbf{y}$: if so, according to Corollary \[Prop0121\], such a marking $M^{\prime}$ is dead and non-minimax-basis. If $M^{\prime}$ is not final, we collect it in $\mathcal{D}_{\rm nf}$. Note that the set of dead markings that are minimax-basis can also be computed by Algorithm \[Algonew\]: suppose that a minimax basis marking $M_b^{\prime}\in\mathcal{M_{B_M}}$ is dead. Since $Y_{\rm max}(M_b^{\prime}, t_0)=\{\textbf{0}\}$ and for all $t\in (T_E^{\prime}\setminus\{t_0\})$, $Y_{\rm max}(M_b^{\prime}, t)=\emptyset$, $M_b^{\prime}$ will be collected in $\mathcal{D}_{\rm nf}$ if it is not final. After this algorithm terminates, if $\mathcal{D}_{\rm nf}\neq\emptyset$, we conclude that the Petri net system is blocking; otherwise, the unobstructiveness verification procedure of the minimax-BRG should be further executed. Numerical Analysis {#sec2.3} ================== ![A parameterized manufacturing example.[]{data-label="FMS"}](20200121-eps-converted-to.pdf){width="7.8cm"} We use a parameterized Petri net system to show the scalability and efficiency of our method in this section. Chosen from [@c3], a net system is depicted in Fig. \[FMS\] as a parameterized net $\langle N, M_0\rangle$. Let $M_0(p_1)=M_0(p_{10})=M_0(p_{18})=\lambda$ and $M_0(p_{8})=M_0(p_{9})=M_0(p_{17})=M_0(p_{19})=M_0(p_{20})=M_0(p_{21})=M_0(p_{22})=\mu$. Consider $T_E=\{t_3, t_6, t_{11}, t_{13}\}$ (marked as shadow bars) and $T_I=\{t_1, t_2, t_{4}, t_{5}, t_7, t_8, t_{9}, t_{10}, t_{12}, t_{14}, t_{15}, t_{16}\}$. Also, we set $\mathcal{M_F}=\mathcal{L}_{(\textbf{w},k)}=\{M\in\mathbb{N}^m| \textbf{w}^{\rm T}\cdot M \leq k\}$, where $\textbf{w}=[0\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ 1\ 0\ 0\ 0\ 0\ 1\ 0\ 0\ 0\ 0\ 0\ 0\ 0\ 0]^{\rm T}$ and $k=\mu$, to test nonblockingness of this Petri net system for all cases. 
On a laptop with an Intel i7-5500U 2.40 GHz processor and 8 GB of RAM, and as shown in Table \[tablenew1\], for different values of the parameters $\lambda$ and $\mu$ we report the sizes of the reachability graph $|R(N, M_0)|$, the expanded BRG $|\mathcal{M_{B_E}}|$ and the minimax-BRG $|\mathcal{M_{B_M}}|$, as well as the time required to compute them. We also report the ratios of $|\mathcal{M_{B_M}}|$ to $|\mathcal{M_{B_E}}|$ and of $|\mathcal{M_{B_M}}|$ to $|R(N, M_0)|$. It can be verified that $|\mathcal{M_{B_M}}|\ll|\mathcal{M_{B_E}}|$ and $|\mathcal{M_{B_M}}|\ll|R(N, M_0)|$ in all cases. Note that although the size of the minimax-BRG grows more than linearly as the system scale increases, it still does not grow exponentially, as the sizes of the reachability graph and the expanded BRG do; its growth depends on the net structure, the initial resource distribution and the choice of basis partition $\pi=(T_E, T_I)$.

In Table \[tablenew2\], we show the performance of computing the set of non-final dead markings $\mathcal{D}_{\rm nf}$ based on Algorithm \[Algonew\] and the total time required to determine nonblockingness (i.e., the sum of the time to compute $\mathcal{M_{B_M}}$ and the time to compute $\mathcal{D}_{\rm nf}$) in all cases. The cardinality of $\mathcal{D}_{\rm nf}$ and the time required to compute it are shown in columns 4$\--$5. Since $\mathcal{D}_{\rm nf}\neq\emptyset$ for all cases, we conclude in column 7 that the system is blocking in all cases.

Conclusion and Future Work {#Con}
==========================

In this paper, we study the problem of nonblockingness verification of a Petri net system. A semi-structural method using the minimax-BRG is developed, which can be used to determine the nonblockingness of a deadlock-free Petri net by checking its unobstructiveness. This approach is generalized to nets that are not deadlock-free by computing the set of non-final dead markings based on the set of minimax basis markings. Hence, to verify nonblockingness, one can first determine the existence of non-final deadlocks and then check the unobstructiveness of the minimax-BRG. The main advantages of our methods are that they do not require an exhaustive enumeration of the state space and that they have wide applicability. As for future work, we will first investigate necessary and sufficient conditions for verifying nonblockingness in unbounded nets. Second, if a plant net is blocking, we plan to study the nonblockingness enforcement problem and to develop a supervisor that guarantees that the closed-loop system is nonblocking.
--- abstract: 'Consider a set of $N$ systems and an arbitrary interaction Hamiltonian $H$ that couples them. We investigate the use of local operations and classical communication (LOCC), together with the Hamiltonian $H$, to simulate a unitary evolution of the $N$ systems according to some other Hamiltonian $H''$. First, we show that the most general simulation using $H$ and LOCC can be also achieved, with the same time efficiency, by just interspersing the evolution of $H$ with local unitary manipulations of each system and a corresponding local ancilla (in a so-called LU+anc protocol). Thus, the ability to make local measurements and to communicate classical information does not help in non–local Hamiltonian simulation. Second, we show that both for the case of two $d$-level systems ($d>2$), or for that of a setting with more than two systems ($N>2$), LU+anc protocols are more powerful than LU protocols. Therefore local ancillas are a useful resource for non–local Hamiltonian simulation. Third, we use results of majorization theory to explicitly solve the problem of optimal simulation of two-qubit Hamiltonians using LU (equivalently, LU+anc, LO or LOCC).' author: - 'G. Vidal$^{1,2}$ and J. I. Cirac$^{1}$' title: | Non–local Hamiltonian simulation\ assisted by local operations and classical communication --- Introduction ============ The problem of using a given non–local Hamiltonian $H$ and some class of local operations to simulate another non–local Hamiltonian $H'$ has very recently attracted the attention of several authors in quantum information science [@Dur; @Dod; @Woc; @wir; @Woc2; @Nie; @Woc3]. Nonetheless, average Hamiltonian techniques, a basic ingredient in non–local Hamiltonian simulation, have been studied for many years in control theory [@control], and are commonly used in the area of nuclear magnetic resonance [@NMR]. From the perspective of quantum information science, non–local Hamiltonian simulation sets a frame for the parameterization of the non–local resources contained in multi–particle Hamiltonians, very much in the line of thought pursued to quantify the entanglement of multi–particle quantum states. In the most common setting, fast local unitary operations LU are performed on a series of systems to effectively modify the Hamiltonian $H$ that couples them. A remarkable result is the qualitative equivalence of all bipartite interactions under LU [@Dod; @wir; @Woc2; @Nie; @Woc3]. This can be shown to imply that any Hamiltonian $H$ with pairwise interactions between some of the systems can simulate any other Hamiltonian $H'$ consisting of arbitrary pairwise interactions between the same systems. At a quantitative level, the time–efficiency with which a Hamiltonian $H$ is able to simulate a Hamiltonian $H'$ can be used as a criterion to endow the set of non–local Hamiltonians with a (pseudo) partial order structure, that allows to compare the non–local capabilities of $H$ and $H'$ [@wir]. For two-qubit Hamiltonians, simulations using LU or arbitrary local operations LO have been shown to yield the same optimal time efficiencies, and the resulting partial order structure has been computed explicitly. This has led to the necessary and sufficient conditions for $H$ to be able to simulate $H'$ [*efficiently*]{} for infinitesimal times, that is, the conditions under which the use of $H$ for time $t$ allows to simulate $H'$ for the same time $t$, in the small time $t$ limit. 
Equivalently, this result shows how to [*time–optimally*]{} simulate $H'$ with $H$, in the sense of achieving the maximal simulation ratio $t'/t$, where $t$ is the time of interaction $H$ that it takes to simulate interaction $H'$ for a time $t'$.

Ancillary systems, generalized local measurements and classical communication in non–local Hamiltonian simulation
------------------------------------------------------------------------------------------------------------------

The aim of this paper is to elucidate the role a number of resources play in the simulation of non–local Hamiltonians. Relatedly, we seek to establish equivalences between different classes of operations that may be used in a simulation protocol. We first address the question of whether classical communication (CC) between the systems is useful in non–local Hamiltonian simulation. Recall that in protocols that include local measurements, the ability to communicate which outcome has been obtained in measuring one of the systems allows subsequent operations on other systems to depend on this information. Now, can this ability be used in non–local Hamiltonian simulation to enlarge the set of achievable simulations? Suggestively enough, the answer is yes in the closely related problem of converting one non–local gate into another non–local gate by means of local operations. For instance, there exist two–qubit gates $U$ that can be achieved by performing a C–NOT gate and LOCC but cannot be achieved by a C–NOT and LO [@gates]. We also study the advantage of using ancillary systems in simulation protocols, as well as that of performing general local operations instead of just local unitary transformations. Altogether, our analysis refers to the following classes of transformations:

- LU = local unitary operations,

- LU+anc = local unitary operations with ancillas,

- LO = local operations [@note1],

- LOCC = local operations with classical communication.

Results
-------

This paper contains the following three main results concerning the simulation of non–local Hamiltonian evolutions for infinitesimal times:

($i$) LOCC (or LO) simulation protocols can be reduced to LU+anc simulation protocols. That is, for $N$-particle Hamiltonian interactions $H$ and $H'$, any protocol that simulates $H'$ using $H$ and LOCC (or LO) can be replaced, without changing its time efficiency, with a protocol involving only $H$ and local unitary transformations. Each local unitary transformation may be performed jointly on one of the $N$ systems and a local ancilla.

($ii$) Apart from exceptional cases such as that of two-qubit Hamiltonians [@wir] —in which any LU+anc protocol can be further replaced with an even simpler protocol that uses only LU on each qubit—, the use of ancillas is, in general, advantageous. This is proven by constructing explicit examples of LU+anc protocols where ancillas are used to obtain simulations that cannot be achieved with only LU operations, both in the case of two $d$-level systems ($d>2$) and in the case of $N>2$ systems.

($iii$) For two-qubit Hamiltonians, we use results of majorization theory to recover the optimality results presented in [@wir]. In view of the equivalence between LU, LU+anc, LO and LOCC protocols for two–qubit systems, this solves the problem of time–optimal, two–qubit Hamiltonian simulation under any of these classes of operations.

The structure of the paper is as follows. In section II we introduce some known results. Sections III, IV and V present results ($i$), ($ii$) and ($iii$), respectively.
Section VI contains some conclusions and appendices A and B discuss some technical aspects of sections III and V. Preliminaries ============= We start by reviewing some background material from Ref. [@wir], of which the present work can be regarded as an extension. Non–local Hamiltonian simulation and classes of operations ---------------------------------------------------------- Recall that the aim of non–local Hamiltonian simulation is, given a set of systems that interact according to Hamiltonian $H$ for time $t$ and a class $C$ of local control operations, to be able to produce an evolution $e^{-iH't'}$ for the systems, where $H'$ and $t'$ are the simulated Hamiltonian and the simulated time. \[We take $\hbar \equiv 1$ along the paper\]. As mentioned above, one can consider several classes of operations to assist in the simulation, including LU, LU+anc, LO and LOCC. As in [@wir], we make two basic assumptions: ($i$) these additional operations can be implemented very fast compared to the time scale of the Hamiltonian $H$ (we actually consider the setting in which they can be performed [*instantaneously*]{} and thus characterize the fast control limit); ($ii$) these operations are a cheap resource, so that optimality over simulation protocols is defined only in terms of the ratio $t'/t$, that is, in terms of how much time $t'$ of evolution according to $H'$ can be produced by using $H$ for a time $t$. Another interesting parameter characterizing simulations, that we do not analyze here, would be some measure of the complexity of the simulation, that is of the number of control operations that are performed. We also note that the inclusions between classes of operations, LU $\subset$ LU+anc $\subset$ LO $\subset$ LOCC, imply relations between the sets of achievable simulations and time efficiencies. For instance, since LOCC simulation protocols contain all LU simulation protocols, we expect LOCC protocols to be more powerful than LU protocols. Infinitesimal–time simulations ------------------------------ The maximal simulation factor $s(t')\equiv t'/t$ when simulating $e^{-iH't'}$ by using $H$ for time $t$ may depend on $t'$. However, we are ultimately interested in characterizing the non-local properties of interaction Hamiltonians, irrespective of interaction times. A sensible way to proceed is by considering the worst case situation, namely the time $t'$ for which the optimal ratio $s(t')$ achieves its minimal value. This occurs for an infinitesimal time $t'$. That is, simulations of $H'$ for a time such that $||H't'||<< 1$ are, comparatively, the most expensive in terms of the required time $t$ of interaction $H$. The reason is that, ($i$) simulations for an infinitesimal time are a particular case of simulation, providing an upper bound for the minimum of $s(t')$, and ($ii$) any finite-time simulation —or [*gate synthesis*]{}— can be achieved, maybe not optimally, by concatenating infinitesimal–time simulations. We shall denote $s_{H'|H}$ the limit $\lim_{t'\rightarrow 0} s(t')$, and call it the simulation factor of $H'$ with $H$ \[$s_{H'|H}$ corresponds to the inverse of the time overhead $\mu$ of Ref. 
[@Woc], that is $s_{H'|H} = \mu^{-1}$.\] Then, apart from quantifying the time efficiency in infinitesimal simulations, $s_{H'|H}$ has also two other meanings: - $T'/s_{H'|H}$ upper bounds the time $T$ of use of $H$ needed to perform the unitary gate $e^{-iH'T'}$, for any $T'$ ([*gate simulation*]{} or [*gate synthesis [@intcost]*]{}); - $s_{H'|H}$ is the optimal time efficiency in [*dynamics simulation*]{}. That is, $s_{H'|H}$ is the maximal achievable ratio $T'/T$, where $T$ is the time of $H$ required to simulate the [*entire*]{} evolution of a system according to $e^{-it'H'}$, where $t'$ runs from $0$ to $T'$ [@simu]. In an abuse of notation, we shall refer to condition $||H't'|| << 1$ as the small time limit, of which $\O(t') ~[$or $\O(t)]$ will denote first order corrections. Optimal and efficient simulations --------------------------------- For any class $C \in \{$LU, LU+anc, LO, LOCC$\}$ of the above operations and in the small time limit, the space of achievable evolutions using Hamiltonian $H$ and operations $C$ turns out to be convex. Then the following two problems, [**P1:**]{} [*Given any $H$ and $H'$, determine when $H'$ can be efficiently (i.e., $t'=t$) simulated with $H$ for infinitesimal times, denoted*]{} H’ \_C H; [**P2:**]{} [*Given any $H$ and $H'$, determine the simulation factor $s_{H'|H}$;*]{} are equivalent, since $s_{H'|H}$ is nothing but the greatest $s$ such that $sH'$ can be efficiently simulated by $H$, that is, such that $sH' \geq_C H$. Equivalence of LO and LU+anc protocols -------------------------------------- The simulation of non–local Hamiltonians using LO and that using LU+anc are equivalent (see [@wir] for details), in that any protocol based on LO can be replaced with another one that uses only LU+anc and that has the same time efficiency. The ultimate reason for this equivalence is that even if LO provide, through measurement outcomes, information that can be used to decide on posterior local manipulations, this information cannot be transmitted to the other parties \[unless the interaction itself is used for this purpose, but this leads to null efficiency $t'/t$ when $t\rightarrow 0$\]; then, unitarity of the simulated evolution implies that each party is effectively applying a trace–preserving local operation on its subsystem, and this can always be achieved using only LU+anc. The previous situation changes when classical communication is allowed between the parties, because then they can coordinate their manipulations. In spite of this fact, CC does not help in Hamiltonian simulation, as we move to discuss next. Equivalence of LOCC and LU+anc protocols ======================================== In this section we show that any protocol for non–local Hamiltonian simulation based on LOCC can be replaced with another one based only on LU+anc and having the same time efficiency. This result, valid for infinitesimal–time simulations on arbitrary $N$–particle systems, brings an important simplification to the general problem of non–local Hamiltonian simulation, since it implies the equivalence of LOCC, LO and LU+anc protocols. We first describe in detail a most general protocol for Hamiltonian simulation using LOCC. Then we show —through an argument that exploits the fact that entanglement only decreases under LOCC— that any such protocol can be replaced with another one using only LU+anc. 
The key point of the proof is to assume that one of the systems is initially entangled with an auxiliary system $Z$, and to realize that a non–trivial measurement (i.e., a measurement not equivalent to some local unitary transformation) on the system would partially destroy this entanglement in an irreversible way. Since we are simulating a unitary process on the systems (which should preserve the entanglement between those and $Z$), all local measurements must be trivial, and can be replaced with unitary transformations. Hamiltonian simulation using LOCC --------------------------------- For clearness sake we will perform most of the analysis in the simplest non–trivial case, that involving only two qubits, because this already contains all the ingredients of the general $N$-particle setting. Let us consider, then, that qubits $A$ and $B$, with Hilbert spaces $\H_A$ and $\H_B$, interact according to $H$ for an overall time $t$, and that, simultaneously, they are being manipulated locally. ### Local manipulation The most general local operation on, say, qubit $A$ can be achieved [@book] by ($i$) appending to $A$ an ancillary system $A'$ in some blank state $\ket{0_{A'}} \in \H_{A'}$; ($ii$) performing a unitary transformation $U$ on $\H_{AA'} = \H_A\otimes \H_{A'}$; ($iii$) performing an orthogonal measurement on a factor space $\H_{meas}$ of the total Hilbert space $\H_{AA'} = \K\otimes \H_{meas}$, given by projection operators $\{P^{\alpha}\}$; and ($iv$) tracing out a factor space $\T^{\alpha}$ of $\H_{AA'}= \H_{out}^{\alpha}\otimes \T^{\alpha}$, where $\H^{\alpha}_{out}$ and $\T^{\alpha}$ may depend on the measurement outcome $\alpha$ of step ($iii$). Under ($i$)–($iv$) the initial state $\ket{\phi_A}$ of qubit $A$ transforms with probability $p_{\alpha}$ according to && \_() =\ && \_[\^]{} , \[eq:ioio\] where $p_{\alpha} \equiv \tr [P^{\alpha} U ( \proj{\phi_A}\otimes\proj{0_{A'}} ) U^{\dagger}P^{\alpha}]$. We can introduce operators $M^{\alpha}_i: \H_A \rightarrow \H_{out}^{\alpha}$, M\_i\^ P\^U, where $\{\ket{i^{\alpha}}\}$ is an orthonormal basis of $\T^{\alpha}$. Then Eq. (\[eq:ioio\]) can be rewritten as \_() = \_i M\^\_iM\^\_i. Now, since in our case the eventual result of this manipulation must be a unitary evolution, we are interested in transformations $\E_{\alpha}$ that map pure states into pure states, that is, such that can be implemented by just one operator $M^{\alpha}:\H_A \rightarrow \H_{out}^{\alpha}$, M\^M\^, $p_{\alpha} = tr [M^{\alpha}\proj{\phi_A}M^{\alpha\dagger}]$. Therefore the effect of the local manipulation on qubit $A$ is a generalized measurement $\M$ that, with probability $p^{\alpha}$, maps the state of $A$ into a state supported on $\H_{out}^{\alpha}$, M\^, and produces classical information $\alpha$. The measurement operators $\{M^{\alpha}\}$ characterizing $\M$ satisfy $\sum_{\alpha} M^{\alpha \dagger}M^{\alpha} = I_{A}$. More generally, in a simulation protocol measurement $\M$ may depend on some previous information $\beta$, in which case we write $\M^{\beta}$. In addition, the corresponding measurement operators $\{M^{\alpha,\beta}\}$ may map states from a two–dimensional subspace $\H_{in}^{\beta} \subset \H_{AA'}$ into another two dimensional subspace $\H_{out}^{\alpha, \beta}\subset \H_{AA'}$ that depends both on the measurement outcome $\alpha$ and on the previous information $\beta$, that is M\^[,]{}:H\_[in]{}\^H\_[out]{}\^[, ]{}. 
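As a side illustration of steps ($i$)–($iv$), the following numpy sketch realizes a generalized measurement on a single qubit $A$ with a one-qubit ancilla $A'$, in the minimal special case where the measured factor space is the ancilla itself. The unitary $U$ chosen below is an arbitrary example of ours, not one taken from the text; the assertion reproduces the completeness relation $\sum_{\alpha}M^{\alpha\dagger}M^{\alpha}=I_A$ quoted above.

```python
import numpy as np

# Steps (i)-(iv) for one qubit A and a one-qubit ancilla A' (illustrative choice of U):
# U = |0><0|_A (x) I_A' + |1><1|_A (x) Ry(theta)_A'  -- a controlled rotation of the ancilla.
theta = 0.3
ry = lambda a: np.array([[np.cos(a / 2), -np.sin(a / 2)],
                         [np.sin(a / 2),  np.cos(a / 2)]])
U = np.block([[ry(0.0), np.zeros((2, 2))],
              [np.zeros((2, 2)), ry(theta)]])

ket0_anc = np.array([1.0, 0.0])          # blank ancilla state |0_A'>
U_tensor = U.reshape(2, 2, 2, 2)         # indices (A_out, A'_out, A_in, A'_in)

# Measurement operators M^alpha = <alpha_A'| U |0_A'>, each mapping H_A -> H_A.
M = []
for alpha in range(2):
    bra_alpha = np.zeros(2)
    bra_alpha[alpha] = 1.0
    M_alpha = np.einsum('j,ajbk,k->ab', bra_alpha, U_tensor, ket0_anc)
    M.append(M_alpha)

# Completeness: sum_alpha M^alpha^dagger M^alpha = I_A
completeness = sum(m.conj().T @ m for m in M)
assert np.allclose(completeness, np.eye(2))
```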
In the following, a series of measurements $\M$ will be concatenated, in such a way that the [*out–subspace*]{} $\H_{out}$ for a given measurement is related to the [*in–subspace*]{} $\H_{in}$ for the next one. We consider that a sufficiently large ancillary system $A'$ in a pure state has been initially appended to qubit $A$ so that it provides at once the extra degrees of freedom needed to perform all generalized measurements $\M$ on $A$. Finally, all the above considerations apply also to qubit $B$, to which an ancillary system $B'$ is appended. ### LOCC simulation protocol A LOCC protocol for simulating $e^{-it'H'}$ by $H$ for time $t$ is characterize by a partition $\{t_1, t_2,...,t_n\}$ of $t$, where $t_i\geq 0$, $\sum_i t_i = t$, and a series of local measurements, $\{(\M_{0},\N_{0}),(\M_{1}^{\alpha_1},\N_{1}^{\alpha_1}), ..., (\M_{n}^{\alpha_n},\N_{n}^{\alpha_n}) \}$. The protocol runs as follows: 1. The simulation begins with measurements $\M_{0}$ and $\N_{0}$ being performed on $A$ and $B$, respectively. These map the original state of $AB$ into a state supported on some subspace of $AA'BB'$. 2. Then the two qubits $A$ and $B$ are left evolve according to $H$ for a time $t_1$. 3. After that, measurements $\M_{1}^{\alpha_1}$ and $\N_{1}^{\alpha_1}$ are performed. Here, index $\alpha_1$ indicates that the measurements being performed after time $t_1$ may depend on the outcomes of measurements $\M_{0}$ and $\N_{0}$. 4. Again, the measurements are followed by an evolution, for time $t_2$, of $A$ and $B$ according to $H$, and the protocol continues in an iterative fashion. 5. In step $k$, qubits $A$ and $B$ are first left evolve according to $H$ for a time $t_k$ and then measurements $\M_{k}^{\alpha_k}$ and $\N_{k}^{\alpha_k}$ ($\alpha_k$ denoting again a possible dependence on the outcome of any previous measurement) are locally performed in $AA'$ and $BB'$. 6. The protocol finishes after measurements $\M_{n}^{\alpha_n}$ and $\N_{n}^{\alpha_n}$ have been performed. These last measurements must leave the two–qubit system $AB$ in a pure state (that is, uncorrelated from systems $A'B'$, that are traced out). Thus, the two–qubit system $AB$ is initially in some state $\ket{\psi}$, becomes entangled with the ancillas $A'$ and $B'$ during the manipulations describes above, but ends up in the state $e^{-iH't'}\ket{\psi}$ after time $t$. Note that the protocol described above has a tree structure, starting with a preestablished couple of local manipulations and ending up at the extreme of a branch characterized by the outcomes of all (conditioned) local operations performed during the time interval $t$. We move now to characterize one of these branches. ![image](figura2) ### One branch of the protocol Let us suppose we run the simulation once. This corresponds to some given branch of the protocol, that we label $\Gamma$, and have represented in the figure. Branch $\Gamma$ is characterized by a series of measurement operators $\{(M_{0}^{\Gamma}, N_{0}^{\Gamma}), ..., (M_{n}^{\Gamma}, N_{n}^{\Gamma})\}$, where the superindices $\alpha_k$ containing the information that characterizes the branch have been replaced with $\Gamma$ to simplify the notation. Recall that the aim of the protocol is to achieve an evolution according to $e^{-iH't'}$. 
Therefore, for any initial vector $\ket{\psi}$ of the two–qubit system $AB$, the measurement operators $\{(M_{k}^{\Gamma}, N_{k}^{\Gamma})\}_{k=0}^n$ must obey && e\^[-iH’t’]{} |= ( M\_n\^N\_n\^) e\^[-it\_nH]{}\ &&           ( M\_1\^N\_1\^) e\^[-it\_1H]{} (M\_0\^N\_0\^ ) |, \[eq:locc\] where $p_{\Gamma}$ denotes the probability that branch $\Gamma$ occurs in the protocol. Eq. (\[eq:locc\]) is the starting point for the rest of the analysis in this section. LOCC protocols are as efficient as LU+anc protocols for infinitesimal time simulations -------------------------------------------------------------------------------------- As discussed in the introduction, we are interested here in simulations for an infinitesimal simulation time $t$. In this regime Eq. (\[eq:locc\]) significantly simplifies, because we can expand the exponentials to first order in $t$ (or equivalently, in $\{t_k\}$ and $t'$), thereby obtaining an equation which is linear both in $H$ and $H'$. In addition, if $t$ is small then qubits $A$ and $B$ interact only “a little bit”. In what follows we will use this fact to prove the main result of this section, namely that all the measurement operators $\{M_k^{\Gamma}, N_k^{\Gamma}\}_{k=0}^n$ in Eq. (\[eq:locc\]) must be, up to negligible corrections, proportional to unitary operators in some corresponding relevant supports. This will eventually imply that LU+anc protocols can already simulate any evolution $e^{-iH't'}$ achievable in a LOCC protocol. We note that this result is not valid for the interconversion of non–local gates [@gates]. There the systems are allowed to interact according to a finite gate (e.g. a C–NOT), and thus accumulate some finite amount of entanglement (e.g. an [*ebit*]{}) in the ancillary systems, that can be used, together with LOCC, to perform some new non–local gate (e.g. through some teleportation scheme). ### LOCC protocols for infinitesimal–time simulations We define a series of operators $M_k$ and $M_k'$ by &&M\_k M\_n\^M\_[k]{}\^,      k=1,,n,\ &&M\_k’ M\_[k-1]{}\^M\_0\^,      k=1,,n,\ &&M\_0 M\_n\^M\_[0]{}\^, and also an analogous series of operators $N_k,N_k'$ and $N_0$. Notice that operator $M_k'$ describes a concatenation of all local measurements in branch $\Gamma$ performed from the beginning of the protocol and up to step $k-1$ on the state initially supported on $\H_{AA'}$, while $M_k$ collects the manipulations that will be performed from step $k$ until the end of the protocol. In the small time regime, we can expand the exponentials in Eq. (\[eq:locc\]) as a series in $t_k$ and $t'$ to obtain, up to second order corrections $\O(t^2)$, ( I\_[AB]{} - istH’ ) =                 \[eq:inflocc0\]\ ( M\_0N\_0 - i t\_[k=1]{}\^n p\_k (M\_kN\_k) H (M\_k’N\_k’)),where we have introduced probabilities $p_k\equiv t_k/t$ and the efficiency factor $s\equiv t'/t$ of the branch, so that all times are expressed in terms of $t$. This equation indicates that M\_0N\_0 = I\_[AB]{} + Ø(t), \[eq:identity\] for any two–qubit state $\ket{\psi}$, from which it follows that the probability $p_{\Gamma}$ that branch $\Gamma$ occurs cannot depend on $\ket{\psi}$ up to $\O(t)$ corrections, also that both $M_0$ and $N_0$ must be proportional to the identity operator in $H_A$ and $H_B$, M\_0 &=& q I\_[A]{} + Ø(t),\ N\_0 &=&   q\^[-1]{} I\_[B]{} + Ø(t), where $q$ is some positive parameter. Notice that the order $t$ corrections in Eq. 
(\[eq:identity\]) correspond to local terms, that is, to operators of the form $t(I_A\otimes O_B + O'_A\otimes I_B)$, and thus are irrelevant to this discussion [@neglect]. In what follows we neglect these local terms for clearness sake. Bearing this remark and Eq. (\[eq:identity\]) in mind, we rewrite Eq. (\[eq:inflocc0\]) as the operator equation && (I\_[AB]{} - istH’ )= \[eq:inflocc\]\ && I\_[AB]{} - i t\_[k=1]{}\^n p\_k (M\_kN\_k) H (M\_k’N\_k’) + Ø(t\^2).That is, sH’ = \_[k=1]{}\^n p\_k (M\_kN\_k) H (M\_k’N\_k’) + Ø(t), \[eq:semihami\] where, because of Eq. (\[eq:identity\]), some other constrains apply. More precisely, if $M_k'$ and $N_k'$ are given by M\_k’ = q( + )\ N\_k’ =   q\^[-1]{}( + ), \[eq:Mk\] where $\{\ket{i_A}\}$ and $\{\ket{i_B}\}$ are orthonormal basis of $\H_A$ and $\H_B$ and $\{\ket{\mu_i^k}\in \H_{AA'}\}$ and $\{\ket{\nu_i^k}\in \H_{BB'}\}$ are arbitrary vectors, not necessarily normalized, then $M_k$ and $N_k$ must fulfill M\_k &=& + + Ø(t),\ N\_k &=& + +Ø(t), \[eq:Mk’\] where $\{\ket{\tilde{\mu}_i^k}\}$ is the biorthonormal basis of $\{\ket{\mu_i^k}\}$ (in the subspace spanned by $\{\ket{\mu_i^k}\}$), that is $\braket{\mu_i^k}{\tilde{\mu}_j^k} = \delta_{ij}$, and similarly $\{\ket{\tilde{\nu}_i^k}\}$ is the biorthonormal basis of $\{\ket{\nu_i^k}\}$, so that $M_0\otimes N_0 = (M_kM_k')\otimes(N_kN_k')$ fulfills Eq. (\[eq:identity\]). Now, going back to the measurement operators $M_k^{\Gamma}$, we can expand them as M\_0\^ &=& q(  +  )\ M\_k\^ &=& +       k=1,, n-1\ M\_n\^ &=& + + Ø(t) \[eq:operators\] and similarly for the $N_k^\Gamma$. ### Unitarity and conservation of entanglement We carry on this analysis by focusing our attention only on the operations performed on systems $AA'$. We will show that operators $M_k$ and $M_k'$ can be replaced with operators proportional to $\bra{0_{A'}}U_k$ and $U_k^{\dagger}\ket{0_{A'}}$, where $U_k$ is a unitary matrix acting on $H_{AA'}$. We will use the fact that the protocol must be able to keep the entanglement of $A$ with another system $Z$. Let us suppose, then, that qubit $A$ is entangled with a distant qubit $Z$, with the maximally entangled vector ( + ) \[eq:entangled\] describing the pure state of $AZ$. Any unitary evolution of qubits $A$ and $B$ preserves the amount of entanglement between qubit $Z$ and qubits $AB$. In particular, if the unitary evolutions according to $H$ are infinitesimal, then up to $\O(t)$ corrections qubit $Z$ must be still in a maximally entangled state with $A$ after the simulated evolution $e^{-istH'}$. This sets very strong restrictions on the kind of measurements that can be performed on $A$ during the simulation protocol. If during the $k^{th}$ measurement in branch $\Gamma$ part of the entanglement is destroyed, then the simulation protocol necessarily fails with some probability, because the destroyed entanglement can not be deterministically recovered. Indeed, even if subsequent measurement operators in branch $\Gamma$ would be able to restore the entanglement and so obey Eq. (\[eq:locc\]), another branch $\Gamma'$ diverging from $\Gamma$ after the $k^{th}$ must necessarily fail to recover the entanglement (recall the monotonically decreasing character of entanglement under LOCC, see e.g. [@VJN]) and thus with some probability the protocol must fail to simulate the unitary evolution [@robustness]. Let us see the effect of this restrictions on the first measurement operator $M_0^{\Gamma}$ in Eq. (\[eq:operators\]). 
It transforms the initial entangled state into a new state proportional to + , which remains maximally entangled if and only if $||\ket{\mu_0^1}|| =||\ket{\mu_1^1}|| \equiv r_1$ and $\braket{\mu_0^1}{\mu_1^1}=0$. But this are precisely the conditions for $M_1' (=M^{\Gamma}_0)$ to be proportional to a unitary operator from $\H_A$ to the out–space $\H_{out}^0$ spanned by $\{\ket{\mu_i^1}\}$ or, equivalently, to an isometry from $\H_A$ to $\H_{AA'}$. Thus, we can write M\_1’ = r\_1 U\^\_1, where U\_1\^ +\ +\_[l=1]{}\^[d\_[A’]{}-1]{} + is some unitary operation defined on $\H_{AA'}$. Here $d_{A'}$ is the dimension of $\H_{A'}$ and $\{\ket{\xi_{l,0}},\ket{\xi_{l,1}} \}_{l=1}^{d_{A'}}$ is a set of irrelevant vectors that together with $\ket{\mu_0^1}/r_1$ and $\ket{\mu_1^1}/r_1$ form an orthonormal basis of $\H_{AA'}$. Eq. (\[eq:Mk’\]) implies that, in addition, M\_1 = U\_1. This characterization in terms of a unitary transformation can now be easily extended to the rest of operators $M_k$ and $M_k'$. We use induction over $k$. We already have that the characterization works for $k=1$. Suppose it works for some $k-1$, that is, in the decomposition Eq. (\[eq:Mk\]) for $M'_{k-1}$ we have $||\ket{\mu^{k-1}_0}||=||\ket{\mu^{k-1}_1}||$ and $\braket{\mu^{k-1}_0}{\mu^{k-1}_1}=0$. This means that after the $(k-1)^{th}$ measurement in branch $\Gamma$, the initial state of Eq. (\[eq:entangled\]) becomes a state proportional to + + Ø(t), where the $\O(t)$ corrections are due to evolutions of $AB$ according to $H$ for a time of order $t$, which slightly entangle $B$ with $AZ$. Then, preservation of entanglement during the $k^{th}$ measurement (implemented by operator $M^{\Gamma}_{k-1}$) requires that also $||\ket{\mu_{k}^0}||=||\ket{\mu_{k}^1}||\equiv r_{k}$ and $\braket{\mu_{k}^0}{\mu_{k}^1}=0$, and therefore M\_[k]{}’&=& r\_[k]{} U\_[k]{}\^ + Ø(t),\ M\_[k]{} &=& U\_k +Ø(t), for some unitary transformation $U_k$ acting on $\H_{AA'}$. The same argument leads to expressing the operators $N_k$ and $N_k'$ in terms of unitary transformations $V_k$ acting on $\H_{BB'}$ as N\_[k]{}’&=& s\_[k]{} V\_[k]{}\^ + Ø(t),\ N\_[k]{} &=& V\_k +Ø(t). Therefore Eq. (\[eq:semihami\]) finally reads, up to $\O(t)$ corrections that vanish in the $t\rightarrow 0$ or fast control limit, sH’ = \_k p\_k (U\_kV\_k) H (U\^\_kV\^\_k) . \[eq:convexsum\] ### Equivalence between LOCC and LU+anc protocols The set $S_{H}^{LOCC}$ of non–local Hamiltonians that can be efficiently simulated by $H$ and LOCC is [*convex*]{}: if $H$ can efficiently simulate $H_1$ and $H_2$, then it can also efficiently simulate the Hamiltonian $pH_1+(1-p)H_2$. Indeed, we just need to divide the infinitesimal time $t$ into two parts and simulate $H_1$ for time $pt$ and then $H_2$ for time $(1-p)t$. The resulting Hamiltonian is precisely the above average of $H_1$ and $H_2$. Thus, in order to characterize the convex set $S_{H}^{LOCC}$, we can focus on its [*extreme points*]{}. Notice that the previous convexity argument also holds for the set $S_{H}^{LU+anc}$ of Hamiltonians that can be efficiently simulated with LU+anc, so that $S_{H}^{LU+anc}$ is also convex. Recall also that $S_{H}^{LU+anc} \subset S_{H}^{LOCC}$. Now, Eq. (\[eq:convexsum\]) says that all points in $S_{H}^{LOCC}$ can be obtained as a convex combination of terms of the form (UV )H (U\^ V\^) . \[eq:extreme\] In addition, in appendix A we show that any such a term can be obtained in a simulation protocol using LU+anc. 
It follows that ($i$) any extreme point of $S_{H}^{LOCC}$ is of the form (\[eq:extreme\]), and that ($ii$) any extreme point of $S_H^{LOCC}$ belongs to $S_{H}^{LU+anc}$, so that $S_{H}^{LU+anc} = S_{H}^{LOCC}$. This finishes the proof of the fact that infinitesimal time simulations using LOCC can always be accomplished using LU+anc. Summarizing, we have seen that any (rescaled) two-qubit Hamiltonian $sH'$ achievable in branch $\Gamma$ of our LOCC-simulation protocol (cf. Eq. (\[eq:convexsum\])) can also be achieved, with the same time efficiency, by just using local unitary transformations and ancillas as extra resources. It is now straightforward to generalize the above argument to $N$ systems, each one having two or more levels, thereby extending the equivalence of LOCC and LU+anc protocols to general multiparticle interactions. Indeed, for any $d$-level system involved in the simulation, we just need to require that its entanglement with some remote, auxiliary $d$-level system be preserved, and we readily obtain that all measurements performed during the simulation protocol can be replaced with local unitary operations. We thus can conclude, using the notation introduced in section II.B, that H’ \_[LOCC]{} H H’ \_[LU+anc]{} H. LU+anc protocols are not equivalent to LU protocols =================================================== The equivalence between infinitesimal–time simulations using LOCC and LU+anc may be conceived as a satisfactory result. On the one hand, it discards local measurements and classical communication as useful resources for the simulation of non–local Hamiltonians. This essentially says that in order to simulate Hamiltonian dynamics, we can restrict the external manipulation to unitary operations, possibly involving some ancillary system. In this way the set of interesting simulation protocols has been significantly simplified. On the other hand, it is reassuring to see that, despite the diversity of classes of operations that we may use as a criterion to characterize the non-local properties of multiparticle interactions, most of these criteria (LOCC, LO and LU+anc) yield an equivalent classification and quantification. In other words, we do not have to deal with a large number of alternative characterizations. We shall show here, however, that simulation using only LU, that is, without ancillas, is not equivalent to that using LU+anc. The reason for this inequivalence is the following. Consider a multi–partite Hamiltonian of the form $H_A\otimes H_{BC\cdots}$, where $H_A$ acts on a $d$ dimensional space $\H_A$ and $H_{BC\cdots}$ acts on $\H_B\otimes \H_C \cdots$. In the presence of an ancilla $\H_{A'}$, LU can be used so that operator $H_A$ acts on some $d$ dimensional [*factor*]{} space $\K$ of $\H_{AA'}$ ($\H_{AA'} = \K\otimes \K'$). The net result is an effective Hamiltonian acting on $\H_A$. As the following examples show, some of these effective Hamiltonians can not be achieved (at least with the same time efficiencies) by using only LU. LU+anc protocols versus LU protocols ------------------------------------ In the previous section we saw that, in the fast control limit, the extreme points of the convex set $S_H^{LU+anc}$ of bipartite Hamiltonians that can be efficiently simulated with $H$ using LU+anc, \[equivalently, those of the set $S_H^{LOCC}$\] are, up to local terms, of the form (H) U V (HI\_[A’B’]{}) U\^ V\^ \[eq:extreme2\] (an analogous expression holds for the multi–partite case). Notice that in Eq. 
(\[eq:extreme2\]) we have replaced operator $H$ of Eq. (\[eq:extreme\]) with $H\otimes I_{A'B'}$ to make more explicit that ancillas are being used. Can all simulations of this type be achieved by using only LU? The most general simulation that can be achieved from $H$ and by LU reads (see [@wir] for more details) &&\_k p\_k u\_kv\_k Hu\_k\^v\_k\^\ &+& mI\_B + I\_A n + aI\_[AB]{}, \[eq:uniconj\] where $\{p_k\}$, $\sum_k p_k=1$, is a probability distribution, $\{u_k\}$ and $\{v_k\}$ are local unitaries acting on $A$ and $B$, $m$ and $n$ are self-adjoint, trace-less operators and $a$ is a real constant. The previous question translates then into whether for any $U$ and $V$ in Eq. (\[eq:extreme2\]), we can find a set $\{p_k, u_k, v_k\}$, $m$, $n$ and $a$ such that Eq. (\[eq:uniconj\]) equals $\E(H)$ in (\[eq:extreme2\]). In [@wir] it was shown that, in the particular case of two-qubit systems, the previous conditions can always be fulfilled. Next we shall show that this is sometimes not the case for Hamiltonians of two $d$-level systems for $d>2$, and also for Hamiltonians of more than two systems. Inequivalence between LU+anc and LU protocols --------------------------------------------- ### Example 1: two $d$-level systems ($d>2$) We first consider two $d$–level systems $A$ and $B$, $d>2$, that interact according to KP\_[0]{}P\_[0]{} + \_[i=1]{}\^[d-1]{} P\_[i]{}P\_[i]{}, where $P_i\otimes P_j\equiv \proj{i_{A}}\otimes\proj{j_{B}}$. We will show that by means of LU+anc, Hamiltonian $K$ can be used to efficiently (that is, with unit efficiency factor $s$) simulate K’ P\_0P\_1 + \_[i=1]{}\^[d-1]{} P\_[i]{}P\_[i]{}. We will also show that $K'$ can not be efficiently simulated using only LU. Let $A'$ be a $d$-level ancilla. We need a unitary transformation $U$ satisfying U = + \_[i=1]{}\^[d-1]{} . As we discuss in appendix A, the transformation of a Hamiltonian $H$ acting on $AB$, (H) U (HI\_[A’B’]{})U\^, can be achieved using LU+anc \[notice that this corresponds to choosing $V_{BB'}=I_{BB'}$ in Eq. (\[eq:extH’\])\]. In particular, this transformation takes any term of the form $P_i\otimes P_j$ into (P\_iP\_j)= { [cc]{} 0 & i = 0\ (P\_0+P\_1)P\_j & i=1\ P\_iP\_j & i&gt;1. ., which in particular implies (K) = K’. Now, if this simulation is to be possible with the same time efficiency by using only LU, then we must have, because of Eq. (\[eq:uniconj\]), K’ = Q + mI\_B + I\_A n + aI\_[AB]{}, \[eq:igual\] where $Q \equiv \sum_{i=0}^{d-1} \sum_k p_k u_kP_iu_k^{\dagger}\otimes v_k P_i v_k^{\dagger} \geq 0$, but this is not possible. Indeed, we first notice that, by taking the trace of this expression we obtain $a=0$, whereas by tracing out only system $B$ we obtain I = I + dm, and thus $m=0$. Tracing out only system $A$ leads to 2 P\_1 + \_[i=2]{}\^[d-1]{} P\_i = I + dn, so that $n = (-P_0+P_1)/d$ and condition (\[eq:igual\]) becomes K’=P\_0P\_1 + \_[i=1]{}\^[d-1]{} P\_[i]{}P\_[i]{} = Q + (-P\_0+P\_1). Then, recalling positivity of $Q$, we obtain the following contradiction 0 &=& \[P\_2P\_1 K’\] = \[P\_2P\_1 Q\] \ &+& \[(P\_2P\_1) ((-P\_0+P\_1))\] \ &=& \[(P\_2P\_1) Q\]  + 1/d 1/d. Thus, for any $d>2$, we have explicitly constructed an example of LU+anc simulation for Hamiltonians acting on two $d$-level systems that can not be achieved using only LU. 
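The algebra of Example 1 can be checked numerically. The sketch below (for $d=3$) builds one isometry $R=U^{\dagger}\ket{0_{A'}}$ consistent with the transformation rules $\mathcal{E}(P_0)=0$, $\mathcal{E}(P_1)=P_0+P_1$ and $\mathcal{E}(P_i)=P_i$ for $i>1$ quoted above (the explicit choice of $R$ is ours, made for illustration), and verifies that $\mathcal{E}(K)=K'$, i.e. that $K'$ is reached with unit efficiency.

```python
import numpy as np

d = 3                                        # any d > 2 exhibits the gap
P = [np.zeros((d, d)) for _ in range(d)]
for i in range(d):
    P[i][i, i] = 1.0

def kron(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

# Interaction K and target K' of Example 1
K      = kron(P[0], P[0]) + sum(kron(P[i], P[i]) for i in range(1, d))
Kprime = kron(P[0], P[1]) + sum(kron(P[i], P[i]) for i in range(1, d))

# Isometry R = U^dagger |0_A'> implementing E(X) = R^dagger (X (x) I_A') R on system A.
# One concrete (illustrative) choice:  R|0> = |1>|1>,  R|1> = |1>|0>,  R|i> = |i>|0>, i >= 2.
R = np.zeros((d * d, d))                     # columns live in H_A (x) H_A', index a*d + a'
R[1 * d + 1, 0] = 1.0
R[1 * d + 0, 1] = 1.0
for i in range(2, d):
    R[i * d + 0, i] = 1.0

E = lambda X: R.conj().T @ np.kron(X, np.eye(d)) @ R      # map on operators of A
# E acts only on the A factor of each term of K
EK = kron(E(P[0]), P[0]) + sum(kron(E(P[i]), P[i]) for i in range(1, d))
assert np.allclose(EK, Kprime)
```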
We recall, however, that for two–particle Hamiltonians, LU+anc and LU protocols only differ quantitatively, for LU protocols are able to simulate any bipartite Hamiltonian $H'$ starting from any other $H$ with non-vanishing $s_{H'|H}$ [@Dod; @wir; @Woc2; @Nie; @Woc3]. ### Example 2: a $2\times 2\times 2$ composite system Let us consider now the simulation, for an infinitesimal time $t$, of the three–qubit Hamiltonian K’ I\_3 \_3 by the Hamiltonian K \_3 \_3 \_3, where \_3 ( [rr]{}[1]{}&[0]{}\ [0]{}&[-1]{} ) . This is possible, when allowing for LU+anc operations, by considering the transformation $U$ acting on qubit $A$ and on a one–qubit ancilla $A'$ in state $\ket{0_{A'}}$, where U = + , Indeed, we have that $\bra{0_{A'}}U (\sigma_3\otimes I_{A'}) U^{\dagger}\ket{0_{A'}} = I_{A}$, so that UKU\^=K’. On the other hand it is impossible to simulate $K'$ by $K$ and LU, for it would imply to transform $\sigma_3$ into $I$ through unitary mixing, which is a trace-preserving operation. It is straightforward to construct similar examples in higher dimensional systems, and also with more than three systems. We note that, as far as interactions involving more than two systems are concerned, the inequivalence between LU+anc and LU simulation protocols is not only quantitative, leading to different simulation factors, but also qualitative. The last example above shows that LU protocols can not be used to simulate Hamiltonians that can be simulated using LU+anc and the same interaction $H$. Optimal simulation of two-qubit Hamiltonians using LOCC ======================================================= In this last section we address the problem of optimal Hamiltonian simulation using LU for the case of two-qubit interactions. We recover the results of [@wir], but through an alternative, simpler proof, based on known results of majorization theory —and thus avoiding the geometrical constructions of the original derivation [@wir]. The equivalence of LOCC and LU+anc strategies presented in section III, together with that of LU+anc and LU strategies for two-qubit Hamiltonians proved in [@wir], imply that these results are also optimal in the context of LOCC, LO and LU+anc Hamiltonian simulation. We start by recalling some basic facts. Any two–qubit Hamiltonian $H$ is equivalent, as far as LU simulation protocols are concerned, to its canonical form [@Dur; @wir] H = \_[i=1]{}\^3 h\_i \_i\_i, \[eq:cano1\] where $h_1 \geq h_2 \geq |h_3| \geq 0$ and the operators $\sigma_i$ are the Pauli matrices, \_1 ( [rr]{}[0]{}&[1]{}\ [1]{}&[0]{} ) , \_2 ( [rr]{}[0]{}&[-i]{}\ [i]{}&[0]{} ) ,\_3 ( [rr]{}[1]{}&[0]{}\ [0]{}&[-1]{} ) . A brief justification for this canonical form is as follows. Any two-qubit Hamiltonian H\_AI\_B + I\_AH\_B + \_[ij]{} h\_[ij]{} \_i\_j can efficiently simulate (or be efficiently simulated by) its canonical form (\[eq:cano1\]): on the one hand we can always use traceless operators $m$ and $n$ as in Eq. (\[eq:uniconj\]) to remove (or introduce) the local operators $H_A$ and $H_B$; then the remaining operator $\sum_{ij} h_{ij} \sigma_i\otimes\sigma_j$ can be taken into the canonical form by means of one-qubit unitaries $u$ and $v$ such that $(u\otimes v)\sum_{ij} h_{ij} \sigma_i\otimes\sigma_j (u^{\dagger}\otimes v^{\dagger})$ is diagonal when expressed in terms of Pauli matrices. The coefficients $h_i$ in Eq. (\[eq:cano1\]) turn out to be related to the singular values of the matrix $h_{ij}$. 
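As an aside, the reduction to the canonical form can be carried out numerically. The following sketch is our own illustration, not part of the original text: it extracts $(h_1,h_2,h_3)$ from a two-qubit Hamiltonian given as a $4\times 4$ matrix, attaching the sign of $\det(h_{ij})$ to the smallest coefficient, since one-qubit unitaries only generate proper rotations of the matrix $h_{ij}$.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [sx, sy, sz]

def canonical_coefficients(H):
    """Canonical coefficients (h1, h2, h3) of a two-qubit Hamiltonian H (4x4, Hermitian).
    Local terms are automatically projected out; the h_i are the singular values of
    h_ij = Tr[(sigma_i (x) sigma_j) H] / 4, with sign(det h) attached to the smallest one."""
    h = np.array([[np.trace(np.kron(si, sj) @ H).real / 4
                   for sj in paulis] for si in paulis])
    _, s, _ = np.linalg.svd(h)
    sign = np.sign(np.linalg.det(h)) or 1.0
    return s[0], s[1], sign * s[2]

# Example: the anisotropic interaction sigma_x (x) sigma_x + 0.5 sigma_y (x) sigma_y
H = np.kron(sx, sx) + 0.5 * np.kron(sy, sy)
print(canonical_coefficients(H))   # approximately (1.0, 0.5, 0.0)
```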
Therefore we only need to study the conditions for efficient simulation between Hamiltonians which are in a canonical form. Let $\{\ket{\Phi_i}\}$ stand for the basis of maximally entangled vectors of two qubits ( + ),   ( + ),\ ( - ),   ( - ). \[eq:max\] Then $H$ can be alternatively expressed as H = \_[i=1]{}\^[4]{} \_i , \[eq:cano2\] where $\lambda_i$ are decreasingly ordered, real coefficients fulfilling the constraint $\sum_{i} \lambda_i = 0$ (coming from the fact that $H$ has no trace) and \_1 &=& h\_1+h\_2-h\_3\ \_2 &=& h\_1-h\_2+h\_3\ \_3 &=& -h\_1+h\_2+h\_3\ \_4 &=& -h\_1-h\_2-h\_3. The most general simulation protocol using $H$ and LU leads to H’ = \_k p\_k u\_kv\_k H u\_k\^v\_k\^, \[eq:unimixing\] where we have assumed, without loss of generality, that $H'$ is also in its canonical form, as in Eqs. (\[eq:cano1\]) and (\[eq:cano2\]), with corresponding coefficients $h_i'$ and $\lambda_i'$. Necessary and sufficient conditions for efficient simulation and optimal simulation factor ------------------------------------------------------------------------------------------ Let us derive the necessary and sufficient conditions for $H$ to be able to simulate $H'$ using LU and for infinitesimal simulation times. Uhlmann’s theorem [@Uhl] states that the eigenvalues $\lambda'_i$ of operator $H'$ in Eq. (\[eq:unimixing\]), a unitary mixing of operator $H$, are majorized by the eigenvalues $\lambda_i$ of $H$, that is ’\_1 && \_1,\ ’\_1 + ’\_2 && \_1 + \_2,\ ’\_1 + ’\_2 + ’\_3 && \_1 + \_2 + \_3,\ ’\_1 + ’\_2 + ’\_3 + ’\_4 &=& \_1 + \_2 + \_3 + \_4, \[eq:maj\] where the last equation is trivially fulfilled due to the fact that $H$ and $H'$ are trace-less operators. Succinctly, we shall write $\vec{\lambda'} \prec \vec{\lambda}$, as usual [@maj]. In terms of the coefficients $h_i$ and $h'_i$ the previous conditions read h\_1’ && h\_1,\ h\_1’ + h\_2’ - h\_3’ && h\_1 + h\_2 - h\_3,\ h\_1’ + h\_2’ + h\_3’ && h\_1 + h\_2 + h\_3, \[eq:omaj\] and correspond to the s(pecial)-majorization relation, $\vec{h'} \prec_s \vec{h}$, introduced in Ref. [@wir]. Thus, we have already recovered the necessary conditions of [@wir] for $H$ to be able to [*efficiently*]{} simulate $H'$ in LU protocols [@Bernstein] (and thus, since we are in the two–qubit case, also in LOCC protocols). In order to see that conditions (\[eq:maj\]) \[and thus conditions (\[eq:omaj\]) \] are also sufficient for efficient LU simulation, we concatenate two other results of majorization theory. The first one (see theorem II.1.10 of [@maj]) states that $\vec{\lambda'}\prec \vec{\lambda}$ if and only if a doubly stochastic matrix $m$ exists such that $\lambda_i'= \sum_j m_{ij} \lambda_j$. The second result is known as Birkhoff’s theorem [@maj], and states that the matrix $m$ can always be written as a convex sum of permutation operators $\{P_k\}$, so that ( [c]{} \_1’\ \_2’\ \_3’\ \_4’ ) = \_k p\_k P\_k ( [c]{} \_1\ \_2\ \_3\ \_4 ). This means that whenever conditions (\[eq:maj\]) are fulfilled we can obtain $H'$ from $H$ by using a mixing of unitary operations $T_i$, where each $T_i$ permutes the vectors $\{\ket{\Phi_i}\}$, H’ = \_i p\_i T\_i H T\_i\^. Then, all we still need to see is that all $4!=24$ possible permutations of the vectors $\{\ket{\Phi_i}\}$ can be performed through [*local*]{} unitaries $T_i$. 
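Before turning to the implementability of those permutations by local unitaries, the conditions just derived can be stated operationally. The sketch below is an illustration of what is condensed further on as results R1 and R2; the canonical coefficients used in the example are our own choice. For instance, an Ising-type interaction, with canonical coefficients $(1,0,0)$, simulates the isotropic exchange interaction, with coefficients $(1,1,1)$, with factor $1/3$.

```python
import numpy as np

def lam(h):
    """Ordered eigenvalues (in the basis of maximally entangled states) of a
    canonical two-qubit Hamiltonian with coefficients h = (h1, h2, h3)."""
    h1, h2, h3 = h
    return np.sort([h1 + h2 - h3, h1 - h2 + h3,
                    -h1 + h2 + h3, -h1 - h2 - h3])[::-1]

def can_simulate_efficiently(h_target, h_given):
    """R1: efficient simulability, i.e. majorization of the ordered lambda vectors."""
    lp, l = np.cumsum(lam(h_target)), np.cumsum(lam(h_given))
    return all(lp[k] <= l[k] + 1e-12 for k in range(3))

def simulation_factor(h_target, h_given):
    """R2: the largest s > 0 such that s*lambda' is majorized by lambda."""
    lp, l = np.cumsum(lam(h_target))[:3], np.cumsum(lam(h_given))[:3]
    ratios = [l[k] / lp[k] for k in range(3) if lp[k] > 1e-12]
    return min(ratios) if ratios else np.inf

h_ising = (1.0, 0.0, 0.0)     # sigma_z (x) sigma_z, in canonical form
h_heis  = (1.0, 1.0, 1.0)     # isotropic exchange, sum_i sigma_i (x) sigma_i
print(can_simulate_efficiently(h_heis, h_ising))   # False
print(simulation_factor(h_heis, h_ising))          # 0.333... = 1/3
```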
Recall, however, that any permutation $\sigma$, taking elements $(1,2,3,4)$ into $(\sigma(1),\sigma(2),\sigma(3), \sigma(4))$, can be obtained by composing (several times) the following three transpositions, (1,2,3,4) (2,1,3,4),\ (1,2,3,4) (1,3,2,4),\ (1,2,3,4) (1,2,4,3), where each permutation affects two neighboring elements. The corresponding three basic permutations of $(\Phi_1,\Phi_2,\Phi_3,\Phi_4)$ can be easily obtained using LU. Indeed, in order to permute $(\Phi_1,\Phi_2,\Phi_3,\Phi_4)$ into (\_2,\_1,\_3,\_4),\ (\_1,\_3,\_2,\_4),\ (\_1,\_2,\_4,\_3), we can simply apply, respectively, the following local unitaries: ,\ ,\ . \[eq:locper\] Therefore, any permutation $\sigma$ of the states (\[eq:max\]) can be accomplished through local unitaries $T_i$, and any Hamiltonian $H'$ satisfying conditions (\[eq:omaj\]) \[equivalently, conditions (\[eq:maj\])\] can be efficiently simulated with $H$ and LU. In the following we condense the previous findings into two results, R1 and R2, which provide an explicit answer to problems P1 and P2, respectively, announced in section II.C of the paper. We assume that the two–qubit Hamiltonians $H$ and $H'$ are in their canonical form, with $\vec{\lambda}$, $\vec{h}$, $\vec{\lambda'}$ and $\vec{h'}$ the corresponding vectors of coefficients. [**R1:**]{} [*Hamiltonian $H'$ can be efficiently simulated by $H$ and LOCC —or LU, LU+anc, or LO— if and only if conditions (\[eq:omaj\]) \[or, equivalently, conditions (\[eq:maj\])\] are fulfilled, i.e.* ]{} H’ \_[LOCC]{} H      \_s      . [**R2:**]{} [*The simulation factor $s_{H'|H}$ for LOCC —or LU, LU+anc, or LO— protocols is given by the maximal $s>0$ such that $s\vec{h'} \prec_s \vec{h}$ or, equivalently, such that $s\vec{\lambda'} \prec \vec{\lambda}$.*]{} Explicit optimal LU protocols ----------------------------- The last question we address is how to actually construct a simulation protocol. That is, given $H$ and $H'$, we show how to simulate $sH'$ using $H$ and LU, for any $s\in [0,s_{H'|H}]$. A complete answer to this question is given by a probability distribution $\{p_k\}$ and a set of unitaries $\{u_k\otimes v_k\}$ such that sH’ = \_k p\_k u\_kv\_k H u\_k\^ v\_k\^, where $s\in [0,s_{H'|H}]$, and $s_{H'|H}$ can be obtained using result R2. We already argued that it is always possible to choose all $u_k\otimes v_k$ such that they permute the vectors of Eq. (\[eq:max\]), so that each $u_k\otimes v_k\equiv T_k$ is just a composition of the local unitaries of Eqs. (\[eq:locper\]). As before, let $\{P_k\}_{k=1}^{24}$ denote the $24$ permutations implemented by the local unitaries $\{T_k\}_{k=1}^{24}$. Then the above problem reduces to finding an explicit probability distribution $\{p_k\}$ such that s\_[H’|H]{} H’ = \_k p\_k T\_k H T\_k\^, or, equivalently, such that s\_[H’|H]{} = \_k p\_k P\_k . \[eq:decompo\] This is done on appendix B using standard techniques of convex set theory. There we show how to construct a solution involving at most 4 terms $p_kT_k$ for $s<s_{H'|H}$, and at most 3 terms for optimal simulation, that is, when $s=s_{H'|H}$. Conclusions =========== In this paper we have studied Hamiltonian simulation under the broader scope of LOCC protocols. We have focused on infinitesimal–time simulations, for which we have shown that LOCC protocols are equivalent to LU+anc protocols, also that LU+anc protocols are in general inequivalent to LU protocols (two–qubit Hamiltonians being an exception). 
For two–qubit Hamiltonians we have rederived and extended the results of [@wir], to finally provide the optimal solution using LOCC. Thus, the problem of simulating Hamiltonian evolutions has received a complete answer for infinitesimal times and using LOCC, for the simplest case of two-qubit systems. Several interesting questions remain open. On the one hand, the generalization of these results to systems other than two qubits appears as challenging. On the other hand, the asymptotic scenario for Hamiltonian simulation, where $H$ is used to simulate $H'$ many times on different systems, certainly deserves a lot of attention. Finally, we note that entangled ancillary systems have been recently shown to be of interest in non–local Hamiltonian simulation [@new]. In particular, entanglement can act a catalyst for simulations, both in the infinitesimal–time and finite–time regimes, in that in the presence of entanglement better time efficiencies can be obtained, although the entanglement is not used up during the simulation but is fully recovered after the manipulations. The authors acknowledge discussions on the topic of Hamiltonian simulation with Charles H. Bennett, Debbie W. Leung, John A. Smolin and Barbara M. Terhal. Valuable and extensive comments from an anonymous referee are also acknowledged. This work was supported by the Austrian Science Foundation under the SFB “control and measurement of coherent quantum systems” (Project 11), the European Community under the TMR network ERB–FMRX–CT96–0087, the European Science Foundation, the Institute for Quantum Information GmbH. and the National Science Foundation (of USA) through grant No. EIA-0086038. G.V also acknowledges a Marie Curie Fellowship HPMF-CT-1999-00200 (European Community). Extreme points of the set non–local Hamiltonian simulations achievable by LU+anc ================================================================================ In this appendix we show that in LU+anc simulations any Hamiltonian of the form H’ = UV (HI\_[A’B’]{}) U\^ V\^ \[eq:extH’\] can be efficiently simulated by $H$, for any couple of unitaries $U$ and $V$ acting on $AA'$ and $BB'$. The result is valid also for more than two systems after a straightforward generalization of the following proof. Notice that we can always write $U$ and $V$ using product basis $\{\ket{i_{A}j_{A'}}\}$ and $\{\ket{i_{B}j_{B'}}\}$ as U &=& \_[i=0]{}\^[d\_A-1]{} \_[j=0]{}\^[d\_[A’]{}-1]{}\ V &=& \_[i=0]{}\^[d\_B-1]{} \_[j=0]{}\^[d\_[B’]{}-1]{} , where $\{\ket{\phi_{ij}}\}$ and $\{\ket{\psi_{ij}}\}$ are other orthonormal basis of systems $AA'$ and $BB'$, respectively, and $d_{\kappa}$ denotes the dimension of system $\kappa$. To perform this simulation, we need to make the output of the ancilla be the state $\ket{0_{A'} 0_{B'}}$, unentangled with the systems $AB$. This can not be achieved by performing just transformations $U$ and $V$, but by considering also a series of local unitaries $\{U_a\otimes V_b\}$, $a\in\{0,\cdots, d_{A'}-1\}$, $b\in\{0,\cdots, d_{B'}-1\}$, U\_a I (\_[l=0]{}\^[d\_[A’]{}-1]{}e\^[i2]{} )U,\ V\_b I (\_[l=0]{}\^[d\_[B’]{}-1]{}e\^[i2]{} )V, and a constant probability distribution $\{p_{ab}\}$, $p_{ab}= 1/(d_{A'}d_{B'})$. Then we have that $U_a^{\dagger}\ket{0_{A'}}=U^{\dagger}\ket{0_{A'}}$, and that $\sum_a U_a = d_{A'} \proj{0_{A'}}U$, and similarly for $V_b$, so that we obtain \_[ab]{} p\_[ab]{} U\_aV\_b (HI\_[A’B’]{}) U\_a\^V\_b\^ =\ UV (HI\_[A’B’]{}) U\^V\^.\[eq:lastpro\] Therefore Eq. 
(\[eq:lastpro\]) defines a protocol that simulates the Hamiltonian of Eq. (\[eq:extH’\]) with unit time efficiency.

Explicit two-qubit LU simulation protocols
==========================================

In this appendix we show how to find a probability distribution $\{p_k\}$ and permutations $\{P_k\}$ such that $$\vec{\mu} = \sum_k p_k\, P_k \vec{\lambda},$$ for any two given four-dimensional, real vectors $\vec{\lambda}$ and $\vec{\mu}$ ($\vec{\mu} = s\vec{\lambda'}$ in section V.B) such that $\vec{\mu} \prec \vec{\lambda}$, where $\sum_{i=1}^4 \lambda_i = \sum_{i=1}^4 \mu_i = 0$. We first note two facts that will allow us to use standard techniques of convex set theory: $(i)$ the set $S \equiv \{\vec{\tau}~|~ \vec{\tau} \prec \vec{\lambda}\}$ is convex, and ($ii$) $\{P_k\vec{\lambda}\}_{k=1}^{24}$ are the extreme points of $S$, as it follows from Birkhoff’s theorem [@maj]. We can then proceed as follows.

Step ($a$): we check whether $\vec{\mu} = P_i \vec{\lambda}$ for any $i=1,\cdots,24$. If we find one such permutation we are done. Otherwise we move to step ($b$).

Step ($b$): Facts $(i)$ and $(ii)$ guarantee that there is at least one permutation $P_k$, that we call $Q_1$, and a positive $\epsilon>0$ such that $$\vec{\mu} = \epsilon\, Q_1\vec{\lambda} + (1-\epsilon)\,\vec{\tau},$$ where $\vec{\tau}$ also belongs to $S$, and therefore satisfies $\vec{\tau} \prec \vec{\lambda}$. In other words, we have to search until we find a permutation $Q_1$ such that $$(\vec{\mu} - \epsilon\, Q_1\vec{\lambda})/(1-\epsilon) \prec \vec{\lambda},$$ \[eq:desig\] for some $\epsilon>0$. Once we have found it we only need to increase $\epsilon$ to its maximal value compatible with Eq. (\[eq:desig\]). Let $q_1$ be this maximal value of $\epsilon$. Then we can write $$\vec{\mu} = q_1 Q_1\vec{\lambda} + (1-q_1)\vec{\mu}_2,$$ where $\vec{\mu}_2 \prec \vec{\lambda}$ is on one of the surfaces of $S$ —otherwise we could have taken a greater $q_1$. Such a surface is, again, a (lower dimensional) convex set, whose extreme points are some of the $P_k\vec{\lambda}$’s, and whose elements $\vec{\tau}$ fulfill $\vec{\tau}\prec \vec{\lambda}$ but with one of the majorization inequalities replaced with an equality. This allows us to repeat points $(a)$ and $(b)$, but now aiming at decomposing $\vec{\mu}_2$ as a convex sum of vectors $P_k\vec{\lambda}$. That is, first we check whether $\vec{\mu}_2$ corresponds to $P_k\vec{\lambda}$ for some $k$. And, if not, we search until we find a permutation $P_k$, let us call it $Q_2$, such that, again, $$(\vec{\mu}_2 - \epsilon\, Q_2\vec{\lambda})/(1-\epsilon) \prec \vec{\lambda}.$$ \[eq:desig2\] The maximum value of $\epsilon$ compatible with this equation, say $q$, leads to a second term $q_2Q_2$ ($q_2=(1-q_1)q$) for the decomposition of $\vec{\mu}$, $$\vec{\mu} = (q_1Q_1+q_2Q_2)\vec{\lambda}+(1-q_1-q_2)\vec{\mu}_3,$$ and to a new $\vec{\mu}_3$, that lies on a surface of yet lower dimensionality of the original convex set $S$. We iterate the procedure until the remaining vector $\vec{\mu}_l$ lies on a convex surface of $S$ of dimension zero, which means that the surface contains only one element, $\vec{\mu}_l$. In this way we obtain the desired decomposition, $$\vec{\mu} = \sum_{k=1}^{l} q_k Q_k \vec{\lambda}.$$

What is the minimal value of $l$? For non-optimal simulation protocols we have that $\vec{\mu} = s\vec{\lambda}'$, where $s<s_{H'|H}$, and $\vec{\mu}$ is in the interior of $S$, which is a three-dimensional set. Therefore the above procedure has to be iterated at most three times before we are left with a zero-dimensional surface of $S$, and the minimal decomposition contains at most $l=4$ terms. For optimal simulation protocols $\vec{\mu} = s_{H'|H}\vec{\lambda}'$ is already in a surface of $S$, and therefore the minimal decomposition contains from $1$ to $3$ terms.
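As an illustration of the decomposition just described (this sketch is an addition for the reader, not part of the original appendix), the weights $\{p_k\}$ can also be obtained numerically by a small linear program over the 24 permutations of four elements; the example vectors below are hypothetical, chosen only to satisfy $\vec{\mu}\prec\vec{\lambda}$ and the zero-sum condition.

```python
# Illustrative sketch (not from the paper): find weights p_k >= 0 with sum 1 and
# mu = sum_k p_k P_k lambda by linear programming over the 24 permutations,
# instead of the step-by-step geometric construction above.
import numpy as np
from itertools import permutations
from scipy.optimize import linprog

lam = np.array([0.5, 0.2, -0.1, -0.6])     # hypothetical lambda, sums to zero
mu = np.array([0.25, 0.15, -0.05, -0.35])  # hypothetical mu, mu majorized by lambda

perms = list(permutations(range(4)))                 # the 24 permutations P_k
cols = np.array([lam[list(p)] for p in perms]).T     # columns are P_k @ lambda

# Equality constraints: sum_k p_k (P_k lambda) = mu  and  sum_k p_k = 1
A_eq = np.vstack([cols, np.ones(len(perms))])
b_eq = np.concatenate([mu, [1.0]])

res = linprog(c=np.zeros(len(perms)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(perms), method="highs")
assert res.success
p = res.x
print(np.nonzero(p > 1e-9)[0], p[p > 1e-9])   # permutations used and their weights
```

A vertex solution of this feasibility problem typically involves only a few non-zero weights, in line with the small number of terms delivered by the geometric construction above.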
[20]{} W. Dür, G. Vidal, J. I. Cirac, N. Linden and S. Popescu, quant-ph/0006034. J.L. Dodd, M.A. Nielsen, M.J. Bremner and R.T. Thew, quant-ph/0106064. P. Wocjan, D. Janzing and Th. Beth, quant-ph/0106077. C. H. Bennett, J. I. Cirac, M. S. Leifer, D. W. Leung, N. Linden, S. Popescu and G. Vidal, quant-ph/0107035. P. Wocjan, M. Rotteller, D. Janzing and T. Beth, quant-ph/0109063. M. A. Nielsen, M. J. Bremner, J. L. Dodd, A. M. Childs, and C. M. Dawson, quant-ph/0109064. P. Wocjan, M. Rotteller, D. Janzing and T. Beth, quant-ph/0109088. See for instance A. G. Butkovskiy and Y. L. Samoilenko, [*Control of Quantum Mechanical Processes and Systems*]{}, Kluwer Academic, Dordrecht, 1990. See for instance C. P. Slichter, [*Principles of Magnetic Resonance*]{}, Springer, Berlin, 1996. W. Dür, G. Vidal and J. I. Cirac, quant-ph/0112124. In this work we assume that local ancillary systems are available at wish in LO protocols (see section III), and thus we make no distinction between LO and LO+anc (as opposed to [@wir]). In practice, to simulate the [*entire*]{} evolution $e^{-it'H'}$ means to fix a division of $T'$ into a (large) number $N$ of regular time intervals of length $\tau$, $T' = N\tau'$, and to have the system, initially in state $\ket{\psi}$, sequentially driven into the state $e^{-ik\tau'H'}\ket{\psi}$, for $k=1,2,\cdots,N$. Perfect dynamics simulation corresponds to taking the large $N$ limit. Subsequent studies have shown that, in a two–qubit system, $s_{H'|H}$ is often also the minimal time (or [*interaction cost*]{}) required to perform the gate $e^{-iH'T'}$ by $H'$ and LU. G. Vidal, K. Hammerer and J. I. Cirac, to appear in Phys. Rev. Lett. (quant-ph/0112168). M. A. Nielsen and I. L. Chuang, section 2.2.3. As explained in [@wir], local terms such as $I_{A}\otimes O_B$ can be added and removed by means of local unitary operations, and thus we can safely omit them in the present argumentation. G. Vidal, D. Jonathan, M. A. Nielsen, Phys. Rev. A 62, 012304 (2000). One can use the results of [@VJN], concerning the robustness of entanglement transformations under a perturbation of the state under consideration, to show that the argument of section III.B.2 of this paper is also robust under $\O(t)$ modifications of a maximally entangled state. P.M. Alberti and A. Uhlmann, [*Stochasticity and partial order: doubly stochastic maps and unitary mixing*]{}, Dordrecht, Boston, 1982. R. Bhatia, [*Matrix analysis*]{}. Springer-Verlag, New York, 1997. Herbert J. Bernstein, [*private communication*]{}, has obtained, independently, a similar majorization-based proof of the necessary conditions for efficient simulation of two-qubit Hamiltonians. G. Vidal and I. Cirac, Phys. Rev. Lett. 88 (2002) 167903.
--- abstract: | The origin of cosmic rays at all energies is still uncertain. In this paper we present and explore an astrophysical scenario to produce cosmic rays with energy ranging from below $10^{15}$ to $3 \times 10^{20}$ eV. We show here that just our Galaxy and the radio galaxy Cen A, each with their own galactic cosmic ray particles, but with those from the radio galaxy pushed up in energy by a relativistic shock in the jet emanating from the active black hole, are sufficient to describe the most recent data in the energy range PeV to near ZeV. Data are available over this entire energy range from the experiments KASCADE, KASCADE-Grande and Pierre Auger Observatory. The energy spectrum calculated here correctly reproduces the measured spectrum beyond the knee, and contrary to widely held expectations, no other extragalactic source population is required to explain the data, even at energies far below the general cutoff expected at $6 \times 10^{19}$ eV, the Greisen-Zatsepin-Kuzmin turn-off due to interaction with the cosmological microwave background. We present several predictions for the source population, the cosmic ray composition and the propagation to Earth which can be tested in the near future. author: - 'Peter L. Biermann' - Vitor de Souza title: 'Centaurus A: the one extragalactic source of cosmic rays with energies above the knee' --- Introduction ============ Cosmic rays have been originally discovered in 1912/13 by Hess (Hess 1912) and Kohlh[ö]{}rster (Kohlh[ö]{}rster 1913) and still today we have no certainty where they come from. Their overall spectrum has been shown to be essentially a power-law with a bend down near $10^{15}$ eV, called the knee, and a turn towards a new flatter component near $\sim \, 3 \times 10^{18}$ eV, called the ankle, with a final turn-off just around $10^{20}$ eV, summarized in (Gaisser & Stanev 2008). It is thought, that the component below about $3 \times 10^{18}$ eV is Galactic and the component above this energy is extragalactic, on the basis that particles above such an energy would be hard to contain and isotropize in the magnetic fields in the interstellar medium disk of the Galaxy. Different astrophysical scenarios have been proposed to explain the Galactic and the extragalactic components of the cosmic radiation, see the overview (Stanev 2010a, 2010b). The basic paradigm for Galactic cosmic rays has been acceleration in the shock waves caused by supernova explosions (Baade & Zwicky 1934). The process of acceleration is diffusive shock acceleration (Fermi 1949) and it is based on the compression experienced by particles that get reflected by magnetic irregularities from both sides of a shock in an ionized magnetic plasma (Drury 1983). Supernovae are exploding stars and they may explode either directly into the interstellar medium or into the stellar wind of the predecessor star (Woosley 2002). Lagage & Cesarsky (Lagage & Cesarsky 1983) showed that acceleration at the shocks caused explosions into the normal interstellar medium cannot reach even the energies at the knee. Heavy nuclei can be accelerated up to about $10^{18}$ eV in Galactic sources, such as supernova explosions of massive stars which explode into their wind (Völk & Biermann 1988), OB-star super-bubbles, gamma ray bursts (Dermer 2004) or micro-quasars, active accreting black holes in stellar binary systems. For higher energies ($> 10^{18}$ eV) extragalactic sources are the most accepted candidates. 
Nearby active radio galaxies with black hole activity were first proposed by Ginzburg & Syrovatskii (Ginzburg & Syrovatskii 1963) (see also, e.g., (Lovelace 1976) and (Biermann & Strittmatter 1987)) as possible sources. Gamma ray bursts in other galaxies have also been suggested by Waxman (Waxman 1995) and Vietri (Vietri 1995). The radio galaxy Cen A is a prime example of a possible astrophysical source (Anchordoqui et al. 2011). The interaction of high energy particles with the microwave background limits the distance of the sources (Greisen 1966; Zatsepin & Kuzmin 1966; Allard et al. 2008; Stanev 2010b). For protons the energy at which the all particle spectrum from many sources should turn off is estimated to be $6 \times 10^{19}$ eV and for other chemical elements the energy turn off is lower (Allard et al. 2008). As the interaction distance becomes very large for particles with energy between $3 \times 10^{18}$ and $6 \times 10^{19}$ eV, the calculations predict a large number of extragalactic sources to contribute to the measured flux in this energy range. This is a primary prediction of these calculations. Of special interest is also the transition between galactic to extragalactic predominance which should happen in the energy range between $10^{16}$ to about $10^{18}$ eV. Several experiments are presently taking data in this energy range: KASCADE-Grande, Telescope Array, IceTop and the Pierre Auger Observatory. The transition is a very important feature because it is foreseen that breaks in the all particle spectrum and in the composition can reveal the details of the particle production mechanisms, the source population and propagation in the Universe. There are a number of recent attempts to explain the cosmic ray spectrum in the transition range, e.g. by Hillas (Hillas 2006), and by Berezinsky et al. (Berezinsky et al. 2009). In this paper, we focus on the previously inaccessible continuous energy range above 10 PeV extending to the highest energies measured. Today it is possible to compare the predictions with high precision data over the entire energy range. Therefore it becomes important to have predictive power, i.e. test quantitative hypotheses, which were developed long before much of the new data was known. We revisit here an idea originally proposed in 1993 (Biermann 1993; Stanev et al. 1993) and we show how our Galaxy and the radio galaxy Cen A can describe the energy spectrum from 10 PeV up to $3 \times 10^{20}$ eV and describe the galactic to extragalactic transition at the same time. In the following sections, we first go through the tests the 1993 original model has undergone to date as regards spectra, transport, secondaries, and composition; secondly we confirm the predictions of the original model with the newly available data beyond the knee energy, and finally we present the high energy model which describe the transition between Galactic and extragalactic cosmic rays. Original model and its tests to date ==================================== In a series of papers started in 1993 (Biermann 1993; Stanev et al. 1993; Biermann 1994) a astrophysics scenario was proposed which emphasized the topology of the magnetic fields in the winds of exploding massive stars (Parker 1958). In (Stanev et al. 1993), a comprehensive spectrum was predicted for six element groups separately: H, He, CNO, Ne-S, Mn-Cl, Fe. 
The key points of this original model are: a) The shock acceleration happens in a region, which is highly unstable and shows substructure, detectable in radio polarization observation of the shock region, also found in theoretical exploration (e.g. (Bell & Lucek 2001; Caprioli et al. 2010; Bykov et al. 2011). Therefore the particles go back and forth across the shock gaining momentum, while the scattering on both sides is dominated by the scale of these instabilities, which are assumed to be given by the limit allowed by the conservation laws in mass and momentum; b) There are cosmic ray particles which get accelerated by a shock in the interstellar medium, produced by the explosion of a relatively modest high mass star, or, alternatively, by a low mass supernova Ia. This is most relevant for Hydrogen and less for Helium and heavier nuclei; c) Heavy cosmic ray nuclei derive from very massive stars, which explode into stellar winds already depleted in Hydrogen, and also in Helium for the most massive stars. These explosions produce a two part spectrum with a bend that is proposed to explain the knee. In this scenario the knee is due to the finite containment of particles in the magnetic field of the predecessor stellar wind, which runs as $\sin \theta /r$ in polar coordinates (Parker 1958). Towards the pole region only lower energies are possible and the knee energy itself is given by the space available in the polar region. There is a polar cap component of cosmic rays associated to the polar radial field with a flatter spectrum; d) Diffusive leakage from the cosmic ray disk steepens all these spectra by 1/3 for the observer; e) Very massive stars eject most of their zero-age mass before they explode and so form a very massive shell around their wind (Woosley 2002). This wind-shell is the site of most interaction for the heavy nuclei component of cosmic rays. For stellar masses above about 25 solar masses in zero age main sequence mass (Biermann 1994) the magnetic irregularity spectrum is excited by the cosmic ray particles themselves. The spectral steepening due to the interactions is $E^{-5/9}$ for the most massive star shells. The final spectrum is a composition of these components, see Figure 1 of Ref. (Stanev et al. 1993). The spectra predicted by these arguments match the data such as shown by the recent CREAM results (Wiebel-Sooth et al. 1998; Biermann et al. 2009). This scenario has undergone detailed tests as regards propagation and interactions (Biermann 1994; Biermann et al. 2009) so as to describe both Galactic propagation and the spectra of the spallated isotopes as well as the resulting positron spectra, the flatter cosmic ray positron and electron data, the WMAP haze and the spectral behavior of its inverse Compton emission, and the 511 keV emission from the Galactic Center region. New Tracer results (Obermeier 2011) are also consistent in terms of a) the low energy source spectrum, b) the energy dependence of interaction, c) a finite residual path-length at higher energy, and d) a general upturn in the individual element spectra. The newest Pamela results (Adriani 2011) are also consistent with the 1993 original model, in which Hydrogen was the only element to have a strong ISM-SN cosmic ray component, and so has a steeper spectrum than Helium. A test beyond the knee ---------------------- This original model was proposed to explain the particles observed above $10^{9}$ eV per nuclear charge. Here we first test the original model to the KASCADE data. 
The most accurate measurement of the energy spectrum in the knee energy range has been done by the KASCADE experiment (KASCADE-Grande Coll. 2010). Figure 1 shows for the first time the comparison of the original model to the measured data from KASCADE. KASCADE reconstructs the spectrum using two hadronic interaction programs (QGSJet and Sibyll) in the analysis procedure. In the figure we show the data and the original model, and also include the ratio of the difference between original model and data divided by the experimental error. For the ratio shown we only use one of these interaction codes, as an example we use QGSJet. The figure shows good agreement between data and the original model to within the errors of the data. This confirms that the original model in its last remaining energy range, where it had not previously been tested for lack of good data. This is the first key result of this paper. Transport and interaction test ------------------------------ One question, which invariably comes up is how this model deals with propagation and spallation. For this line of reasoning it is important to note two aspects: a) Plasma simulations, the Solar wind data (Mattheaus & Zhou 1989), the interstellar medium data (Rickett 1977, Spangler & Gwinn 1990, Goldstein et al. 1995) as well as radio galaxy data (Biermann 1989) all are consistent with the interpretation that in an ionized magnetic plasma in near equipartition we have an approximate Kolmogorov spectrum (Kolmogorov 1941) running without break from very large scales down to dissipation scale. b) Very massive stars eject most of the initial zero age main sequence mass in their powerful winds, which then builds up a correspondingly massive shell at the outer boundary of the wind. It is this massive shell the cosmic ray loaded supernova shock encounters and interacts with. All those cosmic ray particles accelerated by a supernova shock in the heavily enriched winds of Wolf-Rayet stars then excite magnetic irregularities, which can be described following Bell (1978); it is these self-excited irregularities that describe the path of the cosmic ray particles through the massive shell. This gives (Biermann 1998, Biermann et al. 2001) a Boron/Carbon ratio energy dependence of $E^{-5/9}$; this was found to be consistent with data by Ptuskin et al. (Ptuskin et al. 1999), who determined $E^{-0.54 }$. Since the straight-line path gives a minimum path, this model also gives a finite path-length of interaction at high energy, consistent with the new data as noted above. Other tests are: The cosmic ray electron spectrum has been determined to be $E^{-3.23 \pm 0.06}$ (Wiebel-Sooth & Biermann 1999) in the energy range up to a few TeV, a spectrum which is dominated by losses (Kardashev 1962). Therefore the injection is with a spectrum flatter by unity. This has to be compared with the proton spectrum at a corresponding energy, which suggests $E^{-2.66 \pm 0.02}$ near TeV energies (Yoon et al. 2011); this spectrum has been steepened by the energy dependence of the diffusion, and so the difference to the inferred cosmic ray electron spectrum gives this dependence as $E^{-0.43 \pm 0.06}$, consistent with $E^{-1/3}$ as deduced from the Kolmogorov assumption. Another consistency check is the time scale inferred at the highest energies for leaking out of the kpc-thick cosmic ray disk near the Sun (Biermann 1993). 
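(Before turning to the time-scale check introduced in the last sentence, the index bookkeeping quoted in this paragraph can be reproduced in a few lines; the snippet below is purely illustrative and uses only the numbers cited above.)

```python
# Illustrative bookkeeping (added here, not part of the paper) for the spectral
# indices quoted above, assuming a single Kolmogorov-type power law throughout.
electron_observed = 3.23                       # loss-dominated electron index
electron_injection = electron_observed - 1.0   # losses steepen by unity -> 2.23
proton_observed = 2.66                         # proton index near TeV energies
diffusion = proton_observed - electron_injection
print(f"inferred diffusion steepening: E^-{diffusion:.2f} (Kolmogorov: 1/3)")

print(f"wind-shell interaction: 5/9 = {5/9:.2f} vs. measured 0.54 (Ptuskin et al.)")
```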
Since we have about $10^{7}$ yrs at GeV energies for protons, we infer with the Kolmogorov assumption a time scale of $10^{4.3}$ yrs at $10^{17}$ eV - adopting the point of view that the highest energies cosmic ray particles from the Galaxy are Fe (Stanev et al. 1993) at $10^{18.5}$ eV, matching in their scattering protons of $10^{17}$ eV; however, this is not yet finally settled. This time scale is still significantly longer than the simple transit time across the thick disk of about $10^{3.5}$ yrs, so that the isotropy observed can be understood without already invoking the effect of the Galactic wind. Using three times the simple transit time as the minimum to give isotropy we can invert this line of reasoning and deduce a maximum energy dependence of the scattering of $E^{-0.38}$ (Biermann 1993), under the assumption again, that the entire energy range is covered by the same powerlaw. The Wolf-Rayet star model is able to also explain the positron spectra and positron production (Biermann et al. 2009, 2010) as noted above. The high energy model ===================== Based on the original model we propose here a high energy model to explain the cosmic ray data from 10 PeV to 300 EeV. We analyze the possibility that the *very same spectrum* proposed in 1993 however shifted in energy can explain all the data up to 300 EeV including the knee region, the highest energy range ($> 10^{18}$ eV) and at the same time the middle energy range ($10^{16} < E < 10^{18}$ eV). The cosmic ray particles as seen in our galaxy were argued to provide the seed particles for further acceleration to ultra-high energy by a relativistic shock emanating from an active black hole in the nearby radio galaxy Cen A (Gopal-Krishna et al. 2010). This idea is explored here beyond what has been proposed in (Gopal-Krishna et al. 2010) which demonstrated that Cen A can provide a sufficient flux for the highest energy particles by working out the energetic particle flux traversed by a shock surface in the jet of the radio galaxy Cen A with the one-step further acceleration in a relativistic shock as proposed by (Achterberg et al. 2001). They have used the spectral shape of the original model (Stanev et al. 1993) to fit the Pierre Auger data, however the fit of the measured spectrum was not constrained by the low energy spectrum as proposed in the original model (Stanev et al. 1993). The energy spectrum calculated here is simply a shift of the particle spectra proposed in the original model (Stanev et al. 1993) for low energies to the highest energies preserving the relative abundances of the original model. This proposal is considerable stronger than the previous one presented in (Gopal-Krishna et al. 2010). The energy shift corresponds to a factor of 2800 within the limits of a one-step acceleration by a relativistic shock as proposed by (Achterberg et al. 2001). The original model (Stanev et al. 1993) has a number of parameters, which were set in 1992, see Fig.6 in (Stanev et al.1993). None of these parameters had to be changed significantly in the analysis presented here. Match to data from 10 PeV to 300 EeV ==================================== Finally, we can construct the energy spectrum of cosmic rays by adding the galactic component to the extragalactic component as shown in figure 1. In this figure we show differential flux $\times E^{3}$ versus energy per particle as predicted by this analysis compared to the data from KASCADE (KASCADE Coll. 2009), KASCADE-Grande (KASCADE-Grande Coll. 
2010) and the Pierre Auger Observatory (Pierre Auger Coll. 2010a). We have shifted within the experimental uncertainties the KASCADE and KASCADE-Grande flux down by 14% and the Auger flux up by 14 % in order to match. We distinguish six groups, H, He, CNO, Ne-S, Cl-Mn, and Fe. Particles were subjected to the losses in the intergalactic radiation field (Allard et al. 2008). The numbers above the lines correspond to error estimations (Model - Data)/(Experimental Error). We note the good agreement between data and model from below $10^{15}$ to $3 \times 10^{20}$ eV. One extra assumption of the original model, namely the energy shift, allows a description of the energy spectrum above $10^{18}$ eV. Below the critical energy for interactions with the microwave background, it has been expected that we would observe a very large number of sources, at large distances. However, no other source population is needed to describe the energy spectrum above $3 \times 10^{16}$ eV. This is the second key new result of this paper. To summarize, a few results can be extracted from the proposal here: I) There are no other sources necessary to provide extra flux in the energy range $3 \times 10^{16}$ and $3 \times 10^{20}$ eV, the second key result of this paper. A detailed analysis of radio galaxies (Caramete et al. 2011) shows that the next strongest radio galaxy to contribute is Virgo A, as already predicted by Ginzburg & Syrovatskii (1963). We estimate the maximum possible extra flux from other sources within this distance limit at 0.1 in the log in Fig. 1, so at 25 percent. We note that the self-consistent MHD-simulations for cosmological magnetic fields presented in (Ryu et al. 2008) were carried out for protons and so, for heavy nuclei the magnetic horizon in intergalactic space is small, less than 100 Mpc consistent with the measurements of the Pierre Auger Observatory (Pierre Auger Coll. 2010c) which suggest a heavier composition for energies above $10^{19}$ eV. However, at yet lower energies the sum of the more distant sources might exceed the flux predicted from single sources such as Cen A and Vir A; the magnetic scattering as predicted in (Ryu et al. 2008) seems to prevent this at all energies above the transition to Galactic cosmic rays; II) The dip near $3\times 10^{18}$ eV is explained by the switch-over between the galactic cosmic rays and the extragalactic cosmic rays (Rachen et al. 1993); III) The spectra of Galactic cosmic rays beyond the knee are adequately modeled by our approach suggesting that the Wolf-Rayet star explosion model matches also the newest data beyond the knee; IV) There is no abrupt change in composition in the energy range from $3 \times 10^{16}$ to $3 \times 10^{18}$ eV; In order to describe the data (Pierre Auger Coll. 2010a) the high energy model presented here requires minimal interaction along the path between the radio galaxy Cen A and us, and so indeed near isotropic scattering in the magnetic wind of our Galaxy. Isotropy? --------- The most recent results from the Pierre Auger Observatory indicate an excess in the direction of Cen A (Pierre Auger Coll. 2010b) which might corroborate the analysis presented here. The Pierre Auger Observatory measures an isotropic sky for energies below $6 \times 10^{19}$ eV and a weakly anisotropic sky above this energy (Pierre Auger Coll. 2010b). The isotropy for energies below $6 \times 10^{19}$ eV can be explained by a turbulent magnetic wind of our Galaxy (Everett et al. 2008). 
This wind is thought to be driven by cosmic rays and hot gas, and so is itself unstable, giving irregular magnetic fields, akin to radiation driven winds of stars, (Cassinelli 1994; Owocki 1990). The magnetic fields in the wind are strong enough to isotropize incoming heavy element particles at very high energy and protons at lower energy. Scattering in a magnetic wind with $B_{\phi} \sim 1/R$ as a function of radial distance $R$ (Parker 1958), is strongly enhanced due to the extra factor derived from integrating the Lorentz force $\ln \{R_{max}/R_{min}\} \, \sim \, 5$. The maximal energy for total bending can then be given by the magnetic field strength (Everett et al. 2008) at the base $\sim 8$ $\mu$Gauss, the length scale at the base $\sim 5$ kpc, this logarithmic factor $\, \sim 5$, and so is given by $10^{20.2} \, Z$ eV, where $Z$ is the charge of the cosmic ray particle. Since at those energies the data measured by the Pierre Auger Observatory suggest that we actually have heavy nuclei, complete bending is assured. However, this would lead to a second problem, in that we then might find a complete shielding for any particles coming from outside, so this magnetic wind must also have considerable irregularities; these irregularities in the wind need to be scale-free (implying a saturated spectrum of irregularities, or inverse cascade $I(k) k \sim const$, where $I(k)$ is the energy per wave number $k$ per volume), so as to avoid a characteristic energy, below which all particles are cut off; or, if such an energy exists, it must be low enough not to disturb the spectral sum. The key point is that Parker-winds (Parker 1958) are very effective at bending orbits. Obviously, the scattering might not be complete, so that a small anisotropy is left possibly explaining the Auger data clustering of events near the direction to the radio galaxy Cen A. Predictions =========== Some predictions of the high energy model presented here are: 1\) The calculations presented here predict the individual spectra for six element groups. The future data analysis from KASCADE-Grande, IceTop (Stanev 2009), Telescope Array and the Pierre Auger Observatory will be able to test this prediction; 2) The trend to heavier nuclei from 2 to $6 \times 10^{19}$ eV has been suggested by measurements of the depth of shower maximum done by the Pierre Auger Observatory (Pierre Auger Coll. 2010c); we caution, that the interpretation of the data in terms of mass composition depends on hadronic interaction extrapolations; 3) An isotropic background contribution of high energy cosmic rays from other more distant sources is compatible with our analysis up to 25 percent of the total flux. An important caveat of the analysis refers to gamma ray bursts. We could obtain similar results in describing the cosmic ray spectrum if instead of exploding Wolf-Rayet stars we would have used gamma ray bursts exploding into Wolf-Rayet star winds. This assumption allows that Wolf-Rayet star explosions might be due to the same mechanism as gamma ray bursts, but just completely stifled by the mass burden, possibly implying the magneto-rotational mechanism of Bisnovatyi-Kogan (Bisnovatyi-Kogan 1970). This would suggest that most Galactic cosmic rays be attributed to gamma ray bursts (Dermer 2004), and that the gamma ray burst rate is substantially higher than heretofore believed. The predictive power for the spectral indices and the energy scales is lost in this alternative. 
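As a numerical cross-check of the maximal bending energy quoted in the isotropy discussion above (an illustration added here, not part of the original text, using the rigidity condition $E \lesssim Z\,e\,B\,c\,R$ enhanced by the logarithmic factor; the inputs are the values cited in the text):

```python
# Numerical check (illustrative) of the maximal bending energy for the Galactic
# magnetic wind: ~8 microGauss at the base, ~5 kpc length scale, ln-factor ~5.
import numpy as np

c = 2.998e8            # m/s
kpc = 3.086e19         # m
B = 8e-6 * 1e-4        # 8 microGauss in tesla
L = 5 * kpc            # base length scale
log_factor = 5         # ln(R_max / R_min)

# For a relativistic particle of charge Z, E ~ Z e B c R; dividing by e gives eV.
E_max_eV = B * c * L * log_factor      # per unit charge Z
print(f"E_max ~ 10^{np.log10(E_max_eV):.1f} Z eV")   # ~10^20.3, consistent with the ~10^20.2 Z eV quoted
```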
Finally, we show how the high energy model proposed here could be falsified: A) If all ultra high energy cosmic rays could be shown to be of one and only one chemical element, like all Proton, or all Iron; B) If neutrino or gamma ray data would unequivocally show that many nearby extragalactic sources contribute equivalently to the radio galaxy Cen A. This could occur naturally in a gamma ray burst hypothesis, since many nearby starburst galaxies with high rates of star formation, supernova explosions, and gamma ray bursts could all contribute at comparable levels (Caramete et al. 2011); C) If it could be clearly shown that the turbulent magnetic wind of our Galaxy does not have the required strength of magnetic field and spatial extent to effect near isotropy by magnetic scattering; D) If an abrupt change of the composition is measured between the iron knee and the dip or ankle. Conclusions =========== We conclude that our own Galactic cosmic rays plus the galactic cosmic rays from a radio galaxy shifted in energy in the relativistic shock of an accreting super-massive black hole reproduces the all particle energy spectrum from $10^{15}$ to $10^{20}$ eV as measured by the KASCADE, KASCADE-Grande and the Pierre Auger Observatories. In the scenario proposed here no additional extragalactic source population for ultra high energy particles is required, contrary to many years of expectation. That implies that no other sources within the magnetic horizon is viable even at lower particle energies ($> 3 \times 10^{18}$ eV) above the switchover between galactic and extragalactic cosmic rays. The scenario proposed here gives a number of predictions, especially as regards the chemical element composition across this entire energy range. An detailed comparison of the measured and predicted composition is yet to be done. Once these predictions have been falsified or confirmed we will be closer to an understanding of the origin of cosmic rays 100 years after their discovery. PLB acknowledges discussions with J. Becker, J. Bl[ü]{}mer, L. Caramete, R. Engel, J. Everett, H. Falcke, T.K. Gaisser, L.A. Gergely, A. Haungs, S. Jiraskova, H. Kang, K.-H. Kampert, A. Kogut, Gopal Krishna, R. Lovelace, K. Mannheim, I. Maris, G. Medina-Tanco, A. Meli, B. Nath, A. Obermeier, J. Rachen, M. Romanova, D. Ryu, E.-S. Seo, T. Stanev, and P. Wiita. VdS and PLB both acknowledge their KASCADE, KASCADE-Grande and Pierre Auger Collaborators. VdS is supported by FAPESP (2008/04259-0, 2010/07359-6) and CNPq. Achterberg, A., et al. 2001, MNRAS, 328, 393. Adriani, O. (Pamela-Coll.) 2011, astro-ph:1103.4055. Allard, D., et al. 2008, , 10, 33. Anchordoqui, L.A., et al. 2011, astro-ph:1103.0536; Fargion, D., & D’Armiento, D. 2011, astro-ph:1101.0273. Baade, W., & Zwicky, F. 1934, Proc. Nat. Acad. Sci., 20, 259. Bell, A. R. 1978a, MNRAS, 182, 147; MNRAS, 182, 443. Bell, A. R., & Lucek, S. G. 2001, , 321, 433. Berezinsky, V. 2009, Nucl. Phys. B Proc. Suppl., 188, 227. Biermann, P.L., & Strittmatter, P.A. 1987, , 322, 643. Biermann, P. L., in Proc. [*Hot spots in extragalactic radio sources*]{}; Workshop, Tegernsee, 1989, [*Lect. Not. Phys.*]{} [**327**]{}, 261 Biermann, P.L. 1993, Å, 271, 649; idem & Cassinelli, J.P. 1993, Å, 277, 691; idem & Strom, R.G. 1993, Å, 275, 659. Biermann, P.L. 1994 in Proc. “Invited, Rapporteur and Highlight papers", 23rd ICRC Calgary; Eds. D. A. Leahy et al., World Scientific, Singapore, p. 45. Biermann, P.L. 1998, in Proc. [*Nuclear Astrophysics*]{} meeting at Hirschegg, GSI, Darmstadt, p. 211. 
Biermann, P.L., et al. 2001, Å, 369, 269. Biermann, P. L., et al. 2009 , 103, 061101. Biermann, P. L., et al. 2010a, , 710, L53. Biermann, P. L., et al. 2010b, , 725, 184. Bisnovatyi-Kogan, G. S., 1970, Astron. Zh., 47, 813; transl. 1971, Sov. Astron., 14, 652; Biermann, P.L., et al. 2005, AIP Proc., 784, 385; Bisnovatyi-Kogan, G. S., et al. 2005, astro-ph:0511173. Bykov, A. M., et al. 2011 , 410, 39. Caprioli, D., et al. 2010, , 33, 307. Caramete, L., et al. 2011, astro-ph:1106.5109. Cassinelli, J. P. 1994, Astroph. & Sp. Sci., 221,483. Dermer, Ch.D. 2004, in Proc. 13th Course of the Int. School of Cosmic Ray Astrop.; Eds: M.M. Shapiro, T. Stanev, & J.P. Wefel, World Scientific, p. 189. Drury, L. O’C. 1983, Rep. Progr. Phys., 46, 973. Everett, J., et al. 2008, , 674, 258. Fermi, E. 1949, , 75, 1169; 1954, , 119, 1. Gaisser, T.K. & Stanev, T. 2008, in Review of Particle Physics, , 667, 1. Ginzburg, V. L., & Syrovatskii, S. I. 1963, Astron. Zh., 40, 466; transl. in 1963, Sov. Astron. A.J., 7, 357; 1964, The origin of cosmic rays, Pergamon Press, Oxford, orig. Russ. ed. (1963). Goldstein, M. L., Roberts, D. A., Matthaeus, W. H. 1995, , 33, 283. Gopal-Krishna, et al. 2010, , 720, L155. Greisen, K. 1966, , 16, 748. Hess, V.F. 1912, Physik. Z., 13, 1084. Hillas, A. M. 2006 astro-ph:0607109. Kardashev, N. S. 1962, Astron. Zh., 39, 393; transl. 1962, Sov. Astron. A.J., 6, 317. KASCADE Coll. 2009, , 31, 86. KASCADE-Grande Coll. 2010 astro-ph:1009.4716. Kolmogorov, A. 1941, Dokl. Akad. Nauk SSSR, 30, 299; 31, 538; and 32, 19. Kohlh[ö]{}rster, W. 1913, Physik. Z., 14, 1153. Lagage, P. O. & Cesarsky, C. J. 1983, Å,125, 249. Lovelace, R. V. E. 1976, , 262, 649. Matthaeus, W. H. & Zhou, Y. 1989, Physics of Fluids B, 1,1929. Obermeier, A. 2011, Ph.D. thesis, Radboud University Nijmegen. Owocki, S. P. 1990,Rev. Mod. Astron., 3,98. Parker, E.N. 1958, , 128, 664. Pierre Auger Coll. 2010a, Phys. Lett. B, 685, 239. Pierre Auger Coll. 2010b, ,34, 314. Pierre Auger Coll. 2010c, 104, 091101. Ptuskin, V., Lukasiak, A., Jones, F.C., Webber, W.R. 1999, ICRC Salt Lake City, vol. 4, p. 291 Rachen, J.P., et al. 1993, Å, 272, 161; 1993, Å, 273, 377. Rickett, B.J. 1977, , $\;$ 15, 479. Ryu, D., et al., 2008, Science, 320, 909); Das, S., et al. 2008, , 682, 29; Cho, J., & Ryu, D. 2009, , 705, L90. Spangler, St. R., & Gwinn, Carl R. 1990, , 353, L29. Stanev, T., et al. 1993, Å, 274, 902. Stanev, T. (IceCube Coll.) 2009 astro-ph:0903.0576. Stanev, T. 2010a Review at Vulcano Workshop 2010, astro-ph:1011.1872; Stanev, T. 2010b, High Energy Cosmic Rays, Springer. Vietri, M. 1995, , 453, 883. V[ö]{}lk, H.J. & Biermann, P.L. 1988, , 333, L65. Waxman, E. 1995, , 75, 386. Wiebel-Sooth, B., et al. 1998, Å, 330, 389. , Wiebel-Sooth, B., & Biermann, P.L., in Landolt-B[ö]{}rnstein, Handbook of Physics, Springer Publ. Comp., p. 37 - 91, 1999 Woosley, S. E., Heger, A., Weaver, T. A. 2002, , 74, 1015. Yoon, Y.S., et al. 2011, , id.122 Zatsepin, G. T., Kuz’min, V. A. 1966, Zh. Exp. Th. Fis. Pis’ma, 4, 114; engl. transl. 1966 J. of Exp. & Th. Phys. Lett., 4, 78. ![The energy spectrum calculated with this model compared to the data from KASCADE (KASCADE Coll. 2009) , KASCADE-Grande (KASCADE-Grande Coll. 2010) and Pierre Auger Observatory (Pierre Auger Coll. 2010a). The numbers in the upper part of the figure shows the error of the model defined as (Model - Data)/(Experimental Error). The shape of the six element spectra from the Galactic and the extragalactic component is the same, by model assumption.](fig-1.eps) \[fig:spectrum:all\]
--- abstract: 'We experimentally demonstrate that the statistical properties of distances between pedestrians which are hindered from avoiding each other are described by the Gaussian Unitary Ensemble of random matrices. The same result has recently been obtained for an $n$-tuple of non-intersecting (one-dimensional, unidirectional) random walks. Thus, the observed behavior of autonomous walkers conditioned not to cross their trajectories (or, in other words, to stay in strict order at any time) resembles non-intersecting random walks.' address: - '$^1$ University of Hradec Králové, Rokitanského 62, Hradec Králové, Czech Republic' - '$^2$ Institute of Physics, Academy of Sciences of the Czech Republic, Na Slovance 2, Prague, Czech Republic' - '$^3$ Doppler Institute for Mathematical Physics and Applied Mathematics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, Břehová 7, Prague, Czech Republic' author: - 'Daniel Jezbera$^1$, David Kordek$^1$, Jan Kříž$^{1,3}$, Petr Šeba${}^{1,2,3}$ and Petr Šroll$^1$' --- [*Keywords*]{}: Traffic and crowd dynamics, Random Matrix Theory and extensions The fact that non-intersecting one-dimensional random walks lead to universal system behavior has been known and discussed for at least 10 years [@hob], [@grab], [@kon]. It is also known that the results can be described in terms of random matrix theory - see for instance [@joh]. This fact is usually expressed in abstract mathematical theorems of universal validity, see for instance [@eichel]. Our aim here is to use these abstract results in order to explain certain aspects of the observed behavior of pedestrians. A comprehensible application of the complicated mathematical theory is given e. g. in [@baik]. It analytically explains an experimental observation that the schedule of the city transport in Cuernavaca (Mexico) conforms to the predictions of the Gaussian Unitary Ensemble of random matrices (GUE). The reason for this interesting observation is the absence of a bus timetable, and primarily the fact that the buses do not overtake each other and hence their trajectories do not cross - see also [@seba] for the details. Our focus in this letter is pedestrians in a situation when they cannot avoid each other. Pedestrian flow is a subject of intense study. This is understandable since the movement of large groups of people inevitably leads to injuries and deaths caused by trampling and by crowd-pressure. The consequences can be disastrous: for instance more than 1400 people were trampled to death during a stampede in Mecca in 1990. A proper understanding of the process how groups of people move is vital for taking effective precautions. The mathematical description is usually based on the pedestrian interactions denoted as “social forces” [@h1]. The exact character of these forces and of their cultural dependence remains unclear, however, and is a focus of recent discussions [@sey2]. An evacuation dynamics of buildings is modeled in a similar way [@h2]. But not only panic situations are of importance. The comfortable and safe movement of people through corridors and on sidewalks is also of interest. Although people are autonomous individuals following their own destinations, they cannot move freely as soon as the pedestrian density exceeds a certain limit. For higher densities self-organizing phenomena occur. A typical example is the stratification of pavement walkers into layers for different direction [@h4]. 
We will discuss a unidirectional pedestrian motion in the range of intermediate density and in a narrow corridor that hinders mutual avoidance. Otherwise the people can move freely. Since to avoid another walker is not possible, the walker’s attention naturally focuses on the preceding fellow, in order not to collide with him. The situation resembles the assumptions of the model of vicious random walkers introduced by Fisher [@fis]. In a typical case, the vicious walkers move randomly on a one-dimensional discrete lattice. At each time step, that walker can move either to the left or to the right. The only constraint is that two walkers cannot occupy the same site at the same time. The model is easily modified to the situation when the motion is unidirectional (for instance right moving). In this case, the walkers are staying at the same site instead of moving to the left. The model has surprising relations with various fields of mathematics like combinatorics or random matrix theory [@baik2]. The corresponding random matrix ensemble is, however, not fixed solely by the dynamics. It depends also on the particular initial and terminal conditions of the model [@kat]. Our measurement is inspired by the paper [@sey] where the fundamental diagram (i.e. the dependence of the pedestrian flow on the pedestrian density) has been measured experimentally with volunteers walking in a circle. Fundamental diagrams are one of the basic tools used in car flow modeling and traffic jam prediction. In highway traffic, its shape is influenced by many factors like the road topology, the existence of a near slip road, and so on. The interesting question of how cultural differences influence the flow-density relation for pedestrians has been discussed in [@chat]. Beside the fundamental diagram, the work also discusses density fluctuations which are of vital interest for the stampede dynamics. In a crowd, it is sudden density changes that lead to the abrupt release of the local pressure and finally cause people to fall and be trampled [@h3]. In a one-dimensional system, the local density is inversely proportional to the local distance among the walkers (the pedestrian clearance). So we will discuss the statistical properties of the experimentally measured pedestrian distances. During the public action called “Let us use our heads to play” (an event serving to popularize physics among schoolchildren), we prepared a circular corridor of a diameter of 4.5 m, built with chairs and ropes, see Figure \[fotka\]. \ It was placed on a grass plot in front of the university building. As various school classes participated in this activity, we asked them to walk in the corridor for a time period of 3 minutes. The motion of walkers was registered by two light gates placed in a fixed distance of less than 1 m. The times when the light of the gates was interrupted by the walker were recorded by a computer at a rate of 50 samples per second. It means that at the mean walking speed of 1.25 m/s, the device resolution was approximately 0.025 m. The interruption onset of the gate was taken as the pedestrian arrival. In this way, we obtained the record of 26 groups consisting of 12 to 20 walkers. Using these data, it is easy to obtain the pedestrian headway and velocity, and combining these quantities, the pedestrian clearance is obtained straightforwardly. We will, however, focus directly on the headway statistics, i.e. the statistics of the time intervals between two subsequent walkers. 
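A minimal sketch of this reduction of the raw light-gate records to headway, velocity and clearance is given below (added for illustration; the arrival times and the gate separation are made-up numbers, only the relations follow the description above).

```python
# Sketch of how headway, velocity and clearance follow from the two light-gate
# records; the numerical values are hypothetical.
import numpy as np

gate_distance = 0.8          # m, fixed distance between the two light gates (<1 m)
t_gate1 = np.array([0.00, 1.10, 2.35, 3.20, 4.60])   # arrival times at gate 1 (s)
t_gate2 = np.array([0.62, 1.74, 2.98, 3.85, 5.22])   # same walkers at gate 2 (s)

headway = np.diff(t_gate1)                        # time between successive walkers (s)
velocity = gate_distance / (t_gate2 - t_gate1)    # individual walking speed (m/s)
clearance = headway * velocity[1:]                # spatial distance to the predecessor (m)

print(headway, velocity, clearance)
```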
To avoid global density effects (like small jams inside the circle when one walker slows down unexpectedly) the headway data of each particular measurement were unfolded and scaled to a mean headway equal to one. The theory published in [@baik] predicts that the headway of autonomous (random) pedestrians hindered from avoiding each other is described by the level spacing distribution of the Gaussian unitary ensemble of random matrices. In other words, the headway statistics are universal and given by the Wigner formula for GUE: $$P(s)=\frac{32}{\pi^2} s^2 \exp\left(-\frac{4 s^2}{\pi}\right)$$ ![ The headway probability density evaluated with data measured for 26 different pedestrian groups (crosses) is compared with the prediction of GUE (full line). The mean headway is scaled to unity. []{data-label="head"}](figure2.eps "fig:"){height="9cm" width="15cm"}\ The results obtained for the circular corridor with 13–15 year old children from 26 different groups are plotted in Figure \[head\]. As already mentioned, understanding the local fluctuations of the density is of particular interest since unexpected density changes are suspected to be the main cause for people falling and being trampled [@h3]. In one-dimensional pedestrian flow, the density fluctuations were investigated in [@sey], [@chat]. The authors argued conjecturally that the fluctuations display a small dependence on the cultural backgrounds of the walkers (Indian and German pedestrians were compared). On the other hand, the mathematical theory developed for non-crossing trajectories predicts a universal behavior pattern. This means that the cultural factors should be statistically irrelevant. The question is how these two points can be reconciled. To investigate it, we will study the number of people passing a given point within a certain time interval. Let $n(T)$ be the number of walkers passing the measuring point within the time interval $T$. The fluctuation $\Sigma(T)$ is defined as $$\Sigma(T)=\left \langle \left( n(T)-\langle n(T)\rangle\right) ^2 \right \rangle,$$ where $\langle ... \rangle$ means the system average. Traditionally, this quantity is called the number variance, and its behavior is well understood. For uncorrelated events (Poisson process) $\Sigma(T)=T$ for large $T$. For the headway governed by the GUE ensemble, we get $\Sigma(T)\approx (\ln(2\pi T)+1.5772\ldots)/\pi^2$ and the fluctuation of the number of walkers passing a given point increases only logarithmically with time. Generally, point processes leading to $\Sigma(T)<T$ for large $T$ are denoted as superhomogeneous (see for instance [@tor1], [@tor2]). We used the measured data to evaluate the number variance for the pedestrians inside the circular corridor with the foregoing restrictions. The results are plotted in Figure \[numb\]. We see that $\Sigma$ follows the prediction of GUE up to $T\approx 3$. For larger $T$ the increase is faster than the logarithmic prediction of GUE. It remains, however, substantially below the line $\Sigma(T)=T$ obtained for uncorrelated events. So the headway sequence is superhomogeneous and follows the GUE result for $T\lesssim 3$. The cultural differences reported in [@chat] can be related to the non-universal increase of $\Sigma$ for $T \gtrsim 3$. Similar behavior has also been observed for bus transport [@seba].
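The rescaling step and the two GUE expressions used above can be stated compactly in a few lines of code (an illustrative addition; the headway values are invented).

```python
# Minimal sketch of the rescaling step and of the two GUE formulas quoted above.
import numpy as np

def wigner_gue(s):
    """GUE level-spacing distribution, P(s) = (32/pi^2) s^2 exp(-4 s^2 / pi)."""
    return 32.0 / np.pi**2 * s**2 * np.exp(-4.0 * s**2 / np.pi)

def number_variance_gue(T):
    """Large-T GUE number variance, Sigma(T) ~ (ln(2 pi T) + 1.5772) / pi^2."""
    return (np.log(2.0 * np.pi * T) + 1.5772) / np.pi**2

headways = np.array([1.1, 0.9, 1.3, 0.8, 1.2, 1.0, 0.7, 1.4])  # seconds, made up
s = headways / headways.mean()        # scaled to a mean headway of one
print(s)                              # unfolded spacings with unit mean

for value in np.linspace(0.25, 2.5, 10):
    print(f"s = {value:.2f}  P_GUE = {wigner_gue(value):.3f}")

print(number_variance_gue(3.0))       # ~0.46, at T = 3 where data and GUE separate
```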
![ The number variance $\Sigma(T)$ evaluated with data measured for 26 different pedestrian groups (crosses) is compared with the prediction of GUE (full line) and with the prediction for a random headway sequence (dashed line). []{data-label="numb"}](figure3.eps "fig:"){height="9cm" width="15cm"}\ The interactions between pedestrians are obviously asymmetrical - the walker is observing the walker ahead of him (so as not to collide with him) and has much less information on the walker behind him. Thus the Newton’s law of “actio = reaction” does not hold for pedestrians. According to our results, the distance distribution, however, agrees with classical many-particle systems, such as Dyson’s gas, where the interactions are symmetrical. The statistical properties of one-dimensional many-particle systems violating the “actio=reactio” law were studied in [@TreiberHelbing]. They have analytically shown that such systems exhibit the same statistics as classical Newtonian systems. To summarize: we have presented experimental data on pedestrians walking in a circular corridor when they are hindered from avoiding each other. The results are in agreement with the prediction obtained for the mathematical models of one-dimensional vicious random walkers. Our results can also be regarded as (another) experimental demonstration of the theoretical results of [@TreiberHelbing]. The research was supported by the Czech Ministry of Education, Youth and Sports within the project LC06002 and the project of specific research of Faculty of Education, University of Hradec Králové No. 2102/2008. The authors are very grateful to Dita Golková and Padraig McGrath for language corrections. References {#references .unnumbered} ========== [10]{} Hobson D and Werner W, [*Non-colliding Brownian motions on the circle*]{} 1996 [*Bull. London Math. Soc.*]{} [**28**]{} 643 Grabiner D J, [*Brownian motion in a Weyl chamber, non-colliding particles, and random matrices*]{} 1999 [*Ann. Inst. Henri Poincare-Probab. Stat.*]{} [**35**]{} 177 König W, O’Connell N and Roch S, [*Non-colliding random walks, tandem queues, and discrete orthogonal polynomial ensembles*]{} 2002 [*Electron. J. Probab.*]{} [**7**]{} 1 Johansson K, [*Non-intersecting paths, random tilings and random matrices*]{} 2002 [*Probab. Theory Relat. Fields*]{} [**123**]{} 225 Eichelsbacher P and König W, [*Electronic Journal of Probability*]{} [**13**]{} (2008), 1307 Baik J, Borodin A, Deift P and Suidan T, [*A model for the bus system in Cuernavaca (Mexico)*]{} 2006 8965 Krbálek M and Šeba P, [*The statistical properties of the city transport in Cuernavaca (Mexico) and random matrix ensembles*]{} 2000 L229 Helbing D and Molnár P, [Social force model for pedestrian dynamics]{} 1995 4282 Seyfried A, Steffen B and Lippert T, [*Basics of modelling the pedestrian flow*]{} 2006 [*Physica A*]{} [**368**]{} 232 Helbing D, Farkas I and Vicsek T, [it Simulating dynamical features of escape panic]{} 2000 [*Nature*]{} [**407**]{} 487 Helbing D, Molnár P, Farkas I and Bolay K, [*Self-organizing pedestrian movement*]{} 2001 [*Environment and Planning B: Planning and Design*]{} [**28**]{} 361 Fisher M E, [*Walks, walls, wetting, and melting*]{} 1984 [*J. Stat. Phys.*]{} [**34**]{} 667 Baik J, [*Random vicious walks and random matrices*]{} 2000 [*Comm. Pure Appl. 
Math.*]{} [**53**]{} 1385 Katori M and Tanemura H,[*Scaling limit of vicious walks and two-matrix model*]{} 2002 011105 Seyfried A, Steffen B, Klingsch W and Boltes M, [*The fundamental diagram of pedestrian movement revisited*]{} 2005 [*J. Stat. Mech.-Theory Exp.*]{} P10002 Chattaraj U,Seyfried A and Chakroborty P, [*Comparison of Pedestrian Fundamental Diagram Across Cultures*]{} 2009 [*Preprint*]{} arXiv:0903.0149 Helbing D, Johansson A and Al-Abideen H Z, [*Dynamics of crowd disasters: An empirical study*]{} 2007 046109 Gabrielli A and Torquato S, [*Voronoi and void statistics for superhomogeneous point processes*]{} 2004 041105 Scardicchio A, Zachary C E and Torquato S, [*Statistical properties of determinantal point processes in high-dimensional Euclidean spaces*]{} 2009 041108 Treiber M and Helbing D, [*Hamilton-like statistics in onedimensional driven dissipative many-particle systems*]{} 2009 [*Eur. Phys. J. B*]{} [**68**]{} 607
--- abstract: 'Polarized light microscopy, as a contrast-enhancing technique for optically anisotropic materials, is a method well suited for the investigation of a wide variety of effects in solid-state physics, as for example birefringence in crystals or the magneto-optical Kerr effect (MOKE). We present a microscopy setup that combines a widefield microscope and a confocal scanning laser microscope with polarization-sensitive detectors. By using a high numerical aperture objective, a spatial resolution of about 240nm at a wavelength of 405nm is achieved. The sample is mounted on a $^4$He continuous flow cryostat providing a temperature range between 4K and 300K, and electromagnets are used to apply magnetic fields of up to 800mT with variable in-plane orientation and 20mT with out-of-plane orientation. Typical applications of the polarizing microscope are the imaging of the in-plane and out-of-plane magnetization via the longitudinal and polar MOKE, imaging of magnetic flux structures in superconductors covered with a magneto-optical indicator film via Faraday effect or imaging of structural features, such as twin-walls in tetragonal SrTiO$_3$. The scanning laser microscope furthermore offers the possibility to gain local information on electric transport properties of a sample by detecting the beam-induced voltage change across a current-biased sample. This combination of magnetic, structural and electric imaging capabilities makes the microscope a viable tool for research in the fields of oxide electronics, spintronics, magnetism and superconductivity.' author: - 'M. Lange' - 'S. Guénon' - 'F. Lever' - 'R. Kleiner' - 'D. Koelle' bibliography: - 'LTSPMbib\_arxiv.bib' title: 'A High-Resolution Combined Scanning Laser- and Widefield Polarizing Microscope for Imaging at Temperatures from 4K to 300K' --- Introduction {#sec:introduction} ============ The properties of ferroic materials and devices are strongly affected by their microscopic domain structure. Knowledge about the domains often plays a key role in the understanding and interpretation of integral measurements, which puts an emphasis on the importance of imaging techniques. Polarized light microscopy is an excellent tool for this purpose and has been successfully applied to ferromagnetic [@McCord2015], ferroelastic [@Erlich2015] and ferroelectric [@Ye2000; @Tu2001] domain imaging. Alternative methods for imaging of magnetic domains include Bitter decoration [@Bitter1932], Lorentz microscopy [@Jakubovics1997], electron holography [@Tonomura1983], magnetic force microscopy [@Hartmann1999], scanning SQUID microscopy [@Kirtley1999], scanning Hall probe microscopy [@Chang1992], nitrogen vacancy center microscopy [@Maertz2010], X-ray magnetic circular dichroism [@Schneider1997], scanning electron microscopy (SEM) [@Akamine2016], and SEM with polarization analysis (SEMPA) [@Scheinfein1990]. A comparison of most of these methods can be found in Ref.. Imaging of ferroelectric domains has also been accomplished by etching [@Hooton1955], nanoparticle decoration [@Ke2007], scanning electron microscopy [@Aristov1984], piezoresponse force microscopy [@Güthner1992] and X-ray diffraction [@Fogarty1996]. These techniques have been reviewed by Potnis *et al.* [@Potnis2011] and by Soergel [@Soergel2005]. Polarized light imaging provides a non-destructive, non-contact way to observe ferroic domains with sub-$\mu$m resolution and high sensitivity that can be carried out in high magnetic fields. 
The contrast for imaging of ferroelastic or ferroelectric domains arises from birefringence or bireflectance [@Schmid1993; @Grechishkin99], which is a consequence of the anisotropic permittivity tensor of these materials. Ferromagnetic domains, on the other hand, can be imaged via the magneto-optical Kerr effect [@Kerr1877] (MOKE) or the Faraday effect [@Faraday1846]. Both confocal laser scanning  [@Webb96] and widefield [@Inoue94] microscopy can be used for imaging with polarized light contrast. In confocal laser scanning microscopy the image is captured sequentially by scanning a focussed laser beam across the sample. A confocal pinhole eliminates light that does not originate from the focal volume. This results in a high depth discrimination, contrast enhancement and a 28% increase in lateral resolution. Widefield microscopy, on the other hand, has the advantage of faster acquisition rates and simultaneous image formation. The instrument discussed below is based on an earlier design by Guénon [@Guenon11] and combines a widefield- and a confocal laser scanning microscope with polarization-sensitive detectors. To study effects at low temperatures and in magnetic fields, the sample is mounted on a liquid-helium continuous flow cryostat offering a temperature range of 4K to 300K and magnetic fields up to 800mT with variable orientation can be applied. The confocal laser scanning microscope offers an additional imaging mechanism: a beam-induced voltage across a current-biased sample can be generated by the local perturbation of the laser beam. This beam-induced voltage can be used to extract local information on the electric transport properties of the sample [@Kittel97; @Li2015; @Benseman2015; @Sivakov2000; @WangHB2009]. While several examples of low-temperature widefield [@Kirchner73; @Goa03; @Golubchik2009] and laser scanning polarizing microscopes [@Henn2013; @Murakami2010; @Matsuzaka2009] have been published, the instrument presented here stands out with regard to the versatility offered by combining widefield and confocal laser scanning imaging modes, the accessible temperature range, as well as the very high lateral resolution it provides at low temperatures. This paper is organized as follows. Given the relevance for the design of the microscope, a brief overview of MOKE and Faraday effect is given in Section \[sec:MOKE\]. The cryostat and the generation of magnetic fields is described in Section \[sec:Cryostat\]. The widefield polarizing microscope is discussed in Section \[sec:LTWPM\] and a detailed description of the scanning laser microscope is presented in Section \[sec:LTSPM\]. Imaging of electric transport properties is addressed in Section \[sec:electrictransport\] and examples demonstrating the performance of the instrument are presented in Section \[sec:Examples\]. Magneto-optical Kerr effect (MOKE) and Faraday effect {#sec:MOKE} ===================================================== The observation of domains in magnetic materials relies mainly on two magneto-optical effects: the magneto-optical Kerr effect (MOKE) in reflection and the Faraday effect in transmission. Both effects lead to a rotation of the plane of polarization that depends linearly on magnetization and is caused by different refractive indices for left-handed and right-handed circularly polarized light (magnetic circular birefringence). 
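For orientation (a standard relation, added here rather than taken from the text), the rotation produced by magnetic circular birefringence in transmission can be written explicitly: a linear polarization is a superposition of the two circular components, which accumulate different phases over a path length $d$, so that the plane of polarization is rotated by $$\theta_F = \frac{\pi d}{\lambda}\,(n_+ - n_-),$$ with $n_\pm$ the refractive indices for left- and right-handed circular polarization. Since $n_+ - n_-$ is proportional to the magnetization component along the propagation direction, this reproduces the linear dependence on magnetization and path length stated above; the Kerr rotation follows analogously from the difference of the complex reflection coefficients for the two circular components.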
Three types of MOKE are distinguished with regard to the orientation of the magnetization relative to the plane of incidence: polar, longitudinal and transverse MOKE, which are sensitive to the out-of-plane magnetization component, the in-plane magnetization component along the plane of incidence and the in-plane magnetization component perpendicular to the plane of incidence, respectively. For linearly polarized light, both the longitudinal and the polar MOKE lead to a rotation of the plane of polarization upon reflection on the sample surface, while the transverse MOKE leads to a modulation of the reflected intensity. In addition to the rotation of the plane of polarization, the longitudinal and polar MOKE also lead to elliptically polarized light caused by a difference in absorption for left- and right-handed circularly polarized light. Furthermore, the MOKE also depends on the angle of incidence (AOI). The polar MOKE is an even function of AOI and has the largest amplitude for normal incidence. The longitudinal MOKE is an odd function of AOI and increases with increasing AOI. The Faraday effect can be observed when light is transmitted through transparent ferromagnetic or paramagnetic materials. It describes a rotation of the plane of polarization that is proportional to the magnetization component along the propagation direction of the light and the length of the path on which the light interacts with the material. An important application of the Faraday effect is magneto-optical indicator films [@Goernert2010] (MOIF), which can be used to image the stray field above a sample. These typically consist of a thin garnet film with in-plane anisotropy that is coated with a mirror on one side. The mirror side of the MOIF is placed in direct contact with the sample under investigation and observed under perpendicular illumination with a polarizing microscope. The stray field of the sample leads to a deflection of the magnetization in the MOIF, which then has a component along the propagation direction of the light and thus becomes observable via the Faraday effect. Additional magneto-optical effects that are rarely used for domain imaging are the Voigt effect and the Cotton-Mouton effect. A detailed description of the various magneto-optical effects can be found in Ref. .

Cryostat, Electromagnet {#sec:Cryostat}
=======================

The cryostat and microscope are mounted on a vibrationally isolated optical table. Since the sample is fixed on a coldfinger, the microscope needs to be positioned relative to the sample with sub-$\mu$m resolution. A very sturdy, yet precise, positioning unit needs to be used. The microscope is connected to the cryostat via flexible bellows to allow for the positioning of the microscope. The cryostat is a liquid-helium continuous flow cryostat with the sample in vacuum. The temperature can be adjusted in a range of $T=4$K to $T=300$K. The coldfinger and sample holder have a diameter of $25.4$mm. Electrical contacts and mounting screws around the perimeter of the sample holder limit the available space for sample mounting. Samples with a dimension of up to $12\,\mathrm{mm}\times 12\,\mathrm{mm}$ can be conveniently mounted. Two electromagnets are used to generate in-plane magnetic fields of up to $B_\parallel=\pm800$mT and out-of-plane magnetic fields of up to $B_\perp=\pm20$mT. The electromagnets are mounted on a frame that is separated from the rest of the setup to reduce the risk of vibrations being transferred to the microscope.
The out-of-plane magnetic field is generated by a Helmholtz coil that achieves a field homogeneity of 0.2% in a cylindrical volume of 5mm length in out-of-plane direction and 20mm diameter in the sample plane. The magnet, generating the in-plane magnetic field, can be rotated around the cryostat to allow for an adjustment of the orientation of the in-plane magnetic field. The in-plane magnetic field is homogeneous to within 1% in a cubic volume with an edge length of $12\,\mathrm{mm}$. The magnetic fields for both magnets have been calibrated using a Hall probe with an accuracy of 2%. Due to spatial constraints limiting the size of the Helmholtz coil, the achievable out-of-plane magnetic field strength is limited to 20mT. The range of applications of the instrument could be enhanced by replacing the two electromagnets by a superconducting vector magnet that allows the application of magnetic fields with a strength $>1\,\mathrm{T}$ and variable orientation. ![image](TTRPM_ray4.pdf) Low-Temperature Widefield Polarizing Microscope (LTWPM) {#sec:LTWPM} ======================================================= The optical setup combines two imaging paths, a scanning polarizing microscope and a widefield polarizing microscope; the latter can be selected by inserting a mirror into the optical path. A schematic drawing of the imaging setup is displayed in Fig. \[fig:ttrpm\]. Conjugate planes to the image plane are denoted as image planes (IP) and shown in orange. Conjugate planes to the back focal plane (BFP) of the microscope objective are denoted as aperture planes (AP) and shown in green. AP and IP have a reciprocal relationship [@Inoue94]: rays that are parallel in one set of planes are focussed in the other set of planes. The position of a point in the IP translates to an angle in the AP and the angle of a ray in the IP corresponds to a point in the AP. In this Section we begin by giving a general description of the optical setup of the low-temperature widefield polarizing microscope (LTWPM) before we proceed with a detailed description of the components and their function. The LTWPM can be used by inserting the removable mirror, as indicated by the broken lines in the ray diagram (Fig. \[fig:ttrpm\]). The illumination follows a Koehler scheme [@Koehler1893] and the sample is illuminated through the microscope objective. The light source for the LTWPM is realized by fiber-coupled light-emitting diodes (LED), that are combined into a fiber bundle. The fiber bundle end face is imaged into the back focal plane of the microscope objective. To achieve this, the fiber output is collimated (collimator-2) and the field lens is used to image the fiber ends into an AP at the position of an interchangeable aperture stop (aperture stop-2). A rotatable polarizer in front of the aperture stop is used to define the plane of polarization. A field stop, aligned with the shared focal plane of collimator-2 and the field lens, can be used to confine the illuminated sample area. After passing a polarization maintaining beam splitter (PMBS-2), the light is reflected by the removable mirror onto the part of the setup that is shared with the scanning polarizing microscope. The light passes an afocal relay, which is used to extend the optical path into the cryostat and to image the light source into the BFP of the microscope objective. The afocal relay is discussed in detail in Section \[subsec:relay\]. After passing the afocal relay, the light is focussed onto the sample by the microscope objective. 
The light is reflected back from the sample, passes the microscope objective, afocal relay and removable mirror and is deflected by the PMBS-2. Subsequently, it passes the rotatable analyzer, that is used to adjust the polarized light contrast. The tube lens forms the image on the sensor of the low noise, high dynamic range sCMOS camera. Light Source {#subsec:illumination} ------------ The light source consists of nine high-power LEDs with a dominant wavelength of 528nm that are coupled into multimode fibers. The LEDs are temperature-stabilized by thermo-electric coolers and driven with highly stable current sources. The use of LEDs offers the advantage of low noise, compact size and excellent stability. The nine fibers are combined into a fiber bundle consisting of one central fiber of 1mm diameter and eight surrounding fibers of 0.8mm diameter as shown in Fig. \[fig:fiberbundle\]. A similar illumination concept has been developed by Soldatov *et al.* [@Soldatov2017; @Soldatov2017b]. Since the LEDs can be controlled individually and the fiber ends are imaged into the back focal plane of the microscope objective, it is possible to switch between different angles of incidence. In magneto-optical imaging, the plane of incidence together with the plane of polarization, is used to adjust the sensitivity of the instrument to the polar, longitudinal or transverse MOKE [@McCord2015; @Stupakiewicz2014]. As was demonstrated by Soldatov *et al.*, this illumination concept can be used to separate longitudinal and polar MOKE and to achieve a contrast enhancement. The LEDs can be pulsed with a frequency of more than 1MHz, which enables time-resolved imaging with $\mu$s temporal resolution. ![a) Cross-section of the fiber-bundle: The central fiber is aligned with the optical axis. The eight surrounding fibers are equally distributed on a concentric circle. b) The fiber end faces are imaged into the microscope objective back focal plane. The light originating from each fiber reaches the sample at a specific angle. The angle of incidence for the illumination can be selected by controlling the LEDs output power individually. \[fig:fiberbundle\] ](TTRPM_fiberbundle_RSI.pdf) Microscope Objective {#subsec:microscopeobjective} -------------------- In order to achieve a high resolution it is necessary to use a microscope objective with a high numerical aperture (NA). These typically have short working distances, which do not allow for the use of a vacuum window in front of the objective. Also, a window in front of the objective would introduce unwanted variations to the polarization of the light. Therefore, the microscope objective needs to be mounted inside the cryostat. This imposes the requirement for the microscope objective to be vacuum-compatible. At the same time, the possibly large difference in temperature between the microscope objective and the sample prohibits the use of immersion objectives or objectives with excessively short working distances, which limits the choice of high NA objectives. We decided on a commercial infinity corrected microscope objective with a NA of 0.8 and a focal length of $f_{MO}=4\,\mathrm{mm}$. It has a working distance of $WD=1\,\mathrm{mm}$ and is designed for polarized light microscopy with lenses that have been mounted strain-free. It has a field of view (FOV) of 500$\mu$m diameter and features high transmission of about 80% at 405nm and 87% at 528nm wavelength. The exit pupil diameter is 6.4mm. 
Because it is also used for the scanning polarizing microscope, it needs to be suitable for confocal scanning.

Camera, Polarization Sensitivity and Resolution {#subsec:magnification}
-----------------------------------------------

The small variations in polarization, caused for example by magneto-optical effects, only lead to a very weak modulation of the intensity reaching the camera. Therefore, cameras with low noise and high dynamic range are required for imaging in polarized light microscopy. The camera we use is a thermoelectrically cooled sCMOS camera with 2048x2048 pixels, a pixel size of $6.5\,\mu$m $\times$ $6.5\,\mu$m and a high quantum efficiency of $QE \approx 80\,\%$ at a wavelength of $\lambda=528$nm. The full-well capacity (maximum number of photo-electrons) that can be stored on a pixel is $FW=30000$. The camera’s noise specifications are given as a number of electrons. It features a low median read noise of $N_r=0.9$ and a dark current of $D=0.10\,\mathrm{s}^{-1}$ per pixel. The dynamic range of 33000:1 is sampled with a bit depth of 16bit. The achievable polarized light contrast depends on the extinction ratio of the polarizer and analyzer, as well as on the depolarizing effects occurring at the optics in between the polarizer and analyzer. Due to their compact size and moderate cost, we use polarizers based on oriented silver nanoparticles featuring an extinction ratio better than $\kappa = 1\times10^{-5}$ at a wavelength of 528nm. Depolarizing effects occurring at the optical elements between polarizer and analyzer [@Shribak2002] lead to a reduction of the extinction ratio to a value around $\kappa=1\times10^{-2}$. We estimated the polarization sensitivity based on the extinction ratio of the microscope and the camera specifications. The signal is given by the number of photo-electrons $N=M \cdot B \cdot P\cdot t \cdot QE$ that are generated by a photon flux $P$ during the exposure time $t$ on a pixel with the quantum efficiency $QE$ for integrating over $M$ exposures and binning of $B$ pixels. The noise sources contributing to the overall noise are: the signal-related shot noise $\delta_{shot}=\sqrt{N}$, the dark noise $\delta_{dark}=\sqrt{M\cdot B \cdot D\cdot t}$ due to thermally generated electrons and the readout noise $\delta_{read}=\sqrt{M \cdot B} \cdot N_r$. By adding the noise contributions in quadrature we derive the signal-to-noise ratio $$\mathrm{SNR} = \sqrt{M \cdot B} \cdot \frac{P\cdot t \cdot QE}{\sqrt{P\cdot t \cdot QE +D\cdot t+N_r^2}}~.$$ ![Detection limit $\theta_{min}\sqrt{M \cdot B}$ (solid blue line) and sensitivity (dashed red line) as a function of analyzer angle. \[fig:sens\_vs\_M\]](sens_lim_rev1.pdf) The SNR increases with the square root of the number of integrated images and binned pixels. In polarized light imaging, the signal $N$ has to be substituted by the difference in photo-electrons that is generated by a rotation of the plane of polarization by an angle $\theta$. This is a function of the analyzer angle $\beta$ and is given by $N_{pol}= N \cdot (1-\kappa)(\mathrm{sin}^2(\beta + \theta)-\mathrm{sin}^2(\beta))$. The shot noise has to be replaced by the expression $\delta_{shot}=\sqrt{N \cdot ((1-\kappa) \cdot \mathrm{sin}^2(\beta + \theta) + \kappa)}$. If we further consider that the full well capacity $FW$ (maximum number of photo-electrons) of a camera pixel is limited, we see that the highest SNR for a single exposure is achieved after an exposure time of $t_1=FW/(P \cdot QE ((1-\kappa)\mathrm{sin}^2(\beta + \theta)+\kappa))$.
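The detection limit that follows from these expressions can be evaluated numerically. The snippet below is a minimal sketch of this estimate (it is not part of the original data analysis); the photon flux $P$ is an assumed placeholder value and only enters through the small dark-noise term, since the exposure time is always chosen such that the pixel well is just filled.

``` python
import numpy as np

# Camera and microscope parameters quoted in the text
FW, QE = 30000, 0.80      # full-well capacity [e-] and quantum efficiency
N_r, D = 0.9, 0.10        # read noise [e-] and dark current [e-/s] per pixel
kappa = 1e-2              # effective extinction ratio of the microscope

def detection_limit(beta, M=1, B=1, P=1e9):
    """Smallest rotation theta reaching SNR = 1 at analyzer angle beta [rad].
    P is an assumed photon flux per pixel [1/s]; it only affects the result via
    the dark-noise term because the exposure time t1 is chosen to fill the well."""
    for theta in np.linspace(1e-6, 1e-2, 20000):
        denom = (1 - kappa) * np.sin(beta + theta)**2 + kappa
        t1 = FW / (P * QE * denom)            # exposure time that fills the well
        n_e = P * t1 * QE                     # photo-electrons per pixel and exposure
        signal = M * B * n_e * (1 - kappa) * (np.sin(beta + theta)**2 - np.sin(beta)**2)
        noise = np.sqrt(M * B * (n_e * denom + D * t1 + N_r**2))
        if signal >= noise:
            return theta
    return np.nan

print(detection_limit(np.deg2rad(5.7)))   # ~6e-4 rad, cf. Fig. [fig:sens_vs_M]
```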
Now we can evaluate the smallest angle $\theta_{min}$, the detection limit, that can be measured with an SNR of 1 as a function of analyzer angle $\beta$ for an exposure time of $t_1$. As can be seen in Fig. \[fig:sens\_vs\_M\], a detection limit of $\theta_{min}\sqrt{M \cdot B}= 6\cdot 10^{-4} \,\mathrm{rad}$ is reached for an analyzer angle of $\beta=5.7\,^{\circ}$. However, since the exposure time increases with decreasing analyzer angle $\beta$, the highest sensitivity of $\theta_{min}\cdot\sqrt{t}=1.0\cdot 10^{-4} \,\mathrm{rad/\sqrt{Hz}}$ is realized for an analyzer angle of $\beta=17.6\,^{\circ}$. The magnification $m_{wf}$ of the image, formed by the tube lens on the camera sensor, is determined by the ratio of the 200mm focal length $f_{TL}$ of the tube lens and the 4mm focal length $f_{MO}$ of the microscope objective, divided by the angular magnification $M_{AFR}$ of the afocal relay (see Sec. \[subsec:relay\]). This results in a total magnification of $m_{wf}=f_{TL}/(f_{MO}M_{AFR})=27$. The diffraction-limited resolution at a wavelength of 528nm and for the NA of 0.8 is given by the Rayleigh criterion $d_{min}=0.61\cdot\lambda/NA=403\,\mathrm{nm}$, which corresponds to about $11\,\mu$m on the camera sensor. The image is sampled with a pixel size of $d_{px}=6.5\,\mu$m on the image sensor. Therefore, the achievable resolution is effectively limited by the Nyquist sampling theorem and is given by $$d_{eff}=\frac{2\cdot d_{px}}{m_{wf}}=481\,\mathrm{nm}~.$$

Low-Temperature Scanning Polarizing Microscope (LTSPM) {#sec:LTSPM}
======================================================

In principle, two different scanning mechanisms can be utilized in scanning laser microscopy. First, there is mechanical scanning, where either the microscope or the sample is moved with piezo drives or stepper motors. The second possibility is optomechanical scanning, where the optical path is modified by a movable component, for example a mirror, so that the spot changes its position on the sample. We decided on the use of optomechanical scanning, because it offers several advantages over mechanical scanning. While providing a very good spatial resolution, optomechanical scanning has the advantage of a larger field of view and enables faster acquisition rates. Another benefit is that electromagnetic noise, which is generated by piezo drives and could disturb measurements of the electric transport properties, is avoided. Additionally, if the sample is to be mounted in a cryostat, good thermal coupling of the sample to the coldfinger is easily achieved. However, in optomechanical scanning, greatest care has to be taken to ensure that the modification of the light path does not cause aberrations that prevent diffraction-limited imaging or lead to a non-negligible influence on the polarization state. Therefore, all the components that are used in the beam scanning part of the microscope need to be polarization maintaining and designed for confocal scanning. Hereafter, a general description of the LTSPM setup, shown in Fig. \[fig:ttrpm\], is given and subsequently, the individual components are discussed in detail. The light source for the LTSPM is a 405nm wavelength laser diode coupled into a single-mode polarization maintaining fiber. The fiber output is collimated and the beam diameter is defined by an adjustable aperture stop. The plane of polarization is defined by a Glan-Thompson polarizer.
The light passes a polarization-maintaining beamsplitter (PMBS-1) with a splitting ratio of 50:50; the light that is deflected by the beamsplitter is captured by a photodiode and used as feedback for the stabilization of the laser intensity. The transmitted light is deflected by a two-axis fast steering mirror (FSM), which is used to set the position of the laser spot on the sample. The mirror is in a conjugate plane to the backfocal plane (aperture plane AP) of the infinity corrected microscope objective. An afocal relay is used to image the mirror onto the backfocal plane. The angular position of the FSM thereby defines the XY-position of the spot on the sample. This results in a telecentric illumination of the sample. At the sample, the beam is reflected and consequently passes the microscope objective, afocal relay and FSM in the opposite direction. The light is then deflected by the PMBS-1. A quarter-wave plate ($\lambda$/4) is used to correct for the ellipticity of the reflected beam’s polarization and a half-wave plate ($\lambda$/2) can be used to rotate the plane of polarization. A beam reducer is necessary to match the beam diameter to the size of the photodiodes. In the intermediate image plane of the beam reducer, a pinhole aperture is inserted to make the microscope confocal. The beam then passes a 405nm center wavelength band-pass filter and is split into two perpendicularly polarized beams by the Wollaston prism. These two beams are detected using two quadrant photodiodes.

Light Source {#subsec:lightsource}
------------

The key requirements for the light source are long-term stability and low noise. The light source consists of a temperature-stabilized diode laser with a wavelength of $\lambda = 405$nm and a maximum output power of $P_{max} = 50$mW, which is coupled into a polarization maintaining single-mode fiber. The control electronics of the laser diode operate on a battery power supply to reduce noise and the output power is controlled using the photodiode at the PMBS-1 as feedback. The laser power can be modulated with frequencies up to $f_{max} = 1$MHz.

Fast Steering Mirror (FSM) {#subsec:FSM}
--------------------------

Laser-beam scanning can be accomplished by means of galvanometric scanners [@Montagu2011], acousto-optical deflectors [@Lv2006] or fast steering mirrors. Galvanometric scanners and acousto-optical deflectors provide angular displacement of the beam about a single axis. Therefore, it is necessary to use two separate scanners in a perpendicular orientation to achieve XY-scanning. Unless additional relay optics are used in between the two scanners, this results in linear displacement of the laser beam from the optical axis. The main advantage of fast steering mirrors is that they provide angular displacement about two perpendicular axes in a single device. Thus, the mirror can be placed in a conjugate plane to the backfocal plane of the microscope objective. This results in pure angular displacement of the beam and a telecentric illumination of the sample. We use a two-axis fast steering mirror with a mechanical scan range of $\pm{1.5}\,^\circ$ and an angular resolution of better than $2\,\mu$rad. As has been discussed by Ping *et al.* [@Ping95], reflection of a linearly polarized laser beam at a mirror surface will lead to depolarization, caused by the difference in reflectivity and phase for s- and p-polarized light, if the beam is not purely s- or p-polarized. Pure s- or p-polarization can only be realized for a single scan axis.
Since we use a two-axis scan mirror, depolarizing effects cannot be avoided. To minimize these effects, we use a dielectric mirror with a phase-difference of less than $5\,^\circ$ and a reflectivity-difference below $0.05\,\%$ for s- and p-polarized light at angles of incidence of $45 \pm 3^\circ$ and a wavelength of 405nm.

The Afocal Relay (AFR) {#subsec:relay}
----------------------

In confocal laser beam scanning microscopy, the laser spot is scanned across the sample by pivoting the laser beam in the back focal plane of the microscope objective. To achieve this, the scan mirror is imaged into the backfocal plane using an afocal relay. In our case, the microscope objective is mounted inside the cryostat and consequently a vacuum window is needed at some point before the beam enters the microscope objective. We decided to use one of the lenses of the afocal relay as the vacuum window. An afocal relay is realized by combining two focal systems in such a way that the rear focal point of the first focal system is coincident with the front focal point of the second focal system. A collimated beam entering the first focal system will exit the second focal system as a collimated beam. The linear magnification $m=f_2/f_1$ and the angular magnification $M=f_1/f_2$ are determined by the equivalent focal lengths $f_1$ and $f_2$ of the first and second focal system, respectively. Cemented achromatic doublets are often used to realize relay lenses. However, they are not well suited for confocal imaging, mainly because of the astigmatism and field curvature they introduce [@Ribes2000; @Negrean2014]. Therefore, more complex lens systems need to be used, which are designed to correct these aberrations. We decided on air-spaced triplets as a starting point for the two lens groups that make up the afocal relay, since they offer sufficient degrees of freedom to be made anastigmatic [@Laikin2006]. However, a fourth lens, which acts as the cryostat window, has to be added to the lens group facing the microscope objective, because the mechanical stress exerted by the vacuum needs to be handled. The afocal relay has been specifically designed for use with the microscope objective and scan mirror. It has an angular magnification of $M_{AFR}=1.85$, so that the range of the scan mirror is matched to the field of view of the microscope objective, and the rear aperture of the microscope objective is completely filled by a laser beam with a diameter of 12mm. The relay consists of two lens groups: a triplet of two bi-convex lenses and one bi-concave lens, and a quadruplet of two bi-convex lenses, one bi-concave lens and one meniscus lens (Fig. \[fig:relay\]). The two lens groups are aligned as a 4f system, with the scan mirror placed in the front focal plane of the triplet lens group and the microscope objective back focal plane being coincident with the back focal plane of the quadruplet lens group. The lens parameters for the afocal relay are given in table \[tab:relay\].
  surface no.   radius \[mm\]   separation \[mm\]   material   description
  ------------- --------------- ------------------- ---------- ----------------------------
  1             inf             119.637             air        entrance pupil
  2             159.604         9.809               N-LASF44   bi-convex
  3             -159.604        14.000              air        lens
  4             -89.906         3.000               SF1        bi-concave
  5             89.906          13.356              air        lens
  6             113.967         3.431               N-BAF10    bi-convex
  7             -113.967        125.237             air        lens
  8             inf             82.248              air        intermediate image plane
  9             50.810          3.647               N-LAF34    bi-convex
  10            -61.500         4.945               air        lens
  11            -43.268         3.000               SF1        bi-concave
  12            43.268          12.000              air        lens
  13            104.606         6.928               N-LAK33A   bi-convex
  14            -104.606        8.549               air        lens
  15            22.402          7.000               SF57HHT    meniscus lens and cryostat
  16            18.607          42.571              vacuum     window
  17            inf                                            exit pupil

  : Lens data for the AFR: Radius of curvature of the surfaces defining the optical system and their separation along the optical axis. Material specifies the medium that fills the space between the current surface and the next surface. The surface numbering is consistent with Fig. \[fig:relay\]. \[tab:relay\]

![image](afocalrelay.pdf)

The AFR design has been optimized using the ray-tracing software Zemax OpticStudio [@Zemax] to reduce all aberrations, especially astigmatism, coma, spherical aberration, field curvature and distortion for wavelengths of $\lambda=405$nm and $\lambda=528$nm. Since the relay needs to be polarization maintaining, the design has also been optimized in this regard, including strain-free mounting of the lenses. The meniscus lens acts as the cryostat window and is, unavoidably, under considerable mechanical stress. This lens was not only optimized with regard to its optical performance but also with regard to the mechanical demands imposed by the vacuum. This lens has been fabricated from SF57HHT glass, which has an extremely low stress-optical coefficient, to minimize stress birefringence. Optical performance of the relay is diffraction-limited over the entire scan range, which is essential for confocal imaging.

Beam Reducer, Confocal Pinhole and Resolution {#subsec:beamreducer}
---------------------------------------------

The beam reducer is built from two commercial achromatic doublets with a design wavelength of 405nm. They have equivalent focal lengths of $f_{BR1}=125\,\mathrm{mm}$ and $f_{BR2}=25\,\mathrm{mm}$, so that the exiting beam is matched to the photodiode diameter of 2.5mm. The confocal pinhole aperture is mounted in the intermediate image plane of the beam reducer, which is a conjugate plane to the sample, and consequently blocks light that is not originating from the focal volume. The pinhole diameter $d_{ph}$ is determined by the diameter of the Airy disc and the magnification $m_{cf}=f_{BR1}/(f_{MO}\cdot M_{AFR})$ of the microscope. It is given in Airy units ($\mathrm{AU}$), with $1\,\mathrm{AU}$ being the diameter of the image of the Airy disc in the intermediate image plane of the beam reducer. The 125mm achromat was selected, so that $1\,\mathrm{AU}=10\,\mu\mathrm{m}$, with $$1\,\mathrm{AU}=\frac{1.22\cdot\lambda}{NA}\cdot m_{cf}~.$$ The resolution in confocal microscopy is increased by 28% in comparison to widefield microscopy. In widefield microscopy, the resolution is given by the distance between two points for which their point spread functions (PSF) can be distinguished and is expressed by the Rayleigh criterion $d_{min}=0.61\cdot\lambda/NA$. In the Rayleigh criterion, the point-spread function is described by the Airy disk. Two point sources are considered to be resolvable if the first minimum of the Airy disk of one point coincides with the global maximum of the other.
In this case, the combined intensity profile shows a dip of $\approx26\,\%$ between the maxima corresponding to the two points. The increase in resolution in confocal microscopy originates from the fact that the confocal volume is defined by the product of the illumination PSF and the convolution of the detection PSF with the pinhole [@Webb96]. For a pinhole with a diameter of $0.5\,\mathrm{AU}$, this results in a function with a sharper peak compared to the widefield PSF. In this case, the confocal resolution $d_{cf}$ is given by $$\label{eq:cfres} d_{cf}=\frac{0.44\cdot\lambda}{NA}=\frac{0.44\cdot405\,\mathrm{nm}}{0.8}=222\,\mathrm{nm}~.$$ Using pinholes smaller than $0.5\,\mathrm{AU}$ does not increase the resolution, but deteriorates the SNR. A linescan across the edge of a patterned structure is shown in Fig. \[fig:linescan\]. A confocal pinhole of $0.5\,\mathrm{AU}$ diameter was used for the acquisition of the image and the linescan. ![Linescan (red) across the structure shown in the inset. The red arrow indicates the position and direction of the linescan. The data has been fitted (dashed black line) using Eq. \[eq:resfit\]. A resolution of $242\,\mathrm{nm}$ is achieved. \[fig:linescan\]](resolution_LTSPM_rev1.pdf) The intensity profile of the linescan can be used to evaluate the width of the PSF and the corresponding resolution. For a Gaussian laser beam, the PSF has a Gaussian profile with maximum intensity $I_0$ and $1/\mathrm{e}$-width $\omega$ $$I(x,y)=I_0 \,\mathrm{e}^{-\frac{(x^2+y^2)}{\omega^2}}~.$$ The edge-spread function (ESF), obtained by scanning over a sharp edge at $x=0$, is the convolution of the PSF at position $X$ and the edge profile defined by a reflectance $R_1$ for $x<0$ and $R_2$ for $x\geq0$. The ESF is given by $$\begin{aligned} \label{eq:resfit} &\begin{aligned} P(X)=\pi\omega^2I_0R_1&-R_1\int\limits_0^{\infty} \int\limits_{-\infty}^{\infty} I(x-X,y)\,dx\,dy \nonumber\\ &+ R_2\int\limits_0^{\infty} \int\limits_{-\infty}^{\infty} I(x-X,y)\,dx\,dy \end{aligned} \\ &=\pi\omega^2I_0\left[\frac{R_1}{2}\left(1-\mathrm{erf}\frac{X}{\omega}\right)+\frac{R_2}{2}\left(1+\mathrm{erf}\frac{X}{\omega}\right)\right]~.\end{aligned}$$ A resolution criterion for two Gaussian PSFs at a distance $d$, which is similar to the Rayleigh criterion, is realized when their combined intensity profile shows a $\approx26\,\%$ dip. This distance is related to the $1/\mathrm{e}$-width of the PSF by $d_{Rayleigh}\approx 1.97\,\omega$. To determine the resolution, we fitted the linescan in Fig. \[fig:linescan\] using Eq. \[eq:resfit\] and obtained a value for the $1/\mathrm{e}$-width of the PSF of $\omega=122.6\,\mathrm{nm}$. The resolution, according to the Rayleigh criterion, is found to be $d_{Rayleigh}\approx242\,\mathrm{nm}$, which is close to the theoretical value (Eq. \[eq:cfres\]).

Detector {#subsec:detector}
--------

Detectors featuring a high sensitivity for the orientation of the plane of polarization $\theta$ are needed for polarized light microscopy. A useful measure to assess the detector sensitivity is the noise spectral density for the orientation of the plane of polarization $S_{\theta}^{1/2}$, given in $\mathrm{rad}/\sqrt{\mathrm{Hz}}$. Different approaches towards the measurement of the polarization of light have been employed for scanning polarizing microscopy. The most basic one is the use of a polarizer that is rotated close to $90\,^\circ$ relative to the polarization of the beam.
More advanced designs use photoelastic modulators [@Vavassori2000] or Faraday modulators [@Hornauer1990] to modulate the polarization of the light before it is passed through the analyzer. This generates a signal at the second harmonic of the modulation frequency that is proportional to the Kerr rotation and a signal at the modulation frequency that is proportional to the Kerr ellipticity. Cormier *et al.* [@Cormier2008] measured a polarization noise of $0.3\,\mathrm{mdeg}$ for an integration time of $50\,\mathrm{s}$ using polarization modulation. This corresponds to a sensitivity of $3.7\times10^{-5}\,\mathrm{rad}/\sqrt{\mathrm{Hz}}$. Another approach is to use a differential detector [@Kasiraj86]. Flajšman *et al.* [@Flajsman2016] report a sensitivity of $5\times10^{-7}\,\mathrm{rad}$ using a differential detector; however, they do not provide information on the bandwidth of this measurement. Spielman *et al.* [@Spielmann1990] developed a detector based on a Sagnac interferometer. A sensitivity of $1\times10^{-7}\,\mathrm{rad}/\sqrt{\mathrm{Hz}}$ using a Sagnac interferometer was demonstrated by Xia *et al.* [@Xia2006]. We decided to use a differential detector based on the design by Clegg *et al.* [@Clegg1991]. It uses a Wollaston prism to split the incident beam into two beams with orthogonal polarization (Fig. \[fig:detscheme\]a) with intensities $I_1\propto I_0 \sin^2\theta$ and $I_2\propto I_0 \cos^2\theta$, for an incident beam with intensity $I_0$ and linear polarization at an angle $\theta$ relative to the s-polarized beam. The orientation of the polarization, the angle $\theta$, can be extracted from the intensities $I_1$ and $I_2$ $$I_2-I_1 = I_0(\cos^2 \theta-\sin^2 \theta)=I_0 \cos(2\theta)$$ $$\theta = \frac{1}{2}\arccos \left (\frac{I_2-I_1}{I_0} \right )= \frac{1}{2}\arccos \left (\frac{I_2-I_1}{(I_1+I_2)} \right )~.$$ Note that, with this detection mechanism, the orientation of the plane of polarization can be measured independently of the intensity $I_0$. This differential detection scheme is an important feature, because it largely cancels variations in reflectivity that may occur due to the sample topography. ![Selection of the illumination path: a) Scheme of the detector: The beam is split into two orthogonally polarized beams by the Wollaston prism (WP). These beams are detected by two quadrant photo diodes (QPD). b) Simplified lightpath for the read out of one quadrant. Each quadrant corresponds to a different illumination direction. \[fig:detscheme\]](TTRPM_quaddiode_RSI.pdf) The intensity of the two beams is measured using two quadrant photodiodes. Since the microscope objective focusses the beam onto the sample and the quadrant photo diodes are in a conjugate plane to the backfocal plane of the microscope objective, each of the quadrants corresponds to a unique range of angles of incidence, as shown in Fig. \[fig:detscheme\]b. Furthermore, opposite quadrants correspond to opposite angles of incidence. Because the longitudinal MOKE is an odd function of angle of incidence, while the polar MOKE is an even function of angle of incidence, the polar and longitudinal MOKE can be separated by measuring the Kerr rotation individually for every quadrant [@Clegg1991; @Ding2000]. For a quadrant X, the plane of polarization is given by $$\theta_X = \frac{1}{2}\arccos \left (\frac{I_{X2}-I_{X1}}{(I_{X1}+I_{X2})} \right ) ,\, X = \{A,B,C,D\}~.$$ In total, six different signals can be measured simultaneously.
These are the polar Kerr signal $\theta_{K}^{polar}$, which is sensitive to the out-of-plane magnetization, $$\theta_{K}^{polar}= \theta_A + \theta_B + \theta_C + \theta_D~,$$ the longitudinal Kerr signal $\theta_{K,ij}^{long}$, which is sensitive to the magnetization component along the $ij=\{xy,x\bar{y},x,y\}$ direction in the image plane, $$\theta_{K,x\bar{y}}^{long} = \theta_A - \theta_C$$ $$\theta_{K,xy}^{long} = \theta_D - \theta_B$$ $$\theta_{K,y}^{long} = (\theta_C + \theta_D) - (\theta_A + \theta_B)$$ $$\theta_{K,x}^{long} = (\theta_A + \theta_D) - (\theta_B + \theta_C)$$ and the reflected intensity (conventional image) $$I_{tot}= \sum_{X=\{A,B,C,D\}} I_{X1}+I_{X2}~.$$ For reasons of noise suppression and dc error cancellation of the electronics, the measurement is performed in a lock-in configuration where the laser is modulated with a frequency of several kHz and a lock-in amplifier is used to extract the signal. The photocurrent from each quadrant is converted to a voltage using a transimpedance amplifier with a gain of $G_{TI}=2\,\mathrm{MV/A}$. A second programmable gain amplifier (PGA) stage provides additional gain from $G=1$ to $G=8000$. The PGA is ac-coupled by inserting a high-pass filter with a cut-off frequency $f_c=200$Hz at the input of the PGA. The output of the PGAs is recorded with a sampling rate of 2MHz and a resolution of 16bit by a simultaneous sampling data acquisition card. The signal is demodulated using a software-based lock-in algorithm.

Noise Considerations {#subsec:noise}
--------------------

The different noise sources contributing to the overall system noise can be divided into three main parts: laser intensity noise, detector noise and mechanical noise. The laser intensity noise consists of the shot noise and excess noise related to the pump current, mode hopping, thermal fluctuations, etc. The excess noise can be reduced by measuring the output power and using it as feedback to control the pump current, as well as stabilizing the temperature of the laser diode. The shot noise, however, cannot be overcome and is a fundamental limit to the intensity noise. Since the orientation of the plane of polarization is measured using a balanced detector, laser intensity noise is cancelled to a high degree. The detector noise is produced by the electronic components within the detector and does not depend on the signal reaching the detector. The dark noise of one quadrant of the detector, measured at the output of the PGA, is shown by the green curve in Fig. \[fig:detnoise\]a. The dark noise at the output of the digital lock-in amplifier, for a reference frequency of $f_{ref}=10$kHz and an integration time of $t_{int}=1\,\mathrm{ms}$, is shown in red. Using lock-in detection shifts the signal to higher frequencies, where the $1/f$ part of the noise is negligible. The rms noise at the output of the lock-in amplifier is determined by the detection bandwidth, which is controlled by the lock-in integration time. In the case of frequency-independent white noise, it is calculated by dividing the noise spectral density at the PGA output by the square root of the integration time $\Delta P_{rms}=S_P^{1/2}/ \sqrt{t_{int}}$. The noise spectral density $S_P^{1/2}$ for the PGA output is shown in Fig. \[fig:detnoise\]b. Noise below the cut-off frequency ($f_c=200$Hz) of the ac-coupling filter is suppressed and $1/f$ noise is practically eliminated.
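To illustrate the digital part of this processing chain, the following is a minimal sketch of a software lock-in demodulation and of the combination of the demodulated quadrant intensities into the signals defined above; the simple block-averaging filter and all variable names are assumptions for illustration, not the actual implementation.

``` python
import numpy as np

FS = 2e6  # sampling rate of the data acquisition card [Hz]

def lockin(samples, f_ref, t_int=1e-3, fs=FS):
    """Dual-phase software lock-in: mix with the reference and average over t_int."""
    t = np.arange(len(samples)) / fs
    n = int(round(t_int * fs))                 # samples per integration window
    blocks = len(samples) // n
    x = (samples * np.cos(2 * np.pi * f_ref * t))[:blocks * n].reshape(blocks, n)
    y = (samples * np.sin(2 * np.pi * f_ref * t))[:blocks * n].reshape(blocks, n)
    return 2 * np.hypot(x.mean(axis=1), y.mean(axis=1))   # amplitude at f_ref

def kerr_signals(I):
    """Combine demodulated intensities I[X][i] (X in 'ABCD', i in {1, 2})."""
    theta = {X: 0.5 * np.arccos((I[X][2] - I[X][1]) / (I[X][1] + I[X][2])) for X in "ABCD"}
    polar = theta["A"] + theta["B"] + theta["C"] + theta["D"]
    long_xy, long_xbary = theta["D"] - theta["B"], theta["A"] - theta["C"]
    long_y = (theta["C"] + theta["D"]) - (theta["A"] + theta["B"])
    long_x = (theta["A"] + theta["D"]) - (theta["B"] + theta["C"])
    I_tot = sum(I[X][1] + I[X][2] for X in "ABCD")
    return polar, long_x, long_y, long_xy, long_xbary, I_tot
```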
The lock-in detection results in a noise reduction of more than one order of magnitude that can be further increased by increasing the integration time $t_{int}$. An additional noise source is mechanical noise, which is related to perturbations to the imaging path caused by vibrations, noise in the angular position of the FSM, vibrations of the coldfinger, etc. We measured a value of $5\times10^{-6}\,\mathrm{rad}/\sqrt{\mathrm{Hz}}$ for the sensitivity to the orientation of the plane of polarization. ![Noise characteristics of the detector: a) detector noise at the PGA output (green) and after lock-in detection with $f_{ref}=10\,\mathrm{kHz}$, $t_{int}=1\,\mathrm{ms}$ (red). Lock-in detection reduces noise by more than an order of magnitude. b) noise spectral density at the PGA output. $1/f$ noise is eliminated by ac-coupling the PGA.\[fig:detnoise\]](detnoise_2.pdf)

Imaging of electric transport properties {#sec:electrictransport}
========================================

Imaging of electric transport properties using low-temperature scanning electron microscopy (LTSEM) was first demonstrated by Stöhr *et al.* [@Stöhr1979]. In LTSEM, the electron beam generates a local perturbation to the electric transport characteristics of a typically current-biased sample that leads to a global voltage response, which serves as image contrast. A general response theory for this imaging mechanism was developed by Clem *et al.* [@Clem1980]. For a detailed description of the LTSEM technique see the reviews by Huebener [@Huebener1988] and by Gross *et al.* [@Gross1994]. A similar technique using a focussed laser beam to perturb the sample was used by Divin *et al.* [@Divin1991; @Divin1994]. It was shown by Dieckmann *et al.* [@Dieckmann1997] that this technique, low-temperature scanning laser microscopy (LTSLM), delivers results that are equivalent to LTSEM. LTSLM has, among other fields, been applied to research on superconductors [@Sivakov1996; @Werner2013; @WangHB2009; @Benseman2015], the quantum Hall effect [@Shashkin1997], spintronics [@Wagenknecht06; @Werner2011] and spin-caloritronics [@Weiler2012]. ![Principle for LTSLM imaging of local electric transport properties. The laser beam, which is intensity-modulated at the reference frequency of the lock-in amplifier, locally perturbs the electric transport properties of the current-biased sample. The perturbation leads to a change $\Delta V$ in global voltage, which is detected by the lock-in amplifier. \[fig:voltimage\]](TTRPM_volt2.pdf) The LTSPM can be operated in LTSLM mode to gain information on the electric transport properties of a sample. The intensity-modulated laser beam generates a periodic perturbation that leads to a periodic global voltage response $\Delta V$, which can be measured with a lock-in amplifier (Fig. \[fig:voltimage\]). Maps of the voltage response $\Delta V$, the voltage-image, can be acquired by scanning the laser across the sample. Usually, the primary nature of the perturbation is local heating [@Zhuravel2006b], although additional mechanisms such as the photovoltaic effect, photoconductivity or quasiparticle creation in superconductors [@Zhuravel2003] are possible and may need to be considered for the interpretation of these images. The spatial resolution of this technique is governed by the length scale on which the laser-beam-induced perturbation decays. In the case of local heating this is the thermal decay length [@Gross1994] and a typical resolution of a few $\mu$m can be achieved.
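Such a voltage image is built up point by point while the FSM rasters the focussed spot across the sample. The following pseudo-driver sketch illustrates the acquisition loop; the `fsm` and `lockin` objects and their methods are hypothetical placeholders, not the actual control software.

``` python
import numpy as np

def acquire_voltage_image(fsm, lockin, nx=256, ny=256, scan=1.5):
    """Raster the fast steering mirror over its mechanical range [deg] and record
    the demodulated global voltage response Delta-V at each spot position."""
    ax = np.linspace(-scan, scan, nx)
    ay = np.linspace(-scan, scan, ny)
    image = np.zeros((ny, nx))
    for j, y in enumerate(ay):
        for i, x in enumerate(ax):
            fsm.set_angle(x, y)              # steer the laser spot (hypothetical call)
            image[j, i] = lockin.read_r()    # lock-in amplitude of the voltage response
    return image
```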
Examples {#sec:Examples} ======== In this Section, we present a few examples demonstrating the capabilities of the system. We will not give a detailed description of the underlying physics for each of the examples as this would go beyond the scope of this paper. However, we include a short motivation and description for each study. Magnetic domains in Barium Hexaferrite {#subsec:BaFeO} -------------------------------------- Magnetic domains in the basal plane of a barium hexaferrite (BaFe$_{12}$O$_{19}$(0001)) crystal observed with the LTSPM at room-temperature and zero magnetic field are shown in Fig. \[fig:BaFeO\]a) and b). This ferromagnetic material has been extensively studied by magnetic force microscopy and scanning Hall probe microscopy [@Yang2004; @Yang2006; @Yang2011] as a substrate for superconductor-ferromagnet hybrids. The magnetic domains in BaFe$_{12}$O$_{19}$(0001) form a labyrinth pattern with zigzag domain-walls [@Yang2011] in the remanent state. Due to its uniaxial anisotropy, the magnetization within the domains is parallel/anti-parallel to the \[0001\]-direction [@Yang2006]. The width of the Bloch-type domain walls was found to be around 200nm [@Yang2004; @Yang2006]. For a superconductor-ferromagnet hybrid, where a superconducting film is deposited on top of a ferromagnetic substrate, the magnetic landscape within the superconducting film that is generated by the ferromagnetic substrate modifies the nucleation of superconductivity. This can lead to reverse-domain and domain-wall superconductivity, which has been imaged by LTSLM [@Fritzsche06; @Werner11a]. The polar MOKE ($\theta_{K}^{polar}$) image is shown in Fig. \[fig:BaFeO\] a). The zigzag folding of the domain walls between domains with magnetization parallel/anti-parallel to the z-direction can clearly be seen. Fig. \[fig:BaFeO\] b) is acquired at the same time and displays the longitudinal MOKE ( $\theta_{K,y}^{long}$) image, which is sensitive to the magnetization component along the y-direction. No contrast is obtained between adjacent domains, where the magnetization points in the out-of-plane/into-the-plane direction. However, a magneto-optical signal is obtained along the Bloch domain-walls, where the magnetization has an in-plane component along the direction of the domain-wall. Since the displayed longitudinal MOKE signal $\theta_{K,y}^{long}$ is only sensitive to the y-component of the magnetization, the contrast is best if the domain-wall runs in y-direction and is lost if the domain-walls run in x-direction. ![Magnetic domains in BaFe$_{12}$O$_{19}$ observed via the polar MOKE $\theta_{K}^{polar}$ (a) and the longitudinal MOKE $\theta_{K,y}^{long}$ (b) at room-temperature and zero magnetic field.\[fig:BaFeO\]](BaFeO_collage2.pdf) Magnetic flux structures in a superconducting niobium coplanar waveguide resonator {#subsec:Nbresonator} ---------------------------------------------------------------------------------- The performance of superconducting niobium coplanar half-wavelength resonators for hybrid quantum systems in perpendicular magnetic fields is detrimentally affected by the presence of Abrikosov vortices [@Bothner2011; @Bothner12a; @Bothner12b]. The motion of Abrikosov vortices leads to energy dissipation and hence to increased losses that reduce the quality factor of the resonator. The distribution of magnetic flux within the superconducting resonator can be visualized by placing a magneto-optic indicator film (MOIF) [@Goernert2010; @johansen2004] on top of the resonator. 
The Faraday effect in the MOIF leads to a rotation of the plane of polarization that is proportional to the magnetic field at the position of the MOIF. Fig. \[fig:resonator\] a) shows an optical image of a capacitively coupled niobium half-wavelength resonator. The ground planes and the center conductor are the bright parts, while the gaps between them appear dark. We investigated the penetration of magnetic flux into the part of the resonator that is highlighted by the blue square using the LTWPM and a MOIF. The MOIF used for this study consists of a $4.9\,\mu$m thick bismuth-substituted rare-earth (RE) iron garnet, of composition (Bi,RE)$_3$(Fe,Ga)$_5$O$_{12}$, grown by liquid phase epitaxy on a gadolinium gallium garnet (Gd$_3$Ga$_5$O$_{12}$) substrate. The MOIF is covered with a mirror layer and a $4\,\mu$m thick protective layer of diamond-like carbon. Because of this relatively thick protective layer, the MOIF is at a distance from the superconductor at which it is not possible to resolve the stray field of individual vortices. We expect that we will reach single-vortex resolution with improved MOIFs and that the LTSPM can be used to manipulate vortices, as has been demonstrated by Veshchunov *et al.* [@Veshchunov2016]. To obtain calibration data, the magneto-optic response of the MOIF was measured at $10\,\mathrm{K}$, where the niobium film is in the normal state. The resonator was cooled to a temperature of 5.3K in zero magnetic field and a reference image was acquired before the magnetic field was increased. For the subsequent acquisitions, it is possible to convert the raw image data to magnetic field maps by subtracting the reference image and rescaling the pixel values according to the calibration data that has been acquired at $10\,\mathrm{K}$. This was done using the image processing software FIJI [@Schindelin2012]. Fig. \[fig:resonator\] b) shows the magnetic field distribution at an externally applied magnetic field, with an orientation perpendicular to the substrate surface, of $B_\perp=1.4$mT. The magnetic field is focussed through the gap between the center conductor and the ground planes and magnetic flux in the form of Abrikosov vortices has entered the superconducting film. A few locations along the edge of the niobium film are favored for flux entry, indicating a locally reduced surface barrier for flux penetration. The vortices that enter at these sites tend to get pinned at defects in the superconducting film. This results in a highly non-homogeneous distribution of Abrikosov vortices in the niobium film. As the magnetic field is increased to $B_\perp=1.8$mT (Fig. \[fig:resonator\] c)), further flux entry sites appear and additional vortices enter the superconducting film. A comprehensive study of magnetic flux penetration in niobium thin films and the effect of indentations and roughness at the film border has been carried out by Brisbois *et al.* [@Brisbois2016]. ![Optical image of a superconducting niobium half-wavelength resonator (a). The ground plane and center conductor appear bright, the gaps between them appear dark. The blue square indicates the area that has been imaged in (b) and (c) using a MOIF and the LTWPM.
Magnetic field distribution above the resonator at an externally applied field of $B_{\perp}=1.4$mT (b) and $B_{\perp}=1.8$mT (c) at a temperature of 5.3K.\[fig:resonator\]](resonator2.pdf) Twin-walls between ferroelastic domains in SrTiO$_3$ and their effect on electric transport at the SrTiO$_3$/LaAlO$_3$ interface {#subsec:LAOSTO} -------------------------------------------------------------------------------------------------------------------------------- The interface between LaAlO$_3$ (LAO) and SrTiO$_3$ (STO) has attracted considerable interest over the past years. Although both materials are insulating in bulk, the interface between a LAO-layer of at least four unit cells thickness and its TiO$_2$-terminated STO-substrate becomes electrically conducting [@Ohtomo04]. Upon cool-down, STO undergoes a ferroelastic phase transition from cubic to tetragonal at around 105K. The presence of twin-walls between ferroelastic domains in tetragonal STO has an influence on the two-dimensional electron gas at the LAO/STO interface, that has been studied by scanning SQUID microscopy [@Kalisky13], scanning single-electron transistor microscopy [@Honig13] and LTSEM [@Ma2016]. Imaging of twin-walls in ferroelastic STO is also possible by polarized light microscopy [@Erlich2015]. Birefringence in the STO substrate [@Geday2004] leads to a contrast at the twin-walls between adjacent ferroelastic domains. A LAO Hall bar on a (110)-oriented STO substrate, imaged with the LTWPM at a temperature of 107K, is shown in Fig. \[fig:LAOSTO\] a). The long side of the Hall bar is aligned with the \[001\]-axis of the STO substrate. At 107K, the STO is in its cubic phase and no ferroelastic domains can be observed. After cooling the sample through the cubic to tetragonal phase transition of STO, twin-walls between ferroelastic domains in the STO substrate appear, as can be seen for a temperature of 75K in Fig. \[fig:LAOSTO\] b). The twin-walls are oriented at angles of $0\,^{\circ}$, $55\,^{\circ}$ and $125\,^{\circ}$ with the \[001\]-axis of the STO substrate, which corresponds exactly to the expected values calculated in Ref. . The images have been corrected for uneven illumination by using the ’subtract background’ feature in FIJI [@Schindelin2012]. A LTSLM $\Delta V$ image of a Nb/LAO/STO(001) sample at a temperature of 5K is shown in Fig. \[fig:LAOSTO\] c). The LAO-layer is patterned into a similar structure as in (a) and (b), however the substrate has a different orientation. Since LAO and STO are highly transparent for visible light, a niobium layer has been added on top of the sample to absorb the laser beam, resulting in a local perturbation to the sample. A bias current of 11$\mu$A was sent through the two-dimensional electron gas (from I to GND). The voltage response $\Delta V$, due to the perturbation of the laser beam, was measured between V+ and V- using a lock-in amplifier at a reference frequency of 10kHz and with a time constant of 1ms. Twin-walls in the STO-substrate lead to a modification of the LTSLM response and are visible as stripe-like structures at an angle of $90\,^{\circ}$ with the long side of the Hall bar in the LTSLM image. For this substrate, the expected angles of the twin-walls with the long side of the Hall bar are $0\,^{\circ}$, $45\,^{\circ}$, $90\,^{\circ}$ and $135\,^{\circ}$. Similar investigations have been conducted using low-temperature scanning electron microscopy [@Ma2016]. ![LTWPM images of a LAO/STO(110) Hall bar at (a) 107K and (b) 75K. 
Twin walls between ferroelastic domains appear below 105K. (c) LTSLM $\Delta V$ image of a Nb/LAO/STO(001) Hall bar at a bias current of 11$\mu$A and a temperature of 5K.\[fig:LAOSTO\]](LAOSTO.pdf)

Summary {#sec:summary}
=======

                      LTSPM                                        LTWPM
  ------------------- -------------------------------------------- ---------------------------------------------
  resolution          242nm                                        481nm
  sensitivity         $5\times10^{-6}\,\mathrm{rad/\sqrt{Hz}}$     $1.0\times10^{-4}\,\mathrm{rad/\sqrt{Hz}}$
  field of view                                                    
  temperature range   4K to 300K                                   4K to 300K
  $B_\parallel$       up to $\pm$800mT                             up to $\pm$800mT
  $B_\perp$           up to $\pm$20mT                              up to $\pm$20mT

  : system specifications\[tab:specs\]

We have presented a versatile polarizing microscope that offers the possibility to image ferromagnetic, ferroelectric and ferroelastic domains by using either confocal laser scanning (LTSPM) or widefield microscopy (LTWPM). Both imaging modes achieve excellent lateral resolution over a wide field of view. The lateral resolution of the LTSPM of 242nm is close to the resolution limit for imaging with visible light. The instrument is equipped with highly sensitive polarized light detectors that provide a sensitivity of $1.0\times10^{-4}\,\mathrm{rad/\sqrt{Hz}}$ for the LTWPM and $5\times10^{-6}\,\mathrm{rad/\sqrt{Hz}}$ for the LTSPM. A $^4$He continuous flow cryostat enables observations at sample temperatures ranging from 4K to 300K, and magnetic fields with variable orientation can be applied to the sample. The system specifications are summarized in table \[tab:specs\]. We have demonstrated the capability of the microscope to image ferromagnetic domains and domain-walls in BaFe$_{12}$O$_{19}$, ferroelastic domains in SrTiO$_3$, the magnetic field distribution above a superconducting Nb film and electrical transport characteristics of the two-dimensional electron gas at the interface of LaAlO$_3$ and SrTiO$_3$. We expect that the instrument will prove to be useful for investigations of a wide variety of solid-state effects. The combination of magnetic, structural and electric imaging with high lateral resolution and variable temperature has the potential to deliver important insights for research in spintronics, spin caloritronics, superconductivity, magnetism and their hybrid systems. We thank J. Fritzsche for providing the BaFe$_{12}$O$_{19}$ crystal, D. Bothner and B. Ferdinand for providing the Nb resonator, M. Lindner (Innovent e.V. Jena) for providing the MOIF, and H.J.H Ma and A. Stöhr for providing the LAO/STO samples. This work was funded by the Deutsche Forschungsgemeinschaft (DFG) via project No. KO 1303/8-1.
--- abstract: 'In this paper, we study the performance of network–coded cooperative diversity systems with practical communication constraints. More specifically, we investigate the interplay between diversity, coding, and multiplexing gain when the relay nodes do not act as dedicated repeaters, which only forward data packets transmitted by the sources, but they attempt to pursue their own interest by forwarding packets which contain a network–coded version of received and their own data. We provide a very accurate analysis of the Average Bit Error Probability (ABEP) for two network topologies with three and four nodes, when practical communication constraints, *i.e.*, erroneous decoding at the relays and fading over all the wireless links, are taken into account. Furthermore, diversity and coding gain are studied, and advantages and disadvantages of cooperation and binary Network Coding (NC) are highlighted. Our results show that the throughput increase introduced by NC is offset by a loss of diversity and coding gain. It is shown that there is neither a coding nor a diversity gain for the source node when the relays forward a network–coded version of received and their own data. Compared to other results available in the literature, the conclusion is that binary NC seems to be more useful when the relay nodes act only on behalf of the source nodes, and do not mix their own packets to the received ones. Analytical derivation and findings are substantiated through extensive Monte Carlo simulations.' author: - title: 'Diversity, Coding, and Multiplexing Trade–Off of Network–Coded Cooperative Wireless Networks' --- Cooperative/Multi–Hop Networks, Network Coding, Diversity Gain, Coding Gain, Multiplexing, Performance Analysis. Introduction {#Intro} ============ /multi–hop networking has recently emerged as a strong candidate technology for many future wireless applications [@Nosratinia], [@Laneman]. The basic premise of cooperative/multi–hop communications is to achieve and to exploit the benefits of spatial diversity without requiring each mobile node to be equipped with co–located multiple antennas. On the contrary, each mobile node becomes part of a large distributed array and shares its single–antenna (as well as hardware, processing, and energy resources) to help other nodes of the network to achieve better performance/coverage. However, the efficient exploitation of cooperative/multi–hop networking is faced by the following challenges [@Krikidis], [@Zorzi]: i) due to practical considerations, such as the half–duplex constraint or to avoid interference caused by simultaneous transmissions, distributed cooperation needs extra bandwidth resources (*e.g.*, time slots or frequencies), which might result in a loss of system throughput; ii) relay nodes are forced to use their own resources to forward the packets of other nodes, usually without receiving any rewards, except for the fact that the whole system can become more efficient; and iii) in classical cooperative protocols, the relay nodes that perform a retransmission on behalf of other nodes must delay their own frames, which has an impact on the latency of the network. To overcome these limitations, a new technology named Network Coding (NC) has recently been introduced to improve the network performance [@Ahlswede]–[@Katti_PhD]. NC can be broadly defined as an advanced routing or encoding mechanism at the network layer, which allows network nodes not only to forward but also to process incoming data packets. 
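As a toy illustration of this idea (not taken from the paper), binary NC in its simplest form amounts to a bitwise XOR of two packets, so that a single coded transmission can serve two data flows at once:

``` python
import numpy as np

rng = np.random.default_rng(0)
b_S = rng.integers(0, 2, 8)          # packet received from the source S
b_R = rng.integers(0, 2, 8)          # the relay's own packet
coded = np.bitwise_xor(b_S, b_R)     # the relay broadcasts one network-coded packet

# A node that already knows (or has separately decoded) one of the two packets
# recovers the other one by XORing again.
assert np.array_equal(np.bitwise_xor(coded, b_R), b_S)
```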
Different forms of NC exist in the literature, *e.g.*, algebraic NC, physical–layer NC, and Multiple–Input–Multiple–Output (MIMO–) NC, which offer different trade–offs between achievable performance and implementation complexity. The interested reader might consult [@Zorzi] for a recent survey and comparison of these methods. The common feature of all NC approaches is that the network throughput is improved by allowing some network nodes to combine many incoming packets, which, after being mixed, need a single wireless resource (*e.g.*, a time slot or a frequency) for their transmission. Thus, NC is considered a potential and effective enabler to recover the throughput loss experienced by cooperative/multi–hop networking [@Krikidis]. Theory and experiments have shown that network–coded cooperative/multi–hop systems can be extremely useful for wireless networks with disruptive channel and connectivity conditions [@Gerla], [@Katti_PhD]. The performance of cooperative/multi–hop networks has been studied extensively in recent years, see, *e.g.*, [@Ribeiro]–[@MDR_TCOMFeb2010], and many important conclusions have been drawn about the achievable diversity and coding gain over fading channels. On the other hand, the analysis of the performance of cooperative/multi–hop systems with NC remains almost unexplored. More specifically, understanding the interplay between the multiplexing gain introduced by NC and the achievable diversity/coding gain introduced by cooperation is an open and challenging research problem, especially when practical communication constraints (erroneous decoding and fading) are taken into account [@MDR_Springer2010]–[@AlHabian2011]. Some recent results on this matter are [@Cano]–[@Iezzi_TIT2011]. In particular, [@Nasri] and [@Iezzi_GLOBECOM2011] have recently provided an accurate and closed–form analysis of network–coded cooperative/multi–hop systems by estimating both diversity and coding gain with realistic source–to–relay links. These papers have highlighted, for some network topologies and encoding schemes, the potential benefits of NC to recover the throughput loss of cooperative/multi–hop networking. However, the analysis in [@Nasri] and [@Iezzi_GLOBECOM2011] considers the classical scenario where some network nodes (*i.e.*, the relays) operate only on behalf of other network nodes (*i.e.*, the sources) when forwarding data to a given destination. In other words, the relays are dedicated network elements with no data to transmit and, thus, they receive no direct reward from cooperation. In this paper, we are interested in studying the interplay between diversity, coding, and multiplexing gain of network–coded cooperative/multi–hop wireless networks when the relays have their own data packets to be transmitted to a common destination, and exploit NC to transmit them along with the packets that have to be relayed on behalf of the sources. This way, the relays can help the sources without the need to: i) delay the transmission of their own data packets; and ii) use specific resources (energy and processing) to forward the packets of the sources. Thus, NC can potentially avoid throughput and energy loss. However, it is not clear whether performing NC at the relay nodes entails any performance (*i.e.*, diversity or coding gain) loss with respect to classical cooperative diversity.
The main aim of this paper is to shed lights on this matter, and to highlight the fundamental diversity, coding, and multiplexing trade–off with realistic communication constraints and binary NC at the relays. To this end, two network topologies are considered with 3 (1 source, 1 relay, 1 destination) and 4 nodes (1 source, 2 relays, 1 destination), and the end–to–end Average Bit Error Probability (ABEP) over independent but non–identically distributed (i.n.i.d) Rayleigh fading channels is computed in closed–form. Our results highlight that the throughput increase introduced by NC is offset by a loss of the diversity gain. More specifically, it is shown that, when the relays forward a network–coded version of received and their own data packets, there is neither a coding nor a diversity gain for the source. Compared to other results available in the literature [@Nasri], [@Iezzi_GLOBECOM2011], the conclusion is that binary NC seems to be more useful when the relays act on behalf of the sources only, and do not mix their own packets to the received ones. The remainder of this paper is organized as follows. In Section \[SystemModel\], system model and problem statement are summarized. In Section \[Framework\], the analytical framework to compute the ABEP is described. In Section \[Comparison\], the achievable diversity, coding, and multiplexing gain of various schemes with and without NC are analyzed and compared. In Section \[Results\], some numerical results are shown. Finally, Section \[Conclusion\] concludes this paper. ![1–source ($S$), 1–relay ($R$), 1–destination ($D$) network topology. Nodes $S$ and $R$ have data packets to transmit to $D$. $X^{\left( Y \right)} \to Z$ denotes that node $X$ processes/manipulates the data packet of node $Y$ to forward it to node $Z$. Scenarios: (a) non–cooperative; (b) cooperative ($R$ acts as a relay for $S$); and (c) network–coded cooperative ($R$ acts as a relay for $S$ and at the same time transmits its own data to $D$).[]{data-label="Fig_1"}](1_Relay_Net_Scenario_converted.eps){width="0.40\columnwidth"} ![1–source ($S$), 2–relay ($R$ and $T$), 1–destination ($D$) network topology. Nodes $S$, $R$, and $T$ have data packets to transmit to $D$. Notation: i) $X^{\left( Y \right)} \to Z$ denotes that node $X$ processes/manipulates the data packet of node $Y$ to forward it to node $Z$. Scenarios: (a) non–cooperative; (b) cooperative ($R$ and $T$ act as relays for $S$); (c) network–coded cooperative ($R$ and $T$ act as relays for $S$ and at the same time transmit their own data to $D$); (d) hybrid network–coded cooperative ($R$ acts only as a relay for $S$, while $T$ acts as a relay for $S$ and at the same time transmits its own data to $D$).[]{data-label="Fig_2"}](2_Relay_Net_Scenario_converted.eps){width="0.70\columnwidth"} $$\scriptsize \label{Eq_2} \hspace{-0.3cm} \left[ {\hat b_S^{\left( D \right)} ,\hat b_T^{\left( D \right)} } \right] = \mathop {\arg \min }\limits_{\scriptstyle \tilde b_S \in \left\{ {0,1} \right\} \hfill \atop \scriptstyle \tilde b_T \in \left\{ {0,1} \right\} \hfill} \left\{ {\underbrace {\frac{{\left| {y_{SD} - \sqrt {E_m } h_{SD} \left( {1 - 2\tilde b_S } \right)} \right|^2 }}{{N_0 }} + \lambda _R \frac{{\left| {y_{RD} - \sqrt {{{E_m } \mathord{\left/ {\vphantom {{E_m } 2}} \right. 
\kern-\nulldelimiterspace} 2}} h_{RD} \left( {1 - 2\tilde b_S } \right)} \right|^2 }}{{N_0 }} + \lambda _T \frac{{\left| {y_{TD} - \sqrt {E_m } h_{TD} \left[ {1 - 2\left( {\tilde b_S \oplus \tilde b_T } \right)} \right]} \right|^2 }}{{N_0 }}}_{\Lambda \left( {\tilde b_S ,\tilde b_T ;b_S ,b_T ,\hat b_S^{\left( R \right)} ,\hat b_S^{\left( T \right)} } \right)} } \right\} \vspace{-5pt}$$ System Model and Problem Statement {#SystemModel} ================================== We study two cooperative network topologies with three and four nodes, as shown in Fig. \[Fig\_1\] and Fig. \[Fig\_2\], respectively. We consider a Time–Division–Multiple–Access (TDMA) protocol, where all transmissions take place in non–overlapping time–slots ($T_S$ denotes the duration of a time–slot). Also, we assume the half–duplex constraint, *i.e.*, nodes cannot transmit and receive at the same time [@Krikidis]. Furthermore, we analyze the MIMO–NC approach, where network decoding and demodulation at the final destination are jointly performed at the physical layer, which results in a cross–layer decoding algorithm [@Zorzi]. For analytical tractability, we assume that each node uses uncoded Binary Phase Shift Keying (BPSK) modulation. In those scenarios where NC is exploited, we consider binary NC (exclusive OR denoted by $\oplus$) as this provides a low–complexity design of the relays. Each wireless channel is assumed to experience Rayleigh fading. More specifically, the fading coefficient between two generic nodes $X$ and $Y$ is denoted by $h_{XY}$, and it is assumed to be a circular symmetric complex Gaussian Random Variable (RV) with zero mean and variance $\sigma _{XY}^2$ per dimension. Fading over different links is assumed to be i.n.i.d to account for different propagation distances and shadowing effects. The noise at the input of node $Y$ and related to the transmission from node $X$ to node $Y$ is denoted by $n_{XY}$, and it is assumed to be complex Additive White Gaussian (AWG) with variance ${{N_0 } \mathord{\left/{\vphantom {{N_0 } 2}} \right. \kern-\nulldelimiterspace} 2}$ per dimension. Finally, $n_{XY}$ at different time–slots or at the input of different nodes are assumed to be independent and identically distributed (i.i.d.). Problem Statement {#ProblemStatement} ----------------- The main objective of this paper is to understand the performance vs. throughput trade–off provided by NC over fading channels. To be more specific, let us consider the 3–node scenario in Fig. \[Fig\_1\]. Similar comments apply to the 4–node scenario in Fig. \[Fig\_2\]. We have two nodes ($S$ and $R$), which have data to transmit to node $D$. In Scenario (a), both nodes perform their transmission to $D$ in a selfish mode, *i.e.*, no cooperation. In Scenario (b), node $R$ is willing to help node $S$ to forward the overheard packet to node $D$. In this case, node $S$ acts as a “golden user”, and node $R$ delays the transmission of its own data packet to help node $S$ first. In this case, node $S$ can take advantage of cooperation to improve its performance. However, node $R$ has to share its transmission energy with node $S$, and it must delay its own transmission: this is the price of cooperation. In Scenario (c), node $R$ uses NC to avoid the limitations just mentioned. By using NC, node $R$ can avoid to delay its own packet, and it can transmit a coded (XOR) version of overheard packet from node $S$ and its own packet. The gain is twofold: i) no transmission delay; and ii) no need to share transmission energy with node $S$. 
In this case, the overall transmission can be completed in two time–slots rather than in three time–slots as in Scenario (b). Thus the network throughput increases. The fundamental questions we want to address in this paper are: i) *Is there any performance (diversity/coding gain) loss, with respect to selfish and cooperative scenarios, for this throughput gain*?; and ii) *In case of performance loss, is this only due to erroneous decoding at node $R$ or is this related to NC operations too*? Our closed–form asymptotic analysis will provide a clear answer to both questions. Similar questions hold for Fig. \[Fig\_2\] as well, where we can see that, depending on the level of cooperation and NC, the throughput of the network, *i.e.*, the number of time–slots, is different. Due to space limitations, we are unable to provide a step–by–step analysis and derivation for all the scenarios shown in Fig. \[Fig\_1\] and Fig. \[Fig\_2\]. However, the analytical development is very similar for all of them. Thus, for ease of exposition and clarity, we have decided to focus our attention on a scenario only. We have chosen Scenario (d) in Fig. \[Fig\_2\], as it is the most general one. So, in the remainder of this paper only this scenario will be analyzed analytically. However, in Section \[Comparison\] we will summarize the final expression of the ABEP for all the scenarios in Fig. \[Fig\_1\] and Fig. \[Fig\_2\], and we will compare achievable performance and throughput of all of them. Signal Model {#SignalModel} ------------ Let us consider Scenario (d) in Fig. \[Fig\_2\]. During the first time–slot, node $S$ broadcasts a BPSK modulated bit, $x_{S} = \sqrt {E_m } \left( {1 - 2b_{S} } \right)$, where $E_m$ is the average transmitted energy and $b_{S} \in \left\{ {0,1} \right\}$ is the bit emitted by $S$. The signals received at nodes $R$, $T$, and $D$ are given by $y_{S X} = h_{S X } x_{S } + n_{S X }$, where $X=R$, $X=T$, and $X=D$, respectively. Similar to [@Nasri], [@Iezzi_ICC2011], [@Iezzi_GLOBECOM2011], the intermediate nodes $R$ and $T$ demodulate the received bit by using conventional Maximum–Likelihood (ML–) optimum decoding: $$\scriptsize \label{Eq_1} \hat b_S^{\left( X \right)} = \mathop {\arg \min }\limits_{\tilde b_S \in \left\{ {0,1} \right\}} \left\{ {\left| {y_{SX} - \sqrt {E_m } h_{SX} \left( {1 - 2\tilde b_S } \right)} \right|^2 } \right\}$$ where $X=R$ and $X=T$, and $({\hat{\cdot} })$ and $({\tilde {\cdot}})$ denote detected/estimated and trial bit of the hypothesis–detection problem, respectively. $\hat b_S^{\left( X \right)}$ is the estimate of $b_S$ at node $X$. During the second time–slot, node $R$ remodulates and forwards its estimate of $b_S$, *i.e.*, $\hat b_S^{\left( R \right)}$, to node $D$. The transmitted bit is $x_R = \sqrt {{{E_m } \mathord{\left/ {\vphantom {{E_m } 2}} \right. \kern-\nulldelimiterspace} 2}} \left( {1 - 2\hat b_S^{\left( R \right)} } \right)$. Let us note that node $R$ uses only half of its available energy to forward $\hat b_S^{\left( R \right)}$ on behalf of node $S$, as it needs half energy to transmit its own data during the fourth time–slot. This allows us to consider a total energy constraint, and it guarantees a fair comparison among the scenarios. Similar considerations apply to all the scenarios shown in Fig. \[Fig\_1\] and Fig. \[Fig\_2\]. The signal received at node $D$ is $y_{RD} = h_{RD} x_R + n_{RD}$. During the third time–slot, node $T$ performs similar operations as node $R$ in the second time–slot. 
However, node $T$ applies binary NC to avoid to use two time–slots to help nodes $S$ and to transmit its own data. More specifically, the bit transmitted by node $T$ is $x_T = \sqrt {E_m } \left[ {1 - 2\left( {\hat b_S^{\left( T \right)} \oplus b_T } \right)} \right]$, where $b_T$ is the bit that $T$ wants to transmit to node $D$. Unlike node $R$, node $T$ uses full transmission energy, since, with the help of NC, it does not need an extra time–slot to forward its own data. The signal received at $D$ is $y_{TD} = h_{TD} x_T + n_{TD}$. Finally, let us note that the fourth time–slot is not of interest in the detection process, as the bit transmitted in this time–slot is independent of all the others. So, it can be demodulated without considering previous received bits. However, the need of this time–slot to complete the overall communication is important to assess the network throughput of the system. Detection at Node $D$ {#Receiver} --------------------- Upon reception of signals ${y_{SD}}$, ${y_{RD}}$, and ${y_{TD}}$ in time–slot one, two, and three, respectively, node $D$ can perform joint demodulation of $b_S$ and $b_T$. As mentioned above, $b_R$ is treated independently as the related packet is independent of the others. To avoid the analytical intractability and implementation complexity of the ML–optimum demodulator, we consider the sub–optimal, but asymptotically–tight (for high Signal–to–Noise–Ratio, SNR), Cooperative Maximum Ratio Combining (C–MRC) detector shown in (\[Eq\_2\]) on top of this page [@Nasri], [@GiannakisLaneman], where: i) $\lambda _R = {{\min \left\{ {\gamma _{SR} ,\gamma _{RD} } \right\}} \mathord{\left/ {\vphantom {{\min \left\{ {\gamma _{SR} ,\gamma _{RD} } \right\}} {\gamma _{RD} }}} \right. \kern-\nulldelimiterspace} {\gamma _{RD} }}$ and $\lambda _T = {{\min \left\{ {\gamma _{ST} ,\gamma _{TD} } \right\}} \mathord{\left/ {\vphantom {{\min \left\{ {\gamma _{ST} ,\gamma _{TD} } \right\}} {\gamma _{TD} }}} \right. \kern-\nulldelimiterspace} {\gamma _{TD} }}$ account for the reliability of the $S$–to–$R$ and $S$–to–$T$ links, respectively; and ii) $\gamma _{XY} = \left| {h_{XY} } \right|^2 \left( {{{E_m } \mathord{\left/ {\vphantom {{E_m } {N_0 }}} \right. \kern-\nulldelimiterspace} {N_0 }}} \right)$ with $X$ and $Y$ being two generic nodes of the network. The derivation of (\[Eq\_2\]) follows the same arguments as in [@Nasri], [@GiannakisLaneman], and it is here omitted to avoid repetitions. $$\scriptsize \label{Eq_4} \begin{split} & \hspace{-0.5cm} {\rm{APEP}}\left( {{\bf{c}} \to {\bf{\tilde c}}} \right) = \Pr \left\{ {\Lambda _{{\bf{\tilde c}}} < \Lambda _{\bf{c}} } \right\} = \Pr \left\{ {\Delta _{{\bf{c}},{\bf{\tilde c}}} = \Lambda _{{\bf{\tilde c}}} - \Lambda _{\bf{c}} < 0} \right\} \\ & \hspace{-0.5cm} \mathop = \limits^{\left( a \right)} {\rm{E}}_{h_{SR} ,h_{ST} } \left\{ {\Pr \left\{ {\left. {\Delta _{{\bf{c}},{\bf{\tilde c}}} < 0 } \right|\hat b_S^{\left( R \right)} = b_S ,\hat b_S^{\left( T \right)} = b_S } \right\}\Pr \left\{ {\hat b_S^{\left( R \right)} = b_S ,\hat b_S^{\left( T \right)} = b_S } \right\}} \right\} + {\rm{E}}_{h_{SR} ,h_{ST} } \left\{ {\Pr \left\{ {\left. {\Delta _{{\bf{c}},{\bf{\tilde c}}} < 0 } \right|\hat b_S^{\left( R \right)} = b_S ,\hat b_S^{\left( T \right)} \ne b_S } \right\}\Pr \left\{ {\hat b_S^{\left( R \right)} = b_S ,\hat b_S^{\left( T \right)} \ne b_S } \right\}} \right\} \\ & \hspace{-0.5cm} + {\rm{E}}_{h_{SR} ,h_{ST} } \left\{ {\Pr \left\{ {\left. 
{\Delta _{{\bf{c}},{\bf{\tilde c}}} < 0 } \right|\hat b_S^{\left( R \right)} \ne b_S ,\hat b_S^{\left( T \right)} = b_S } \right\}\Pr \left\{ {\hat b_S^{\left( R \right)} \ne b_S ,\hat b_S^{\left( T \right)} = b_S } \right\}} \right\} + {\rm{E}}_{h_{SR} ,h_{ST} } \left\{ {\Pr \left\{ {\left. {\Delta _{{\bf{c}},{\bf{\tilde c}}} < 0 } \right|\hat b_S^{\left( R \right)} \ne b_S ,\hat b_S^{\left( T \right)} \ne b_S } \right\}\Pr \left\{ {\hat b_S^{\left( R \right)} \ne b_S ,\hat b_S^{\left( T \right)} \ne b_S } \right\}} \right\} \\ \end{split}$$ Performance Analysis {#Framework} ==================== The aim of this section is to estimate the performance of the detector in (\[Eq\_2\]), by providing a closed–form expression of the ABEP for high–SNR. The ABEP of node $S$ and node $T$, *i.e.*[^1], ${\rm{ABEP}}_S = \Pr \left\{ {b_S \ne \hat b_S^{\left( D \right)} } \right\}$ and ${\rm{ABEP}}_T = \Pr \left\{ {b_T \ne \hat b_T^{\left( D \right)} } \right\}$, respectively, can be computed by using the methodology described in [@Iezzi_ICC2011 Sec. IV]. In particular, we have: $$\scriptsize \label{Eq_3} {\rm{ABEP}}_X \le \frac{1}{{{\rm{card}}\left\{ \mathcal{C} \right\}}}\sum\limits_{b_S = 0}^1 {\sum\limits_{\tilde b_S = 0}^1 {\sum\limits_{b_T = 0}^1 {\sum\limits_{\tilde b_T = 0}^1 {{\rm{APEP}}_X \left( {{\bf{c}} \to {\bf{\tilde c}}} \right)} } } }$$ where: i) ${\rm{APEP}}_X \left( {{\bf{c}} \to {\bf{\tilde c}}} \right){\rm{ = APEP}}\left( {{\bf{c}} \to {\bf{\tilde c}}} \right)\bar \Delta \left( {b_X , {\tilde b}_X } \right)$; ii) $\mathcal{C} = \left\{ {000,010,111,101} \right\}$ is the codebook of Scenario (d) in Fig. \[Fig\_2\], which takes into account forwarding and NC operations performed at nodes $R$ and $T$. The generic element of $\mathcal{C}$ is ${\bf{c}} = \left[ {b_S ,b_S ,b_S \oplus b_T } \right]$; iii) ${\rm{card}}\left\{ \mathcal{C} \right\} = 4$ is the cardinality of $\mathcal{C}$, *i.e.*, the number of codewords ${\bf{c}}$ in $\mathcal{C}$; iv) ${\rm{APEP}}\left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$ is the Average Pairwise Error Probability (APEP) of the generic pair of codewords ${\bf{c}} = \left[ {c_1 ,c_2 ,c_3 } \right] = \left[ {b_S ,b_S ,b_S \oplus b_T } \right]$ and ${\bf{\tilde c}} = \left[ {\tilde c_1 ,\tilde c_2 ,\tilde c_3 } \right] = \left[ {\tilde b_S ,\tilde b_S ,\tilde b_S \oplus \tilde b_T } \right]$ of the codebook, *i.e.*, the probability of estimating ${\bf{\tilde c}}$ in (\[Eq\_2\]), when, instead, ${\bf{c}}$ has actually been transmitted, and ${\bf{c}}$ and ${\bf{\tilde c}}$ are the only two codewords possibly being transmitted; and v) $\bar \Delta \left(b_X ,\tilde b_X \right) = 1 - \Delta \left( b_X ,\tilde b_X \right)$, where $\Delta \left( {\cdot,\cdot} \right)$ is the Kronecker delta function, *i.e.*, $\Delta \left( {b_X ,\tilde b_X } \right) = 1$ if $b_X = \tilde b_X$ and $\Delta \left( {b_X ,\tilde b_X } \right) = 0$ if $b_X \ne \tilde b_X$. This function is used to include in the computation of ${\rm{ABEP}}_X$ only those APEPs which result in an error for the information bit of interest, *i.e.*, $X=S$ or $X=T$ [@Iezzi_ICC2011]. Computation of ${\rm{APEP}}\left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$ {#APEP} ------------------------------------------------------------------------- From (\[Eq\_3\]), it follows that that ABEP can be estimated if ${\rm{APEP}}\left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$ is available in closed–form, where the average is over fading channel statistics and AWGN. 
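Purely as an illustration of how (\[Eq\_3\]) assembles the pairwise terms (a sketch, not part of the original derivation), the union bound can be evaluated in a few lines once a routine for the pairwise error probability is available; here `apep` is a hypothetical placeholder for the closed–form ${\rm{APEP}}\left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$ derived next:

```python
from itertools import product

def abep_union_bound(apep, node="S"):
    """Union bound (Eq. 3) on ABEP_S or ABEP_T for Scenario (d) of Fig. 2.

    `apep(c, c_tilde)` is a placeholder for the closed-form pairwise error
    probability APEP(c -> c_tilde) derived in this section.
    """
    total = 0.0
    for b_S, b_T, tb_S, tb_T in product((0, 1), repeat=4):
        c = (b_S, b_S, b_S ^ b_T)        # transmitted codeword of the codebook C
        ct = (tb_S, tb_S, tb_S ^ tb_T)   # competing (trial) codeword
        b, tb = (b_S, tb_S) if node == "S" else (b_T, tb_T)
        if b != tb:                      # Kronecker-delta filter: keep only pairs
            total += apep(c, ct)         # that flip the information bit of interest
    return total / 4.0                   # card{C} = 4
```

Only sixteen pairs are enumerated, so the cost of the bound is entirely in the pairwise terms themselves.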
In this section, we compute an asymptotically–tight formula for ${\rm{APEP}}\left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$, which is accurate for high–SNR. From (\[Eq\_2\]), by definition, we have (\[Eq\_4\]) on top of this page, where: i) $\Lambda _{\bf{c}} = \Lambda \left( {b_S ,b_T ;b_S ,b_T ,\hat b_S^{\left( R \right)} ,\hat b_S^{\left( T \right)} } \right)$ and $\Lambda _{{\bf{\tilde c}}} = \Lambda \left( {\tilde b_S ,\tilde b_T ;b_S ,b_T ,\hat b_S^{\left( R \right)} ,\hat b_S^{\left( T \right)} } \right)$; ii) ${\rm{E}}_X \left\{ \cdot \right\}$ is the expectation operator computed over RV $X$; and iii) $\mathop = \limits^{\left( a \right)}$ is obtained by using the total probability theorem and by conditioning upon possible decoding errors at nodes $R$ and $T$ [@Proakis]. Since demodulation outcomes at node $R$ and $T$ are independent, we have: i) [$\Pr \left\{ {\hat b_S^{\left( R \right)} = b_S ,\hat b_S^{\left( T \right)} = b_S } \right\} = \left[ {1 - Q\left( {\sqrt {2\gamma _{SR} } } \right)} \right]\left[ {1 - Q\left( {\sqrt {2\gamma _{ST} } } \right)} \right]$]{}; ii) [$\Pr \left\{ {\hat b_S^{\left( R \right)} = b_S ,\hat b_S^{\left( T \right)} \ne b_S } \right\} = \left[ {1 - Q\left( {\sqrt {2\gamma _{SR} } } \right)} \right]Q\left( {\sqrt {2\gamma _{ST} } } \right)$]{}; iii) [$\Pr \left\{ {\hat b_S^{\left( R \right)} \ne b_S ,\hat b_S^{\left( T \right)} = b_S } \right\} = Q\left( {\sqrt {2\gamma _{SR} } } \right)\left[ {1 - Q\left( {\sqrt {2\gamma _{ST} } } \right)} \right]$]{}; and iv) [$\Pr \left\{ {\hat b_S^{\left( R \right)} \ne b_S ,\hat b_S^{\left( T \right)} \ne b_S } \right\} = Q\left( {\sqrt {2\gamma _{SR} } } \right)Q\left( {\sqrt {2\gamma _{ST} } } \right)$]{}, where $Q\left( x \right) = \left( {{1 \mathord{\left/ {\vphantom {1 {\sqrt {2\pi } }}} \right. \kern-\nulldelimiterspace} {\sqrt {2\pi } }}} \right)\int_x^{ + \infty } {\exp \left( { - {{t^2 } \mathord{\left/ {\vphantom {{t^2 } 2}} \right. \kern-\nulldelimiterspace} 2}} \right)dt}$ is the Q–function and these probabilities are due to using BPSK modulation [@Proakis]. From these expressions, it follows that conditioning upon decoding errors at node $R$ and node $T$ implies conditioning upon the fading channel gains $h_{SR}$ and $h_{ST}$. This explains the presence of the expectations in (\[Eq\_4\]). $$\scriptsize \label{Eq_7} \begin{split} {\rm{APEP}}^{\left( {\rm{4}} \right)} \left( {{\bf{c}} \to {\bf{\tilde c}}} \right) &= \frac{1}{{2\pi j}}\int\nolimits_{\delta - j\infty }^{\delta + j\infty } {{\rm{E}}_{\left\{ {h_{XY} } \right\},\left\{ {n_{XY} } \right\}} \left\{ {\exp \left[ { - s\mathcal{F}\left( {\left\{ {h_{XY} } \right\},\left\{ {\bar n_{XY} } \right\}} \right)} \right]Q\left( {\sqrt {2\gamma _{SR} } } \right)Q\left( {\sqrt {2\gamma _{ST} } } \right)} \right\}\frac{{ds}}{s}} \\ &\mathop = \limits^{\left( a \right)} \frac{1}{{2\pi ^3 j}}\int\nolimits_{\delta - j\infty }^{\delta + j\infty } {\int\nolimits_{\rm{0}}^{{\pi \mathord{\left/ {\vphantom {\pi {\rm{2}}}} \right. \kern-\nulldelimiterspace} {\rm{2}}}} {\int\nolimits_{\rm{0}}^{{\pi \mathord{\left/ {\vphantom {\pi {\rm{2}}}} \right. 
\kern-\nulldelimiterspace} {\rm{2}}}} {{\rm{E}}_{\left\{ {h_{XY} } \right\},\left\{ {n_{XY} } \right\}} \left\{ {\exp \left[ { - s\mathcal{F}\left( {\left\{ {h_{XY} } \right\},\left\{ {\bar n_{XY} } \right\}} \right)} \right]\exp \left( { - \frac{{\gamma _{SR} }}{{\sin ^2 \left( {\theta _1 } \right)}}} \right)\exp \left( { - \frac{{\gamma _{ST} }}{{\sin ^2 \left( {\theta _2 } \right)}}} \right)} \right\}d\theta _1 d\theta _2 \frac{{ds}}{s}} } } \\ &\mathop = \limits^{\left( b \right)} \frac{1}{{2\pi ^3 j}}\int\nolimits_{\delta - j\infty }^{\delta + j\infty } {\int\nolimits_{\rm{0}}^{{\pi \mathord{\left/ {\vphantom {\pi {\rm{2}}}} \right. \kern-\nulldelimiterspace} {\rm{2}}}} {\int\nolimits_{\rm{0}}^{{\pi \mathord{\left/ {\vphantom {\pi {\rm{2}}}} \right. \kern-\nulldelimiterspace} {\rm{2}}}} {\mathcal{G}\left( {s,\theta _1 ,\theta _2 } \right)d\theta _1 d\theta _2 \frac{{ds}}{s}} } } \mathop = \limits^{\left( c \right)} \frac{1}{{2\pi ^3 j}}\int\nolimits_{\delta - j\infty }^{\delta + j\infty } {\Psi _0 \left( s \right)\left( {\int\nolimits_{\rm{0}}^{{\pi \mathord{\left/ {\vphantom {\pi {\rm{2}}}} \right. \kern-\nulldelimiterspace} {\rm{2}}}} {\Psi _1 \left( {s,\theta _1 } \right)d\theta _1 } } \right)\left( {\int\nolimits_{\rm{0}}^{{\pi \mathord{\left/ {\vphantom {\pi {\rm{2}}}} \right. \kern-\nulldelimiterspace} {\rm{2}}}} {\Psi _2 \left( {s,\theta _2 } \right)d\theta _2 } } \right)\frac{{ds}}{s}} \\ \end{split} \vspace*{-10pt}$$ $$\scriptsize \label{Eq_8} \mathcal{F}\left( {\left\{ {\gamma_{XY} } \right\},\left\{ {\bar n_{XY} } \right\}} \right) = \gamma _{SD} d_S^2 + 2\sqrt {\gamma _{SD} } d_S {\mathop{\rm Re}\nolimits} \left\{ {\bar n_{SD}^ * } \right\} + \lambda _R \left( {\gamma _{RD} \hat d_R^{\left( {{\rm{nok}}} \right)} + 2\sqrt {\gamma _{RD} } d_R {\mathop{\rm Re}\nolimits} \left\{ {\bar n_{RD}^ * } \right\}} \right) + \lambda _T \left( {\gamma _{TD} \hat d_T^{\left( {{\rm{nok}}} \right)} + 2\sqrt {\gamma _{TD} } d_T {\mathop{\rm Re}\nolimits} \left\{ {\bar n_{TD}^ * } \right\}} \right) \vspace*{-10pt}$$ $$\scriptsize \label{Eq_9} \begin{split} \mathcal{G}\left( {s,\theta _1 ,\theta _2 } \right) &= {\rm{E}}_{\gamma _{SD} } \left\{ {\exp \left( { - s\gamma _{SD} d_S^2 + s^2 \gamma _{SD} d_S^2 } \right)} \right\}{\rm{E}}_{\gamma _{SR} ,\gamma _{RD} } \left\{ {\exp \left( { - \frac{{\gamma _{SR} }}{{\sin ^2 \left( {\theta _1 } \right)}} - s\min \left\{ {\gamma _{SR} ,\gamma _{RD} } \right\}\hat d_R^{\left( {{\rm{nok}}} \right)} + s^2 \frac{{\min \left\{ {\gamma _{SR} ,\gamma _{RD} } \right\}}}{{\gamma _{RD} }}d_R^2 } \right)} \right\} \\ &\times {\rm{E}}_{\gamma _{ST} ,\gamma _{TD} } \left\{ {\exp \left( { - \frac{{\gamma _{ST} }}{{\sin ^2 \left( {\theta _2 } \right)}} - s\min \left\{ {\gamma _{ST} ,\gamma _{TD} } \right\}\hat d_T^{\left( {{\rm{nok}}} \right)} + s^2 \frac{{\min \left\{ {\gamma _{ST} ,\gamma _{TD} } \right\}}}{{\gamma _{TD} }}d_T^2 } \right)} \right\} \\ \end{split} \vspace*{-10pt}$$ $$\scriptsize \label{Eq_10} \Psi _0 \left( s \right) = \begin{cases} \left[ {\bar \gamma _{SD} d_S^2 s\left( {1 - s} \right)} \right]^{ - 1} \hspace{-0.25cm} & d_S \ne 0 \\ 1 \hspace{-0.25cm} & d_S = 0 \\ \end{cases}, \, \Psi _1 \left( {s,\theta _1 } \right) = \begin{cases} \left[ {\bar \gamma _{SR} \left( {s\hat d_R^{\left( {{\rm{nok}}} \right)} + \sin ^{ - 2} \left( {\theta _1 } \right)} \right)} \right]^{ - 1} \hspace{-0.25cm} & d_R \ne 0 \\ 0 \hspace{-0.25cm} & d_R = 0 \\ \end{cases}, \, \Psi _2 \left( {s,\theta _2 } \right) = \begin{cases} \left[ {\bar \gamma _{ST} \left( {s\hat 
d_T^{\left( {{\rm{nok}}} \right)} + \sin ^{ - 2} \left( {\theta _2 } \right)} \right)} \right]^{ - 1} \hspace{-0.25cm} & d_T \ne 0 \\ 0 \hspace{-0.25cm} & d_T = 0 \\ \end{cases} \vspace*{-5pt}$$ $$\scriptsize \label{Eq_12} \mathcal{I}_4 \left( {\hat d_R^{\left( {{\rm{nok}}} \right)} ,\hat d_T^{\left( {{\rm{nok}}} \right)} } \right) = \frac{1}{{2\pi j}}\int\nolimits_{\delta - j\infty }^{\delta + j\infty } {\frac{1}{{s^4 \left( {1 - s} \right)}}\left[ {1 - \left( {1 + s\hat d_R^{\left( {{\rm{nok}}} \right)} } \right)^{ - {1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}} } \right]\left[ {1 - \left( {1 + s\hat d_T^{\left( {{\rm{nok}}} \right)} } \right)^{ - {1 \mathord{\left/ {\vphantom {1 2}} \right. \kern-\nulldelimiterspace} 2}} } \right]ds} \vspace*{-5pt}$$ The next step is the computation of each conditional probability $\Pr \left\{ {\left. {\Delta _{{\bf{c}},{\bf{\tilde c}}} } < 0 \right|\left( \cdot \right)} \right\}$. To this end, a closed–form expression of ${\Delta _{{\bf{c}},{\bf{\tilde c}}} }$ is needed. This can be obtained by substituting $y_{SD}$, $y_{RD}$, and $y_{TD}$ in (\[Eq\_2\]), and through some algebraic manipulations. The final result is as follows: $$\scriptsize \label{Eq_5} \begin{split} {\Delta _{{\bf{c}},{\bf{\tilde c}}} } &= \gamma _{SD} d_S^2 + 2\sqrt {\gamma _{SD} } d_S {\mathop{\rm Re}\nolimits} \left\{ {\bar n_{SD}^ * } \right\} \\ &+ \lambda _R \left( {\gamma _{RD} \hat d_R + 2\sqrt {\gamma _{RD} } d_R {\mathop{\rm Re}\nolimits} \left\{ {\bar n_{RD}^ * } \right\}} \right) \\ &+ \lambda _T \left( {\gamma _{TD} \hat d_T + 2\sqrt {\gamma _{TD} } d_T {\mathop{\rm Re}\nolimits} \left\{ {\bar n_{TD}^ * } \right\}} \right) \\ \end{split}$$ where: i) ${\mathop{\rm Re}\nolimits} \left\{ \cdot \right\}$ is the real part operator; ii) $\left( \cdot \right)^ *$ denotes complex conjugate; iii) $j = \sqrt { - 1}$ is the imaginary unit; iv) $\phi _{XY}$ is the phase of the generic fading gain $h_{XY}$, *i.e.*, $h_{XY} = \left| {h_{XY} } \right|\exp \left( {j\phi _{XY} } \right)$; v) $\bar n_{XY}^ * = {{n_{XY}^ * \phi _{XY} } \mathord{\left/ {\vphantom {{n_{XY}^ * \phi _{XY} } {\sqrt {N_0 } }}} \right. \kern-\nulldelimiterspace} {\sqrt {N_0 } }}$ is the normalized AWGN for the generic $X$–to–$Y$ link, which has zero mean and unit variance; vi) $d_S = 2\left( {\tilde c_1 - c_1 } \right) = 2\left( {\tilde b_S - b_S } \right)$, $d_R = 2\left( {\tilde c_2 - c_2 } \right) = 2\left( {\tilde b_S - b_S } \right)$, $d_T = 2\left( {\tilde c_3 - c_3 } \right) = 2\left[ {\left( {\tilde b_S \oplus \tilde b_T } \right) - \left( {b_S \oplus b_T } \right)} \right]$; and vii) $\hat d_R = 2\left( {1 - 2\hat b_S^{\left( R \right)} } \right)d_R$, $\hat d_T = 2\left[ {1 - 2\left( {\hat b_S^{\left( T \right)} \oplus b_T } \right)} \right]d_T$. Finally, it is worth noticing that the expression given in (\[Eq\_5\]) is useful whichever the conditioning on the bits estimated at node $R$ and node $T$ are. Only $\hat d_R$ and $\hat d_T$ change for different detection outcomes. To make this aspect more explicit, we use the notation ($X=R$, $X=T$): i) $\hat d_X = \hat d_X^{\left( {{\rm{ok}}} \right)}$ if $\hat b_S^{\left( X \right)} = b_S$; and ii) $\hat d_X = \hat d_X^{\left( {{\rm{nok}}} \right)}$ if $\hat b_S^{\left( X \right)} \ne b_S$. To compute $\Pr \left\{ {\left. {\Delta _{{\bf{c}},{\bf{\tilde c}}} < 0} \right|\left( \cdot \right)} \right\}$, we exploit the Laplace inversion transform method in [@Biglieri Eq. (5)]: $$\scriptsize \label{Eq_6} \Pr \left\{ {\left. 
{\Delta _{{\bf{c}},{\bf{\tilde c}}} < 0} \right|\left( \cdot \right)} \right\} = \frac{1}{{2\pi j}}\int\nolimits_{\delta - j\infty }^{\delta + j\infty } {\frac{{\mathcal{M}_{\Delta _{{\bf{c}},{\bf{\tilde c}}} } \left( {\left. s \right|\left( \cdot \right)} \right)}}{s}ds}$$ with: i) $\mathcal{M}_{\Delta _{{\bf{c}},{\bf{\tilde c}}} } \left( {\left. s \right|\left( \cdot \right)} \right) = {\rm{E}}_{\left\{ {h_{XD} } \right\},\left\{ {n_{XD} } \right\}} \left\{ {\left. {\exp \left( { - s\Delta _{{\bf{c}},{\bf{\tilde c}}} } \right)} \right|\left( \cdot \right)} \right\}$ being the (two–sided) Moment Generating Function (MGF) of the conditional RV ${\Delta _{{\bf{c}},{\bf{\tilde c}}} }$. The average is computed over fading gains and AWGN of all the links $X$–to–$D$ for $X = \left\{ {S,R,T} \right\}$; and ii) $\delta$ being a real number such that the contour path of integration is in the region of convergence of ${\mathcal{M}_{\Delta _{{\bf{c}},{\bf{\tilde c}}} } \left( {\left. \cdot \right| \cdot } \right)}$. Then, ${\rm{APEP}}\left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$ can be obtained by substituting (\[Eq\_6\]) in (\[Eq\_4\]), by computing the expectation over fading statistics, AWGN, and by solving the inverse Laplace transform. In particular, since in this paper we are interested in high–SNR analysis, *i.e.*, ${{E_m } \mathord{\left/ {\vphantom {{E_m } {N_0 \to \infty }}} \right. \kern-\nulldelimiterspace} {N_0 \to \infty }}$, an asymptotic expression of the MGF in (\[Eq\_6\]) is needed [@Biglieri Eq. (12)]. Due to space constraints, in this paper we cannot provide all the details of the derivation. As an illustrative example, we provide a brief description of the main steps behind the computation of one addend in (\[Eq\_4\]). In particular, we focus our attention on the fourth addend in (\[Eq\_4\]), which is denoted by ${\rm{APEP}}^{\left( {\rm{4}} \right)} \left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$. The reason is that this term is the most complicated to be computed. ${\rm{APEP}}^{\left( {\rm{4}} \right)} \left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$ in (\[Eq\_4\]) can be written as shown in (\[Eq\_7\]) on top of the next page, where: i) $\mathcal{F}\left( \cdot, \cdot \right)$ is defined in (\[Eq\_8\]) on top of the next page; ii) $\mathop = \limits^{\left( a \right)}$ is obtained by using the Craig’s representation of the Q–function [@Simon]; iii) $\mathop = \limits^{\left( b \right)}$ is obtained by averaging over the AWGN with $\mathcal{G}\left( {\cdot,\cdot,\cdot} \right)$ being defined in (\[Eq\_9\]) on top of the next page; and iv) $\mathop = \limits^{\left( c \right)}$ is obtained by averaging over channel fading and using some simplifications that hold for high–SNR. In particular, ${\Psi _0 \left( \cdot \right)}$, ${\Psi _1 \left( {\cdot,\cdot } \right)}$, and ${\Psi _2 \left( {\cdot,\cdot } \right)}$ are defined in (\[Eq\_10\]) on top of the next page, where $\bar \gamma _{XY} = 2\sigma _{XY}^2 \left( {E_m /N_0 } \right)$ for the generic pair of nodes $X$ and $Y$. Note that, for $X=R$ and $X=T$, $d_X \ne 0 \Leftrightarrow \hat d_X \ne 0$. Let us consider the most general case with $d_S \ne 0$, $d_R \ne 0$, and $d_T \ne 0$. Both integrals in the brackets in (\[Eq\_10\]) can be computed in closed–form with the help of [@Simon Eq. (5A.9)]. 
Thus, ${\rm{APEP}}^{\left( {\rm{4}} \right)} \left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$ simplifies as follows: $$\scriptsize \label{Eq_11} {\rm{APEP}}^{\left( {\rm{4}} \right)} \left( {{\bf{c}} \to {\bf{\tilde c}}} \right) = \frac{{\mathcal{I}_4 \left( {\hat d_R^{\left( {{\rm{nok}}} \right)} ,\hat d_T^{\left( {{\rm{nok}}} \right)} } \right)}}{{4\bar \gamma _{SD} \bar \gamma _{SR} \bar \gamma _{ST} d_S^2 \hat d_R^{\left( {{\rm{nok}}} \right)} \hat d_T^{\left( {{\rm{nok}}} \right)} }}$$ where $\mathcal{I}_4 \left( { \cdot , \cdot } \right)$ is defined in (\[Eq\_12\]) on top of the next page. Some important considerations are worth being made about ${\rm{APEP}}^{\left( {\rm{4}} \right)} \left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$ in (\[Eq\_11\]). First, we notice that the asymptotic behavior of the APEP is clearly shown, and, for the considered case study, a diversity order equal to three is obtained [@Ribeiro]. Second, the integral $\mathcal{I}_4 \left( { \cdot , \cdot } \right)$ can be computed, either analytically or numerically, by using one of the many methods described in [@Biglieri]. Finally, we would like to mention that the case study investigated in this section, *i.e.*, ${\rm{APEP}}^{\left( {\rm{4}} \right)} \left( {{\bf{c}} \to {\bf{\tilde c}}} \right)$, is the most complicated addend, as it is the only term involving the product of two Q–functions. All the other cases are much simpler to be computed, and all integrals similar to $\mathcal{I}_4 \left( { \cdot , \cdot } \right)$ in (\[Eq\_12\]) can be computed in closed–form by using the method of residues [@Biglieri Eq. (6)]. The details of the derivation are omitted, but final results are summarized and discussed in Section \[Comparison\]. Performance Comparison: Is NC Useful? {#Comparison} ===================================== The aim of this section is to compare the performance of the different scenarios and network topologies shown in Fig. \[Fig\_1\] and Fig. \[Fig\_2\]. For all cases of interest, the methodology described in Section \[Framework\] is used to compute the ABEP. In particular, (\[Eq\_3\]) is applied for all possible codewords of the codebook. The final results are summarized in Table \[Tab\_1\], by assuming, for a fair comparison, the total energy constraint mentioned in Section \[SignalModel\]. Furthermore, since we are interested in high–SNR analysis, Table \[Tab\_1\] shows only the dominant terms in (\[Eq\_3\]), *i.e.*, those APEPs having the slowest decaying behavior as a function of ${{E_m } \mathord{\left/ {\vphantom {{E_m } {N_0 \to \infty }}} \right. \kern-\nulldelimiterspace} {N_0 \to \infty }}$ [@Iezzi_ICC2011]. In fact, these terms determine both diversity and coding gain. The accuracy of the frameworks shown in Table \[Tab\_1\] is validated in Section \[Results\] through Monte Carlo simulations. Important considerations can be drawn from our analysis. Let us consider the 3–node network topology. The ABEP of Scenario (b) shows that node $S$ can exploit distributed diversity to improve the diversity gain, but the price to pay is a performance degradation for node $R$, whose ABEP is worse than in the non–cooperative case, *i.e.*, Scenario (a). Very interestingly, we notice that the network–coded scenario, *i.e.*, Scenario (c), is the worst one in terms of performance. Node $S$ has no gain from cooperation, and the diversity order is equal to one. Furthermore, and very surprisingly, node $S$ has the same ABEP as in the non–cooperative case. 
In other words, there is neither power nor diversity gain. As far as node $R$ is concerned, the situation is even worse: the ABEP is worse than the non–cooperative case. Also, we notice that this performance penalty depends only in part on decoding errors on the $S$–to–$R$ link. In fact, even assuming $\bar \gamma _{SR} \to \infty$, *i.e.*, no decoding errors at node $R$, the ABEP is worse because of performing NC. In conclusion, unlike [@Nasri], [@Iezzi_ICC2011]–[@Iezzi_TIT2011] where it shown that NC is beneficial in cooperative networks when some nodes act only as relays and have no data to transmit, Table \[Tab\_1\] points out that, if the relay nodes have their own data to transmit, NC introduces no gain when compared to the non–cooperative scenario, and, in some cases, NC might also be harmful. To the best of the authors knowledge, this important behavior has never been reported in the open technical literature [@Zorzi]. Similar comments apply to the 4–node network topology. In particular, we notice that node $S$ has a diversity order that depends on the number of relay nodes that do not perform NC but just forward the received packets. Finally, we would like to emphasize that, unlike state–of–the–art performance analysis of cooperative networks (see [@Cano], [@Nasri], [@Iezzi_GLOBECOM2011] for further comments), our analysis encompasses a very accurate estimation of the coding gain. This is instrumental to clearly assess diversity and coding trade–off summarized in Table \[Tab\_1\]. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ${\rm{ABEP}}_S$ ${\rm{ABEP}}_R$ ${\rm{ABEP}}_T$ -------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------------------------------------------------------------------------------------ ------------------------------------------------------------------------------------------------------------------------------------------------ 3–Node Network (a) $\left( {1/4} \right)\bar \gamma _{SD}^{ - 1}$ $\left( {1/4} \right)\bar \gamma _{RD}^{ - 1}$ – 3–Node Network (b) $\left( {3/8} \right)\bar \gamma _{SD}^{ - 1} \bar \gamma _{RD}^{ - 1} + \left[ {\left( {45 + \sqrt 5 } \right)/160} \right]\bar \gamma_{SD}^{ - 1} \bar \gamma _{SR}^{ - 1}$ $\left( {1/2} \right)\bar \gamma _{RD}^{ - 1}$ – 3–Node Network (c) $\left( {1/4} \right)\bar \gamma _{SD}^{ - 1}$ $\left( {1/4} \right)\bar \gamma _{SD}^{ - 1} + \left( {1/4} \right)\bar \gamma _{SR}^{ - 1} + \left( {1/4} \right)\bar \gamma _{RD}^{ - 1}$ – 4–Node Network (a) $\left( {1/4} \right)\bar \gamma _{SD}^{ - 1}$ $\left( {1/4} \right)\bar \gamma _{RD}^{ - 1}$ $\left( {1/4} \right)\bar \gamma _{TD}^{ - 1}$ 4–Node Network (b) $\begin{array}{l} $\left( {1/2} \right)\bar \gamma _{RD}^{ - 1}$ $\left( {1/2} \right)\bar \gamma _{TD}^{ - 1}$ \left( {5/8} \right)\bar \gamma _{SD}^{ - 1} \bar \gamma _{RD}^{ - 1} \bar \gamma _{TD}^{ - 1} + k_1 \bar \gamma _{SD}^{ - 1} \bar \gamma _{SR}^{ 
- 1} \bar \gamma _{ST}^{ - 1} \\ + k_2 \bar \gamma _{SD}^{ - 1} \bar \gamma _{SR}^{ - 1} \bar \gamma _{TD}^{ - 1} + k_2 \bar \gamma _{SD}^{ - 1} \bar \gamma _{ST}^{ - 1} \bar \gamma _{RD}^{ - 1} \\ \end{array}$ 4–Node Network (c) $\left( {1/4} \right)\bar \gamma _{SD}^{ - 1}$ $\left( {1/4} \right)\bar \gamma _{SD}^{ - 1} + \left( {1/4} \right)\bar \gamma _{SR}^{ - 1} + \left( {1/4} \right)\bar \gamma _{RD}^{ - 1}$ $\left( {1/4} \right)\bar \gamma _{SD}^{ - 1} + \left( {1/4} \right)\bar \gamma _{ST}^{ - 1} + \left( {1/4} \right)\bar \gamma _{TD}^{ - 1}$ 4–Node Network (d) $\left( {3/8} \right)\bar \gamma _{SD}^{ - 1} \bar \gamma _{RD}^{ - 1} + \left[ {\left( {45 + \sqrt 5 } \right)/160} \right]\bar \gamma _{SD}^{ - 1} \bar \gamma _{SR}^{ - 1}$ $\left( {1/2} \right)\bar \gamma _{RD}^{ - 1}$ $\left( {1/4} \right)\bar \gamma _{ST}^{ - 1} + \left( {1/4} \right)\bar \gamma _{TD}^{ - 1}$ ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Numerical and Simulation Results {#Results} ================================ In this section, we compare the frameworks summarized in Table \[Tab\_1\] with Monte Carlo simulations. More specifically, simulation results are obtained through a brute force implementation of (\[Eq\_2\]). Some selected curves are shown in Fig. \[Fig\_3\] and Fig. \[Fig\_4\] for the 3–node and 4–node scenario, respectively. For simplicity, but without loss of generality, i.i.d. fading is considered. We can see that the framework in Table \[Tab\_1\] closely overlaps with Monte Carlo simulations for high–SNR. This confirms the accuracy of the analytical derivation in Section \[Framework\], and the theoretical findings Section \[Comparison\]. Conclusion {#Conclusion} ========== In this paper, we have studied the performance of network–coded cooperative wireless networks with practical communication constraints. A general framework has been proposed, which can capture diversity and coding gain, and provides insightful information about the performance of the system, along with the tradeoff and the interplay of cooperation and NC. Unlike common belief, our analysis has clearly shown that using NC might be harmful for the system. In fact, we have shown that the diversity order is determined only by those nodes that act as repeaters and do not network–code their own data to the received packets. These results and conclusions are valid for binary modulation and binary NC. Current research activity is now concerned with the investigation of wireless networks with non–binary modulation and non–binary NC. ![ABEP against $E_m/N_0$ for the 3–node network topology in Fig. \[Fig\_1\]. Solid lines show the analytical framework and markers show Monte Carlo simulations. Setup: i) i.i.d. fading with $\sigma _0^2 = 1$; and ii) $\bar \gamma _0 = 2\sigma _0^2 \left( {E_m /N_0 } \right)$. ${\rm{ABEP}}_S$ and ${\rm{ABEP}}_R$ of Scenario (a) are given by the single–hop bound.[]{data-label="Fig_3"}](1-Relay_Net__Results.eps){width="\columnwidth"} ![ABEP against $E_m/N_0$ for the 4–node network topology in Fig. \[Fig\_2\]. 
Solid lines show the analytical framework and markers show Monte Carlo simulations. Setup: i) i.i.d. fading with $\sigma _0^2 = 1$; and ii) $\bar \gamma _0 = 2\sigma _0^2 \left( {E_m /N_0 } \right)$. ${\rm{ABEP}}_S$, ${\rm{ABEP}}_R$, and ${\rm{ABEP}}_T$ of Scenario (a) are given by the single–hop bound.[]{data-label="Fig_4"}](2-Relay_Net__Results.eps){width="\columnwidth"} Acknowledgment {#acknowledgment .unnumbered} ============== This work is supported, in part, by the research projects “GREENET” (PITN–GA–2010–264759), “WSN4QoL” (IAPP–GA–2011–286047), and the Lifelong Learning Programme (LLP) – ERASMUS Placement. [99]{} A. Nosratinia, T. E. Hunter, and A. Hedayat, “Cooperative communications in wireless networks”, *IEEE Commun. Mag.*, vol. 42, no. 10, pp. 74–80, Oct. 2004. J. N. Laneman, D. Tse, and G. Wornell, “Cooperative diversity in wireless networks: Efficient protocols and outage behavior”, *IEEE Trans. Inform. Theory*, vol. 50, no. 12, pp. 3062–3080, Dec. 2004. Z. Ding *et al.*, “On combating the half–duplex constraint in modern cooperative networks: Protocols and techniques”, *IEEE Wireless Commun. Mag.*, Apr. 2011. \[Online\]. Available: http://www.staff.ncl.ac.uk/z.ding/WC\_magazine.pdf. F. Rossetto and M. Zorzi, “Mixing network coding and cooperation for reliable wireless communications”, *IEEE Wireless Commun. Mag.*, vol. 18, no. 1, pp. 15–21, Feb. 2011. R. Ahlswede *et al.*, “Network information flow”, *IEEE Trans. Inform. Theory*, vol. 46, no. 4, pp. 1204–1216, July 2000. J.–S. Park *et al.*, “Codecast: A network–coding–based ad hoc multicast protocol”, *Wireless Commun.*, vol. 13, no. 5, pp. 76–81, Oct. 2006. S. Katti, “Network coded wireless architecture”, *Ph.D. Dissertation*, Massachusetts Institute of Technology, Sep. 2008. A. Ribeiro, X. Cai, and G. Giannakis, “Symbol error probabilities for general cooperative links”, *IEEE Trans. Wireless Commun.*, vol. 4, no. 3, pp. 1264–1273, May 2005. M. Di Renzo, F. Graziosi, and F. Santucci, “A unified framework for performance analysis of CSI–assisted cooperative communications over fading channels”, *IEEE Trans. Commun.*, pp. 2552–2557, Sep. 2009. —, “A comprehensive framework for performance analysis of dual–hop cooperative wireless systems with fixed–gain relays over generalized fading channels”, *IEEE Trans. Wireless Commun.*, vol. 8, Oct. 2009. —, “A comprehensive framework for performance analysis of cooperative multi–hop wireless systems over log–normal fading channels”, *IEEE Trans. Commun.*, vol. 58, no. 2, pp. 531–544, Feb. 2010. M. Di Renzo *et al.*, “Robust wireless network coding – An overview”, *Springer Lecture Notes*, LNICST 45, pp. 685–698, 2010. S. L. H. Nguyen *et al.*, “Mitigating error propagation in two–way relay channels with network coding”, *IEEE Trans. Wireless Commun.*, vol. 9, pp. 3380–3390, Nov. 2010. G. Al–Habian *et al.*, “Threshold–based relaying in coded cooperative networks”, *IEEE Trans. Veh. Technol.*, vol. 60, pp. 123–135, Jan. 2011. A. Cano *et al.*, “Link–adaptive distributed coding for multi–source cooperation”, *EURASIP J. Adv. Signal Process.*, vol. 2008, Jan. 2008. A. Nasri, R. Schober, and M. Uysal, “Error rate performance of network–coded cooperative diversity systems”, *IEEE Global Commun. Conf.*, pp. 1–6, Dec. 2010. H.–Q. Lai and K. J. Ray Liu, “Space–time network coding”, *IEEE Trans. Signal Process.*, vol. 59, no. 4, pp. 1706–1718, Apr. 2011. G. Li *et al.*, “High–throughput multi–source cooperation via complex–field network coding”, *IEEE Trans. Wireless Commun.*, vol. 10, no. 
5, pp. 1606–1617, May 2011. M. Iezzi, M. Di Renzo, and F. Graziosi, “Network code design from unequal error protection coding: Channel–aware receiver design and diversity analysis”, *IEEE Int. Commun. Conf.*, pp. 1–6, June 2011. —, “Closed–form error probability of network–coded cooperative wireless networks with channel–aware detectors”, *IEEE Global Commun. Conf.*, pp. 1–6, Dec. 2011. —, “Diversity and coding gain of multi–source multi–relay cooperative wireless networks with binary network coding”, pp. 1–56, Sep. 2011, submitted. \[Online\]. Available: http://arxiv.org/pdf/1109.4599v1.pdf. T. Wang *et al.*, “High–performance cooperative demodulation with decode–and–forward relays”, *IEEE Trans. Commun.*, vol. 55, no. 7, pp. 1427–1438, Jul. 2007. J. J. Proakis, *Digital Communications*, McGraw–Hill, 4th ed., 2000. E. Biglieri *et al.*, “Computing error probabilities over fading channels: A unified approach”, *European Trans. Telecommun.*, vol. 9, no. 1, pp. 15–25, Jan.–Feb. 1998. M. K. Simon and M.–S. Alouini, *Digital Communication over Fading Channels*, John Wiley $\&$ Sons, Inc., 1st ed., 2000. [^1]: $\Pr \left\{ \cdot \right\}$ denotes probability.
--- abstract: 'This paper offers a review of numerical methods for computation of the eigenvalues of Hermitian matrices and the singular values of general and some classes of structured matrices. The focus is on the main principles behind the methods that guarantee high accuracy even in the cases that are ill-conditioned for the conventional methods. First, it is shown that a particular structure of the errors in a finite precision implementation of an algorithm allows for a much better measure of sensitivity and that computation with high accuracy is possible despite a large classical condition number. Such structured errors incurred by finite precision computation are in some algorithms e.g. entry-wise or column-wise small, which is much better than the usually considered errors that are in general small only when measured in the Frobenius matrix norm. Specially tailored perturbation theory for such structured perturbations of Hermitian matrices guarantees much better bounds for the relative errors in the computed eigenvalues. Secondly, we review an unconventional approach to accurate computation of the singular values and eigenvalues of some notoriously ill-conditioned structured matrices, such as e.g. Cauchy, Vandermonde and Hankel matrices. The distinctive feature of accurate algorithms is using the intrinsic parameters that define such matrices to obtain a non-orthogonal factorization, such as the factorization, and then computing the singular values of the product of thus computed factors. The state of the art software is discussed as well.' author: - | Zlatko Drmač${}^\dag$\ ${}^\dag$Department of Mathematics, Faculty of Science, University of Zagreb, Croatia bibliography: - 'eig-review\_bib.bib' title: Numerical methods for accurate computation of the eigenvalues of Hermitian matrices and the singular values of general matrices --- backward error, condition number, eigenvalues, Hermitian matrices, Jacobi method, LAPACK, perturbation theory, rank revealing decomposition, singular value decomposition Introduction ============ In real world applications, numerical computation is done with errors (model errors, measurement errors, linearization errors, truncation/discretization errors, finite computer arithmetic errors). This calls for caution when interpreting the computed results. For instance, any property or function value we obtain from finite precision computation with a nontrivial matrix $A$ stored in the computer memory (for instance, the rank or the eigenvalues of $A$) very likely holds true for some unknown $A+\delta A$ in the vicinity of $A$, but not for $A$. In order to estimate the level of accuracy that can be expected in the output, we need to know the level of initial uncertainty in the data, the analytical properties of the function of $A$ that we are attempting to compute, the numerical properties of the algorithm used and the parameters of the computer arithmetic. A better understanding of the sensitivity of numerical problems, together with the adoption of new paradigms in the algorithmic development over the last few decades have opened new possibilities, allowing for high accuracy solutions to problems that were previously considered numerically intractable. In this paper we give an overview of such advances as regards the computation to high accuracy of the eigenvalues of Hermitian matrices and the singular values of general and some special classes of matrices. The focus is on the main principles, and technical details will be mostly avoided. 
Computing the eigenvalues with high accuracy means that for each eigenvalue (including the tiniest ones, much smaller than the norm of the matrix) as many correct digits are computed as warranted by the data. In other words, for the eigenvalues $\lambda_1\geq\cdots\geq\lambda_n$ of a nonsingular Hermitian matrix $H=H^*\in\mathbb{C}^{n\times n}$ and their computed approximations $\widetilde\lambda_1\geq\cdots\geq\widetilde\lambda_n$ we want a bound of the form $$\label{zd:eq:intro:rel_error} \max_{i=1:n} \frac{|\widetilde\lambda_i-\lambda_i|}{|\lambda_i|} \leq \bfkappa \cdot O(\roff) ,$$ where $\bfkappa$ represents a hopefully moderate condition number, and $\roff$ is the round-off unit of the computer arithmetic. For this kind of accuracy, the standard paradigm (algorithm based on orthogonal transformations, small backward error and perfect stability of the symmetric eigenvalue problem) is not good enough. Namely, the conventional approach of showing that the computed $\widetilde\lambda_i$’s are the exact eigenvalues of a nearby $H+\delta H$ with $\|\delta H\|_2 \leq O(\roff) \|H\|_2$, and then applying Weyl’s theorem, which guarantees that $\max_{i=1:n} |\widetilde\lambda_i-\lambda_i| \leq \|\delta H\|_2$, yields, for each eigenvalue index $i$, $$\label{zd:eq:intro:rel_norm_error} \frac{|\widetilde\lambda_i-\lambda_i|}{\|H\|_2} \leq O(\roff),\;\;\mbox{i.e.}\;\; \frac{|\widetilde\lambda_i-\lambda_i|}{|\lambda_i|} \leq O(\roff) \frac{\|H\|_2}{|\lambda_i|} \leq O(\roff) \frac{\|H\|_2}{|\lambda_n|} = O(\roff) \|H\|_2 \|H^{-1}\|_2.$$ Clearly, (\[zd:eq:intro:rel\_norm\_error\]) will give a satisfactory bound of the form (\[zd:eq:intro:rel\_error\]) only for absolutely large eigenvalues (those with $|\lambda_i|$ of the order of the norm $\|H\|_2$), while the relative error in the smallest eigenvalues ($|\lambda_i|\ll \|H\|_2$) is up to $O(\roff) \kappa_2(H)$, where $H$ is assumed nonsingular, $\kappa_2(H)=\|H\|_2 \|H^{-1}\|_2$ is the condition number, and $\|\cdot\|_2$ is the spectral operator norm, induced by the Euclidean vector norm. Hence, in the conventional setting, the eigenvalues of Hermitian/symmetric matrices are not always perfectly well conditioned in the sense that we can compute them in finite precision with small relative error (\[zd:eq:intro:rel\_error\]). We need to identify classes of matrices that allow for such high relative accuracy. To that end, we may need to restrict the classes of permissible perturbations – instead of in matrix norm, one may consider finer, entry-wise changes in matrix entries. As a result of such stronger requirements of relative accuracy, there will be a new condition number able to distinguish between well- and ill-behaved matrices with respect to such perturbations. Hence, for some classes of matrices we will be able to compute even the tiniest eigenvalues even if $\kappa_2(H)$ is extremely large. For that, however, we will have to rethink and redefine the paradigms of algorithm development. For the sake of brevity, in this review we do not discuss the accuracy of the computed eigenvectors and the singular vectors. This is an important issue, and interested readers will find the relevant results in the provided references. The new structure of the perturbation (finer than usually required by $\|\delta H\|_2 \leq O(\roff) \|H\|_2$) and the condition number governing high relative accuracy are not invariant under general orthogonal similarities. 
This means that using an algorithm based on orthogonal transformations does not automatically guarantee results that are accurate in the sense of (\[zd:eq:intro:rel\_error\]). Some algorithms are more accurate than others, see [@dem-ves-92]. For best results, separate perturbation theories and numerical algorithms have to be developed for the positive definite and the indefinite matrices. Analogous comments apply to the computation of the singular values $\sigma_1\geq\cdots\geq\sigma_n$ of $A\in\mathbb{C}^{m\times n}$ – conventional algorithms in general cannot approximate a small singular value $\sigma_i$ to any correct digit if $\sigma_i < \roff \sigma_1$, despite the fact that only orthogonal or unitary transformations are used. This review of the development of new theory and new algorithms is organized as follows. For the readers’ convenience, in §\[S=back-stability\] we first give a brief review of the key notions of backward stability, perturbation theory, condition number and forward error. Then, in §\[S=HPD\], we review the state of the art numerical methods for computing the eigenvalues of real symmetric and Hermitian matrices. A brief description of the algorithms in §\[ss=Classical-Methods\] is followed by a general framework for the classical backward error analysis in §\[SS=backward-stability\], and its limitations with respect to the high accuracy of the form (\[zd:eq:intro:rel\_error\]) are shown in §\[zd:SSS:Example\_3x3\] using a $3\times 3$ symmetric matrix as a case study. The conditions for achieving (\[zd:eq:intro:rel\_error\]) are analyzed in §\[SS=posdef-accurate\] for positive definite matrices, and in §\[SSS=symm-jac-pd\] we show that the symmetric Jacobi algorithm in finite precision arithmetic satisfies these conditions, which makes it provably more accurate than any tridiagonalization based algorithm [@dem-ves-92]. This theory does not include indefinite matrices, which are analyzed separately in §\[S=HID\]. It will become clear that there is a fundamental difference between the two classes. We conclude the discussion of numerical computation with positive definite matrices by arguing in §\[SS=implicit-pd\] that in many cases such matrices are best given implicitly by a factor $A$ such that $A^*A = H$, and that accurate eigenvalue computation follows from accurate computation of the singular values of $A$. In §\[S=SVD\] we study algorithms for computing the singular values to high relative accuracy. After reviewing the bidiagonalization based methods in §\[SS=bidiagSVD\], the one-sided Jacobi method in §\[SS=One-sided-jacobi\] and its preconditioned version in §\[SS=J+QRCP\], we show in §\[SSS=Impl-Jacobi-eig\] how the combination of the Cholesky factorization and the one-sided Jacobi computes the eigenvalues of general (non-structured) positive definite matrices to the optimal accuracy (\[zd:eq:intro:rel\_error\]) permitted by the perturbation theory. Section \[S=PSVD\] reviews accurate computation of the singular value decomposition of certain products of matrices (PSVD), which is the core procedure for the new generation of highly accurate algorithms, based on the so-called rank-revealing decompositions (RRDs). In §\[S=RRD+PSVD\] we illustrate the RRD+PSVD concept in action. In particular, we discuss structured matrices such as the Cauchy, Vandermonde and Hankel matrices, which are the key objects in many areas of numerical mathematics, in particular in rational approximation theory [@Gutknecht:1982:RPC], [@Gonnet:2011:RRI], [@haut-beylkin-coneig-2011], where e.g.
the coefficients of the approximant are taken from the singular vectors corresponding to small singular values of certain matrices of these kinds. The fact that these structured matrices can be extremely ill-conditioned is the main obstacle that precludes turning powerful theoretical results into practical numerical procedures. We review recent results that allow for highly accurate computations even in extremely ill-conditioned cases. Section \[S=HID\] is devoted to accurate computation of the eigenvalues of Hermitian indefinite matrices. The key steps towards understanding the sensitivity of the eigenvalues are reviewed in §\[SS=HID-pert-theory\], and in §\[SS=sym-indef-fact\], §\[SS=OJ\] we review numerical algorithms that compute the eigenvalues of indefinite matrices to the accuracy deemed possible by the corresponding perturbation theory. Three different approaches are presented and, interestingly, all of them are based on the Jacobi algorithm, but with some nonstandard features. In §\[SS=JJ\], the problem is transformed to a generalized eigenvalue problem, and the Jacobi diagonalization process is executed using transformations that are orthogonal in an indefinite inner product whose signature is given by the inertia of $H$. In §\[SSS=ISJM\], the classical Jacobi algorithm is carefully implemented implicitly on an accurately computed symmetric indefinite factorization, and in §\[SS=OJ\] the spectral decomposition of $H$ is carefully extracted from its accurate SVD. Extending the results to classes of non-symmetric matrices is a challenging problem and in §\[SS=nonsymm-TN\] we briefly review the first results in that direction. Backward stability, perturbation theory and condition number {#S=back-stability} ============================================================ The fundamental idea of backward stability was introduced by Wilkinson [@Wilinson-RoundingErrors-63], [@Wilkinson-AEP-65]. In an abstract formulation, we want to compute $Y=\mathcal{F}(X)$ using an algorithm $\mathcal{A}_{\mathcal F}(X)$ that returns only an approximation $\widetilde{Y}$ of $Y$. In the *backward error analysis* of the computational process $\mathcal{A}_{\mathcal F}(X)$, we prove existence of a small perturbation $\delta X$ of the input data $X$ such that $\widetilde{Y}=\mathcal{F}(X+\delta X)$. If the perturbation $\delta X$, called *backward error*, is acceptably small relative to $X$ (for instance, of the same order as the initial uncertainty $\delta_0 X$ already present in $X=X_{\textrm{true}}+\delta_0 X$, where $X_{\textrm{true}}$ is the inaccessible exact value), the computation of $\widetilde{Y}$ by $\mathcal{A}_{\mathcal F}(\cdot)$ is considered *backward stable*. The sources of error can be the use of finite precision arithmetic or any other approximation scheme. Sometimes, a backward error is constructed artificially just in order to justify the computed output. For instance, if we compute an approximate eigenvalue $\lambda$, with the corresponding eigenvector $v\neq \mathbf{0}$, of the matrix $X$, the residual $r = Xv-\lambda v$ will in general be nonzero, but hopefully small. We can easily check that $( X + \delta X) v = \lambda v$, with $\delta X = - r v^*/(v^* v)$, i.e. we have found an exact eigenpair $\lambda, v$ of a nearby matrix $X+\delta X$, with $\|\delta X\|_2=\|r\|_2/\|v\|_2$.
If, for given $v$, we choose $\lambda$ as the Rayleigh quotient $\lambda=v^* X v/v^*v$, then $r^*v=0$ and $(X +\Delta X)v=\lambda v$ with Hermitian $\Delta X=\delta X + (\delta X)^*$, which is a favorable interpretation if $X$ is Hermitian: we have solved a nearby Hermitian problem. Backward stability does not automatically imply that $\widetilde{Y}$ is close to $Y$. It is possible that a backward stable algorithm gives utterly wrong results, even though the computation is, by this criterion, considered stable; see §\[zd:SSS:Example\_3x3\] below for an example and its discussion. The error in the result (*forward error*) $\delta Y = \widetilde{Y}-Y = \mathcal{F}(X+\delta X) - \mathcal{F}(X)$ will depend on the function $\mathcal{F}(\cdot)$, i.e. on its sensitivity to the change in the argument $X$, and on the size and the structure of $\delta X$. This sensitivity issue is the subject of *perturbation theory*. If the forward error $\delta Y$ is small, the algorithm is called *forward stable*. Claiming backward stability of an algorithm depends on how the size of the backward error is measured, as well as on other factors, such as the structure of the backward error. For instance, if $X$ is a symmetric matrix, it is desirable to prove existence of a symmetric perturbation $\delta X$, see e.g. [@Smoktunowicz:1995:NSC]. If $X$ lives in a normed space $(\mathcal{X},\|\cdot\|_x)$, then we usually seek a bound of the form $\|\delta X\|_x \leq \epsilon \|X\|_x$. The error in the computed result, which is assumed to live in a normed space $(\mathcal{Y},\|\cdot\|_y)$, is estimated as $\|\delta Y\|_y \leq C \epsilon \|Y\|_y$. The amplification factor $C$ is the *condition number*. An abstract theory of the condition number, ill-conditioning and related problems in the setting of normed manifolds is given e.g. in [@Rice-TheoryofCondition-66]. As an illustration of such an abstract analytical treatment, we cite one result: (Rice [@Rice-TheoryofCondition-66]) Let $(\mathcal{X},\|\cdot\|_x)$, $(\mathcal{Y},\|\cdot\|_y)$ be normed linear spaces and $\mathcal{F}: \mathcal{X} \longrightarrow \mathcal{Y}$ be a differentiable function. The absolute and the relative condition numbers, respectively, of $\mathcal{F}$ at $X_0$ are defined as $$\begin{aligned} \alpha(\mathcal{F},X_0;\delta) &=& \inf\{C\geq 0\; :\; \|X-X_0\|_x < \delta \Longrightarrow \| \mathcal{F}(X)-\mathcal{F}(X_0)\|_y < C \delta \}\\ \rho(\mathcal{F},X_0;\delta) &=& \inf\{C\geq 0\; :\; \|X-X_0\|_x < \delta \|X_0\|_x \Longrightarrow \| \mathcal{F}(X)-\mathcal{F}(X_0)\|_y < C \delta \|\mathcal{F}(X_0)\|_y\}. \end{aligned}$$ Let the corresponding asymptotic condition numbers be defined as $$\alpha(\mathcal{F},X_0) = \lim_{\delta\rightarrow 0} \alpha(\mathcal{F},X_0;\delta),\;\;\;\; \rho(\mathcal{F},X_0) = \lim_{\delta\rightarrow 0} \rho(\mathcal{F},X_0;\delta).$$ If $\mathcal{J}$ is the Jacobian of $\mathcal{F}$ at $X_0$, then $ \alpha(\mathcal{F},X_0) = \| \mathcal{J}\|_{xy}$, $\rho(\mathcal{F},X_0) = \frac{\| \mathcal{J}\|_{xy}}{\|\mathcal{F}(X_0)\|_y}\|X_0\|_x, $ where $\|\cdot\|_{xy}$ denotes the induced operator norm. For a systematic study of the condition numbers, we refer to [@book-condition], and for various techniques of backward error analysis see [@book-higham]. The backward error analysis is often represented using commutative diagrams such as in Figure \[backdiag\].
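Before turning to the diagram, the residual-based construction above can be checked numerically. The following Matlab sketch (with arbitrary random test data, not tied to any particular application) verifies that the Rayleigh quotient together with the rank-one perturbation $\delta X = -rv^*/(v^*v)$ indeed yields an exact eigenpair of $X+\delta X$:

```matlab
% Minimal check of the residual-based backward error construction (a sketch).
n = 5;
X = randn(n);                 % arbitrary real test matrix
v = randn(n,1);               % an approximate eigenvector (not normalized)
lambda = (v'*X*v)/(v'*v);     % Rayleigh quotient
r  = X*v - lambda*v;          % residual
dX = -r*v'/(v'*v);            % constructed backward error
norm((X + dX)*v - lambda*v)   % at roundoff level: exact eigenpair of X + dX
norm(dX) - norm(r)/norm(v)    % ~ 0, i.e. ||dX||_2 = ||r||_2/||v||_2
```

Since $\lambda$ is chosen as the Rayleigh quotient, replacing `dX` by `dX + dX'` reproduces the symmetrized perturbation $\Delta X$ discussed above, and the verified identity remains intact.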
*Figure \[backdiag\]: commutative diagram of the backward error analysis. The result $\widetilde{Y}=Y+\delta Y=\mathcal{F}(X+\delta X)$, computed from $X$ by $\mathcal{A}_{\mathcal{F}}$, equals the exact application of $\mathcal{F}$ to the perturbed input $X+\delta X$, with $\|\delta X\|_x \leq \epsilon \|X\|_x$ and $\|\delta Y\|_y \leq C \epsilon \|Y\|_y$.* Scaling and numerical stability {#SS:scaling-Hankel-SVD} ------------------------------- When doing numerical calculations and estimating errors, computing residuals, or making decisions about the ranks of matrices by declaring certain quantities sufficiently small to be considered negligible, it is often forgotten that those numbers represent physical quantities in a particularly chosen system of units. A small entry in the matrix may be merely noise, but it could also be a relevant small physical parameter. Very often, the system under consideration represents couplings between quantities of different physical natures, each one given in its own units on a particular scale. In fact, it is a matter of engineering design and ingenuity to choose the units so that the mathematical model faithfully represents the physical reality, and so that the results of the computations can be meaningfully measured in appropriate norms and interpreted and used with confidence in applications. As an illustration, we briefly discuss one simple example: consider a state space realization of a linear time invariant (LTI) dynamical system $$\begin{aligned} \dot x(t) &=& A x(t) + B u(t),\; x(0) = x_0 , \label{eq:LTI1}\\ y(t) &=& C x(t) . \label{eq:LTI2}\end{aligned}$$ In general, switching to different units implies a change of variables $x(t) = S\hat{x}(t)$, where $S$ denotes the corresponding diagonal scaling matrix. In the new state variables $\hat{x}(t)$, the system goes over into $ \dot{\hat{x}}(t) = (S^{-1}AS) \hat{x}(t) + (S^{-1}B) u(t)$, $y(t) = (CS) \hat{x}(t) . $ Hence, the state space description $(A,B,C)$ changes into the equivalent triplet $(S^{-1} A S, S^{-1}B, CS)$. From the system-theoretical point of view, nothing has changed; the new state space representation describes the same physical reality: the transfer function $\mathcal{G}(s) = C(sI-A)^{-1}B = (CS) (sI - S^{-1}A S)^{-1}(S^{-1}B)$ is the same, the system poles (the eigenvalues of $A$, i.e. of $S^{-1}AS$) are also the same. Unfortunately, this invariance is not inherited in finite precision computation. For example, the important property of stability requires the eigenvalues of the system matrix $A$ to be in the open left complex half plane. If the numerically computed eigenvalues do satisfy that condition but are too close to the imaginary axis, how can we be certain that the system is stable? Since the system matrix is determined up to a similarity, how can we be sure that our numerical algorithm will not be influenced by a particular representation? Important properties of the system (\[eq:LTI1\],\[eq:LTI2\]), such as controllability and observability, are encoded in the symmetric positive semidefinite matrices $H= \int_0^\infty e^{t A} BB^T e^{t A^T}dt$ and $M= \int_0^\infty e^{t A^T}C^T C e^{t A} dt$, called system Gramians, which are computed as the solution of the dual pair of Lyapunov equations $ {{A}H + H {A}^{T} = - {B} {B}^{T}}, \;\; {{A}^{T}M+M{A} = - {C}^{T}{C}}.
$ The joint spectral properties of the Gramians provide information on deep structural properties of the system. The key quantities in this respect are the Hankel singular values, defined as the square roots $\sigma_i = \sqrt{\lambda_i(HM)}$ of the eigenvalues of $HM$. These are proper invariants of the system, independent of the state space realization (\[eq:LTI1\],\[eq:LTI2\]). It can be easily checked that changing variables to $\hat{x}(t)$ changes the system Gramians by the so-called contragredient transformation $H \longrightarrow \widehat{H} = S^{-1} H S^{-T}$, $M \longrightarrow \widehat{M} = S^T M S$, which does not change the Hankel singular values, since $\widehat{H}\widehat{M} = S^{-1}(HM)S$. The numerics, on the other hand, may react sharply. A change of units (scaling) changes the classical condition numbers $\kappa_2(A)$, $\kappa_2(H)$, $\kappa_2(M)$, thus potentially making an algorithm numerically inaccurate or unstable, while the underlying problem remains the same. With a particular choice of $S$ we can manipulate the numerical rank of either of the two Gramians, and thus mislead numerical algorithms. Is this acceptable? If a rank decision has to be made, and if the determined *numerical rank* (cf. [@Golub-Klema-Stewart-Numerical_rank]) sharply changes with the change of physical units in which the variables are expressed, one definitely has to ask many nontrivial questions. It is also possible that two algebraically equivalent methods for checking controllability of an LTI system give completely different estimates of numerical rank. For an excellent discussion on these topics we refer to [@Paige_Numerics_in-Control]. Computing eigenvalues of Hermitian matrices {#S=HPD} =========================================== Numerical computation of the eigenvalues and eigenvectors of Hermitian matrices is considered an example of a perfect computational process. This is due to several important spectral properties of Hermitian matrices, see e.g. [@hor-joh-90 Ch. 4]. The Schur form of Hermitian matrices is diagonal: for any Hermitian $H\in\mathbb{C}^{n\times n}$ there exists a unitary $U\in\mathbb{C}^{n\times n}$ and a real diagonal $\Lambda=\mathrm{diag}(\lambda_i)_{i=1}^n$ such that $H = U \Lambda U^*$. If $U=\begin{pmatrix} u_1 & u_2 & \cdots & u_n\end{pmatrix}$ is the column partition of $U$, then $H u_i = \lambda_i u_i$, $i=1,\ldots, n$. Hence, the diagonalization is performed by a unitary (or real orthogonal if $H$ is real symmetric) matrix of eigenvectors, which allows for numerical algorithms based only on unitary transformations (unitary transformations are preferred in finite precision arithmetic because they preserve relevant matrix norms, e.g., $\|\cdot\|_2$ and the Frobenius norm $\|\cdot\|_F$, and thus will not inflate initial data uncertainties and unavoidable rounding errors). Furthermore, the eigenvalues of $H\equiv H^*$ have far-reaching variational characterizations. \[TM-minmax\] Let $H$ be $n\times n$ Hermitian with eigenvalues $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$. Then $$\lambda_j = \max_{{\mathcal{S}_j}} \min_{x\in\mathcal{S}_j\setminus \{0\}} \frac{x^* H x}{x^* x}$$ where the maximum is taken over all $j$-dimensional subspaces $\mathcal{S}_j$ of $\mathbb{C}^n$. This characterization generates a sound perturbation theory that provides a basis for assessing the accuracy of numerical methods.
For the sake of completeness and for the reader’s convenience, we cite one of the Weyl-type theorems: \[zd:TM:Weyl\] (Weyl) Let $H$ and $H+\delta H$ be Hermitian matrices with eigenvalues $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$ and $\widetilde{\lambda}_1\geq\widetilde{\lambda}_2\geq \cdots\geq\widetilde{\lambda}_n$, respectively. Write $\widetilde{\lambda}_i=\lambda_i+\delta\lambda_i$. Then $ \max_{i=1:n} |\delta\lambda_i| \leq \|\delta H\|_2. $ For a [more]{} detailed overview [of the specific spectral properties of Hermitian matrices, including the perturbation theory,]{} we refer to [@ste-sun-90], [@Bhatia-MatrixAnalysis-1997]. Classical methods {#ss=Classical-Methods} ----------------- The common paradigm of modern numerical algorithms for computing a unitary eigenvector matrix $U$ and the real diagonal $\Lambda$ is to build a sequence of unitary similarities such that $$\label{eq:Hk-Lambda} H^{(k+1)} = (U^{(k)})^* \cdots ((U^{(2)})^* ((U^{(1)})^* H U^{(1)}) U^{(2)}) \cdots U^{(k)} \longrightarrow \Lambda= \left(\begin{smallmatrix} \lambda_1 & 0& 0\cr 0 & \ddots & 0 \cr 0 & 0 & \lambda_n\end{smallmatrix}\right),\;\;\quad \mbox{as }\ \ k\longrightarrow\infty.$$ The accumulated infinite product $U^{(1)} U^{(2)}\cdots U^{(k)}\cdots$ provides information about the eigenvectors and eigenspaces (in case of multiple eigenvalues). The choice of unitary matrices $U^{(i)}$ defines the specific algorithm. For a detailed overview with references we recommend [@parlett-sevp-98], [@golub-vnl-4 Ch. 8] and [@hogben14 Ch. 55]. The two typical classes of methods are: - *Tridiagonalization-based methods*: The matrix $H$ is first reduced to a Hermitian tridiagonal matrix $T$: $$\label{eq:tridiagonal} V^* H V = T = \left(\begin{smallmatrix} \alpha_1 & \beta_1 & & \cr \beta_1 & \alpha_2 & \ddots & \cr & \ddots & \ddots & \beta_{n-1} \cr & & \beta_{n-1} & \alpha_n \end{smallmatrix}\right),$$ where $V$ denotes a unitary matrix composed as the product of $n-2$ Householder reflectors. The tridiagonalization can be illustrated in the $4\times 4$ case as $$H^{(1)}=V_1^* H V_1 = \left(\begin{smallmatrix} \star & \star & 0 & 0\cr \star & \star & \times & \times\cr 0 & \times & \times & \times\cr 0 & \times & \times & \times\end{smallmatrix}\right),\;\; H^{(2)}=V_2^* H^{(1)} V_2 = \left(\begin{smallmatrix} \star & \star & 0 & 0\cr \star & \star & \star & 0\cr 0 & \star & \star & \star\cr 0 & 0 & \star & \star\end{smallmatrix}\right) = T = (V_2^* V_1^*) H (V_1 V_2),$$ [where each $\star$ denotes an entry which has already been modified by the algorithm and set to its final value]{}. In the second stage, fast algorithms specially tailored for [tridiagonal]{} matrices, such as QR, divide and conquer, bisection, inverse iteration or the MRRR method, are deployed to compute the spectral decomposition of $T$ as $T = W \Lambda W^*$. Assembling back, the spectral decomposition of $H$ is obtained as $H = U\Lambda U^*$ with $U = V W$. For studying excellent tridiagonal eigensolvers with many fine mathematical and numerical details we recommend [@mgu-eis-tridiag-95], [@PARLETT2000121], [@dhillon-parlett-2004], [@DHILLON20041], [@MRRR-2006].
- *Jacobi type methods*: The classical Jacobi method generates a sequence of unitary congruences, ${H}^{(k+1)} = ({U}^{(k)})^* {H}^{(k)} {U}^{(k)}$, where ${U}^{(k)}$ differs from the identity only at some cleverly chosen positions $(i_k,i_k)$, $(i_k,j_k)$, $(j_k,i_k)$, $(j_k,j_k)$, with $$\begin{pmatrix}{U}^{(k)}_{i_k,i_k} & {U}^{(k)}_{i_k,j_k}\cr\\[0.1pt] {U}^{(k)}_{j_k,i_k} & {U}^{(k)}_{j_k,j_k}\end{pmatrix} = \begin{pmatrix}\cos\phi_k & e^{i\psi_k}\sin\phi_k \cr -e^{-i\psi_k}\sin\phi_k & \cos\phi_k\end{pmatrix}.$$ The angles $\phi_k$, $\psi_k$ of the $k$-th transformation are determined to annihilate the $(i_k,j_k)$ and $(j_k,i_k)$ positions in ${H}^{(k)}$, namely, $$\label{eq:2x2_rotation} \left(\!\begin{smallmatrix} \cos\phi_k & - e^{i\psi_k}\sin\phi_k \cr e^{-i\psi_k}\sin\phi_k & \cos\phi_k\end{smallmatrix}\right) \left(\!\begin{smallmatrix}{H}^{(k)}_{i_k i_k} & {H}^{(k)}_{i_k j_k}\cr\\[4pt] {H}^{(k)}_{j_k i_k} & {H}^{(k)}_{j_k j_k}\end{smallmatrix}\!\right) \left(\!\begin{smallmatrix}\cos\phi_k & e^{i\psi_k}\sin\phi_k \cr -e^{-i\psi_k}\sin\phi_k & \cos\phi_k\end{smallmatrix}\!\right) = \left(\!\begin{smallmatrix}{H}^{(k+1)}_{i_k i_k} & 0 \cr 0 & {H}^{(k+1)}_{j_k j_k}\end{smallmatrix}\!\right) .$$ If the matrix $H$ is real, then $\psi_k\equiv 0$ and the transformation matrices are (real plane) Jacobi rotations. Unlike tridiagonalization-based methods, the Jacobi method does not preserve any zero structure. This method, originally proposed by Jacobi for the real symmetric matrices [@jac-46] was rediscovered by Goldstine, Murray and von Neumann in [@gol-mur-neu-59], and the extension to complex Hermitian matrices was done by Forsythe and Henrici [@for-hen-60]. An instructive implementation with fine numerical details was provided by Rutishauser [@rutishauser-jacobi-66], and an analysis of asymptotic convergence by Hari [@har-91-2]. The beautiful simplicity of these methods allows for quite some elegant generalizations. The Jacobi method, for instance, has been formulated and analyzed in the context of Lie algebras [@Kleinsteuber-Helmke-Huper-Jacobi-CLA], [@KLEINSTEUBER2009155], and the method has its continuous form, the so-called Toda flow [@Isospectral-Watkins-1984], [@QR-Toda-flow-Chu-1984]. Backward stability in the conventional error analysis {#SS=backward-stability} ----------------------------------------------------- In finite precision (floating point) arithmetic, [not only are all processes described above polluted by rounding errors, but the iterations (\[eq:Hk-Lambda\]) must be terminated at some appropriately chosen finite index $k_\star$]{}. In the $k$-th step, say, instead of $H^{(k)}$ we will have its computed approximation $\widetilde{H}^{(k)}$, for which a numerically unitary matrix $\widetilde{U}^{(k)}$ (i.e. $\| (\widetilde{U}^{(k)})^* \widetilde{U}^{(k)} - I\|_2 \leq O(n\roff)$) will be constructed, and the congruence transformation by $\widetilde{U}^{(k)}$ will be executed with rounding errors. By a backward error analysis, the new computed iterate $\widetilde{H}^{(k+1)}$ satisfies $\widetilde{H}^{(k+1)} = (\widetilde{U}^{(k)})^* ( \widetilde{H}^{(k)} + E_k)\widetilde{U}^{(k)}$. In general, this congruence is not a unitary similarity. Its software implementation, however, is carefully designed to ensure that both $E_k$ and $\widetilde{H}^{(k+1)}$ are Hermitian. 
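For readers who want a concrete handle on the transformations being analyzed, the following Matlab sketch runs a few cyclic sweeps of the classical Jacobi method from §\[ss=Classical-Methods\] on a random real symmetric test matrix. It is only a bare-bones illustration of the annihilation (\[eq:2x2\_rotation\]) in the real case; the carefully implemented library codes to which the error analysis here refers differ in many details (rotation computation, ordering, stopping criteria).

```matlab
% Sketch of the cyclic (real) Jacobi eigenvalue method; illustration only.
n = 6;
A = randn(n);  A = (A + A')/2;          % random symmetric test matrix
V = eye(n);                             % accumulated rotations
for sweep = 1:10
  for i = 1:n-1
    for j = i+1:n
      if A(i,j) ~= 0
        theta = (A(j,j) - A(i,i)) / (2*A(i,j));
        if theta == 0
          t = 1;                        % 45 degree rotation
        else
          t = sign(theta)/(abs(theta) + sqrt(1 + theta^2));
        end
        c = 1/sqrt(1 + t^2);  s = t*c;
        J = eye(n);  J([i j],[i j]) = [c s; -s c];
        A = J'*A*J;  A(i,j) = 0;  A(j,i) = 0;   % (i,j) pair annihilated
        V = V*J;
      end
    end
  end
end
off = norm(A - diag(diag(A)), 'fro')    % off-diagonal mass after the sweeps
% diag(A) now approximates the eigenvalues; columns of V the eigenvectors.
```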
Since the computation has to be finite in time, one must also carefully determine the terminating index $k_\star$ such that the effectively computed matrix $$\label{eq:tlde-H-k} \widetilde{H}^{(k_\star)} = (\widetilde{U}^{(k_\star-1)})^* ((\cdots ((\widetilde{U}^{(2)})^* ((\widetilde{U}^{(1)})^* (H + E_1) \widetilde{U}^{(1)} + E_2) \widetilde{U}^{(2)} + E_3) \cdots) +E_{k_\star-1})\widetilde{U}^{(k_\star-1)} $$ is nearly diagonal, so that its sorted diagonal elements can be taken as approximate eigenvalues $\widetilde{\lambda}_1\geq\cdots\geq\widetilde{\lambda}_n$ of $H$. Let us assume that this sorting permutation is implicitly built in the transformation $\widetilde{U}^{(k_\star-1)}$, and let us write $$\widetilde{H}^{(k_\star)} = \widetilde{\Lambda} + \Omega(\widetilde{H}^{(k_\star)}),\;\;\mbox{where}\;\; \widetilde{\Lambda}=\begin{pmatrix} \widetilde{\lambda}_1 & 0& 0\cr 0 & \ddots & 0 \cr 0 & 0 & \widetilde{\lambda}_n\end{pmatrix},\;\; \widetilde{\lambda}_i = (\widetilde{H}^{(k_\star)})_{ii},\;\;i=1,\ldots, n,$$ [and $\Omega(\cdot)$ denotes the off-diagonal part of its matrix argument]{}. If the eigenvectors are also needed, they are approximated by the columns of the accumulated product $\widetilde{U}$ of the transformations $\widetilde{U}^{(k)}$. Since the accumulation is performed in finite precision, it can be represented as $$\widetilde{U} = (((( \widetilde{U}^{(1)} + F_1)\widetilde{U}^{(2)} + F_2) \widetilde{U}^{(3)} + \cdots ) + F_{k_\star-2})\widetilde{U}^{(k_\star-1)} .$$ An error analysis ([see e.g. [@parlett-sevp-98 §6.5], [@book-higham Ch. 19]]{}) shows that for some small $\delta\widetilde{U}$ the matrix $\widehat{U}=\widetilde{U}+\delta\widetilde{U}$ is unitary, and that there exists a Hermitian backward error $\delta H$ such that $$\label{zd:eq:tildeHk} \widetilde{H}^{(k_\star)} = \widehat{U}^* (H+\delta H)\widehat{U}.$$ Another tedious error analysis proves that $\|\delta H\|_2 \leq f(n)\roff \|H\|_2$, where the mildly growing function $f(n)$ depends on the details of each specific algorithm. The key ingredient in the analysis is the numerical unitarity of the transformations $\widetilde{U}^{(k)}$ in (\[eq:tlde-H-k\]). In the last step, the off-diagonal part $\Omega(\widetilde{H}^{(k_\star)})$ is deemed negligible and we use the approximate decomposition $\widetilde{U}^* H \widetilde{U} \approx \widetilde{\Lambda}$. The whole procedure is represented by the diagram in Figure \[zd:FIG:eig\_commutative\_diagram\], and summarized in Theorem \[zd:TM:eig\_backward\]. 
*Figure \[zd:FIG:eig\_commutative\_diagram\]: commutative diagram of the finite precision eigenvalue computation. The computed $\widetilde{H}^{(k_\star)} = \widehat{U}^* (H+\delta H)\widehat{U} = \widetilde{\Lambda} + \Omega(\widetilde{H}^{(k_\star)})$ is an exact unitary similarity of $H+\delta H$; setting the off-diagonal part to zero is equivalent to the additional backward error $-\widehat{U}\Omega(\widetilde{H}^{(k_\star)})\widehat{U}^*$, so that $\widetilde{\Lambda} \approx \widetilde{U}^* H \widetilde{U}$.* \[zd:TM:eig\_backward\] For each $i=1,\ldots, n$, let $\widetilde{\lambda}_{i}$ and $\widetilde{u}_i$ be the approximate eigenvalue and the corresponding approximate eigenvector of the Hermitian $n\times n$ matrix $H$, computed by one of the algorithms from §\[ss=Classical-Methods\]. Then there exists a backward error $\Delta H$ and a unitary matrix $\widehat{U}$ such that $H+\Delta H = \widehat{U}\widetilde\Lambda\widehat{U}^*$ and $\|\Delta H\|_2/\|H\|_2 \approx f(n)\roff$, $\|\widetilde U-\widehat U\|_2\approx O(n\roff)$, [where $\widetilde{U}$ is the $n\times n$ matrix with $i$-th column $\widetilde{u}_i$]{}. The finite precision computation can be represented by a commutative diagram as in Figure \[zd:FIG:eig\_commutative\_diagram\]. We have therefore a seemingly perfect situation: (i) The computed eigenvalues $\widetilde{\lambda}_i$ are the exact eigenvalues of $H+\Delta H$, where $\|\Delta H\|_2/\|H\|_2$ is up to a factor of the dimension $n$ at the level of the roundoff unit $\roff$. (ii) By Theorem \[zd:TM:Weyl\], the absolute error in each $\widetilde{\lambda}_i$ is at most $\|\Delta H\|_2$. Case study: A numerical example {#zd:SSS:Example\_3x3} ------------------------------- To put the framework of §\[SS=backward-stability\] under a stress test, we will compute the eigenvalues of a contrived $3\times 3$ real symmetric matrix, using the function `eig()` from the software package Matlab. This function is based on the subroutine `DSYEV` from LAPACK [@LAPACK], which implements a tridiagonalization-based algorithm. \[zd:EX:H3x3\] *We use [Matlab R2010b]{} on a Linux workstation; the roundoff unit is $\roff \approx 2.2\cdot 10^{-16}$. The function ${\tt eig()}$ computes the approximate eigenvalues $\widetilde{\lambda}_1\geq\widetilde{\lambda}_2\geq\widetilde{\lambda}_3$ ($n=3$) of $$\label{zd:eq:H3x3_eig()} H = \begin{pmatrix} 10^{40}& -2\cdot 10^{29} & 10^{19}\\[4pt] -2\cdot 10^{29} & 10^{20} &10^9 \\[4pt] 10^{19} & 10^{9} & 1 \end{pmatrix}\;\;\mbox{as:}\;\;\; \begin{array}{r|c|} & {\tt eig}(H) \\\hline \widetilde{\lambda}_1 & \;\;\; 1.000000000000000e+040\\ \widetilde{\lambda}_2 & -1.440001124147376e+020\\ \widetilde{\lambda}_3 & -1.265594217409065e+024 \end{array}\;.$$ Hence, the function ${\tt eig()}$ sees the matrix $H$ as indefinite with two negative eigenvalues. Following Theorem \[zd:TM:eig\_backward\] and Theorem \[zd:TM:Weyl\], we know that $\max_{i=1:n}|\widetilde{\lambda}_i-\lambda_i| \leq O(\roff)\|H\|_2$, and that our computed eigenvalues are true eigenvalues of a nearby matrix $H+\delta H$, where $\|\delta H\|_2/\|H\|_2 \leq O(\roff)$.
Since $n=3$, the effect of accumulated roundoff is negligible.* However, to assess the quality of the approximation $\widetilde{\lambda}_i$ in terms of the number of its accurate digits, we need a bound to the relative error: $$\label{zd:eq:H3x3_classical_bound} \frac{|\widetilde{\lambda}_i-\lambda_i|}{|\lambda_i|} \leq \frac{O(\roff)\|H\|_2}{|\lambda_i|}\leq \frac{O(\roff)\|H\|_2}{\min_{j=1:n}|\lambda_j|} \leq O(\roff)\|H\|_2 \|H^{-1}\|_2 \equiv O(\roff)\kappa_2(H).$$ Thus, if $|\lambda_i| \approx \|H\|_2$, the computed approximation $\widetilde{\lambda}_i$ will have many correct digits. But if $|\lambda_i| \ll \|H\|_2$, then the above error bound cannot guarantee any correct digit in $\widetilde{\lambda}_i$. (It is immediately clear that $\lambda_1> 10^{40}$ and that $0<\lambda_3<1$, and thus $\kappa_2(H)>10^{40}$.) A second look at the matrix $H$ reveals its graded structure. In fact $$\label{eq:H=DAD-3x3} H = D A D,\;\; A=\begin{pmatrix} 1 & -0.2\; & 0.1 \cr -0.2 & 1 & 0.1 \cr 0.1 & 0.1 & 1\end{pmatrix}, \; D = \begin{pmatrix} 10^{20} & 0 & 0 \cr 0 & 10^{10} & 0 \cr 0 & 0 & 1\end{pmatrix},$$ which means that $H$ is positive definite, and has a Cholesky factorization $H=LL^T$ (using Gershgorin circles one may immediately conclude that $A$ is positive definite.) Hence, the values of $\widetilde{\lambda}_2$ and $\widetilde{\lambda}_3$ in (\[zd:eq:H3x3\_eig()\]) are utterly wrong, although the result of ${\tt eig}(H)$ is within the framework of §\[SS=backward-stability\]. Hence, the common routine of justifying the computed result by combining backward stability, ensured by unitary transformations, and the well-posedness in the sense of Weyl’s Theorem is leading us to accept entirely wrong results. $\boxtimes$ At this point, one might be tempted to deem the matrix ill-conditioned and simply give up computing the tiniest eigenvalues (those with $|\lambda_i| < O(\roff) \|H\|_2$) to high accuracy, because they may not be well determined by the data; one may think it is simply not feasible. [However, let us explore this further:]{} \[zd:SSS:Example\_3x3-b\] ** We continue with numerical experiments using the matrix $H$ from [Example \[zd:EX:H3x3\]]{}; we apply ${\tt eig()}$ to the similar matrices $H_\zeta = P_\zeta^T H P_\zeta$, $H_\xi = P_\xi^T H P_\xi$, where $P_\zeta$, $P_\xi$ are the matrix representations of the permutations $\zeta=(3,2,1)$, $\xi=(2,1,3)$, respectively (in an application, this reordering could represent just another enumeration of the same set of variables and equations, thus describing precisely the same problem). Running ${\tt eig()}$ on these permuted matrices gives the following approximate spectra: $$\label{zd:eq:H3x3_permuted} \begin{array}{r|c||c|} & {\tt eig}(H_\zeta) & {\tt eig}(H_\xi) \\ \hline \widetilde{\lambda}_1 & 1.000000000000000e+040 & 1.000000000000000e+040\\ \widetilde{\lambda}_2 & 9.600000000000000e+019 & 9.600000000000000e+019\\ \widetilde{\lambda}_3 & 9.750000000000001e-001 & 9.750000000000000e-001 \end{array}\; .$$ Similar values are computed with the permutation $\varpi=(3,1,2)$. With the permutations $(1,3,2)$ and $(2,3,1)$, however, ${\tt eig()}$ computes one single negative value, and the two positive values are identical to $\widetilde{\lambda}_1$ and $\widetilde{\lambda}_2$ in (\[zd:eq:H3x3\_permuted\]). All these computed approximate eigenvalues fit into the error estimate (\[zd:eq:H3x3\_classical\_bound\]), but the qualitative difference is striking. 
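The experiments above are easy to reproduce. A minimal Matlab sketch follows; as discussed further below, the exact digits depend on the Matlab/LAPACK version (and possibly on the platform), so the output need not match (\[zd:eq:H3x3\_eig()\]) or (\[zd:eq:H3x3\_permuted\]) digit for digit.

```matlab
% Reproducing the eig() experiments on the graded 3 x 3 matrix (a sketch).
H = [ 1e40  -2e29  1e19
     -2e29   1e20  1e9
      1e19   1e9   1 ];
eig(H)               % as in the first experiment
p = [3 2 1];         % the permutation zeta
eig(H(p,p))          % same matrix, relabelled variables
p = [2 1 3];         % the permutation xi
eig(H(p,p))
```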
Now let us compute, for the sake of experiment, the eigenvalues of $H$ in two bizarre ways: first as reciprocal values of the eigenvalues of $H^{-1}$, then as eigenvalues of the numerically computed $H(H^{-1} H)$. These are of course not very practical procedures, but we just want to see whether computing the eigenvalues of $H$ to high accuracy is warranted by the input. The results are as follows: $$\label{zd:eq:H3x3_1/inv} \begin{array}{r|c||c|} & 1./{\tt eig}({\tt inv}(H)) & {\tt eig}(H*(H\backslash H)) \\ \hline \widetilde{\lambda}_1 & 1.000000000000000e+040 & 1.000000000000000e+040\\ \widetilde{\lambda}_2 & 9.600000000000000e+019 & 9.600000000000000e+019\\ \widetilde{\lambda}_3 & 9.749999999999999e-001 & 9.750000000000002e-001 \end{array}\; .$$ Similar values to those in (\[zd:eq:H3x3\_permuted\], \[zd:eq:H3x3\_1/inv\]) (up to a relative error of $O(\roff)$) are obtained either by calling ${\tt eig}((H\backslash H)*H)$ or by using the result of ${\tt eig}(D^{-2},A)$ (here we use that the standard eigenproblem $Hx=\lambda x$ is equivalent to the generalized eigenproblem $A y=\lambda D^{-2}y$ with $y=Dx$ and $A$, $D$ as in (\[eq:H=DAD-3x3\])). In view of all these numbers, what would be our best bet for the eigenvalues of $H$? Do the two smallest ones deserve to be computed better than already accepted and rationalized in Example \[zd:EX:H3x3\]? Moreover, the function `chol()` computes the lower triangular Cholesky factor $L$ of $H$ as $${\tt chol}(H)^T = \left(\begin{smallmatrix} \;\;\; 1.000000000000000e+020 & 0 & 0 \\ -2.000000000000000e+009 & 9.797958971132713e+009 & 0 \\ \;\;\; 9.999999999999999e-002 & 1.224744871391589e-001 & 9.874208829065749e-001 \end{smallmatrix}\right) ,$$ and the squared singular values of $L$ (the eigenvalues of $H$) are computed as $${\tt svd}(L).^2 = \left(\begin{matrix} 1.000000000000000e+040, & 9.600000000000002e+019, & 9.750000000000000e-001 \end{matrix}\right) .$$ Note that here the function ${\tt chol()}$, which is based on nonorthogonal transformations, correctly recognizes positive definiteness of $H$ and computes the triangular factor without difficulties. $\boxtimes$ Example \[zd:EX:H3x3\] demonstrates that even in the symmetric case, computing the eigenvalues by the state-of-the-art software tools may lead to difficulties and completely wrong results from a qualitative point of view. It is important to realize that, in the framework described in §\[SS=backward-stability\], the computed spectra (\[zd:eq:H3x3\_eig()\]) and (\[zd:eq:H3x3\_permuted\]) of $H$ are all equally good and can be justified by a backward error analysis. [ *In Example \[zd:EX:H3x3\] we specified that the results were obtained using Matlab R2010b on a Linux workstation. On a Windows 10 based machine, the function ${\tt eig()}$ from Matlab R2015a computes the eigenvalues of $H$ as $${\tt eig}(H)= \left(\begin{matrix} 1.000000000000000e+40, & 9.900074641938021e-01, & -1.929211388222242e+23 \end{matrix}\right) ,$$ and e.g. $ 1./{\tt eig}({\tt inv}(H))$ returns the same result as in (\[zd:eq:H3x3\_1/inv\]). Numerical libraries (such as LAPACK [@LAPACK], which is the computing engine for most of the numerical linear algebra functions in Matlab) are often updated with improvements with respect to numerical robustness, optimizations with respect to run time etc.
As a result, [[*the same computation may return different results (better or worse) after a mere routine software update*]{}]{}.$\boxtimes$* ]{} Of course, if the initial $H$ is given with an uncertainty $\delta_0 H$ that is only known to be small in norm ($\|\delta_0 H\|_2/\|H\|_2 \ll 1$), then we cannot hope to determine the smallest eigenvalues in the case of large condition number $\kappa_2(H)$. For instance, changing $H=\left(\begin{smallmatrix} 1 & 0 \cr 0 & \epsilon\end{smallmatrix}\right)$, $|\epsilon|\ll 1$, into $\left(\begin{smallmatrix} 1 & 0 \cr 0 & -\epsilon\end{smallmatrix}\right)$ is a small perturbation as measured in the operator norm $\|\cdot\|_2$, but it irreparably changes the smallest eigenvalue. If, however, the data is given with smaller and more structured uncertainties, and if small matrix entries are not merely noise, we ought to do better. Using the [customary]{} norm-wise backward stability statement as a universal justification for errors in the result is not enough. This is best expressed by Kahan [@Kahan-Mindless]: *“The success of Backward Error-Analysis at explaining floating-point errors has been mistaken by many an Old Hand as an excuse to do and expect no better.”* Computing the eigenvalues of positive definite matrices with high relative accuracy {#SS=posdef-accurate} ----------------------------------------------------------------------------------- Backward errors as analyzed in §\[SS=backward-stability\] are estimated in a matrix norm (usually $\|\cdot\|_2$ or $\|\cdot\|_F$). Unfortunately, a perturbation that is small in that sense may wipe out matrix entries that are in modulus much smaller than the matrix norm, and Example \[zd:EX:H3x3\] shows that the smallest eigenvalues may incur substantial damage. Demmel [@dem-92-2] showed that even for tridiagonal symmetric matrices, there are examples where tridiagonal QR iteration with any reasonable shift strategy must fail to accurately compute the smallest eigenvalues. However, some algorithms do produce better structured backward error that is gentler to small entries, even if the initial matrix has no particular structure and its entries vary over several orders of magnitude. One consequence of more structured perturbation is that the condition number changes, and that a large standard condition number $\kappa_2(H)=\|H\|_2 \|H^{-1}\|_2$ does not necessarily imply that the computed eigenvalues will have large relative errors. ### Floating point perturbations and scaled condition numbers {#zd:SSS:RelativeFLoatingPointPert} A closer look at some elementary factorizations, such as the Cholesky factorization of positive definite matrices, reveals that the standard norm-wise backward error analysis can be improved by estimating the relative errors in the individual entries. We illustrate this kind of analysis with an important example of the Cholesky factorization of real symmetric positive definite matrices.[^1] \[zd:TM:Demmel\_on\_Cholesky\](Demmel [@demmel-89-Cholesky]) Let an $n\times n$ real symmetric matrix $H$ with strictly positive diagonal entries, stored in floating point format with roundoff $\roff$, be input to the Cholesky algorithm. Let $$H=D H_s D,\; D={\rm diag}(\sqrt{H_{ii}})_{i=1}^n, \;\; ( (H_s)_{ij}= \frac{H_{ij}}{\sqrt{H_{ii}H_{jj}}})$$ and set $\bfeta_C \equiv \frac{\max\{ 3, n\}\roff }{1- 2 \max\{ 3, n\} \roff}>0$. Then: 1.
If $\lambda_{\min}(H_s) > n \bfeta_C$, then the algorithm computes a lower triangular matrix $\widetilde{L}$ such that $\widetilde{L} \widetilde{L}^T=H+\delta H$, and for all $i,j=1,\ldots, n$ the backward error $\delta H$ can be bounded by $$|\delta H_{ij}| \leq \bfeta_C \sqrt{H_{ii} H_{jj}}.$$ Thus, [$H\!+\!\delta H \!\!=\!\! D(H_s\! +\! \delta H_s)D$,]{} [where $\delta H_s=D^{-1}\delta H D^{-1}$ satisfies]{} $\max_{i,j}|(\delta H_s)_{ij}|\leq \bfeta_C\approx n\roff$. 2. If $\lambda_{\min}(H_s) < \roff$, then there exists a sequence of simulated rounding errors that will cause the failure of the Cholesky algorithm. 3. If $\lambda_{\min}(H_s) \leq - n \bfeta_C$, then the Cholesky algorithm will fail in floating point arithmetic with roundoff $\roff$. Note that Theorem \[zd:TM:Demmel\_on\_Cholesky\] does not assume that the matrix stored in the machine memory is positive definite. Indeed, it can happen that we know a priori that our problem formulation delivers a positive definite matrix, but the matrix actually stored in the computer memory is not definite, due to rounding errors. The following simple example illustrates this. \[zd:EX:StifnessIndefinite\] [ *(Cf. [@dgesvd-99 §11.1]) Consider the stiffness matrix of a mass spring system with $3$ masses attached to a wall, with spring constants $k_1=k_3=1$, $k_2=\roff/2$: $$\label{eq:mass-spring-3x3-H} {\boxminus\!\!\leftrightsquigarrow\!\!\blacksquare\!\!\leftrightsquigarrow\!\! \blacksquare\!\!\leftrightsquigarrow\!\!\blacksquare}\;\;\;\; H = \begin{pmatrix} k_1+k_2 & -k_2 & 0 \cr -k_2 & k_2+k_3 & -k_3\cr 0 & -k_3 & k_3 \end{pmatrix}, \;\;{\lambda_{\min}(H)\approx \roff/4}.$$ Here $\roff$ denotes the roundoff unit ([eps]{}) in Matlab, so that $1+\roff/2$ is computed and stored as exactly $1$. The true and the computed assembled stiffness matrix are, respectively, $$\label{eq:mass-spring-3x3-tildeH} H = \begin{pmatrix} 1 + \frac{\roff}{2} & -\frac{\roff}{2} & 0 \cr -\frac{\roff}{2} & 1 + \frac{\roff}{2} & -1 \cr 0 & -1 & 1\end{pmatrix},\;\; \widetilde{H} = \begin{pmatrix}1 & -\frac{\roff}{2} & 0 \cr -\frac{\roff}{2} & 1 & -1 \cr 0 & -1 & 1\end{pmatrix}.$$ It is important to note here that the stored matrix $\widetilde{H}$ is component–wise close to $H$ with $${\displaystyle {|\widetilde{H}_{ij}-H_{ij}|}\leq \frac{\roff}{(2+\roff)} {|H_{ij}|} < \frac{\roff}{2} {|H_{ij}|}}\;\;\mbox{for all}\;\;i,j.$$ The matrix $H$ is by construction positive definite, whilst $\mathrm{det}(\widetilde{H})=-\roff^2/4$, and the smallest eigenvalue of $\widetilde{H}$ can be estimated as $\lambda_{\min}(\widetilde{H})\approx -\roff^2/8$. Hence, even computing the eigenvalues of $\widetilde{H}$ exactly could not provide any useful information about the smallest eigenvalue of $H$. The message of this example is: Once the data has been stored in the machine memory, the smallest eigenvalues can be so irreparably damaged that even exact computation cannot restore them. It is not hard to imagine how sensitive and fragile the computation of the smallest eigenvalues of $H$ can be when $\kappa_2(H)$ is large and the dimension of $H$ is in the tens or hundreds of thousands, as e.g. in the case of discretizing elliptic boundary value problems $\nabla\cdot (\bfa \nabla u)=f$ on $\Omega$, $u=g$ on the boundary of $\Omega$, where the scalar coefficient field $\bfa$ on $\Omega$ varies over several orders of magnitude, see e.g. [@vavasis-96-FEMWC]. 
$\boxtimes$* ]{} [ *Interestingly, if we consider assembling $H$ as in (\[eq:mass-spring-3x3-H\]) as a mapping $(k_1,k_2,k_3)\mapsto H$, then the computation of $\widetilde{H}$ as in (\[eq:mass-spring-3x3-tildeH\]) is not backward stable. There is no choice of stiffnesses $\widetilde{k}_1, \widetilde{k}_2$, $\widetilde{k}_3$ that would assemble (in exact arithmetic) to $\widetilde{H}$ that corresponds to three masses connected with springs as illustrated in (\[eq:mass-spring-3x3-H\]). The reason is the indefiniteness of $\widetilde{H}$. On the other hand, the computation of $\widetilde{H}$ is perfectly forward stable. $\boxtimes$* ]{} ### Characterization of well-behaved positive definite matrices {#SSS=well-behaved-PD} In some cases, the perturbation can be represented in the multiplicative form, i.e. $H+\delta H = (I+E) H (I+E)$ where $E$ can be bounded using the structure of $\delta H$. At the core of eigenvalue perturbation estimates is then Theorem \[zd:TM:Ostrowski\] below. For a detailed study, we refer to [@eis-ips-95] and [@Ren-Cang-Li-98-I], [@Ren-Cang-Li-98-II]. \[zd:TM:Ostrowski\](Ostrowski [@ost-59]) Let $H$ and $\widetilde{H}=Y^* H Y$ be Hermitian matrices with eigenvalues $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$ and $\widetilde{\lambda}_1\geq\widetilde{\lambda}_2\geq \cdots\geq\widetilde{\lambda}_n$, respectively. Then, for all $i$, $$\widetilde{\lambda}_i =\lambda_i \xi_i,\;\;\quad \mbox{where }\quad \lambda_{\min}(Y^*\! Y)\leq \xi_i\leq \lambda_{\max}(Y^*\! Y).$$ The following theorem illustrates how the backward error of the structure as in Theorem \[zd:TM:Demmel\_on\_Cholesky\], combined with Theorem \[zd:TM:Ostrowski\], yields a sharp bound on the relative error in the eigenvalues. \[zd:TM:PertScaledPD\] Let $\lambda_1\geq\cdots\geq\lambda_n$ and $\widetilde{\lambda}_1\geq\cdots\geq\widetilde{\lambda}_n$ be the eigenvalues of $H=LL^T$ and of $\widetilde H=H+\delta H=\widetilde L\widetilde L^T$, respectively. If $\|L^{-1}\delta HL^{-*}\|_2 < 1$, then $$\label{eq:dlambda} \max_i \left|\frac{\widetilde{\lambda}_i -\lambda_i}{\lambda_i}\right| \leq {\| H_s^{-1}\|_2} {\left\| \left[ \frac{\delta H_{ij}}{\sqrt{H_{ii}H_{jj}}}\right]_{i,j=1}^n \right\|_2},$$ [where $H_s$ is defined as in the statement of Theorem \[zd:TM:Demmel\_on\_Cholesky\]]{}. (Recall the classical Weyl’s theorem: $\max_{i}\left|\frac{\widetilde{\lambda}_i -\lambda_i}{\lambda_i}\right| \leq \kappa_2(H) \frac{\|\delta H\|_2}{\|H\|_2}$.) *Proof:* Let $Y=\sqrt{I+L^{-1}\delta HL^{-*}}$. Then $ H+\delta H = L(I+L^{-1}\delta HL^{-*})L^* = LYY^* L^*$ is similar to $Y^* L^* LY$, and we can equivalently compare the eigenvalues $\lambda_i(L^* L)=\lambda_i(H)$ and $\lambda_i(Y^* L^* LY)=\lambda_i(H+\delta H)$. Now recall Ostrowski’s theorem: If $\widetilde M = Y^* M Y$, then, for all $i$, $\lambda_i(\widetilde M)=\lambda_i(M)\xi_i$, where $\lambda_{\min}(Y^*\! Y)\leq \xi_i\leq \lambda_{\max}(Y^*\! Y)$. Since $Y^*\! Y=I + L^{-1}\delta HL^{-*}$, we have [$|\lambda_i(H)-\lambda_i(\widetilde H)|\leq \lambda_i(H) \|L^{-1}\delta H L^{-*}\|_2$]{}, with $$\begin{aligned} \|L^{-1}\delta H L^{-*}\|_2&=&\|L^{-1}D( {D^{-1}\delta H D^{-1}}) D L^{-*}\|_2=\|L^{-1} D ({\delta H_s}) D L^{-*}\|_2\\ &\leq& \|L^{-1} D\|_2^2 \|\delta H_s\|_2=\|D L^{-*}L^{-1}D\|_2 \|{\delta H_s}\|_2\\ &=& \| {(D^{-1} H D^{-1})^{-1}}\|_2 \|\delta H_s\|_2 = \| {H_s^{-1}}\|_2 \|\delta H_s\|_2,\end{aligned}$$ [where we have denoted $\delta H_s=D^{-1}\delta H D^{-1}$ as in Theorem \[zd:TM:Demmel\_on\_Cholesky\]]{}. 
The claim (\[eq:dlambda\]) follows since ${\left(\delta H_s\right)_{ij}}=\delta H_{ij}/\sqrt{H_{ii}H_{jj}}$. $\boxplus$ Since $\| H_s^{-1}\|_2 \leq \kappa_2(H_s)$, we see that, in essence, we have replaced the spectral condition $\kappa_2(H)$ with $\kappa_2(H_s)$, which behaves much better – it is never much larger and it is potentially much smaller. In fact, $\|H_s^{-1}\|_2\leq \frac{n}{\|H_s\|_2}\min_{D=\mathrm{diag}}\kappa_2(D H D)$. This claim is based on the following theorem. \[TM:VanDerSuis\] (Van der Sluis [@slu-69]) Let $H$ be a positive definite Hermitian matrix, $\Delta=\mathrm{diag}(\sqrt{H_{ii}})$ and $H_s = \Delta^{-1}H\Delta^{-1}$. Then $ \kappa_2(H_s)\leq n \min_{D=diag}\kappa_2(D H D), $ [where the minimum is taken over all possible diagonal scalings $D$]{}. \[EX:cond-scond\][ *If we consider the matrix in Example \[zd:EX:H3x3\], we see that if $A\equiv H_s$, then $\|H_s^{-1}\|<1.4$ and $\kappa_2(H_s) < 1.7$. This means that an algorithm with backward perturbation $\delta H$ of the form described in Theorem \[zd:TM:PertScaledPD\] may compute all eigenvalues of $H$ to nearly full machine precision (standard double precision with machine roundoff $\roff \approx 10^{-16}$) despite the fact that $\kappa_2(H)>10^{40}$.*]{} $\boxtimes$ \[REM:scond-unit-i\][*[ The condition number $\kappa_2(H)$ is unitarily invariant: $\kappa_2(W^* H W)=\kappa_2(H)$ for any unitary matrix $W$. On the other hand, $\kappa_2((W^* H W)_s)$ can increase by a large factor. For instance, it is well known that there is a unitary $W$ such that $W^* H W$ has constant diagonal and thus (because of the homogeneity of the condition number) $\kappa_2((W^* HW)_s)=\kappa_2(W^* H W)=\kappa_2(H)$, which can be much bigger than $\kappa_2(H_s)$, as illustrated in Example \[EX:cond-scond\]. ]{} $\boxtimes$* ]{} The number $\|H_s^{-1}\|_2$ can be interpreted geometrically in terms of the inverse distance to singularity, as measured with respect to entry-wise perturbations. (Demmel [@demmel-89-Cholesky]) Let $H=D H_s D$, where $D={\rm diag}(\sqrt{H_{ii}})_{i=1}^n$, and let $\lambda_{\min}(H_s)$ be the minimal eigenvalue of $H_s$. If $\delta H$ is a symmetric perturbation such that $H+\delta H$ is not positive definite, then $${\displaystyle \max_{1\leq i,j\leq n}\frac{|\delta H_{ij}|}{\sqrt{H_{ii}H_{jj}}} \geq \frac{\lambda_{\min}(H_s)}{n}=\frac{1}{n\|H_s^{-1}\|_2}}.$$ If $\delta H = -\lambda_{\min}(H_s) D^2$, then ${\displaystyle \max_{i,j}\frac{|\delta H_{ij}|}{\sqrt{H_{ii}H_{jj}}}=\lambda_{\min}(H_s)}$ and $H+\delta H$ is singular. This means that in the case where $\|H_s^{-1}\|_2 > 1/\roff$, small entry-wise perturbations can cause $H$ to lose definiteness. Hence, if we assume no additional structure (such as sparsity pattern or sign distribution), a positive definite Hermitian matrix in floating point can be considered numerically positive definite only if $\|H_s^{-1}\|_2$ is moderate (below $1/\roff$). The following two results further fortify this statement.
\[TM-SPD\] (Veselić and Slapničar [@ves-sla-93]) Let $H=D H_s D$, [where $D={\rm diag}(\sqrt{H_{ii}})_{i=1}^n$]{}, be positive definite and let $c>0$ be a constant such that for all $\epsilon \in (0,1/c)$ and for all symmetric perturbations $\delta H$ with $|\delta H_{ij}|\leq\epsilon |H_{ij}|$, $1\leq i,j\leq n$, the ordered eigenvalues $\lambda_i$ and $\widetilde{\lambda}_i$ of $H$ and $H+\delta H$ satisfy ${\displaystyle \max_{1\leq i\leq n} \frac{|\widetilde{\lambda}_i - \lambda_i|}{\lambda_i} \leq c \epsilon.}$ Then ${\displaystyle \|H_s^{-1}\|_2 < (1+c)/2}$. \[COR-SPD\](Demmel and Veselić [@dem-ves-92]) Let $H$ be $n\times n$ positive definite and $\delta H = \eta D^2$, with any $\eta\in (0,\lambda_{\min}(H_s))$ and $D={\rm diag}(\sqrt{H_{ii}})_{i=1}^n$. Then for some index $\ell$ it holds that $$\frac{\widetilde{\lambda}_{\ell}}{\lambda_{\ell}} \geq \sqrt[n]{1 + \eta \| H_s^{-1} \|_2}\equiv \sqrt[n]{1 + \max_{i,j}\frac{|\delta H_{ij}|}{\sqrt{H_{ii}H_{jj}}} \| H_s^{-1} \|_2} \approx 1 + \frac{\| H_s^{-1} \|_2}{n} \max_{i,j}\frac{|\delta H_{ij}|}{\sqrt{H_{ii}H_{jj}}} .$$ Essentially, if we have no additional structure (e.g. sparsity or sign pattern), accurate computation of all eigenvalues of Hermitian positive definite matrices in floating point is feasible if and only if $\|H_s^{-1}\|_2$ is moderate (as compared to $1/\roff$). In that case, allowing that the entries of $H$ are known up to small relative errors, computing the Cholesky factorization $H+\delta H=\widetilde{L}\widetilde{L}^*$ and working with $H$ in factored form via the computed factor $\widetilde{L}\approx L$ transforms the problem into the one of computing the SVD of the computed triangular factor. Since $\kappa_2(\widetilde{L})\approx \sqrt{\kappa_2(H)}$, the major part of the error is in the Cholesky factorization, as described in Theorem \[zd:TM:Demmel\_on\_Cholesky\] and Theorem \[zd:TM:PertScaledPD\]. ### Symmetric Jacobi algorithm for positive definite matrices {#SSS=symm-jac-pd} Demmel and Veselić [@dem-ves-92] proved that the symmetric Jacobi algorithm, when applied to the positive definite $H=H^T \in\mathbb{R}^{n\times n}$, produces backward errors that allow for direct application of Theorem \[zd:TM:PertScaledPD\] at each iteration. To clarify, consider the commutative diagram of the entire process in Figure \[zd:FIG-symmj\].
*Figure \[zd:FIG-symmj\]: commutative diagram of the Jacobi process in finite precision. Each computed iterate is obtained by an exact unitary similarity applied to a perturbed matrix, $\widetilde{H}^{(k+1)} = (\widehat{U}^{(k)})^* (\widetilde{H}^{(k)}+\delta\widetilde{H}^{(k)})\widehat{U}^{(k)}$, starting from $H=\widetilde{H}^{(1)}$ and ending with $\widetilde{H}^{(k_\star)} = \widetilde{\Lambda} + \Omega(\widetilde{H}^{(k_\star)})$.* Backward error analysis shows that for each iteration index $k$, $$\label{eq:jac-rel-back-err} |(\delta\widetilde{H}^{(k)})_{ij}|\leq O(\roff)\sqrt{(\widetilde{H}^{(k)})_{ii}(\widetilde{H}^{(k)})_{jj}}, \;\; 1\leq i,j\leq n,$$ which means that the relative error introduced in the $k$th step is governed by $\|(\widetilde{H}^{(k)})_s^{-1}\|_2$. (Here, $(\widetilde{H}^{(k)})_s$ is defined analogously to $H_s$ in the statement of Theorem \[zd:TM:Demmel\_on\_Cholesky\], and we apply Theorem \[zd:TM:PertScaledPD\].) The iterations are stopped at the first index $k_{\star}$ for which $$|(\widetilde{H}^{(k_{\star})})_{ij}|\leq O(\roff)\sqrt{(\widetilde{H}^{(k_{\star})})_{ii}(\widetilde{H}^{(k_{\star})})_{jj}}, \;\; 1\leq i\neq j\leq n,$$ so that setting $(\widetilde{H}^{(k_{\star})})_{ij}$ to zero induces the perturbation $(\delta\widetilde{H}^{(k_{\star})})_{ij}=-(\widetilde{H}^{(k_{\star})})_{ij}$ of the type (\[eq:jac-rel-back-err\]). The overall accuracy depends on $\bfmu(H)=\max_{1\leq k\leq k_{\star}}\|(\widetilde{H}^{(k)})_s^{-1}\|_2$, which in practice is never much larger than $\|H_s^{-1}\|_2$. However, a formal proof of this remains an interesting open problem; for some discussion on this issue see [@mas-94], [@drm-96-conbeh]. An algorithm that computes the eigenvalues of any positive definite Hermitian $H$ to the accuracy determined by $\|H_s^{-1}\|_2$ (independent of $\bfmu(H)$) is given in §\[SSS=Impl-Jacobi-eig\]. Implicit representation of positive definite matrices {#SS=implicit-pd} ----------------------------------------------------- In Example \[zd:EX:StifnessIndefinite\], $H\approx H_s$ and $\|H_s^{-1}\|_2\approx 1/O(\roff)$, and, by Theorem \[TM:VanDerSuis\], no diagonal scaling can substantially reduce its high condition number. One could argue that $H$ is ill-conditioned and that its smallest eigenvalue is not well determined by the data, i.e. by the matrix entries $H_{ij}$, and that it cannot be computed to any digit of accuracy. Indeed, the smallest eigenvalue has been lost at the very moment of storing the positive definite $H$ into the machine memory as the indefinite matrix $\widetilde{H}$, due to small relative changes (of the size of the machine roundoff unit) in the entries $H_{ij}$. Hence, not even the exact computation with $\widetilde{H}$ could restore the information on the smallest eigenvalue of $H$.
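A short Matlab check makes this concrete (a sketch; `eps` denotes the Matlab roundoff unit, playing the role of $\roff$). Assembling the stiffness matrix of (\[eq:mass-spring-3x3-H\]) in double precision already produces the indefinite matrix $\widetilde{H}$ of (\[eq:mass-spring-3x3-tildeH\]), and the Cholesky factorization fails:

```matlab
% Assembling the mass-spring stiffness matrix in double precision (a sketch).
k = [1, eps/2, 1];                      % spring stiffnesses k1, k2, k3
H = [ k(1)+k(2)  -k(2)        0
     -k(2)        k(2)+k(3)  -k(3)
      0          -k(3)        k(3) ];   % k(1)+k(2) already rounds to 1
[R, p] = chol(H);                       % two-output form does not throw an error
p                                       % p > 0: not numerically positive definite
```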
However, one may ask what data is actually given in this problem, and then argue that the data are the material properties (the stiffnesses $k_1, k_2, k_3$ of the springs) and the structure of the connections between the springs (adjacency). In fact, the stiffness matrix is usually assembled based on that information. In other words, $H$ can be written in a factored form as $$\begin{aligned} H &=& \begin{pmatrix} 1 & -1 & 0 \cr 0 & 1 & -1 \cr 0 & 0 & 1\end{pmatrix} \begin{pmatrix} k_1 & 0 & 0 \cr 0 & k_2 & 0 \cr 0 & 0 & k_3\end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \cr -1 & 1 & 0 \cr 0 & -1 & 1 \end{pmatrix} \equiv B^T \mathrm{diag}(k_i)_{i=1}^3 B \label{eq:GTG1}\\ &=& \begin{pmatrix} \sqrt{k_1} & -\sqrt{k_2} & 0 \cr 0 & \sqrt{k_2} & -\sqrt{k_3}\cr 0 & 0 & \sqrt{k_3}\end{pmatrix} \begin{pmatrix} \sqrt{k_1} & 0 & 0 \cr -\sqrt{k_2} & \sqrt{k_2} & 0 \cr 0 & -\sqrt{k_3} & \sqrt{k_3} \end{pmatrix} = G^T G ,\;\; G = \mathrm{diag}(\sqrt{k_i})_{i=1}^3 B, \label{eq:GTG2} \end{aligned}$$ thus clearly separating the adjacency from the material properties. [Furthermore, the SVD of the bidiagonal matrix $G$ can be computed to full machine precision [@dem-kah-90], and once a high-accuracy SVD $G=U\Sigma V^T$ of $G$ is obtained, then $H = V \Sigma^2 V^T$ is the spectral decomposition of $H$.]{} If each $k_i$ is given with an initial uncertainty as $\widetilde{k}_i=k_i(1+\delta k_i/k_i)$, $\max_i|\delta k_i/k_i|\ll 1$, then in this factored representation we operate on $$\widetilde{G} = \begin{pmatrix} \sqrt{1+\delta k_1/k_1} & 0 & 0 \cr 0 & \sqrt{1+\delta k_2/k_2} & 0 \cr 0 & 0 & \sqrt{1+\delta k_3/k_3}\end{pmatrix} \begin{pmatrix} \sqrt{k_1} & 0 & 0 \cr -\sqrt{k_2} & \sqrt{k_2} & 0 \cr 0 & -\sqrt{k_3} & \sqrt{k_3} \end{pmatrix} ,$$ that is, $\widetilde{G}=(I+\Gamma)G$, where $\|\Gamma\|_2 \leq 0.5 \max_{i}|\delta k_i/k_i|$. By [@dem-kah-90] (see also the proof of Theorem \[zd:TM:PertScaledPD\] and Theorem \[zd:TM:Eisenstat\_Ipsen\]) we know that the singular values of $G$ (and also the eigenvalues of $H$) are determined to nearly the same number of digits to which the coefficients $k_i$ are given, and that we can provably compute the singular values to that accuracy. This is in sharp contrast with the situation illustrated in Example \[zd:EX:StifnessIndefinite\]. Hence, a different representation of the same problem is now perfectly well suited for numerical computations. The key for computing the eigenvalues of $H$ accurately is not to build $H$ at all, and to work directly with the parameters of the original problem. This example raises an issue that is well known in the numerical linear algebra community, namely, that it is always advantageous to work with positive definite matrices [*implicitly*]{}. Since each positive definite matrix $H$ can be written as $H = A^* A$ with infinitely many choices for the full column rank (and in general rectangular) matrix $A$, we may find that in our specific situation such a factor is actually available. Here $A$ is not necessarily the Cholesky factor $L$; it need not even be square. Let us briefly comment on a few well-known examples. - The solution of the linear least squares problem $\|Ax-b\|_2\longrightarrow \min$ with real full column rank $A$ can be computed from the normal equations $A^T A x = A^T b$, but it is well known to numerical analysts that this is not a good idea because $H=A^T A$ satisfies $\kappa_2(H)=\kappa_2(A)^2$, see e.g. [@book-bjorck §2.1.4].
In other words, if $A$ is $\epsilon=1/\kappa_2(A)$-close to singularity, then $H$ is $\epsilon^2$-close to some singular matrix. Notice that the positive definite matrix $H=A^T A$ is just an auxiliary object, not part of the initial data ($A$, $b$), and that it has been invoked by the analytical characterization of the optimal $x$, where solving $Hx=A^T b$ is considered simple since $H$ is positive definite. It turns out that in this case it is numerically advantageous to proceed by using the QR factorization with pivoting (or the SVD) of $A$, and never to form and use $H$. - Let $A$ be a Hurwitz-stable matrix [(i.e., all its eigenvalues lie in the open left half-plane)]{} and suppose the matrix pair $(A,B)$ is controllable. Computing the positive definite solution $H$ (controllability Gramian) of the Lyapunov matrix equation $A H + H A^* = -BB^*$ is difficult in the ill-conditioned cases and any numerical algorithm may fail to compute, in finite precision, the positive definite solution matrix $H$. Namely, if the solution $H$ is ill-conditioned, then it is close to the boundary of the cone of positive definite matrices and $H+\delta H$ may become semidefinite or indefinite, even for small forward error $\delta H$. If the algorithm implicitly uses the assumed definiteness, it may fail to run to completion (analogously to the failure of the Cholesky decomposition of a matrix that is not numerically positive definite). It is better to solve the equation with the Cholesky factor $L$ of $H$ as the new unknown ($H=LL^*$, $L$ lower triangular with positive diagonal). Such an approach was first advocated by Hammarling [@ham-82], and later improved by Sorensen and Zhou [@Sorensen_Zhou:2003:DirectSylvLyap]. In this case, $L$ defines $H$ implicitly and $H=LL^*$ is positive definite. Also, since $\kappa_2(L)=\sqrt{\kappa_2(H)}$, the computed Cholesky factor $\widetilde{L}\approx L$ is more likely to be well-conditioned and the implicitly defined solution $\widetilde{H}=\widetilde{L}\widetilde{L}^*$ is positive definite. Further, the Hankel singular values discussed in §\[SS:scaling-Hankel-SVD\] can be computed directly from the Cholesky factors of the two Gramians, see §\[S=PSVD\]. - In finite element computations, the symmetric positive definite $n\times n$ stiffness matrix $H$ can be assembled by factors – the assembly process can be rewritten to produce a (generally, rectangular) matrix $A$ such that $H=A^T A$. This is the so-called *natural factor formulation*, see Argyris [@argyris-naturalFEM-75]. Such a formulation naturally leads to the Generalized Singular Value Decomposition (GSVD) introduced by Van Loan [@van-Loan-GSVD]. If e.g. $H_{ij}=\int_{a}^b \rho(x)\phi_i(x)\phi_j(x)dx$, $1\leq i,j\leq n$, and the integrals are evaluated by a quadrature formula $H_{ij}\approx \sum_{k=1}^m \omega_k \rho(x_k)\phi_i(x_k)\phi_j(x_k)$, then $H \approx A^T A$, where $A_{kj} = \sqrt{\omega_k \rho(x_k)}\phi_j(x_k)$, $1\leq k\leq m$, $1\leq j\leq n$. Note that $A = D \Phi$, where $D=\mathrm{diag}(\sqrt{\omega_k \rho(x_k)})_{k=1}^m$ and $\Phi_{kj}=\phi_j(x_k)$. Not only $\kappa_2(A)=\sqrt{\kappa_2(H)}$, but the most likely source of extreme ill-conditioning in $A$ (and thus in $H$) is clearly [isolated within]{} the diagonal matrix $D$. If an algorithm can exploit this and compute with a scaling invariant condition number, then the essential and true condition number is that of $\Phi$, which depends on the choice of the basis functions $\phi_i(\cdot)$.
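The first of these bullet points is easy to visualize numerically. In the following Matlab sketch (with a contrived $2\times 2$ test matrix, not taken from any application), forming $H=A^TA$ explicitly rounds the $(2,2)$ entry $1+10^{-20}$ to $1$, so the smaller singular value is lost, whereas computed from $A$ itself it is obtained with small relative error:

```matlab
% Normal equations lose the small singular values (a 2 x 2 sketch).
A = [1  1
     0  1e-10];
svd(A)                 % ~ [1.4142e+00; 7.0711e-11]
H = A'*A;              % the (2,2) entry 1 + 1e-20 rounds to 1
sqrt(max(eig(H), 0))   % the small singular value is lost completely
```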
We conclude by noting that in all these examples, eigenvalue computations with $H$ can be equivalently done implicitly via the or the of $A$. [Computing the]{} {#S=SVD} ================== Let $A$ be an $m\times n$ complex matrix. Without loss of generality, we assume $m\geq n$. The $A=U\left(\begin{smallmatrix}\Sigma \cr 0\end{smallmatrix}\right)V^*$ implicitly provides the spectral decompositions of $A^* A$ and $AA^*$, and it is the tool of the trade in matrix computations and applications. Numerical algorithms for computing the are implicit formulations of the diagonalization methods for the Hermitian matrices. In the same way, the perturbation theory for the is derived from the variational principles, and the error estimates of the computed singular values are derived from the combination of the backward error theory and the following classical result: \[TM:SVD-Weyl\] Let the singular values of $A$ and $A+\delta A$ be $\sigma_1 \geq \cdots \geq\sigma_{\min(m,n)}$ and $\widetilde{\sigma}_1 \geq \cdots \geq\widetilde{\sigma}_{\min(m,n)}$, respectively. Then the distances between the corresponding singular values are estimated by a Weyl type bound $$\max_{i}|\widetilde{\sigma}_i-\sigma_i| \leq \| \delta A\|_2 .$$ Further, the Wieland-Hoffman theorem yields ${\displaystyle \sqrt{\sum_{i=1}^{\min(m,n)}|\widetilde{\sigma}_i-\sigma_i|^2}\leq \|\delta A\|_F .} $ To estimate the relative errors $|\widetilde{\sigma}_i - \sigma_i|/\sigma_i$, one uses perturbation in multiplicative form: \[zd:TM:Eisenstat\_Ipsen\] (Eisenstat and Ipsen, [@eis-ips-95]) Let $\sigma_1\geq\cdots\geq\sigma_n$ and $\widetilde{\sigma}_1\geq\cdots\geq\widetilde{\sigma}_n$ be the singular values of $A$ and $A+\delta A$, respectively. Assume that $A+\delta A$ can be written in the form of multiplicative perturbation $A+\delta A = \Xi_1 A \Xi_2$ and let $\xi=\max\{\|\Xi_1\Xi_1^T - I\|_2, \|\Xi_2^T\Xi_2-I\|_2\}$. Then $$|\widetilde{\sigma}_i - \sigma_i| \leq \xi \sigma_i,\;\;i=1,\ldots, n.$$ For relative perturbation theory for the singular values and the singular vectors see [@Ren-Cang-Li-98-I], [@Ren-Cang-Li-98-II], and for an excellent review see [@Ipsen:1998:RPR]. [We now describe and analyze three families of algorithms, highlighting their different properties regarding high-accuracy computations. For more details and further references see [@golub-vnl-4 §8.6], [@hogben14 Ch. 
58].]{} Bidiagonalization-based methods {#SS=bidiagSVD} ------------------------------- Tridiagonalization (\[eq:tridiagonal\]) of $H=A^*A$ can be achieved implicitly by reducing $A$ to bidiagonal form [@gol-kah-65] $$\label{eq:bidiag} U_1^* A V_1 = \begin{pmatrix} B \cr 0 \end{pmatrix} ,\;\;U_1, V_1 \;\; \mbox{unitary},\;\;B=\left(\begin{smallmatrix} \alpha_1 & \beta_1 & & \cr & \alpha_2 & \ddots & & \cr & & \ddots & \beta_{n-1} \cr & & & \alpha_n\end{smallmatrix}\right).$$ The bidiagonalization process can be illustrated as follows [(as in §\[ss=Classical-Methods\] above, $\star$ denotes an entry which has already been modified by the algorithm and set to its final value)]{}: $$\begin{aligned} A^{(1)} &=& U_{(1)}^T A = \left(\begin{smallmatrix}\star & \times & \times & \times\cr 0 & \times & \times & \times\cr 0 & \times & \times & \times\cr 0 & \times & \times & \times\end{smallmatrix}\right),\;\; A^{(2)} = A^{(1)} V_{(1)} = \left(\begin{smallmatrix} \star & \star & 0 & 0\cr 0 & \times & \times & \times \cr 0 & \times & \times & \times \cr 0 & \times & \times & \times \end{smallmatrix}\right),\;\; A^{(3)} =U_{(2)}^T A^{(2)} = \left(\begin{smallmatrix} \star & \star & 0 & 0 \cr 0 & \star & {\times} & {\times}\cr 0 & 0 & \times & \times \cr 0 & 0 & \times & \times\end{smallmatrix}\right), \cr A^{(4)}&=&A^{(3)} V_{(2)} = \left(\begin{smallmatrix} \star & \star & 0 & 0 \cr 0 & \star & \star & 0\cr 0 & 0 & \times & \times \cr 0 & 0 & \times & \times\end{smallmatrix}\right),\;\; A^{(5)}= U_{(3)}^* A^{(4)} = \left(\begin{smallmatrix} \star & \star & 0 & 0 \cr 0 & \star & \star & 0\cr 0 & 0 & \star & \star \cr 0 & 0 & 0 & \star\end{smallmatrix}\right) = B = (U_{(3)}^* U_{(2)}^* U_{(1)}^*) A (V_{(1)} V_{(2)}) .\end{aligned}$$ Here $U_{(k)}$ and $V_{(k)}$ denote suitably constructed Householder reflectors, and their accumulated products form the matrices $U_1$ and $V_1$ in (\[eq:bidiag\]). In the next step, the of the bidiagonal $B = U_2 \Sigma V_2^*$, can be computed by several efficient and elegant algorithms that implicitly work on the tridiagonal matrix $B^* B$; see e.g. [@dem-kah-90], [@mgu-eis-95], [@GROSSER200345]. Combining the of $B$ with the bidiagonalization (\[eq:bidiag\]) yields the of $A$: $$A = U_1 \begin{pmatrix} U_2 & 0 \cr 0 & I \end{pmatrix} \begin{pmatrix} \Sigma \cr 0 \end{pmatrix} (V_1 V_2)^* \equiv U \begin{pmatrix} \Sigma \cr 0 \end{pmatrix} V^* ,\;\;\Sigma = \left(\begin{smallmatrix} \sigma_1 & \cr & \ddots & \cr & & \sigma_n\end{smallmatrix}\right),\;\;\sigma_1\geq\cdots\geq\sigma_n.$$ Since only unitary transformations are involved, we can prove existence of a backward error $\delta A$ and unitary matrices $\widehat{U}_1$, $\widehat{V}_1$ such that the computed matrices $\widetilde{U}_1$, $\widetilde{V}_1$, $\widetilde{B}$ satisfy $\widetilde{U}_1\approx \widehat{U}_1$, $\widetilde{V}_1\approx\widehat{V}_1$ ($\widetilde{U}_1$, $\widetilde{V}_1$ are numerically unitary) and $$\label{eq:bidiagSVD} A+\delta A = \widehat{U}_1\begin{pmatrix} \widetilde{B} \cr 0 \end{pmatrix}\widehat{V}_1^* ,\;\;{\|\delta A\|_F} \leq \epsilon_1 {\|A\|_F}.$$ It is important to know to what extent the singular values of $\widetilde{B}$ approximate the singular values of $A$. 
To that end, we invoke classical perturbation theory: if $\sigma_1(\widetilde{B})\geq\cdots\geq \sigma_{n}(\widetilde{B})$ are the singular values of $\widetilde{B}$ then applying Theorem \[TM:SVD-Weyl\] to (\[eq:bidiagSVD\]) yields $$\label{eq:dsigma(B)} (i)\;\;\max_{i}|\sigma_i(\widetilde{B}) - \sigma_i| \leq \|\delta A\|_2;\;\;\;\;\;\;(ii)\;\;\sqrt{\sum_{i=1}^{n}|{\sigma}_i(\widetilde{B})-\sigma_i|^2}\leq \|\delta A\|_F \leq \epsilon_1 \|A\|_F.$$ The computation of the of $\widetilde{B}$ is also backward stable. If $\widetilde{\Sigma}$ is the diagonal matrix of the computed singular values, then, with some unitary matrices $\widehat{U}_2$, $\widehat{V}_2$ and some backward error $\delta \widetilde{B}$ we have $$\label{ex:SVD(B)} \widetilde{B}+\delta\widetilde{B} = \widehat{U}_2\widetilde{\Sigma}\widehat{V}_2^* ,\;\; {\|\delta \widetilde{B}\|_F} \leq \epsilon_2 {\|\widetilde{B}\|_F}.$$ Both $\epsilon_1$ and $\epsilon_2$ are bounded by the roundoff $\roff$ times modestly growing functions of the dimensions. The composite backward error of (\[eq:bidiagSVD\]) and (\[ex:SVD(B)\]) can therefore be written as $$\label{dA-composite} A + \underbrace{\delta A + \widehat{U}_1 \begin{pmatrix} \delta\widetilde{B} \cr 0 \end{pmatrix}\widehat{V}_1^*}_{{\Delta A}} = \widehat{U}_1 \begin{pmatrix} \widehat{U}_2 & 0 \cr 0 & I \end{pmatrix} \begin{pmatrix} \widetilde{\Sigma} \cr 0 \end{pmatrix} (\widehat{V}_1\widehat{V}_2)^* \equiv \widehat{U}\begin{pmatrix} \widetilde{\Sigma} \cr 0 \end{pmatrix} \widehat{V}^* ,$$ and the backward error is bounded in matrix norm as $$\|{\Delta A}\|_F \leq \epsilon_1 \|A\|_F + \epsilon_2 \|\widetilde{B}\|_F \leq (\epsilon_1 + \epsilon_2 + \epsilon_1\epsilon_2) \|A\|_F.$$ This is the general scheme of a bidiagonalization-based method. Depending on the method for computing the bidiagonal , stronger statements are possible. For instance, if the of $\widetilde{B}$ is computed with the zero-shift method [@dem-kah-90], then all singular values of $\widetilde{B}$ (including the tiniest ones) can be computed to nearly full machine precision: if $\widetilde{\sigma}_1\geq\cdots\geq\widetilde{\sigma}_n$ are the computed values, then $|\widetilde{\sigma}_i - \sigma_i(\widetilde{B})| \leq O(n)\roff \sigma_i(\widetilde{B})$ for all $i$, and the essential part of the error $\widetilde{\sigma}_i - \sigma_i$ is committed in the bidiagonalization, so it is bounded in (\[eq:dsigma(B)\]). Note that, assuming $A$ is of full rank and using (\[eq:bidiagSVD\]), $$\label{dsigma-bidiag} \max_i \frac{|\sigma_i(\widetilde{B})-\sigma_i|}{\sigma_i} \leq \frac{\|\delta A\|_2}{\sigma_{\min}} = \|A\|_2 \|A^\dagger\|_2 \frac{\|\delta A\|_2}{\|A\|_2} \equiv \kappa_2(A) \frac{\|\delta A\|_2}{\|A\|_2} \leq \sqrt{n}\epsilon_1 \kappa_2(A).$$ Hence, although we can compute to nearly machine precision each, no matter how tiny, $\sigma_i(\widetilde{B})$, its value may be a poor approximation of the corresponding singular value $\sigma_i$ of $A$ if $\kappa_2(A)$ exceeds $O(1/\roff)$. For an illustration and explanation of how the reduction to bidiagonal form irreparably damages the smallest singular values see [@drm-xgesvdq §5.3]. For an improvement of the backward error (\[eq:bidiagSVD\]) see [@bar-02-svd]. 
One-sided Jacobi {#SS=One-sided-jacobi} ----------------- If the Jacobi method is applied to a real[^2] symmetric positive definite matrix $H^{(1)}=H$, then the iterations $H^{(k+1)}=(V^{(k)})^T H^{(k)} V^{(k)}$ can be implemented implicitly: If one factorizes $H^{(k)}=(A^{(k)})^T A^{(k)}$, then $H^{(k+1)}=(A^{(k+1)})^T A^{(k+1)}$, where $A^{(k+1)} = A^{(k)} V^{(k)}$, $A^{(1)}=A$. If the pivot position at index $k$ is $(i_k,j_k)$, then the Jacobi rotation $V^{(k)}$ can be constructed from $A^{(k)}$ as follows: Let $d^{(k)}=(d_1^{(k)},\ldots,d_n^{(k)})$ be the diagonal of $(A^{(k)})^T A^{(k)}$. Compute [${\displaystyle \xi_{i_k,j_k}=A^{(k)}(:,i_k)^T A^{(k)}(:,j_k)}$]{}, [where $A^{(k)}(:,s)$ denotes the $s$-th column of $A^{(k)}$]{}, and $${\vartheta_{i_k,j_k} = \frac{d_{j_k}^{(k)}-d_{i_k}^{(k)}}{2\cdot \xi_{i_k,j_k}}}, \;\; t_k = {\frac{{\rm sign}(\vartheta_{i_k,j_k})}{|\vartheta_{i_k,j_k}|+\sqrt{1+\vartheta_{i_k,j_k}^2}},\;\; c_k = \frac{1}{\sqrt{1+t_k^2}}}, \;\; s_k = t_k \cdot c_k.$$ The transformation $A^{(k+1)}=A^{(k)} V^{(k)}$ leaves $A^{(k+1)}(:,\ell) = A^{(k)}(:,\ell)$ unchanged for $\ell\not\in\{i_k,j_k\}$, while $$\label{eq:one-sided-Jacobi} \begin{pmatrix} A^{(k+1)}(:,i_k), & A^{(k+1)}(:,j_k)\end{pmatrix} = \begin{pmatrix} A^{(k)}(:,i_k), & A^{(k)}(:,j_k) \end{pmatrix} \begin{pmatrix}c_k & s_k \cr -s_k & c_k\end{pmatrix},$$ and the squared column norms are changed to $ d_{i_k}^{(k+1)} = d_{i_k}^{(k)} - t_k \cdot \xi_{i_k,j_k}$, $d_{j_k}^{(k+1)} = d_{j_k}^{(k)} + t_k\cdot\xi_{i_k,j_k}$. If the accumulated product of the transformations $V^{(1)}\ldots V^{(k)}$ is needed, it can be updated analogously to (\[eq:one-sided-Jacobi\]). [Upon convergence, the limit of $(H^{(k)})_{k=1}^{\infty}$ is a diagonal positive definite matrix $\Lambda$, while the limit matrix of $(A^{(k)})_{k=1}^{\infty}$ is $U\Sigma$, where the columns of $U$ are orthonormal and $\Sigma = \sqrt{\Lambda}$. The columns of $U$ are the left singular vectors and the diagonal matrix $\Sigma$ carries the singular values [of $A=U\Sigma V^T$, where $V$, the accumulated product of Jacobi rotations, is orthogonal and has the eigenvectors of $H$ as its columns]{}.]{} This implicit application of the Jacobi method as an algorithm is due to Hestenes [@hes-58]. An excellent implementation is provided by de Rijk [@rij-89]. The key property of the Jacobi rotation, first identified by Demmel and Veselić [@dem-ves-92], is that the backward error in the finite precision implementation of (\[eq:one-sided-Jacobi\]) is small in each pivot column, relative to that column. Hence, in a $k$th step, we have $$\label{eq:one-sided-Jacobi-backward} \begin{pmatrix} \widetilde{A}^{(k+1)}(:,i_k), & \widetilde{A}^{(k+1)}(:,j_k)\end{pmatrix} = \begin{pmatrix} \widetilde{A}^{(k)}(:,i_k) + \delta \widetilde{A}^{(k)}(:,i_k), & \widetilde{A}^{(k)}(:,j_k) + \delta \widetilde{A}^{(k)}(:,{j_k}) \end{pmatrix} \begin{pmatrix}\widetilde{c}_k & \widetilde{s}_k \cr -\widetilde{s}_k & \widetilde{c}_k\end{pmatrix},$$ $$\label{eq:one-sided-Jacobi-backward-columns} \| \delta \widetilde{A}^{(k)}(:,i_k) \|_2 \leq \epsilon \| \widetilde{A}^{(k)}(:,i_k)\|_2,\;\; \| \delta \widetilde{A}^{(k)}(:,j_k) \|_2 \leq \epsilon \| \widetilde{A}^{(k)}(:,j_k)\|_2 ,$$ i.e. $$\widetilde{A}^{(k+1)} = (\widetilde{A}^{(k)} + \delta\widetilde{A}^{(k)}) \widetilde{V}^{(k)} = (I + \delta\widetilde{A}^{(k)} (\widetilde{A}^{(k)})^\dagger) \widetilde{A}^{(k)}\widehat{V}^{(k)}(I+E_k) ,$$ where $\widehat{V}^{(k)}$ is orthogonal, $\|E_k\|_2\leq O(\roff)$. 
By Theorem \[zd:TM:Eisenstat\_Ipsen\], the essential part of the perturbation of the singular values, caused by $\delta \widetilde{A}^{(k)}$, is bounded by $\|\delta\widetilde{A}^{(k)} (\widetilde{A}^{(k)})^\dagger\|_2$ . Now let $D_k = \mathrm{diag}(\|\widetilde{A}^{(k)}(:,i)\|_2)$ and $\widetilde{A}^{(k)}_c = \widetilde{A}^{(k)} D_k^{-1}$. Then $$\|\delta\widetilde{A}^{(k)} (\widetilde{A}^{(k)})^\dagger\|_2 \leq \| \delta\widetilde{A}^{(k)} D_k^{-1}\|_2 \| (\widetilde{A}^{(k)}_c)^\dagger\|_2 \leq \sqrt{2} \epsilon \| (\widetilde{A}^{(k)}_c)^\dagger\|_2 \leq \sqrt{2}\epsilon \kappa_2(\widetilde{A}^{(k)}_c).$$ Note that $\widetilde{A}^{(k)}_c$ has unit columns and that, by Theorem \[TM:VanDerSuis\], $\kappa_2(\widetilde{A}^{(k)}_c)$ is up to a factor $\sqrt{n}$ the minimal condition number over all diagonal scalings. The important property of the Jacobi algorithm, supported by overwhelming numerical evidence in [@dem-ves-92] is that $\max_{k\geq 1} \kappa_2(\widetilde{A}^{(k)}_c)$ is not much larger than $\kappa_2(A_c)$, where $A_c$ is obtained from $A$ by scaling its columns to unit Euclidean length. (Cf. §\[SSS=symm-jac-pd\].) Further, it is shown in [@drmac-97-rotations] that the Jacobi rotation can be implemented to compute the singular values in the full range of the floating point numbers. See [@drmac-HankelSVD-2015 §5.4] for an example where the is computed to high relative accuracy in double precision (64 bit) complex arithmetic despite the fact that $\sigma_{\max}/\sigma_{\min}\approx 10^{614}$. Although more accurate than a bidiagonalization-based method, the one-sided Jacobi has some drawbacks: its convergence may be slow, there is no sparsity structure to be preserved throughout the iterations and each transformation is on a full dense matrix with low flop count per memory reference. These inconveniences can be alleviated by using the factorization as a preprocessor and a preconditioner for the one-sided Jacobi iterations. Jacobi with preconditioning {#SS=J+QRCP} --------------------------- In any method, the factorization is a useful pre-processor, in particular in the case of tall and skinny matrices, i.e. $m\gg n$. Indeed, if $\Pi_r A\Pi_c= Q \begin{pmatrix} R \cr 0 \end{pmatrix}$ is the factorization with optional row and column pivoting (encoded in the permutation matrices $\Pi_r$, $\Pi_c$), and $R=U_R \Sigma V_R^*$ is the of $R$, then the of $A$ is $A = \Pi_r^T Q \begin{pmatrix} U_R & 0 \cr 0 & I\end{pmatrix} \begin{pmatrix}\Sigma \cr 0\end{pmatrix} (\Pi_c V_R)^*$. In the sequel, we simplify the notation by assuming that the columns of the full column rank $A$ have been permuted so that $A\equiv \Pi_r A\Pi_c$. If the one-sided Jacobi is applied to $R$, then it implicitly diagonalizes $R^* R$. On the other hand, we can implicitly diagonalize $RR^*$ by applying the one-sided Jacobi to $R^*$. In that case the product of Jacobi rotations builds the matrix $U_R$. At first, there seems to be nothing substantial in this – the of $R$ and of $R^*$ are trivially connected. But, this seemingly simple modification is the key for faster convergence of the Jacobi iterations because $RR^*$ is more diagonally dominant than $R^* R$. There are deep reasons for this and the repeated factorization of the transposed triangular factor of the previous factorization is actually a simple way to approximate the , [@mat-ste-93], [@fer-par-95], [@ste-97-qlp]. 
The key is the column pivoting [@bus-gol-65] that ensures $$|R_{ii}|\geq \sqrt{\sum_{k=i}^j |R_{kj}|^2},\;\;1\leq i\leq j \leq n.$$ Such a pivoted factorization reveals the rank of $A$, it can be used to estimate the numerical rank, and it is at the core of many other methods, e.g. for the solution of least squares problems. Let $A = A_c D_A$, $R = R_c D_c = D_r R_r$ with $D_A=\mathrm{diag}(\|A(:,i)\|_2)$, $D_c={\rm diag}(\| R(:,i)\|_2)$, $D_r={\rm diag}(\| R(i,:)\|_2)$. Then $\kappa_2(A)=\kappa_2(R)$, $D_A = D_c$ and $\kappa_2(A_c)=\kappa_2(R_c)$, i.e. $R$ and $R_c$ inherit the condition numbers from $A$, $A_c$, respectively. Furthermore, $R_r$ is expected to be better conditioned than $A_c$. It holds (see [@drmac-94-thesis; @drm-99-Jacobi]) that $\kappa_2(R_r)$ is bounded by a function of $n$, independent of $A$, and that[^3] $$\label{eq:kappa-R-r} \| R_r^{-1}\|_2 \leq \|\;|R_r^{-1}|\;\|_2 \leq\sqrt{n} \|\;|{R}_c^{-1}|\;\|_2 \leq n \| R_c^{-1}\|_2 .$$ Hence, if $R$ can be written as $R = R_c D_c$ with well-conditioned $D_c$, then $R = D_r R_r$ with well conditioned $R_r$: $\| R_r^{-1}\|_2$ cannot be much bigger than $\| R_c^{-1}\|_2\equiv \|A_c^{\dagger}\|_2$, and it is potentially much smaller. We now illustrate how an extremely simple (but carefully organized) backward error analysis yields sharp error bounds with a condition number that is potentially much smaller than the classical $\kappa_2(A)$. The computed upper triangular $\widetilde{R}\approx R$ can be represented as the result of a backward perturbed factorization with an orthogonal matrix $\widehat{Q}$ and perturbation $\delta A$ such that $$\label{eq:qr:cols} A + \delta A = \widehat{Q}\begin{pmatrix} \widetilde {R}\cr 0\end{pmatrix},\;\; \|\delta A(:,i)\|_2 \leq \roff_{qr} \|A(:,i)\|_2, \;\;i=1,\ldots, n.$$ ($\roff_{qr}$ is bounded by $\roff$ times a factor of the dimensions.) If we write this in the multiplicative form $$\label{eq:dA.A+} A + \delta A = (I + \delta A A^\dagger) A,\;\; \|\delta A A^\dagger\|_2\leq \sqrt{n}\roff_{qr}\|A_c^\dagger\|_2 \leq \sqrt{n}\roff_{qr}\kappa_2(A_c) ,$$ and invoke Theorem \[zd:TM:Eisenstat\_Ipsen\], we obtain $$\max_i \frac{|\sigma_i(A) - \sigma_i(\widetilde{R})|}{\sigma_i(A)}\leq 2 \|\delta A A^\dagger\|_2 + \|\delta A A^\dagger\|_2^2 \leq 2\sqrt{n} \roff_{qr} \kappa_2(A_c) + n (\roff_{qr} \kappa_2(A_c))^2.$$ We conclude that the singular values of $\widetilde{R}$ are accurate approximations of the corresponding singular values of $A$, provided that $\kappa_2(A_c)$ is moderate. The key for invoking $\kappa_2(A_c)$ was (\[eq:qr:cols\]), which was possible thanks to the fact that the factorization is computed by a sequence of orthogonal transformations that changed each column separately, without mixing them by linear combinations.[^4] If the one sided Jacobi is applied to $X=\widetilde{R}^T$, then its finite precision realization can be modeled as $$\label{eq:Jacobi-rows} (X+\delta X)\widehat{V} \equiv (\widetilde{R} + \delta\widetilde{R})^T = \widetilde{U}\widetilde{\Sigma},\;\;\|\delta X(i,:)\|_2 \leq \roff_J \|X(i,:)\|_2,\;\;i=1,\ldots, n,$$ where $\widehat{V}$ s orthogonal and $\roff_J\leq O(n)\roff$. Note a subtlety here. The one sided Jacobi is column oriented - the Jacobi rotations are designed to orthogonalize the columns of the initial matrix and in (\[eq:one-sided-Jacobi-backward\]), (\[eq:one-sided-Jacobi-backward-columns\]) the backward error analysis is performed column-wise. 
Here, for the purpose of the analysis, we consider the backward error row-wise.[^5] Hence, each row of $X=\widetilde{R}^T$, separately, has been transformed by a sequence of Jacobi rotations, and we have (\[eq:Jacobi-rows\]), where, in terms of the original variable $\widetilde{R}$, $\|\delta\widetilde{R}(:,i)\|_2 \leq \roff_J \|\widetilde{R}(:,i)\|_2 \leq \roff_J (1+\roff_{qr})\|A(:,i)\|_2$. Finally, taking the (\[eq:Jacobi-rows\]) into (\[eq:qr:cols\]), and writing numerically orthogonal $\widetilde{U}^T$ as $(I+E_u)^{-1}\widehat{U}^T$ with $\|E_u\|_2 \leq\roff_J$, we obtain $$A + \underbrace{\delta A + \widehat{Q}\begin{pmatrix} \delta\widetilde{R}\cr 0\end{pmatrix}}_{\Delta A} = \widehat{Q} \begin{pmatrix} \widehat{V} & 0 \cr 0 & I_{m-n}\end{pmatrix} \begin{pmatrix} \widetilde{\Sigma}\cr 0 \end{pmatrix}(I+E_u)^{-1} \widehat{U}^T ,\;\;\|\Delta A(:,i)\|_2 \leq (\roff_{qr} + \roff_J(1+\roff_{qr}))\|A(:,i)\|_2 .$$ This can be written as $$\begin{pmatrix} \widehat{V}^T & 0 \cr 0 & I_{m-n}\end{pmatrix}\widehat{Q}^T (I+\Delta A A^\dagger) A\widehat{U}(I+E_u) = \begin{pmatrix} \widetilde{\Sigma}\cr 0 \end{pmatrix} ,$$ where $\|\Delta A A^\dagger\|_2$ is estimated as in (\[eq:dA.A+\]) with $\roff_{qr}+\roff_{J}(1+\roff_{qr})$ instead of $\roff_{qr}$, and by Theorem \[zd:TM:Eisenstat\_Ipsen\], $$\max_i \frac{|\sigma_i(A) - \widetilde{\Sigma}_{ii}|}{\sigma_i(A)} \leq \max\{ 2\|\Delta A A^{\dagger}\|_2 + \|\Delta A A^{\dagger}\|_2^2, 2\|E_u\|_2 + \|E_u\|_2^2 \}$$ \[TM:jacobi-dsigma\] Let $A$ be of full column rank and let its be computed by Algorithm \[zd:ALG:eig:SVD:Jacobi\] in finite precision with roundoff unit $\roff$, and let $\widetilde{\sigma}_1\geq\cdots\geq\widetilde{\sigma}_n$ be the computed singular values. Assume no underflow nor overflow exceptions occur in the computation and let $\roff_{qr}$ and $\roff_J$ be as in (\[eq:qr:cols\]) and (\[eq:Jacobi-rows\]), respectively. Further, let $\roff_{\triangle}=\roff_{qr}+\roff_{J}(1+\roff_{qr})$. Then $$\label{dsigma-jacobi} \max_{i} \frac{|\widetilde{\sigma}_i - \sigma_i|}{\sigma_i} \leq 2\sqrt{n}\roff_{\triangle} \kappa_2(A_c) + n (\roff_{\triangle} \kappa_2(A_c))^2 .$$ $(\Pi_r A)\Pi_c = Q \begin{pmatrix} R \cr 0 \end{pmatrix}$ $X=R^T$; $X_\infty= X J_1 J_2 \cdots J_{\infty} = U_x \Sigma$ $V_x = J_1 J_2 \cdots J_{\infty}$ $U = \Pi_r^T Q \begin{pmatrix} U_x & 0 \cr 0 & I\end{pmatrix}$, $V = \Pi_c Q_1 \begin{pmatrix} V_x & 0 \cr 0 & I\end{pmatrix}$ Algorithm \[zd:ALG:eig:SVD:Jacobi\] is the simplest form of the preconditioned one-sided Jacobi . For a more sophisticated version, together with a detailed error analysis, including error bounds for the computed singular vectors, we refer to [@drm-ves-VW-1], [@drm-ves-VW-2] and the LAPACK implementations `xGEJSV, xGESVJ`. The accuracy from Theorem \[TM:jacobi-dsigma\] holds for any block oriented and parallelized implementation of Algorithm \[zd:ALG:eig:SVD:Jacobi\], see [@drm-block-jacobi-2010]. Note that (\[dsigma-jacobi\]) is preferred to the classical error bound (\[dsigma-bidiag\]). [*The factorization with column pivoting is at the core of many algorithms in a variety of software packages. Its first widely available robust implementation appeared in LINPACK [@LINPACK-UG] in 1979. It has been cloned and improved in LAPACK [@LAPACK], and through LINPACK and LAPACK it has been incorporated into SLICOT, Matlab, and many other packages. 
In 2008, it was discovered [@drmac-bujanovic-2008] that it contained a subtle instability that caused severe underestimation of the numerical rank of $A$ if $A$ is too close to the set of rank-deficient matrices. The problem was analyzed in detail and solved in [@drmac-bujanovic-2008], and the new code was incorporated into LAPACK in 2008, into SLICOT in 2010 (see [@buj-drm-slicot-2010]) and into ScaLAPACK in 2019 (see [@PXGEQPF]). This is an example of how numerical instability can remain undetected for almost three decades, even in state-of-the-art software packages, inconspicuously producing bad results. This is also a warning and it calls for utmost rigor when developing and implementing numerical methods as scientific computing software. $\boxtimes$* ]{} \[REM:QR-xGESVD\] [*Since the factorization is an efficient algorithm that reduces the iterative part to the $n\times n$ matrix $R$, the overall computation is more efficient in the case $m\gg n$. In fact, there is a crossover point for the ratio $m/n$ when even the bidiagonalization-based procedure is more efficient if it starts with the factorization and then bidiagonalizes $R$, see e.g. the driver subroutine `xGESVD` in LAPACK. Motivated by [@bar-02-svd], we show in [@drm-xgesvdq] that, after using the factorization with pivoting as a preconditioner, the bidiagonalization becomes more accurate to the extent that in an extensive numerical testing the (`xGESVD`) from LAPACK (applied to $R$ or $R^*$) matches the accuracy of the Jacobi method in Theorem \[TM:jacobi-dsigma\]. This experimental observation seems difficult to prove. The algorithm is available in LAPACK as `xGESVDQ`. $\boxtimes$* ]{} Accurate eigenvalues of positive definite matrices by the one-sided Jacobi algorithm {#SSS=Impl-Jacobi-eig} ------------------------------------------------------------------------------------ From the discussion in §\[zd:SSS:RelativeFLoatingPointPert\], it follows that the Cholesky factorization is the perfect tool for testing definiteness in floating point computation. Since our goal is an accurate eigensolver for positive definite matrices with no [*a priori*]{} given structure (e.g. zero or sign patterns or other structural properties such as the Cauchy structure of the Hilbert matrix), we will use the Cholesky factorization to test numerical positive definiteness. It is remarkable that the following, very simple, Algorithm \[zd:ALG:eig:CholJacobi\], proposed by Veselić and Hari [@ves-har-89], provably achieves the optimal relative accuracy. It is a combination of the Cholesky factorization and the one-sided Jacobi algorithm. \[ALG:jacobi-eig\] $P^T H P = L L^T$ $L_{\infty} = L \left< V \right>$ $\lambda_i = L_{\infty}(:,i)^T L_{\infty}(:,i), \;\;\;i=1,\ldots, n$ ; [$\lambda = (\lambda_1,\ldots,\lambda_n)$]{}. . Raise a warning flag: $H$ is not numerically positive definite. If the Cholesky factorization succeeded to compute $k$ columns of $L$, compute the of the computed part $L(1:n,1:k)$ (as above) and return $k$ positive eigenvalues with eigenvectors. In the sequel, we will simplify the notation and assume that $H$ is already permuted, i.e. we replace $H$ with $P^T HP$ and analyze Algorithm \[zd:ALG:eig:CholJacobi\] with $P=I$. The following proposition is taken from [@drm-xgesvdq]. \[PR-BACKSEVP\] Let $\widetilde L$, $\widetilde L_{\infty}$, $\widetilde U$, $\widetilde\lambda=(\widetilde\lambda_1,\ldots,\widetilde\lambda_n)$ be the computed approximations of $L$, $L_{\infty}$, $U$, $\lambda=(\lambda_1,\ldots,\lambda_n)$, respectively. 
Let $\widetilde\Lambda={\rm diag}(\widetilde\lambda_i)_{i=1}^n$. Then $\widetilde U \widetilde\Lambda \widetilde U^T=H+\Delta H$ with $$\max_{i,j}\frac{|\Delta H_{ij}|}{\sqrt{H_{ii}H_{jj}}} \leq \widetilde\bfeta_H\equiv \bfeta_C + (1+\bfeta_C)(2\roff_J+ O(\roff)+O(\roff^2)).$$ [*Proof*]{}: We know that $\widetilde L \widetilde L^T = H+\delta H\equiv\widetilde H$ with $|\delta H_{ij}|\leq \bfeta_C \sqrt{H_{ii}H_{jj}}$ for all $i,j$. Further, we can write $\widetilde L_{\infty}=(\widetilde L+\delta\widetilde L)\hat V$, where $\hat V$ is orthogonal and $\|\delta\widetilde L(i,:)\|\leq\roff_J \|\widetilde L(i,:)\|$ for all $i$. Let $\widetilde\Sigma={\rm diag}(\sqrt{\widetilde\lambda_1},\ldots,\sqrt{\widetilde\lambda_n})$. A simple calculation shows that we can write $\widetilde U\widetilde\Sigma = \widetilde L_{\infty}+\delta\widetilde L_{\infty}$, where $|\delta\widetilde L_{\infty}|\leq \epsilon_\lambda |\widetilde L_{\infty}|$, $0\leq\epsilon_\lambda\leq 3\roff$. Now it holds that $\widetilde U \widetilde\Sigma^2\widetilde U^T = H + \delta H + E$, where for all $i,j$ $$|E_{ij}| \leq 2 \left((\roff_J+\epsilon_\lambda (1+\roff_J))+(\bfeta_J+\epsilon_\lambda (1+\roff_J))^2\right) \sqrt{\widetilde H_{ii}\widetilde H_{jj}} \leq 2(\roff_J+O(\roff)+O(\roff^2))(1+\bfeta_C)\sqrt{H_{ii}H_{jj}}.$$ $\boxtimes$ Strictly speaking, this proposition does not claim backward stability of the eigendecomposition because $\widetilde U$ is only nearly orthogonal; see §\[SS=backward-stability\] and Figure \[zd:FIG:eig\_commutative\_diagram\]. However, it is remarkable that $\widetilde U\widetilde\Lambda\widetilde U^T$ recovers the original $H$ up to $O(n\roff)$ entry–wise relative errors in the sense of §\[zd:SSS:RelativeFLoatingPointPert\]. As discussed in [@drm-xgesvdq], in this situation one [suspects we]{} could use the algorithm-based of $L_{\infty}$ to obtain equally good results, but a formal proof of high accuracy in this context is [still]{} lacking, see Remark \[REM:QR-xGESVD\]. Eigenvalues of the pencil $HM-\lambda I$ and the of matrix product {#S=PSVD} ------------------------------------------------------------------ In §\[SS:scaling-Hankel-SVD\], we mentioned the importance of the eigenvalues of the product $HM$ (Hankel singular values), where $H$ and $M$ are real symmetric (or, more generally, Hermitian) positive definite matrices. If $H=L_h L_h^*$, $M=L_m L_m^*$ are the Cholesky factorizations of $H$ and $M$, then $$L_h^{-1}(HM)L_h = L_h^* L_m L_m^* L_h\equiv (L_m^* L_h)^* (L_m^* L_h) ,$$ and the Hankel singular values are just the singular values of $A\equiv L_m^* L_h$. Set $D_h =\mathrm{diag}(\sqrt{H_{ii}})_{i=1}^n$, $H_s=D_h^{-1}HD_h^{-1}$, $L_{h,s}=D_h^{-1}L_h$, $D_m=\mathrm{diag}(\sqrt{M_{ii}})_{i=1}^n$, $M_s=D_m^{-1}MD_m^{-1}$, $L_{m,s}=D_m^{-1}L_m$. Note that $A=L_m^* L_h = L_{m,s}^* (D_m D_h) L_{h,s}$, where both $L_{h,s}$ and $L_{m,s}$ have rows of unit Euclidean length, and that $\kappa_2(L_{h,s})=\sqrt{\kappa_2(H_s)}$, $\kappa_2(L_{m,s})=\sqrt{\kappa_2(M_s)}$. Based on our discussion in §\[zd:SSS:RelativeFLoatingPointPert\], numerical positive definiteness of $H$, $M$ in the presence of perturbations is feasible only if $\kappa_2(H_s)$ and $\kappa_2(M_s)$ are moderate; therefore we may assume that both $L_{h,s}$ and $L_{m,s}$ are well conditioned. 
This example motivates the study of numerical algorithms for computing the of a matrix $A$ that is given in factored form $A = Z Y^*$, where $Z\in\mathbb{C}^{m\times p}$ and $Y\in\mathbb{C}^{n\times p}$ are full column rank matrices such that $$\label{eq:zeta} \bfzeta (Z,Y) \equiv \max\{ \min_{\Delta=\mathrm{diag}}\kappa_2(Z\Delta), \min_{\Delta=\mathrm{diag}}\kappa_2(Y\Delta)\}$$ is moderate (below $1/\roff$). Towards a more general situation, we may also write $Z Y^*$ as $X D Y^*$, where $D\in\mathbb{C}^{p\times p}$ is diagonal, possibly very ill-conditioned. *Computing the of the product of matrices is an excellent example to illustrate the gap between the purely theoretical and actual computation in finite precision arithmetic: the simplest idea to compute the of $Z Y^*$ (for given $Z$, $Y$) is to first compute the product $A=Z Y^*$ explicitly, and then reduce the problem to computing the of $A$. The following example clearly illustrates the difficulty: If $\epsilon$ is such that $|\epsilon|<\roff$ (so $\pm 2+\epsilon$ is computed as $\pm 2$, and $\pm 1+\epsilon$ is computed as $\pm 1$ in finite precision) then $$\underbrace{\begin{pmatrix} 1 & \epsilon\cr -1 & \epsilon\end{pmatrix}}_{Z} \begin{pmatrix} 2 & 2 \cr 2 & 1\end{pmatrix}= \underbrace{\begin{pmatrix} 1 & 1\cr -1 & 1\end{pmatrix}}_{X} \underbrace{\begin{pmatrix} 1 & 0 \cr 0 & \epsilon \end{pmatrix}}_{D} \underbrace{\begin{pmatrix} 2 & 2 \cr 2 & 1\end{pmatrix}}_{Y^*} = \begin{pmatrix} 2+2\epsilon & 2+\epsilon\cr -2+2\epsilon & -2+\epsilon\end{pmatrix}$$ [will be computed and stored as]{} $\widetilde{A}= \left(\begin{smallmatrix} 2 & 2 \cr -2 & -2\end{smallmatrix}\right)$, which means that the smallest singular value of order $|\epsilon|$ is irreparably lost. This problem is addressed by developing algorithms which avoid explicitly forming the matrix $A$. Instead, $Z$ and $Y$ are separately transformed in a sequence of iterations based on unitary matrices, see [@hea-lau-86]. To ensure efficiency of the Kogbetliantz-type iteration, the matrices are unitarily transformed to triangular forms which are preserved throughout the iterations. Since the entire computation relays on separate unitary transformations, the backward stability in the matrix norm is guaranteed. However, as illustrated in §\[zd:SSS:Example\_3x3\], this still does not guarantee high accuracy in the computed approximations of the smallest singular values.* To illustrate such a procedure and a numerical problem, the product $ZY^*$ is first transformed by $(ZU_1^*)(U_1 Y^*)$ where $U_1$ is orthogonal such that $U_1 Y^*$ is upper triangular: $$U_1=\begin{pmatrix} \frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\cr -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{pmatrix},\;\; U_1 Y^* = \begin{pmatrix} \sqrt{8} & \frac{\sqrt{18}}{2}\cr 0 &-\frac{\sqrt{2}}{2}\end{pmatrix},\;\; Z U_1^* = \frac{1}{\sqrt{2}}\begin{pmatrix} 1+\epsilon & -1+\epsilon\cr -1+\epsilon & 1+\epsilon\end{pmatrix}\approx {\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & -1\cr -1 & 1\end{pmatrix}} .$$ If $|\epsilon|$ is small relative to one, $Z U_1^*$ will be computed and stored as an exactly singular matrix, and its smallest singular value will be lost in the very first step. It is worth noticing that $Z$ has mutually orthogonal columns and that the computed version of $ZU_1^*$ is exactly singular, despite the fact that $U_1$ is orthogonal up to machine precision. On the other hand, the value of $\bfzeta(Z,Y)$, defined in (\[eq:zeta\]), is easily seen to be less than $7$ in this example. 
### An accurate algorithm {#SS=PSVD-Algorithm} Applying the techniques from §\[SS=J+QRCP\], we can easily construct an algorithm to compute the of $A$ (given implicitly by $X$, $D$ and $Y$ as $A= X D Y^*$) with accuracy determined by $\bfzeta(X,Y)$ and independent of the condition number of $D$. This allows for ill-conditioned $X$ and $Y$ as well, but such that ill-conditioning can be cured by diagonal scalings (i.e. moderate $\bfzeta(X,Y)$). Here we assume that $X$ and $Y$ are given either exactly, or that their columns are given up to small initial relative errors. Similarly, each diagonal entry of $D$ is given up to a small relative error. Factor $X = X_s \Delta_x$, where $\Delta_x=\mathrm{diag}(\|X(:,i)\|_2)_{i=1}^p$. Compute $Y_1 = Y D \Delta_x$. $Y_1 \Pi = Q \begin{pmatrix} R \cr 0 \end{pmatrix}$ $K=(X_s \Pi)R^*$ $K = U\begin{pmatrix}\Sigma \cr 0 \end{pmatrix} V_1^*$ $V = Q \begin{pmatrix} V_1 & 0 \cr 0 & I\end{pmatrix}$ To see why this algorithm is accurate (despite the fact that it uses the of an explicitly computed matrix product, and that $D$ can be arbitrarily ill-conditioned) note the following: - The column scaling in line 1. introduces entry-wise small relative errors, and it does not increase the condition number of the computation of the factorization in line 2. This is because the accuracy of the computed factorization of $Y_1$ is determined by $\min_{\Delta=\mathrm{diag}}\kappa_2(Y\Delta)$. - $R^*$ can be written as $R_r^* D_r^*$ with diagonal $D_r$ and well conditioned $R_r$. For Businger-Golub column pivoting [@bus-gol-65], $\|R_r^{-1}\|_2$ can be bounded by $O(2^p)$ independent of $Y$, but if $Y$ is well conditioned, then $\|R_r^{-1}\|_2$ is expected to be at most $O(p)$. With so-called strong rank-revealing pivoting [@mgu-eis-96], $\|R_r^{-1}\|_2$ can be bounded by $O(p^{1+(1/4)\log_2 p})$. - The matrix $K$ can be written as $K=K_c D_K$, where $D_K$ is diagonal and $K_c$ is well conditioned with equilibrated Euclidean column norms. The columns of $K$ are computed with small relative errors. However, to preserve accuracy of even the tiniest singular values, the matrix multiplication must use the standard algorithm of cubic complexity. This is because the structure of the error of fast matrix multiplication algorithms (e.g., Strassen) does not fit into the perturbation theory and cannot benefit from scaling invariant condition numbers. - In line 4., the Jacobi algorithm will compute the with the accuracy determined by the condition number of $K_c$. Hence, when it comes to computing the with the condition number that is invariant under diagonal scalings, then we only need to carefully handle the scaling. The same argument applies to our claim that under the assumptions on the initial uncertainties in $X$, $D$ and $Y$, the of $A\equiv XDY^*$ is determined to the accuracy with the condition number essentially given by $$\label{eq:mu} \bfxi = \max\{ \|R_r^{-1}\|_2 \min_{\Delta=\mathrm{diag}}\kappa_2(X\Delta), \min_{\Delta=\mathrm{diag}}\kappa_2(Y\Delta)\}.$$ For a more detailed analysis we refer the reader to [@drm-98-psvd]. For the case of more general $D$ see [@drm-98-tripletSVD]. The decomposition of $A$ as $A = X D Y^*$, with diagonal $D$ and full column rank $X$ and such that $\min_{\Delta=\mathrm{diag}}\kappa_2(X\Delta)$ and $\min_{\Delta=\mathrm{diag}}\kappa_2(Y\Delta)$ are moderate is called a *rank-revealing* decomposition () of $A$. 
In the next section, we show that for some ill–conditioned matrices an accurate can be computed to high accuracy that allows for computing accurate by applying Algorithm \[zd:ALG:PSVD\]. Accurate as + {#S=RRD+PSVD} ------------- Suppose we want to compute the of $A$, but $A$ is so ill-conditioned that merely storing it in the machine memory may irreparably damage the , or that all conventional algorithms (cf. §\[zd:SSS:Example\_3x3\]) fail due to extreme ill-conditioning (e.g. $A$ is the Hilbert or any other Cauchy or Vandermonde matrix). An idea of how to try to circumvent such situation is presented in Example \[zd:EX:StifnessIndefinite\]: the ill-conditioning of the matrix is avoided by writing the matrix in factored form (\[eq:GTG1\], \[eq:GTG2\]), using only a set of parameters $k_i$. The computed factored form is then used as input to an algorithm capable of exploiting the structure of the factors – in this specific case, bidiagonal form. If we want to be able to tackle larger classes of difficult matrices, then we need to identify a factored form that is general enough and that we know how to use when computing the to high accuracy, e.g. as with Algorithm \[zd:ALG:PSVD\] in §\[SS=PSVD-Algorithm\]. This is the basis of the approach introduced in [@dgesvd-99]. For more fundamental issues of finite precision (floating point) computation with guaranteed high accuracy see [@demmel_dumitriu_holtz_koev_2008]. Suppose that $A$ can be written as $A = X D Y^*$, with $X$, $D$ and $Y$ as discussed in §\[S=PSVD\], and that we have an algorithm that computes $\widetilde{X}=X+\delta X$, $\widetilde{D}=D+\delta D$, $\widetilde{Y}=Y+\delta Y$ such that[^6] $$\label{eq:RRD-computed} \|\delta X(:,i)\|_2 \leq \epsilon_1 \|X(:,i)\|_2,\;\; \|\delta Y(:,i)\|_2 \leq \epsilon_2 \|Y(:,i)\|_2,\;\; |\delta D_{ii}| \leq \epsilon_3 |D_{ii}|,\;\;i=1,\ldots, p.$$ Write $\widetilde{D}$ as $\widetilde{D}=(I+E)D$, where $E$ is diagonal with $\|E\|_2\leq \epsilon_3$. Further, let $\Delta_X = \mathrm{diag}(\|X(:,i)\|_2)$, $X_c=X\Delta_X^{-1}$, $\delta X_c=\delta X\Delta_X^{-1}$; $\Delta_Y = \mathrm{diag}(\|Y(:,i)\|_2)$, $Y_c=Y\Delta_Y^{-1}$, $\delta Y_c=\delta Y\Delta_Y^{-1}$. Then $$\widetilde{X}\widetilde{D}\widetilde{Y}^* = (I+\delta_1 X X^\dagger) X D Y^*(I + \delta Y Y^\dagger)^*,\;\;\delta_1 X = \delta X + XE + \delta X E ,$$ where the multiplicative error terms that determine the relative perturbations of the singular values can be estimated as $$\begin{aligned} \|\delta_1 X X^{\dagger}\|_2 &\leq& \kappa_2(X_c) (\|\delta X_c\|_2 + \|E\|_2 + \|E\|_2\|\delta X_c\|_2) \leq \kappa_2(X_c) (\sqrt{p}\epsilon_1 + \epsilon_3 + \sqrt{p}\epsilon_1 \epsilon_3) \\ \|\delta Y Y^{\dagger}\|_2 &\leq& \|\delta Y_c\|_2\|Y_c^\dagger\|_2 \leq \sqrt{p}\epsilon_2 \kappa_2(Y_c) .\end{aligned}$$ Hence, if $\bfzeta (X,Y)\equiv \max\{ \min_{\Delta=\mathrm{diag}}\kappa_2(X\Delta), \min_{\Delta=\mathrm{diag}}\kappa_2(Y\Delta)\}$ is moderate (below $1/\roff$), then the of $A\equiv XDY^*$ can be accurately restored from the decomposition of $\widetilde{X}\widetilde{D}\widetilde{Y}^*$. For details see [@drm-98-psvd], [@dgesvd-99], [@Dopico-Moro-Mult-Error]. The key advantages of the factored representation are: *(i)* The ill-conditioning is explicitly exposed in the ill-conditioned diagonal matrix $D$, and the factors $X$ and $Y$ are well-conditioned in the sense of (\[eq:zeta\]). *(ii)* The first errors committed in the computation are the small forward errors (\[eq:RRD-computed\]) in $X$, $D$ and $Y$. 
Hence, the problem is reduced to computing the decomposition $A=XDY^*$. This is solved on a case by case basis: first, a class of matrices is identified for which such a factorization is possible and then an algorithm for computing the decomposition $A= X DY^*$ is constructed. In the last step, the computed factors are given as input to Algorithm \[zd:ALG:PSVD\]. ### -based rank-revealing decompositions {#SS=svd-ldu-classes} The factorization with complete pivoting is used in [@dgesvd-99] as an excellent tool for providing s of several important classes of matrices. If $P_r A P_c = L D U$, with permutation matrices $P_r$, $P_c$, unit lower triangular $L$, diagonal $D$ and upper triangular $U$, then $X=P_r^T U$, $Y^* = U P_c^T$ yields $A = XDY^*$. Depending on the structure of $X$ and $Y$, we can deploy Algorithm \[zd:ALG:PSVD\] (assuming only that $\bfzeta(X,Y)$ is moderate) or some other, more efficient, algorithm tailored for special classes of matrices. For instance, in (\[eq:GTG1\], \[eq:GTG2\]) the problem reduces to the of a bidiagonal matrix and or algorithm can be applied. In some cases the sparsity pattern $\mathcal{S}$ (set of indices in the matrix that are allowed to be nonzero) and the sign distribution are the key properties for computing the singular values accurately. We will here briefly mention few examples; for more detailed review see e.g. [@hogben14 Ch. 59], [@demmel_dumitriu_holtz_koev_2008]. #### Acyclic matrices. Let $A$ be such that small relative changes of its nonzero entries (which are completely arbitrary, without any constraints) induce correspondingly small relative perturbations of its singular values (i.e. with the condition number $O(1)$). Then, equivalently, the associate bipartite graph $\mathcal{G}(A)$ is acyclic (forest of trees) and all singular values can be computed to high accuracy by a bisection method, see [@demmel-gragg-93]. Bidiagonal matrices are acyclic and one can also use e.g. the zero-shift method [@dem-kah-90]. Also, the correspondence between the monomials in determinant expansion and perfect matchings in $\mathcal{G}(A)$ allows for accurate factorization with pivoting. #### Total sign compound () matrices. In some cases it is the sparsity and sign pattern $\mathcal{S}_{\pm}$ that facilitates an accurate decomposition. A sparsity and sign pattern $\mathcal{S}_{\pm}$ is *total signed compound* () if every square submatrix of every matrix $A$ with sign pattern $\mathcal{S}_{\pm}$ is either *sign nonsingular* (nonsingular and determinant expansion is the sum of monomials of like sign) or *sign singular* (determinant expansion degenerates to sum of monomials, which are all zero). Examples of patterns are $$\left(\begin{smallmatrix} + & + & 0 & 0 & 0 \\ + & - & + & 0 & 0 \\ 0 & + & + & + & 0 \\ 0 & 0 & + & - & + \\ 0 & 0 & 0 & + & +\end{smallmatrix}\right) ,\;\; \left(\begin{smallmatrix} + & + & + & + & + \\ + & - & 0 & 0 & 0 \\ + & 0 & - & 0 & 0 \\ + & 0 & 0 & - & 0 \\ + & 0 & 0 & 0 & -\end{smallmatrix}\right).$$ Suppose that every matrix $A$ with pattern ${\cal S}_{\pm}$ has the property that small relative changes of its (nonzero) entries cause only small relative perturbations of its singular values. Then this property is equivalent with ${\cal S}_{\pm}$ being [total signed compound]{} (). The factorization with complete pivoting $P_r A P_c = L D U $ of an matrix $A$ can be computed so that all entries of $L$, $D$, $U$ have small relative errors, and the framework of §\[S=PSVD\] applies. See [@dgesvd-99] for more details. 
#### Diagonally scaled totally unimodular () matrices. The $m\times n$ matrix $A$ is *diagonally scaled totally unimodular ()* if there exist diagonal matrices $D_1$, $D_2$ and a *totally unimodular* $Z$ (all minors of $Z$ are $-1$, $0$ or $1$) such that $A = D_1 Z D_2$. To ensure that all entries of $L$, $D$ and $U$ are computed to high relative accuracy, catastrophic cancellations (when subtracting intermediate results of the same sign) are avoided by predicting the exact zeros in the process of eliminations. It can be shown that $\kappa_2(L)$ and $\kappa_2(U)$ are at most $O(mn)$ and $O(n^2)$, respectively. #### Cauchy matrices. Consider the of a scaled (generalized) $m\times n$ Cauchy matrix $$C_{ij}=\frac{D_r(i) D_c(j)}{x_i + y_j},\;\; x, D_r \in\mathbb{R}^m,\;\;y, D_c\in\mathbb{R}^n.$$ The key for the accuracy is in the fact that the decomposition with full pivoting of $C$ can be computed as a forward stable function of the vectors $x$ and $y$. More precisely, the decomposition $\Pi_1 C \Pi_2 = L D U$ ($\Pi_1, \Pi_2$ permutations, $L$ unit lower triangular, $U$ unit upper triangular) is such that each entry is computed to high relative accuracy and the triangular factors are well conditioned. (In Algorithm \[zd:ALG:CauchyLDU\], the factorization is computed as $C=XDY^T\equiv (\Pi_1^T L) D (U\Pi_2^T)$.) High accuracy of the computed factors follows from the fact that the Schur complement can be recursively computed by explicit formulas involving only the initial vectors $x$ and $y$. This is shown in Step 14. of Algorithm \[zd:ALG:CauchyLDU\] by Demmel [@Demmel-99-AccurateSVD]. The factors can be used in Algorithm \[zd:ALG:PSVD\] as $X=L$, $Y^T = DU$, resulting in an accurate of the product $LDU$. $m=max(size(x)); n=max(size(y)); p=min(m,n);$ ${\displaystyle C(i,j) = \frac{ D_r(i) \cdot D_c(j)}{x(i)+y(j)}}$; $ir = [1:m]$; $ic = [1:n]$; Find $(i_*,j_*)$ such that $|C(i_*,j_*)| = \max \{ |C(i,j)|\; :\; i=k,\ldots, m;\;j=k,\ldots, n \}$; ${\tt swap}(C(k,:),C(i_*,:))$;   ${\tt swap}(C(:,k),C(:,j_*))$;   ${\tt swap}(x(k),x(i_*))$;   ${\tt swap}(y(k),y(j_*))$; ${\tt swap}(ir(k),ir(i_*))$;   ${\tt swap}(ic(k),ic(j_*))$; ${\displaystyle C(i,j)=C(i,j) \frac{(x(i)-x(k))\cdot (y(j)-y(k))}{(x(k)+y(j))\cdot (x(i)+y(k))}}$; $D = {\tt diag}(C)$ ; $X = {\tt tril}(G,-1) {\tt diag}(1./D) + {\tt eye}(m,n)$ ; $Y = ({\tt diag}(1./D)*{\tt triu}(G(1:n,1:n),1) + {\tt eye}(n))^T$ ; #### Weakly diagonally dominant -matrices. Suppose that the -matrix $A=(A_{ij})\in\mathbb{R}^{n\times n}$ is diagonally dominant and that it is given with the off-diagonal entries $A_{ij}\leq 0$, $1\leq i\neq j\leq n$, and the row-sums $s_i = \sum_{j=1}^n A_{ij}\geq 0$. Note that this set of parameters determines the diagonal entries to high accuracy because $A_{ii}=S_i - \sum_{j, j\ne qi} A_{ij}$ has no subtractions/cancellations. Demmel and Koev [@demmel-koev-2004-M] showed that pivoted Gauss eliminations can be performed accurately in terms of the row sums and the off-diagonal entries, resulting in an accurate decompositions, and an accurate . For further details, see [@demmel-koev-2004-M]. With this unconventional matrix representation (off-diagonal entries and the row sums), it is possible to compute accurate of diagonally dominant matrices, see [@svd-dd-matrix-Ye], [@ldu-dd-matrix-dopico-koev]. [ *Due to pivoting, the factors $X$ and $Y$ are well conditioned. 
For example, if we factor the $100\times 100$ Hilbert matrix $H_{100}$ (using a specialized version of Algorithm \[zd:ALG:CauchyLDU\] for symmetric positive definite Cauchy matrices), as $H_{100} = X D X^T$ then $\kappa_2(X)\approx 72.24 \ll \kappa_2(H_{100}) > 10^{150}$.*]{} ### Con-eigenvalue problem for Cauchy matrices in the AAK theory More accurate numerical linear algebra impacts other approximation techniques in a variety of applications. An excellent example is the case of $L^\infty$ rational approximations: Haut and Beylkin [@haut-beylkin-coneig-2011] used Adamyan-Arov-Krein theory to show that nearly $L^\infty$–optimal rational approximation on the unit circle of $ f(z) = \sum_{i=1}^n \frac{\alpha_i}{z-\gamma_i} + \sum_{i=1}^n \frac{\overline{\alpha_i}z}{1-\overline{\gamma_i}z}+\alpha_0 $ with a $m$-th order ($m<n$) rational function $ r(z) = \sum_{i=1}^m \frac{\beta_i}{z-\eta_i} + \sum_{i=1}^m \frac{\overline{\beta_i}z}{1-\overline{\eta_i}z}+\alpha_0,\;\;\mbox{such that} \;\;\max_{|z|=1} , |f(z)-r(z)|\longrightarrow\min, $ is numerically feasible if one can compute the con–eigenvalues and con–eigenvectors of the positive definite generalized Cauchy matrix $ {C=\left(\frac{\sqrt{\alpha_i}\sqrt{\overline{\alpha_j}}}{\gamma_i^{-1}-\overline{\gamma_j}}\right)} \in \mathbb{C}^{n\times n}. $ In [@haut-beylkin-coneig-2011] the con–eigenvalue problem $C u = \lambda \overline{u}$ is equivalently solved as the eigenvalue problem $ \overline{C}C u = |\lambda|^2 u, $ where $C$ is factored as $C = X D^2 X^*$, and $\overline{C}$ denotes the entry-wise complex conjugate matrix. The problem further reduces to computing the of the product $G = D X^T X D$, where $X$ is a complex matrix and $D$ is diagonal. Such accurate rational approximation was successfully deployed in solving the initial boundary value problem for the viscous Burger’s equation [[@HAUT201383]]{}. ### Vandermonde matrices and the DFT trick {#SS=vandermonde} In some cases, an is not immediately available, but additional relations between structured matrices can be exploited. Demmel’s algorithm for computing an accurate of Vandermonde matrices [@Demmel-99-AccurateSVD] is a masterpiece of elegance. He used the fact that every $n\times n$ Vandermonde matrix $V=(x_i^{j-1})$ can be written as $V = D_1 C D_2 F^*$, where $F$ is the unitary FFT matrix ($F_{ij}=\omega^{(i-1)(j-1)}/\sqrt{n}$, $\omega = \mathbf{e}^{2\pi\mathfrak{i}/n}$), $D_1$ and $D_2$ are diagonal, and $C$ is a Cauchy matrix, i.e., $$\label{zd:eq:VF-1} (VF)_{ij} = \left[\frac{1-x_i^n}{\sqrt{n}}\right] \left[\frac{1}{\omega^{1-j}-x_i}\right] \left[\frac{1}{\omega^{j-1}}\right],\;1\leq i, j\leq n.$$ After computing the of the generalized Cauchy matrix $VF \equiv D_1 C D_2= U \Sigma W^*$, the of $V$ is $V=U\Sigma (FW)^*$. Note that in both cases the final step is the computation of the of a product of matrices, based on Algorithm \[zd:ALG:PSVD\]. This is turned into an accurate of ${\mathcal{V}}_n(x)$, but with quite a few fine details, tuned to perfection in [@Demmel-99-AccurateSVD], [@demmel-koev-2006-V]. In particular, the possible singularity if some $x_i$ equals the floating point value of an $n$th root of unity is removable. Demmel and Koev [@demmel-koev-2006-V] extended this to polynomial Vandermonde matrices $V$ with entries $v_{ij}=P_i(x_j)$, where the $P_i$s are orthonormal polynomials and the $x_j$s are the nodes. ### Toeplitz and Hankel matrices Let ${\mathcal{H}}$ be a Hankel matrix, ${\mathcal{H}}_{ij}=h_{i+j-1}$. 
The question is whether we can compute accurate singular values for any input vector $h$. This is equivalent to computing the singular values of the corresponding Toeplitz matrix $\mathcal{T}=P{\mathcal{H}}$, where $P$ is the appropriate permutation matrix. In general, a necessary condition to be able to compute all singular values of a square $A$ to high relative accuracy is that computing the determinant $\mathrm{det}(A)$ is possible to high accuracy. Applying this condition to the problem with Toeplitz matrices yields a negative result. It is impossible to devise an algorithm that can compute to high accuracy the determinant of a Toeplitz or Hankel matrix for any input vector $h$. In the fundamental work [@Demmel-Dumitriu-Holtz §2.6], it is shown that the obstacle in the complex case is the irreducibility of $\mathrm{det}(\mathcal{T})$ (over any field), and in the real case the problem is that $\nabla\mathrm{det}(\mathcal{T})$ has all nonzero entries on a Zariski open set. However, in some settings the Hankel matrix ${\mathcal{H}}$ is given implicitly as ${\mathcal{H}}={\mathcal{V}}^T D {\mathcal{V}}$, with suitable Vandermonde ${\mathcal{V}}$ and diagonal $D$: $$\label{eq:H=VTDV} \left(\begin{smallmatrix} h_1 & h_2 & h_3 & \cdot & h_{n}\cr h_2 & h_3 & \cdot & h_n & h_{n+1} \cr h_3 & \cdot & \cdot & h_{n+1} & \cdot\cr \cdot & h_n & h_{n+1} & \cdot & h_{2n-2}\cr h_n & h_{n+1} & \cdot & h_{2n-2} & h_{2n-1} \end{smallmatrix}\right) \!\! = \!\! \left(\begin{smallmatrix} 1 & 1 & \cdot & 1 & 1\cr x_1 & x_2 & \cdot & x_{n-1} & x_n \cr x_1^2 & x_2^2 & \cdot & x_{n-1}^2 & x_n^2\cr \cdot & \cdot & \cdot & \cdot & \cdot\cr x_1^{n-1} & x_2^{n-1} & \cdot & x_{n-1}^{n-1} & x_n^{n-1} \end{smallmatrix}\right) \!\!\! \left(\begin{smallmatrix} d_1 & & & & \cr & d_2 & & & \cr & & \cdot & & \cr & & & d_{n-1} & \cr & & & & d_n \end{smallmatrix}\right) \!\!\! \left(\begin{smallmatrix} 1 & x_1 & x_1^2 & \cdot & x_{1}^{n-1}\cr 1 & x_2 & x_2^2 & \cdot & x_{2}^{n-1} \cr \cdot & \cdot & \cdot & \cdot & \cdot\cr 1 & x_{n-1} & x_{n-1}^2 & \cdot & x_{n-1}^{n-1}\cr 1 & x_{n} & x_n^2 & \cdot & x_n^{n-1} \end{smallmatrix}\right) .$$ If we refrain to compute ${\mathcal{H}}$ (i.e. its vector $h$) explicitly and think of ${\mathcal{H}}$ as parametrized by the numbers $x_i$, $d_i$, then accurate of ${\mathcal{H}}$ is possible. For tedious details and the full analysis we refer to [@drmac-HankelSVD-2015]. Computing accurate eigenvalues of Hermitian indefinite matrices {#S=HID} =============================================================== The variational characterization of eigenvalues (Theorem \[TM-minmax\]) and the resulting perturbation estimates (Theorem \[zd:TM:Weyl\]) make no reference to the (in)definiteness (i.e. the inertia) of the Hermitian matrix $H$. Similarly, the state-of-the-art numerical software packages, such as LAPACK [@LAPACK], use the generic routines for the Hermitian/symmetric eigenvalue problems that are backward stable in the sense of Theorem \[zd:TM:eig\_backward\] and accurate in the sense of (\[zd:eq:intro:rel\_norm\_error\]). In §\[S=HPD\], we discussed computation with high accuracy only for positive definite matrices (§\[SS=posdef-accurate\]). When it comes to computing the eigenvalues with error bounds of the form (\[zd:eq:intro:rel\_error\]), there is a sharp distinction between definite and indefinite matrices. 
For instance, for positive definite matrices the symmetric Jacobi eigenvalue algorithm is provably more accurate than the method [@dem-ves-92], but in the case of indefinite matrices such general statement is not possible [@ste-95-qrbeatsjacobi]. Hence, new algorithms must be developed in order to guarantee reliable numerical results for indefinite matrices that are well-behaved with respect to finite precision diagonalization, in the sense that the computed eigenvalues satisfy the error bound (\[zd:eq:intro:rel\_error\]) with a moderate condition number $\bfkappa$. Such matrices must be identified by the corresponding perturbation theory. In this section we give a brief review of theoretical results that have lead to good numerical algorithms. Perturbation theory for computations with indefinite matrices {#SS=HID-pert-theory} ------------------------------------------------------------- Unfortunately, unlike the characterization of positive definite matrices in §\[SSS=well-behaved-PD\], perturbation theory provides no simple description of well-behaved indefinite matrices. The first important contribution to the theoretical understanding and algorithmic development was the analysis of the *$\gamma$-scaled diagonally dominant* matrices [@bar-dem-90], that are written as $H = D A D$, where $A=E+N$, $E$ is diagonal with $E_{ii}=\pm 1$, $D$ is diagonal with $D_{ii}=|H_{ii}|^{1/2}>0$, $N_{ii}=0$, and $\|N\|_2\leq \gamma <1$. (Barlow and Demmel [@bar-dem-90]) Let $H = D A D$ be Hermitian $\gamma$-scaled diagonally dominant matrix with eigenvalues $\lambda_1\geq\cdots\geq\lambda_n$. Let $\delta H$ be a symmetric perturbation with $\| D^{-1}\delta H D^{-1}\|_2=\eta$, and let $H + \xi \delta H$ be $\gamma$-scaled diagonally dominant for all $\xi\in [0,1]$. If $\widetilde{\lambda}_1\geq\cdots\geq\widetilde{\lambda}_n$ are the eigenvalues of $H+\delta H$, then, for all $i=1,\ldots, n$, $$\frac{-\eta}{1-\gamma} + O(\eta^2) \approx e^{-\eta/(1-\gamma)} - 1 \leq \frac{\widetilde{\lambda}_i-\lambda_i}{\lambda_i} \leq e^{\eta/(1-\gamma)} - 1 \approx \frac{\eta}{1-\gamma} + O(\eta^2) .$$ Further, Barlow and Demmel [@bar-dem-90] showed that that a bisection algorithm can compute the eigenvalues of a $\gamma$-scaled diagonally dominant $H$ to high relative accuracy. The key is that in this case, for any real $x$, the inertia of $H - x I$ can be computed with backward error $\delta H$ such that $\| D^{-1}\delta H D^{-1}\|_2$ is of the order of the machine precision. Recall, this means that the computed inertia is exact for the matrix $H+\delta H - x I$. Moreover, [@bar-dem-90] contains detailed analysis and computation of the eigenvectors, as well as extension of the results to symmetric $\gamma$-scaled diagonally dominant pencils $H-\lambda M$. The seminal work of Barlow and Demmel initiated an intensive research, both for eigenvalue computations of Hermitian/symmetric matrices and the of general and structured matrices. 
For the Hermitian indefinite matrices, Veselić and Slapničar [@ves-sla-93] generalized the results of [@bar-dem-90] to the matrices of the form $H = D A D$, $A = E+N$ with $E=E^*=E^{-1}$, $ED=DE$ and $\|N\|_2 <1$, and described an even larger class of well behaved matrices by identifying a new condition number,[^7] $$C(H) = \sup_{x\neq 0} \frac{|x|^T |H| |x|}{x^* {{\boldsymbol |\!\!|\!\!|}}H {{\boldsymbol |\!\!|\!\!|}}x},$$ where ${{\boldsymbol |\!\!|\!\!|}}H {{\boldsymbol |\!\!|\!\!|}}= \sqrt{H^2}$ is the spectral absolute value of $H$, and $|H|$ is the element-wise absolute value, $|H|_{ij}=|H_{ij}|$. Note that $C(H)$ is finite for nonsingular $H$. (Veselić and Slapničar [@ves-sla-93]) \[TM:VES-SLAP-T1\] Let $H$ and $H+\delta H$ be Hermitian with eigenvalues $\lambda_1\geq\cdots\geq\lambda_n$ and $\widetilde{\lambda}_1\geq\cdots\geq\widetilde{\lambda}_n$, respectively. If the perturbation $\delta H$ is such that, for some $\eta<1$ and all $x\in\mathbb{C}^n$, [${\displaystyle |x^* \delta H x| \leq \eta x^* {{\boldsymbol |\!\!|\!\!|}}H {{\boldsymbol |\!\!|\!\!|}}x}$]{}, then $\widetilde{\lambda}_i=0$ if and only if $\lambda_i=0$, and for all nonzero $\lambda_i$’s $$\label{eq:eta-1} \left| \frac{\widetilde{\lambda}_i - \lambda_i}{\lambda_i}\right| \leq \eta .$$ The condition on $\delta H$ in this theorem is difficult to check in practice, and in particular in case of the so-called floating point perturbations.[^8] The difficulty can be mitigated using $C(H)$ as follows. If we have $\delta H$ such that $|\delta H_{ij}|\leq \varepsilon |H_{ij}|$ for all $i, j$, then for any $x\in\mathbb{C}^n$, we have $$|x^* \delta H x| \leq |x|^T |\delta H| |x| \leq \varepsilon |x|^T |H| |x| \leq \varepsilon C(H) x^* {{\boldsymbol |\!\!|\!\!|}}H{{\boldsymbol |\!\!|\!\!|}}x,$$ provided that ${{\boldsymbol |\!\!|\!\!|}}H {{\boldsymbol |\!\!|\!\!|}}$ is positive definite (i.e. $H$ nonsingular). (Veselić and Slapničar [@ves-sla-93]) Assume that the matrix $H$ in Theorem \[TM:VES-SLAP-T1\] is nonsingular, and that the Hermitian perturbation $\delta H$ is such that $|\delta H_{ij}|\leq \varepsilon |H_{ij}|$ for all $i, j$. If $\varepsilon C(H) <1$, then (\[eq:eta-1\]) holds with $\eta = \varepsilon C(H)$. Note that the condition $|\delta H_{ij}|\leq \varepsilon |H_{ij}|$ does not allow for perturbing zero entries; such strong condition cannot be satisfied in a numerical diagonalization process. To get more practical estimates, we must allow more general perturbations (see e.g. the conditions on $\delta H$ in Theorem \[zd:TM:Demmel\_on\_Cholesky\] and Theorem \[zd:TM:PertScaledPD\]), and have a more intuitive understanding of the factor $C(H)$. To that end, $H$ is assumed nonsingular and $C(H)$ is estimated using the factored form $H = D A D$, where $D$ is diagonal matrix defined as the square root of the diagonal of ${{\boldsymbol |\!\!|\!\!|}}H {{\boldsymbol |\!\!|\!\!|}}$, $D = \mathrm{diag}({{\boldsymbol |\!\!|\!\!|}}H{{\boldsymbol |\!\!|\!\!|}})^{1/2}$. The role of the matrix $H_s$ from §\[SSS=well-behaved-PD\] has the matrix $\widehat{A} = D^{-1} {{\boldsymbol |\!\!|\!\!|}}H{{\boldsymbol |\!\!|\!\!|}}D^{-1}$. (Veselić and Slapničar [@ves-sla-93]) Let $H = D A D$ be nonsingular, where $D$ is diagonal matrix defined as the square root of the diagonal of ${{\boldsymbol |\!\!|\!\!|}}H {{\boldsymbol |\!\!|\!\!|}}$, $D = \mathrm{diag}({{\boldsymbol |\!\!|\!\!|}}H{{\boldsymbol |\!\!|\!\!|}})^{1/2}$. 
Then $$C(H)\leq \| |A| \|_2 \|\widehat{A}^{-1}\|_2 \leq \mathrm{Trace}(\widehat{A}) \| \widehat{A}^{-1}\|_2 \leq n \| \widehat{A}^{-1}\|_2 .$$ If $\delta H$ is a Hermitian perturbation such that, for all $i, j$, $|\delta H_{ij}| \leq \varepsilon \sqrt{{{\boldsymbol |\!\!|\!\!|}}H{{\boldsymbol |\!\!|\!\!|}}_{ii} {{\boldsymbol |\!\!|\!\!|}}H{{\boldsymbol |\!\!|\!\!|}}_{jj}}$ and $\varepsilon n \| \widehat{A}^{-1}\|_2 < 1$, then (\[eq:eta-1\]) holds for all eigenvalues, with $\eta =\varepsilon n \| \widehat{A}^{-1}\|_2 $.

The definite and indefinite cases are nicely unified by Dopico, Moro and Molera [@dopico-moro-molera--2000-Weyl], who showed that the argument used in the proof of Theorem \[zd:TM:PertScaledPD\] extends, with careful application of the monotonicity principle, to indefinite matrices.

(Dopico, Moro and Molera [@dopico-moro-molera--2000-Weyl]) Let $H$ and $H+\delta H$ be Hermitian with eigenvalues $\lambda_1\geq\cdots\geq\lambda_n$ and $\widetilde{\lambda}_1\geq\cdots\geq\widetilde{\lambda}_n$, respectively. Let $H$ be nonsingular, and let $H^{1/2}$ be any normal square root of $H$. If $\eta = \| H^{-1/2} \delta H H^{-1/2}\|_2 \leq 1$, then (\[eq:eta-1\]) holds for all $i=1,\ldots, n$.

The difficulty in numerical computation is to have floating point backward errors that are compatible with the condition number. See e.g. the proof of Theorem \[zd:TM:PertScaledPD\], and note how in the relative error bound $\| L^{-1}\delta H L^{-*}\|_2$ the same diagonal scaling matrix results in small relative backward error and improved scaled condition number. At the same time, Algorithm \[ALG:jacobi-eig\] has the backward errors of all iterations pushed back into the original matrix, and the structure of the error is compatible with this scheme, as shown in the proof of Proposition \[PR-BACKSEVP\]. However, this is a special property of a particular algorithm. In general, it may be necessary to apply the perturbation estimate at each step in the sequence (\[eq:Hk-Lambda\]), implemented as (\[eq:tlde-H-k\]), and use the condition number of the current computed iterate $\widetilde{H}^{(k)}$. It is important to note that the scaled condition numbers that are compatible with the structure of numerical errors are not invariant under unitary/orthogonal transformations, see Remark \[REM:scond-unit-i\] and [@mas-94], [@drm-96-conbeh]. These issues have been successfully addressed in the algorithms that we review in §\[SS=sym-indef-fact\] and §\[SS=OJ\]. For simplicity, we consider only real symmetric matrices.

Methods based on symmetric indefinite factorizations {#SS=sym-indef-fact}
----------------------------------------------------

The idea of using the pivoted Cholesky factorization of a positive definite $H$ to compute its spectral decomposition via the SVD of its Cholesky factor (see §\[SSS=Impl-Jacobi-eig\]) can also be applied in the indefinite case. The first step is to obtain a symmetric indefinite factorization $H = G \mathcal{J} G^T$ with $\mathcal{J}$ diagonal, $\mathcal{J}_{ii} = \pm 1$, $G$ well conditioned, and with backward error that allows for application of the perturbation theory with moderate condition numbers. This was first done by Slapničar [@slapnicar-GJGT-98], who adapted the Bunch-Parlett factorization [@Bunch-Parlett]. An important feature of this factorization is that, as a result of pivoting, the matrix $G \mathrm{diag}(1/\|G(:,i)\|_2)$ is usually well conditioned, independent of the condition number of $H$.
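As a concrete, simplified illustration of the factorization $H = G\mathcal{J}G^T$, the sketch below assembles $G$ and $\mathcal{J}$ from SciPy's `ldl` routine. Note that `scipy.linalg.ldl` uses Bunch-Kaufman pivoting, not the Bunch-Parlett variant analyzed in [@slapnicar-GJGT-98], so this is only a stand-in; the graded test matrix is illustrative.

```python
import numpy as np
from scipy.linalg import ldl

def gjg_factorization(H):
    # H = lu @ D @ lu.T with D block diagonal (1x1 and 2x2 pivot blocks);
    # diagonalizing D gives H = G * diag(J) * G^T with J_ii = +-1.
    lu, D, _ = ldl(H, lower=True)
    w, W = np.linalg.eigh(D)
    G = lu @ W @ np.diag(np.sqrt(np.abs(w)))
    J = np.sign(w)
    return G, J

# graded, ill-conditioned, indefinite test matrix (illustrative)
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
d0 = np.logspace(0, -8, 6)
H = d0[:, None] * (B + B.T) * d0[None, :]

G, J = gjg_factorization(H)
print(np.linalg.norm((G * J) @ G.T - H) / np.linalg.norm(H))  # ~ machine precision
Gs = G / np.linalg.norm(G, axis=0)                            # column-equilibrated factor
print(np.linalg.cond(Gs), np.linalg.cond(H))                  # Gs typically far better conditioned
```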
For highly ill-conditioned matrices, accurate symmetric rank-revealing decompositions (RRDs) $H= X D X^T$ have been computed in particular cases of structured matrices; see [@dopico-koev-2005-accurate] for symmetric totally nonnegative (TN) and Cauchy and Vandermonde matrices, and [@pelaes-moro-dstu-2006] for diagonally scaled totally unimodular (DSTU) and total signed compound (TSC) matrices. Note that with $D = |D|^{1/2} \mathcal{J} |D|^{1/2}$, $\mathcal{J}_{ii}=\mathrm{sign}(D_{ii})$, and $G=X |D|^{1/2}$, the RRD $X D X^T$ can be written as $G\mathcal{J}G^T$. In the next step, the eigenvalues and the eigenvectors of $H$ are computed using $G$ and $\mathcal{J}$ as the input matrices, i.e. $H$ is given implicitly by these factors ($H = G \mathcal{J} G^T$). We now briefly review two fundamentally different algorithms, which illustrate the development of accurate (in the sense of (\[zd:eq:intro:rel\_error\])) numerical methods for the symmetric indefinite eigenvalue problem.

### $J$-orthogonal Jacobi diagonalization {#SS=JJ}

Veselić [@veselic-JS-Jacobi-93] noted that the factorization $H= G \mathcal{J} G^T$ can be used to compute the eigenvalues and eigenvectors of $H$ by diagonalizing the pencil $G^T G-\lambda \mathcal{J}$. Here $\mathcal{J}$ is a diagonal matrix with $\pm 1$ on its diagonal, and $G$ has $n$ columns and full column rank; in general $G$ can have more than $n$ rows. The idea is to apply a variant of the one-sided Jacobi method, which we will now briefly describe. In the $k$th step, $G^{(k+1)}=G^{(k)} V^{(k)}$ is computed from $G^{(k)}$ using Jacobi plane rotations, exactly as in §\[SS=One-sided-jacobi\], if $\mathcal{J}_{i_k i_k}$ and $\mathcal{J}_{j_k j_k}$ are of the same sign. On the other hand, if $\mathcal{J}_{i_k i_k}$ and $\mathcal{J}_{j_k j_k}$ have opposite signs, then the Jacobi rotation is replaced with a hyperbolic transformation $$\label{eq:hyp-rot} \begin{pmatrix} V^{(k)}_{i_k i_k} & V^{(k)}_{i_k j_k}\cr V^{(k)}_{j_k i_k} & V^{(k)}_{j_k j_k} \end{pmatrix} = \begin{pmatrix} \cosh\zeta_k & \sinh\zeta_k \cr \sinh\zeta_k & \cosh\zeta_k\end{pmatrix}, \quad \tanh 2\zeta_k = -\frac{2\xi_k}{d_{i_k}+d_{j_k}},$$ $\xi_k=(G^{(k)})_{1:n,i_k}^T (G^{(k)})_{1:n,j_k}$, $d_\ell=(G^{(k)})_{1:n,\ell}^T (G^{(k)})_{1:n,\ell}$, $\ell=i_k, j_k$. The hyperbolic tangent is computed through $${\displaystyle \tanh\zeta_k = \frac{\tanh 2\zeta_k}{1+\sqrt{1-\tanh^2 2\zeta_k}}}.$$ Note that $V^{(k)}$ belongs to the (unbounded) matrix group of $\mathcal{J}$-orthogonal matrices, $(V^{(k)})^T \mathcal{J} V^{(k)}=\mathcal{J}$. (For basic properties of $\mathcal{J}$-orthogonal matrices, with applications in numerical linear algebra see [@higham-j-ort-review].) The limit of the $G^{(k)}$’s is $U\mathrm{diag}(\sigma_1,\ldots,\sigma_n)$; the $i$th column of $U$ is an eigenvector of $H$ associated with the eigenvalue $\lambda_i = \mathcal{J}_{ii}\sigma_i^2$. Conceptually, this is an unusual approach, since, in the context of the symmetric eigenvalue problem, the use of orthogonal matrices in a diagonalization process is considered natural and optimal, in particular in finite precision computation. Here, the elementary transformation matrices (\[eq:hyp-rot\]) are orthogonal in the indefinite inner product induced by $\mathcal{J}$, and the matrix in the limit is $U\mathrm{diag}(\sigma_i)_{i=1}^n$ with $U^T U=I$.
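The following self-contained sketch implements the iteration just described for small dense problems, using simple cyclic sweeps and none of the pivoting, convergence, and accuracy safeguards of the cited implementations; it is meant only to make the trigonometric/hyperbolic case split concrete. Function and variable names are illustrative.

```python
import numpy as np

def j_jacobi_eig(G, J, tol=1e-14, max_sweeps=30):
    """One-sided J-orthogonal Jacobi for H = G*diag(J)*G^T, J entries +-1.
    Columns of G are transformed until mutually orthogonal; then
    lambda_i = J_i * ||g_i||^2 and the normalized columns are eigenvectors."""
    G = np.array(G, dtype=float, copy=True)
    n = G.shape[1]
    for _ in range(max_sweeps):
        off = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                gi, gj = G[:, i], G[:, j]
                xi = gi @ gj
                di, dj = gi @ gi, gj @ gj
                rel = abs(xi) / np.sqrt(di * dj)
                off = max(off, rel)
                if rel <= tol:
                    continue
                if J[i] == J[j]:
                    # trigonometric (orthogonal) rotation, as in one-sided Jacobi SVD
                    tau = (dj - di) / (2.0 * xi)
                    t = 1.0 if tau == 0 else np.sign(tau) / (abs(tau) + np.sqrt(1.0 + tau * tau))
                    c = 1.0 / np.sqrt(1.0 + t * t); s = t * c
                    G[:, i], G[:, j] = c * gi - s * gj, s * gi + c * gj
                else:
                    # hyperbolic rotation: tanh(2*zeta) = -2*xi/(di+dj)
                    t2 = -2.0 * xi / (di + dj)
                    t = t2 / (1.0 + np.sqrt(1.0 - t2 * t2))   # tanh(zeta)
                    ch = 1.0 / np.sqrt(1.0 - t * t); sh = t * ch
                    G[:, i], G[:, j] = ch * gi + sh * gj, sh * gi + ch * gj
        if off <= tol:
            break
    sigma2 = np.sum(G * G, axis=0)
    return J * sigma2, G / np.sqrt(sigma2)     # eigenvalues, eigenvectors

rng = np.random.default_rng(1)
G0 = rng.standard_normal((5, 5))
J0 = np.array([1.0, 1.0, -1.0, 1.0, -1.0])
lam, U = j_jacobi_eig(G0, J0)
print(np.sort(lam))
print(np.sort(np.linalg.eigvalsh((G0 * J0) @ G0.T)))   # same spectrum
print(np.linalg.norm(U.T @ U - np.eye(5)))              # columns of U are (nearly) orthonormal
```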
The theoretical error bound for the computed eigenvalues allows for a potential growth of the condition number; in practice only a moderate growth has been observed, and the algorithm is considered accurate in the sense of (\[zd:eq:intro:rel\_error\]). For a detailed analysis and numerical evidence see [@slapnicar-thesis-92], [@drm-har-93], [@slapnicar-GJGT-98], [@slapnicar-HAEVD-2002], [@SLAPNICAR200057]. This approach can be applied to skew-symmetric problems $Sx=\lambda x$, $S\in\mathbb{R}^{2n\times 2n}$, see [@pie-93]. In some applications, it is advantageous to formulate the problems in terms of the factors $G$ and $\mathcal{J}$, and not to assemble the matrix $H$ at all, see [@veselic-GJGPT-2000].

### Implicit symmetric Jacobi method {#SSS=ISJM}

Dopico, Koev and Molera [@Dopico2009] applied, analogously to the algorithm in §\[SS=One-sided-jacobi\], the symmetric Jacobi method implicitly, i.e. by changing only the factor $G$ in $H=G\mathcal{J}G^T = X D X^T$. In the $X D X^T$ representation of the RRD, we can assume (by adjusting $D$) that $X$ has unit columns. Each matrix in the symmetric Jacobi algorithm is given implicitly as $H^{(k)} = G^{(k)} \mathcal{J} (G^{(k)})^T$, and one step of the method only computes $G^{(k+1)} = (V^{(k)})^T G^{(k)}$, thus implicitly defining $H^{(k+1)} = (V^{(k)})^T H^{(k)} V^{(k)} = G^{(k+1)} \mathcal{J} (G^{(k+1)})^T$. This procedure can be preconditioned using the column-pivoted QR factorization $ G \Pi = QR$, i.e. $Q^T H Q = R (\Pi^T \mathcal{J}\Pi) R^T$ and the implicit Jacobi scheme is applied to $R \widetilde{\mathcal{J}} R^T$, $\widetilde{\mathcal{J}} = \Pi^T \mathcal{J}\Pi$. As a result of preconditioning, the convergence may be substantially faster. This implicit Jacobi algorithm can be implemented to deliver the spectral decomposition with the accuracy determined by $\kappa_2(X)$. This includes a carefully designed stopping criterion, i.e., conditions to declare numerical convergence and to use the diagonal entries of the last implicitly computed $H^{(k)}$ as the approximate eigenvalues. For details, we refer to [@Dopico2009].

### A remark on non-symmetric matrices {#SS=nonsymm-TN}

It has been noted in [@bar-dem-90], [@ves-sla-93] that some of the perturbation estimates of the type (\[zd:eq:intro:rel\_error\]) extend to diagonalizable non-symmetric matrices. It is a challenging problem to identify classes of non-symmetric matrices for which eigenvalue computation with high accuracy is feasible, and to devise numerical algorithms capable of delivering such accuracy. Following the approach of §\[SS=svd-ldu-classes\], the idea is to use special matrix structure and the parameters that define its entries to find an implicit representation, in the form of a decomposition, and then to apply a specially tailored algorithm.[^9] The first successful breakthrough is the work of Koev [@koev-2005-TN] on totally nonnegative matrices. Totally nonnegative (TN) matrices have all their minors nonnegative, and their eigenvalues are real and positive. A nonsingular TN matrix $A$ is characterized by a unique bidiagonal decomposition $A=L_1\cdots L_{n-1} D U_{n-1}\cdots U_1$, where $D$ is diagonal, the $L_i$’s are unit lower bidiagonal, and the $U_i$’s are unit upper bidiagonal with additional zero and sign structure [@Gasca1996]. Such a decomposition follows from Neville elimination.
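As a small illustration of the bidiagonal parametrization, the hypothetical example below assembles a matrix as a product of nonnegative unit bidiagonal factors and a positive diagonal (ignoring the additional zero/sign structure of the canonical decomposition [@Gasca1996]) and confirms that its eigenvalues come out real and positive, as they must for a nonsingular TN matrix. The parameters are random and purely illustrative.

```python
import numpy as np

def unit_bidiag(offdiag, lower=True):
    """Unit bidiagonal matrix with the given (nonnegative) off-diagonal entries."""
    n = len(offdiag) + 1
    B = np.eye(n)
    for i, m in enumerate(offdiag):
        if lower:
            B[i + 1, i] = m
        else:
            B[i, i + 1] = m
    return B

rng = np.random.default_rng(3)
n = 4
Ls = [unit_bidiag(rng.random(n - 1), lower=True) for _ in range(n - 1)]
Us = [unit_bidiag(rng.random(n - 1), lower=False) for _ in range(n - 1)]
d = rng.random(n) + 0.5

A = np.eye(n)
for L in Ls:                    # A = L_1 ... L_{n-1}
    A = A @ L
A = A @ np.diag(d)              # ... D
for U in reversed(Us):          # ... U_{n-1} ... U_1
    A = A @ U

eig = np.linalg.eigvals(A)
print(np.sort(eig.real))        # real and positive (imaginary parts are at rounding level)
```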
Koev [@koev-2005-TN] used this bidiagonal decomposition of a non-symmetric nonsingular TN matrix as the starting point for the first accurate algorithm, with detailed perturbation theory and error analysis, for computing eigenvalues of non-symmetric matrices. This led to a more general development of the numerical linear algebra of TN matrices, with new accurate algorithms for matrices derived from TN matrices [@koev-tn-2007]. Other examples of highly accurate solutions of non-symmetric eigenvalue problems include e.g. diagonally dominant M-matrices parametrized by the off-diagonal entries and the row sums [@eig-M-matrix-Ye].

Eigenvalue computation from the SVD {#SS=OJ}
-----------------------------------

An accurate diagonalization of an indefinite matrix can be derived from its accurate SVD, because the spectral decomposition and the SVD are equal up to a multiplication with the inertia of $H$. Furthermore, perturbation theory [@ves-sla-93] ensures that $H$ is well-behaved with respect to computing eigenvalues if it is well-behaved with respect to computing the singular values. To turn this into a robust numerical procedure, one must carefully recover the signs of the eigenvalues from the information carried by the singular vectors. This has been done in [@dopico-molera-moro-2003-SEVP] with Algorithm \[zd:ALG:eig:indef:spanish\] that turns any accurate RRD of a symmetric indefinite $H$ into an accurate spectral decomposition:

1. Start from an accurate RRD $H = X D Y^T$.
2. $(\Sigma, Q, V) = \textsf{PSVD}(X,D,Y)$.
3. Recover the signs of the eigenvalues, $\lambda_i=\pm\sigma_i$, using the structure of $V^T Q$.
4. Recover the eigenvector matrix $U$ using the structure of $V^T Q$.

Note that the first step aims at an accurate SVD and that symmetry of the decomposition is not the first priority. Also, for provable high accuracy for the computed eigenvalues and eigenvectors, the cases of multiple or tightly clustered singular values must be carefully analyzed; for further details, we refer to [@dopico-molera-moro-2003-SEVP]. Finally, note that we need to compute the full SVD of $H$, even if we only need its eigenvalues.

Acknowledgment
==============

The author wishes to thank Jesse Barlow, Jim Demmel, Froilán Martínez Dopico, Vjeran Hari, Plamen Koev, Juan Manuel Molera Molera, Eberhard Pietzsch, Ivan Slapničar, Ninoslav Truhar, Krešimir Veselić, for numerous exciting discussions on accurate matrix computations, and in particular to Julio Moro Carreño for encouragement to write this paper and for many useful comments that improved the presentation.

[^1]: Here the real case is cited for the sake of simplicity. An analogous analysis applies to the complex Hermitian case.

[^2]: Real matrices are used only for the sake of simplicity of the presentation.

[^3]: Here the matrix absolute value is defined element-wise.

[^4]: It is this mixing of large and small columns by orthogonal transformations oblivious of the difference in length that destroys the accuracy of the bidiagonalization. For illustrating examples see [@drm-xgesvdq].

[^5]: We can also consider column-wise backward errors as in §\[SS=One-sided-jacobi\], but this involves the behavior of scaled condition numbers of the iterates.

[^6]: Alternatively, we may assume that $X$ and $Y$ are already well conditioned (thus properly scaled) and that the computed matrices satisfy $ \|\delta X\|_2 \leq \epsilon_1 \|X\|_2,\;\; \|\delta Y\|_2 \leq \epsilon_2 \|Y\|_2,\;\; |\delta D_{ii}| \leq \epsilon_3 |D_{ii}|,\;\;i=1,\ldots, p.
$

[^7]: The theory in [@ves-sla-93] has been developed for Hermitian pencils $H-\lambda M$ with positive definite $M$. Here we take $M=I$ for the sake of simplicity.

[^8]: This term is used for typical errors occurring in finite precision (computer) floating point arithmetic.

[^9]: Recall the example of the Vandermonde matrices in §\[SS=vandermonde\].
--- abstract: 'We present scalable implementations of spectral-element-based Schwarz overlapping (overset) methods for the incompressible Navier-Stokes (NS) equations. Our SEM-based overset grid method is implemented at the level of the NS equations, which are advanced independently within separate subdomains using interdomain velocity and pressure boundary-data exchanges at each timestep or sub-timestep. Central to this implementation is a general, robust, and scalable interpolation routine, [*gslib-findpts*]{}, that rapidly determines the computational coordinates (processor $p$, element number $e$, and local coordinates $(r,s,t) \in \Oh := [-1,1]^3$) for any arbitrary point $\bx^* =(x^*,y^*,z^*) \in \Omega \subset \RR^3$. The communication kernels in [*gslib*]{} execute with at most $\log P$ complexity for $P$ MPI ranks, have scaled to $P > 10^6$, and obviate the need for development of any additional MPI-based code for the Schwarz implementation. The original interpolation routine has been extended to account for multiple overlapping domains. The new implementation discriminates the possessing subdomain by distance to the domain boundary, such that the interface boundary data is taken from the inner-most interior points. We present application of this approach to several heat transfer and fluid dynamic problems, discuss the computation/communication complexity and accuracy of the approach, and present performance measurements for $P > 12,000$.' author: - 'Ketan Mittal, Som Dutta, Paul Fischer' bibliography: - 'lit.bib' title: 'Nonconforming Schwarz-Spectral Element Methods For Incompressible Flow' --- Overset ,High-order ,Scalability ,Turbulence ,Heat-Transfer Introduction {#intro} ============ High-order spectral element methods (SEMs) are well established as an effective means for simulation of turbulent flow and heat transfer in a variety of engineering applications (e.g., [@dfm02; @dutta2016; @hosseini2016; @merzari2017]). Central to the performance and accuracy of these methods is the use of hexahedral elements, $\Omega^e$, which are represented by isoparametric mappings of the reference element $\Oh:=[-1,1]^d$ in $d$ space dimensions. With such a configuration, it is possible to express all operators in a factored matrix-free form that requires only $O(n)$ storage, where $n=EN^d$ is the number of gridpoints for a mesh comprising $E$ elements of order $N$. The fact that the storage scales as $N^d$ and not $N^{2d}$ is a major advantage of the spectral element method that was put forth in the pioneering work of Orszag [@sao80] and Patera [@patera84] in the early 80s. The work is also low, $O(N^{d+1})$, and can be cast as dense matrix-matrix products, which are highly efficient on modern-day architectures [@dfm02]. The efficiency and applicability of the SEM is tied closely to the ability to generate all-hex meshes for a given computational domain. While all-tet meshing is effectively a solved problem, the all-hex case remains challenging in many configurations. Here, we explore Schwarz overlapping methods as an avenue to support nonconforming discretizations in the context of the SEM. Overlapping grids simplify mesh generation by allowing the user to represent the solution on a complex domain as grid functions on relatively simpler overlapping regions (also known as *overset* or *chimera* grids). 
These simpler overlapping regions allow grid generation where local mesh topologies are otherwise incompatible, which is a feature of particular importance for complex 3D domains and in the simulation of flows in time-dependent domains subject to extreme mesh deformation. Overlapping grids also enable use of discretization of varying resolution in each domain based on the physics of the problem in that region. $\begin{array}{ccc} \includegraphics[height=15mm,width=24mm]{schwarz1} & \hspace{2mm} \includegraphics[height=15mm,width=15mm]{rectangle} &\hspace{-2mm} \includegraphics[height=15mm]{circle} \\ \end{array}$ Overlapping grids introduce a set of challenges that are not posed by a single conforming grid [@Rogersbestpractices]. Using multiple meshes requires boundary condition for each *interface/artificial* boundary that is interior to the other domain ($\Gamma^{12}$ and $\Gamma^{21}$ in Fig. \[fig:schwarz\]). Since these surfaces are not actual domain boundaries, boundary condition data must be interpolated from a donor element in the overlapping mesh, which must be identified at the beginning of a calculation. For moving or deforming meshes, donor elements must be re-identified after each timestep. Additionally, if multiple overlapping meshes share a target donor point, it is important to pick the donor mesh that will provide the most accurate solution. For production-level parallel computing applications, the identification of donor element, interpolation of data, and communication must be robust and scalable. Additionally, differences in resolution of the overlapping meshes can impact global mass-flux balances, leading to a violation of the divergence-free constraint with potentially stability consequences for incompressible flow calculations. These issues must be addressed in order to enable scalable and accurate turbulent flow and heat transfer calculations. Since their introduction in 1983 [@steger1983], several overset methods have been developed for a variety of problems ranging from Maxwell’s Equations [@angel2018] to fluid dynamics [@chandar2018comparative] to particle tracking [@koblitz2017]. Overset-grid methods are also included in commercial and research software packages such as Star-CCM [@cd2012v7], Overflow [@nicholsoverflowmanual] and Overture [@overturemanual]. Most of these implementations, however, are at most fourth-order accurate in space [@angel2018; @koblitz2017; @coder2017contributions; @henshaw1994; @brazell2016overset; @crabill2016]. Sixth-order finite-difference based methods have been presented in [@aarnes2018high; @nicholsoverflowmanual], while sixth-order finite volume based schemes are available in elsA [@cambier2013onera]. A spectral element based Schwarz method presented by Merrill et. al. [@merrill2016] has demonstrated exponential convergence. Here, we extend the work of Merrill et. al. [@merrill2016] to develop a robust, accurate, and scalable framework for overlapping grids based on the spectral element method. We start with a brief description of the SEM and existing framework which we build upon (Section \[sec: sembackground\]). We discuss our approach to fast, parallel, high-order interpolation of boundary data and a mass-balance correction that is critical for incompressible flow. Finally we present some applications in Section \[sec: apps\] which demonstrate the scalability and accuracy of our overlapping grid framework. 
SEM-Schwarz for Navier-Stokes {#sec: sembackground} ============================= The spectral element method (SEM) was introduced by Patera for the solution of the incompressible Navier-Stokes equations (NSE) in [@patera84]. The geometry and the solution are represented using the $N$th-order tensor-product polynomials on isoparametrically mapped elements. Variational projection operators are used to discretize the associated partial differential equations. While technically a $Q_N$ finite element method (FEM), the SEM’s strict matrix-free tensor-product formulation leads to fast implementations that are qualitatively different than classic FE methods. SE storage is only $O(n)$ and work scales as $O(EN^{d+1})=O(nN)$, where $n=EN^d$ is the number of points for an $E$-element discretization of order $N$ in $\RR^d$. (Standard $p$-type FE formulations have work and storage complexities of $O(nN^d)$, which is prohibitive for $N>3$.) All $O(N^{d+1})$ work terms in the SEM can be cast as fast matrix-matrix products. A central feature of the SEM is to use nodal bases on the Gauss-Lobatto-Legendre (GLL) points, which lead to an efficient and accurate (because of the high order) diagonal mass matrix. Overintegration (dealiasing) is applied only to the advection terms to ensure stability [@malm13]. For the unsteady NSE, we use semi-implicit BDF$k$/EXT$k$ timestepping in which the time derivative is approximated by a $k$th-order backward difference formula (BDF$k$), the nonlinear terms (and any other forcing) are treated with $k$th-order extrapolation (EXT$k$), and the viscous, pressure, and divergence terms are treated implicitly. This approach leads to a linear unsteady Stokes problem to be solved at each timestep, which is split into independent viscous and pressure (Poisson) updates, with the pressure-Poisson problem being the least well conditioned (and most expensive) substep in the time advancement. We support two spatial discretizations for the Stokes problem: the $\PN2$ formulation with a continuous $Q_N$ velocity space and a discontinuous $Q_{N-2}$ pressure space; and the $\PNN$ approach having equal order continuous velocity and pressure. Full details are provided in [@dfm02; @fischer17]. The Schwarz-SEM formulation of the NSE brings forth several considerations. Like all Schwarz methods, the basic idea is to interpolate data on subdomain boundaries that are interior to $\Omega$ from donor subdomains and to then solve the subdomain problems independently (in parallel) on different processor subsets. In principle, the NSE require only velocity boundary conditions. The first challenge is to produce that boundary data in an accurate and stable way. Spatial accuracy comes from using high-order (spectral) interpolation, as described in the next section. For temporal accuracy, there are several new concerns. As a cost-saving measure, one can simply use lagged donor-velocity values (corresponding to piecewise-constant extrapolation) for the subdomain interface updates. While only first-order accurate, this scheme is stable without the need for subiterations. To realize higher-order accuracy (up to $k$), one can extrapolate the boundary data in time. Generally, this extrapolation must be coupled with a predictor-corrector iteration for the unsteady-Stokes substep. (The nonlinear term is already accurately treated by extrapolation and is stable provided one adheres to standard CFL stability limits.) 
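A small sketch of the interface-data extrapolation just described is given below, using the standard constant-timestep extrapolation coefficients for $m=1$, 2, 3. The commented driver loop only indicates where the extrapolation and corrector subiterations sit in a timestep; the helper names there are placeholders, not part of the actual solver or *gslib* API.

```python
import numpy as np

# mth-order extrapolation coefficients (uniform timestep), applied to the
# interface-velocity history u^n, u^{n-1}, u^{n-2} to predict u^{n+1}
# on the artificial (interface) boundary.
EXT_COEFFS = {1: [1.0], 2: [2.0, -1.0], 3: [3.0, -3.0, 1.0]}

def extrapolate_interface(history, m):
    """history[0] = most recent interface data u^n, history[1] = u^{n-1}, ..."""
    return sum(c * h for c, h in zip(EXT_COEFFS[m], history[:m]))

# Schematic placement within one timestep (illustrative pseudostructure):
#   u_bc = extrapolate_interface(history, m)       # predictor on each interface
#   for it in range(kappa_iter):                   # corrector subiterations
#       u_bc = exchange_interface_data(u_bc, ...)  # interpolate from donor meshes
#       solve_unsteady_stokes(u_bc, ...)           # subdomain solve with updated BCs

print(extrapolate_interface([np.array([1.0, 2.0]), np.array([0.9, 1.8]), np.array([0.7, 1.5])], 3))
```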
Typically three to five subiterations ($\kappa_{iter}$) are needed per timestep to ensure stability [@peet2012] for $m$th-order extrapolation of the interface boundary data, when $m>1$. Using this approach, Merrill [*et al.*]{} [@merrill2016] demonstrated exponential convergence in space and up to third-order accuracy in time for Schwarz-SEM flow applications. From a practical standpoint, our SEM domain decomposition approach is enabled by using separate MPI communicators for each overlapping mesh, $\Omega^s$, which allows all of the existing solver technology (100K lines of code) to operate with minimal change. The union of these separate [*session*]{} (i.e., subdomain) communicators is [MPI\_COMM\_WORLD]{}, which is invoked for the subdomain data exchanges and for occasional collectives such as partition-of-unity-weighted integrals over $\Omega=\bigcup \Omega^s$. The data exchange and donor point-set identification is significantly streamlined through the availability of a robust interpolation library, *gslib-findpts*, which obviates the need for direct development of any new MPI code, as discussed next. Interpolation {#sec: findpts} ------------- The centerpiece of our multidomain SEM-based nonconforming Schwarz code is the fast and robust interpolation utility [*findpts*]{}. This utility grew out of a need to support data interrogation and Lagrangian particle tracking on $P=10^4$–$10^6$ processors. High fidelity interpolation for highly curved elements, like the ones supported by SEM, is quite challenging. Thus, [*findpts*]{} was designed with the principles of robustness (i.e., it should never fail) and speed (i.e., it should be fast). *findpts* is part of *gslib*, a lightweight *C* communication package that readily links with any *Fortran*, [*C*]{}, or *C++* code. *findpts* provides two key capabilities. First, it determines computational coordinates $\bq^*=(e^*,p^*,r^*,s^*,t^*)$ (element $e$, processor $p$ and reference-space coordinates $\br = (r,s,t)$) for any given point $\bx^* = (x^*,y^*,z^*) \in \RR^3$. Second, *findpts\_eval* interpolates any given scalar field for a given set of computational coordinates (determined from *findpts*). Because interpolation is nonlocal (a point $\bx^*$ may originate from any processor), *findpts* and *findpts\_eval* require interprocessor coordination and must be called by all processors in a given communicator. For efficiency reasons, interpolation calls are typically batched with all queries posted in a single call, but the work can become serialized if all target points are ultimately owned by a single processor. All communication overhead is at most $\log P$ by virtue of [*gslib*]{}’s generalized and scalable[^1] all-to-all utility, [*gs\_crystal*]{}, which is based on the [*crystal router*]{} algorithm of [@fox88]. To find a given point $\bq^*(\bx^*)$, [*findpts*]{} first uses a hash table to identify processors that could potentially own the target point $\bx^*$. A call to [*gs\_crystal*]{} exchanges copies of the $\bx^*$ entries between sources and potential destinations. Once there, elementwise bounding boxes further discriminate against entries passing the hash test. At that point, a trust-region based Newton optimization is used to minimize $||\bx^{e}(\br)-\bx^*||^2$ and determine the computational coordinates for that point. The initial guess in the minimization is taken to be the closest point on the mapped GLL mesh for element $e$. 
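To make the coordinate search concrete, the sketch below runs the Newton iteration for a single bilinearly mapped quadrilateral; it is a low-order stand-in for the mapped-GLL geometry handled by *findpts* and omits the hash table, bounding-box screening, trust-region safeguards, and the GLL-based initial guess (the element center is used instead). The element and target point are illustrative.

```python
import numpy as np

def find_rst_bilinear(corners, x_star, tol=1e-12, max_iter=25):
    """Newton search for the reference coordinates (r,s) of a physical point
    x_star in a bilinear quad. corners: 4x2 array ordered (-1,-1),(1,-1),(1,1),(-1,1)."""
    r = np.zeros(2)                                   # initial guess: element center
    for _ in range(max_iter):
        N = 0.25 * np.array([(1 - r[0]) * (1 - r[1]),
                             (1 + r[0]) * (1 - r[1]),
                             (1 + r[0]) * (1 + r[1]),
                             (1 - r[0]) * (1 + r[1])])
        dNdr = 0.25 * np.array([-(1 - r[1]), (1 - r[1]), (1 + r[1]), -(1 + r[1])])
        dNds = 0.25 * np.array([-(1 - r[0]), -(1 + r[0]), (1 + r[0]), (1 - r[0])])
        x = N @ corners                               # current mapped point
        Jac = np.column_stack((dNdr @ corners, dNds @ corners))
        dr = np.linalg.solve(Jac, x_star - x)
        r = r + dr
        if np.linalg.norm(dr) < tol:
            break
    found = bool(np.all(np.abs(r) <= 1 + 1e-10))      # inside the reference element?
    return r, found

quad = np.array([[0.0, 0.0], [2.0, 0.2], [2.3, 1.5], [0.1, 1.2]])
print(find_rst_bilinear(quad, np.array([1.1, 0.7])))
```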
In addition to returning the computational coordinates of a point, *findpts* also indicates whether a point was found inside an element, on a border, or was not found within the mesh. Details of the *findpts* algorithm are given in [@findpts]. In the context of our multidomain Schwarz solver, [*findpts*]{} is called at the level of [MPI\_COMM\_WORLD]{}. One simply posts all meshes to [*findpts\_setup*]{} along with a pair of discriminators. The first discriminator is an integer field which, at the setup and execute phases, is equal to the subdomain number (the [*session id*]{}). In the [*findpts*]{} setup phase, each element in $\Omega$ is associated with a single subdomain $\Omega^j$, and the subdomain number $j$ is passed in as the discriminator. During the [*findpts*]{} execute phase, one needs boundary values for interface points on $\Omega^j$, but does not want these values to be derived from elements in $\Omega^j$. All interface points associated with $\dO^j$ are tagged with the $j$ discriminator and [*findpts*]{} will only search elements in $\Omega \backslash \Omega^j$. In the case of multiply-overlapped domains, it is still possible to have more than one subdomain claim ownership of a given boundary point $\bx^*$. To resolve such conflicts, we associate with each subdomain $\Omega^j$ a local distance function, $\delta^j(\bx)$, which indicates the minimum distance from any point $\bx \in \Omega^j$ to $\dO^j$. The ownership of any boundary point $\bx^* \in \dO^j$ between two or more domains $\Omega^k$, $k\neq j$, is taken to be the domain that maximizes $\delta^k(\bx^*)$. This choice is motivated by the standard Schwarz arguments, which imply that errors decay as one moves [*away*]{} from the interface, in accordance with decay of the associated Green’s functions. We illustrate this situation in Fig. \[fig:neknekdom\] where $\bx^* \in \dO^2$ belongs to $\Omega^1$ and $\Omega^3$. In this case, interpolated values (from [*findpts\_eval*]{}) will come from $\Omega^1$ because $\bx^*$ is “more interior” to $\Omega^1$ than $\Omega^3$. ![*findpts* considers distance from the interface boundaries ($\delta^{1}$ in $\Omega^1$ and $\delta^{3}$ in $\Omega^3$) when determining the best donor element for $\bx^* \in \dO^2.$](neknek_domains){height="40mm"} \[fig:neknekdom\] Mass-Balance ------------ Since pressure and divergence-free constraint are tightly coupled in incompressible flow, even small errors at interface boundaries (due to interpolation or use of overlapping meshes of disparate scales) can lead to mass-imbalance in the system, resulting in erroneous and unsmooth pressure contours. For a given subdomain $\Omega^j$, the mass conservation statement for incompressible flow is simply $$\begin{aligned} \label{eq:consv1} \int_{\dO^j} \bu \cdot \bhn &=&0,\end{aligned}$$ where $\bhn$ represents the outward pointing unit normal vector on $\dO^j$. Our goal is to find a nearby correction to the interpolated surface data that satisfies (\[eq:consv1\]). Let $\dO_D$ denote the subset of the domain boundary $\dO$ corresponding to Dirichlet velocity conditions and $\dO_N$ be the Neumann (outflow) subset. If $\dO_N \cap \dO^j=0$, then there is a potential to fail to satisfy (\[eq:consv1\]) because the interpolated fluxes on $\dO^j$ may not integrate to zero. Let $\bhu$ denote the tentative velocity field defined on $\dO^j$ through prescribed data on $\dO_d := \dO^j \cap \dO_D$ and interpolation on $\dO_i := \dO^j \backslash \dO_d$. 
Let $\bu = \bhu + \btu$ be the flux-corrected boundary data on $\dO^j$ and $\btu$ be the correction required to satisfy (\[eq:consv1\]). One can readily show that the choice $$\begin{aligned} \label{eq:corr} \left. \btu \right|^{}_{\dO_i} &=& \left. \gamma \bhn \right|^{}_{\dO_i}\end{aligned}$$ is the $L^2$ minimizer of possible trace-space corrections that allow (\[eq:consv1\]) to be satisfied, provided that $$\begin{aligned} \label{eq:consv2} \nonumber \gamma &=& - \frac{ \int_{\dO_d} \bhu \cdot \bhn \, dA + \int_{\dO_i} \bhu \cdot \bhn \, dA } { \int_{\dO_i} \bhn \cdot \bhn \, dA } \\[1ex] &=& - \frac{ \int_{\dO^j} \bhu \cdot \bhn \, dA } { \int_{\dO_i} \bhn \cdot \bhn \, dA }.\end{aligned}$$ The denominator of (\[eq:consv2\]) of course equates to the surface area of the interface boundary on $\Omega^j$. The correction (\[eq:corr\]) is imposed every time boundary data is interpolated between overlapping domains during Schwarz iterations. Results & Applications {#sec: apps} ====================== We now present several applications that demonstrate the performance and accuracy of the Schwarz-SEM implementation. Parallel Performance -------------------- A principal concern for the performance of Schwarz methods is the overhead associated with interpolation, especially for high-order methods, where interpolation costs scale as $O(N^3)$ per interrogation point in $\RR^3$. Our first examples address this question. Figure 3 shows a test domain consisting of a single spectral element ($N=15$) in $\RR^3$. The spiral configuration leads to many local mimima in the Newton functional, but the use of nearest GLL points as initial guesses generally avoids being trapped in false minima. On a 2.3 GHz Macbook Pro, [*findpts*]{} requires $\approx$ 0.08 seconds to find $\bq$ for 1000 randomly distributed points on $\hat{\Omega}$ to a tolerance of $10^{-14}$. ![2D slice of a 3D spiral at $N=15$. *findpts* robustly finds 1000 random points in $\approx$ 0.08 sec.](spiral){height="45mm"} \[fig:spiral\] In addition to the [*findpts*]{} cost, we must be concerned with the overhead of the repeated [*findpts\_eval*]{} calls, which are invoked once per subiteration for each Schwarz update. In a parallel setting, interpolation comes with the additional overhead of communication, which is handled automatically in $\log P$ time by [*findpts\_eval*]{}. Figure \[fig:scaling\] shows a strong-scale plot of time versus number of processors $P$ for a calculation with $E$=20,000 spectral elements at $N=7$ ($n$=6.8 million grid points) on Cetus, an IBM Blue Gene/Q at the Argonne Leadership Computing Facility. For this geometry, shown in Fig. \[fig:knotch\], there are 2000 elements at the interface-boundary with a total of 128,000 interface points that require interpolation at each Schwarz iteration. Overlapping domains simplify mesh generation for this problem by allowing an inner mesh (twisting in the spanwise direction) to overlap with an extruded outer mesh. $\begin{array}{cc} \includegraphics[height=35mm,width=35mm]{scaling1} & \includegraphics[height=35mm,width=35mm]{scaling2} \\ \end{array}$ $\begin{array}{cc} \includegraphics[height=35mm,width=35mm]{knotch_2dmesh} & \includegraphics[height=35mm,width=35mm]{knotchtwist} \\ \end{array}$ Figure \[fig:scaling\] shows scaling for overall time per step and time spent in *findpts* and *findpts\_eval*. 
Due to the inherent load imbalance in overlapping grids, owing to the fact that all interface elements might not be located on separate processors, the scaling for *findpts* and *findpts\_eval* is not ideal. However, *findpts* takes only 10% and *findpts\_eval* takes 1% of time compared to the total time to solution per timestep, and as a result the scaling of the overall method is maintained. The parallel efficiency of the calculation is more than $90\%$ until the number of MPI ranks exceeds the number of interface elements. The parallel efficiency drops to $60\%$ at $n/P = 1736$, which is in accord with performance models for the (monodomain) SEM [@fischer2015scaling]. Exact Solution for Decaying Vortices ------------------------------------ Our next example demonstrates that the Schwarz implementation preserves the exponential convergence expected of the SEM. Walsh derived a family of exact eigenfunctions for the Stokes and Navier-Stokes equations based on a generalization of Taylor-Green vortices in the periodic domain $\Omega = [0,2\pi]^2$. For all integer pairs ($m, n$) satisfying $\lambda = -(m^2+n^2)$, families of eigenfunctions can be formed by defining streamfunctions $\psi$ that are linear combinations of $\cos(mx) \cos(ny),$ $\sin(mx) \cos(ny),$ $\cos(mx) \sin(ny),$ and $\sin(mx) \sin(ny).$Taking as an initial condition the eigenfunction $\hat{\bu} = (-\psi_y,\psi_x)$, a solution to the NSE is $\bu = e^{\nu \lambda t} \hat{\bu}(\bx)$. The solution is stable only for modest Reynolds numbers. Interesting long-time solutions can be realized, however, by adding a relatively high-speed mean flow to the eigenfunction. We demonstrate the multidomain capability using the three meshes illustrated in Fig. \[fig:eddy\_nnn\] (left); a periodic background mesh has a square hole in the center while a pair of circular meshes cover the hole. Exponential convergence of the velocity error with respect to $N$ is demonstrated in Fig. \[fig:eddy\_nnn\] (right). (Here, the norm is the pointwise maximum of the 2-norm of the vector field, i.e., $||{\bf \ue}||_{2,\infty} := \max_{i} ||{\bf e}_i||_2$.) For extrapolation orders $m=1$, 2, and 3, $\kappa_{iter}$ was set to 1, 4 and 7 to ensure stability. $\begin{array}{cc} \includegraphics[height=35mm,width=35mm]{eddydonor3} & \includegraphics[height=35mm,width=35mm]{eddy_vxy2} \\ \end{array}$ Vortex Breakdown ---------------- Escudier [@escudier1984] studied vortex breakdown in a cylindrical container with a lid rotating at angular velocity $\Omega$. He considered cylinders for various different $H/R$ i.e. height to radius ratio, at different $Re = \frac{\Omega^2 R}{\nu}$. Sotiropoulos & Ventikos [@sotiropoulos1998] did a computational study on this experiment comparing the structure and location of the bubbles that form as a result of vortex breakdown. Here, we use overlapping grids for $Re=1854$ and $H/R = 2$ case to compare our results against [@escudier1984], [@sotiropoulos1998], and a monodomain SEM based solution. The monodomain mesh has 140 elements at $N=9$. The overlapping mesh was generated by cutting the monodomain across the cylinder axis and extruding the two halves. The calculations were run with $m=1$ and $\kappa_{iter}=1$ (no subiteration) for 2000 convective time-units to reach steady state. Figure \[fig:vbnnvsn\] compares the axial velocity along the centerline for monodomain and overlapping grids based solution, and Table \[table: vbnnvsn\] compares these solution with Escudier and Sotiropoulos. 
At this resolution, the Schwarz-SEM and SEM results agree to within $<$ 1 percent and to within 1 to 2 percent of the results of [@sotiropoulos1998]. $\begin{array}{cc} \includegraphics[height=30mm]{vbpcfd} & \includegraphics[height=30mm]{stream_inplane} \\ \end{array}$ $z_1$ $z_2$ $z_3$ $z_4$ -------------- -------- -------- -------- -------- Overlapping 0.4222 0.7752 0.9596 1.1186 Monodomain 0.4224 0.7748 0.9609 1.1191 Escudier 0.42 0.74 1.04 1.18 Sotiropoulos 0.42 0.772 0.928 1.09 : Zero-crossings ($z$) for vertical velocity along cylinder centerline for the vortex breakdown.[]{data-label="table: vbnnvsn"} Turbulent Channel Flow ---------------------- Turbulent boundary layers are one of the applications where overlapping grids offer the potential for significant savings. These flows feature fine scale structures near the wall with relatively larger scales in the far-field. As a first step to addressing this class of problems, we validate our Schwarz-SEM scheme for turbulent channel flow, for which abundant data is available in literature. In particular, we compare mono- and multidomain SEM results at Reynolds number $Re_{\tau}=\frac{u_{\tau}h}{\nu}=180$ with direct numerical simulation (DNS) results of Moser et. al. [@moser1999], who used 2.1 million grid points, and with the DNS of Vreman & Kuerten [@vreman2014], which used 14.2 million gridpoints. The Reynolds number is based on the friction velocity $u_{\tau}$ at the wall, channel half-height $h$ and the fluid kinematic viscosity $\nu$, with $u_{\tau} = \sqrt{\frac{\tau_w}{\rho}}$ determined using the wall shear stress $\tau_w$ and the fluid density $\rho$. Table \[table: dns180par\] lists the key parameters for the four different calculations. Following [@moser1999] and [@vreman2014], the streamwise and spanwise lengths of the channel were $4\pi h$ and $4\pi h/3$, respectively. Statistics for all SEM results were collected over 50 convective time units. $Re_{\tau}$ $\Omega_y$ Grid-size ------------- ------------- -------------- -------------------------------------- Monodomain 179.9 $[-1,1]$ $ 18 \times 18 \times 18 \times N^3$ Overlapping 179.9 $[-1,-0.88]$ $19 \times 4 \times 19 \times N^3$ $[-0.76,1]$ $18 \times 15 \times 18 \times N^3$ Moser 178.1 $[-1,1]$ $ 128 \times 129 \times 128$ Vreman 180 $[-1,1]$ $ 384 \times 193 \times 192$ : Parameters for channel flow calculations.[]{data-label="table: dns180par"} $\begin{array}{cc} \includegraphics[height=35mm,width=35mm]{chanumean} & \includegraphics[height=35mm,width=35mm]{chanuvwrms} \\ \end{array}$ Figure \[fig: chandns\] shows the mean streamwise velocity ($U_{mean}$) and fluctuations ($u_{rms}$, $v_{rms}$ and $w_{rms}$) versus $y^+= \frac{u_{\tau} y}{\nu}$ for each case, where $y$ is the distance from the nearest wall. For the Schwarz case, several combinations of resolution ($N$) and subiteration counts ($\kappa_{\mbox{iter}}$) were considered. We quantify the relative error for each quantity by computing the norm of the relative percent difference from the results of Vreman & Kuerten. Specifically, define $$\begin{aligned} \epsilon_{\psi}(y) &:=& 100 \frac{\psi(y) - \psi(y)_{Vreman}}{\psi(y)_{Vreman}} \end{aligned}$$ and $$\begin{aligned} ||\epsilon_{\psi}|| &:=& \frac{1}{2h}\int_{-h}^{h} \! \epsilon_{\psi} \: dy \end{aligned}$$ for each quantity $\psi$ = $U_{mean}$, $u_{rms}$, $v_{rms}$ and $w_{rms}$ in Table \[table: dns180comp\]. All statistics are within one percent of the results of [@vreman2014]. 
The results indicate that increasing the number of Schwarz iterations and resolution leads to better comparison, as expected. $U_{mean}$ $u_{rms}$ $v_{rms}$ $w_{rms}$ ------------------------------------------------- ------------ ----------- ----------- ----------- Monodomain, $N=9$ 0.17 1.00 0.60 0.57 Monodomain, $N=11$ 0.16 0.46 0.49 0.71 Overlapping, $N=9$, $\kappa_{iter}=1$, $m$ = 1 0.43 1.69 0.89 0.77 Overlapping, $N=9$, $\kappa_{iter}=4$, $m$ = 3 0.21 0.92 0.60 1.05 Overlapping, $N=11$, $\kappa_{iter}=1$, $m$ = 1 0.17 1.58 0.74 0.31 Moser et. al. 0.04 0.15 0.52 1.62 : Relative % difference for channel flow results compared with Vreman & Kuerten [@vreman2014]. []{data-label="table: dns180comp"} Heat Transfer Enhancement ------------------------- The effectiveness of wire-coil inserts to increase heat transfer in pipe flow has been studied through an extensive set of experiments by Collins [*et al.*]{} at Argonne National Laboratory [@collins2002]. Monodomain spectral element simulations for this configuration based on 2D meshes that were extruded and twisted were described in [@goering2016]. Here, we consider an overlapping mesh approach pictured in Fig. \[fig:helixmesh\], in which a 2D mesh is extruded azimuthally with a helical pitch. The singularity at the pipe centeriline is avoided by using a second (overlapping) mesh to model the central flow channel. The overlapping mesh avoids the topological constraint of mesh conformity and leads to better mesh quality. Mesh smoothing [@mittal2018] further improves the conditioning of the system for the pressure Poisson equation. This approach also allows us to consider noncircular (e.g., square) casings, which are not readily accessible with the monodomain approach. As a first step towards these more complex configurations, we validate our Schwarz-SEM method with this real-world turbulent heat transfer problem. $\begin{array}{ccc} \includegraphics[height=30mm]{experiment} & \hspace{1em} \includegraphics[height=30mm]{helix_mesh_nn} & \hspace{1em} \includegraphics[height=30mm]{helixcoil} \\ \end{array}$ Here, we present results from calculations for two different pitches of the wire-coil insert to demonstrate the accuracy of our method. The geometric parameters were $e/p=0.0940$ (long-pitch) and $e/p=0.4273$ (optimal-pitch), with $e/D=0.2507$, where $e$ and $p$ are the respective wire diameter and pitch and $D$ is the inner diameter of the pipe. The Reynolds number of the flow is $UD/\nu=5300$ and the Prandtl number is 5.8. Figure \[fig:helixmesh\] shows a typical wire-coil insert (left), overlapping meshes used to discretize the problem (center), and a plot of velocity magnitude for flow at $Re=5300$ (right). For $e/p=0.0940$, the Nusselt number $Nu$ was determined to be 100.25 by experimental calculation, 112.89 by the monodomain calculation, and 113.34 by the overlapping-grid calculation. For $e/p=0.4273$, $Nu$ was determined to be 184.25 by the experiment, and 186 by the overlapping-grid calculation. The overlapping grid calculations were spatially converged and used $\kappa_{iter}=4$ and $m=3$. Figure \[fig:helixveltemp\] shows a slice of the velocity and temperature contours for the optimal pitch calculation. $\begin{array}{cc} \includegraphics[width=30mm]{helix_vel} & \hspace{1em} \includegraphics[width=30mm]{helix_temp} \\ \multicolumn{2}{c}{\includegraphics[width=60mm]{legend}} \end{array}$ Conclusion ========== We have presented a scalable Schwarz-SEM method for incompressible flow simulation. 
Use of an extended *gslib-findpts* library for donor-element identification and for data interpolation obviates the need for direct development of MPI code, even in the presence of multiple overlapping meshes. Strong scaling tests show that the parallel performance meets theoretical expectations. We have introduced mass-flux corrections to ensure divergence-free flow, even in the presence of interpolation error. We demonstrated exponential convergence for the method and showed excellent agreement in several complex three-dimensional flow configurations.

Acknowledgments
===============

This work was supported by the U.S. Department of Energy, Office of Science, the Office of Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. An award of computer time on Blue Waters was provided by the National Center for Supercomputing Applications. Blue Waters is a sustained-petascale HPC system and is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. The Blue Waters sustained-petascale computing project is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility.

References
==========

[^1]: We note that [mpi\_alltoall]{} is [*not*]{} scalable because its interface requires arguments of length $P$ on each of $P$ processors, which is prohibitive when $P>10^6$.
--- abstract: 'Recurrent neural networks (RNNs) are becoming the *de facto* solution for speech recognition. RNNs exploit long-term temporal relationships in data by applying repeated, learned transformations. Unlike fully-connected (FC) layers with single vector matrix operations, RNN layers consist of *hundreds* of such operations chained over time. This poses challenges unique to RNNs that are not found in convolutional neural networks (CNNs) or FC models, namely large dynamic activation. In this paper we present [MASR]{}, a principled and modular architecture that accelerates bidirectional RNNs for on-chip ASR. [MASR]{}is designed to exploit sparsity in both dynamic activations and static weights. The architecture is enhanced by a series of dynamic activation optimizations that enable compact storage, ensure no energy is wasted computing null operations, and maintain high *MAC* utilization for highly parallel accelerator designs. In comparison to current state-of-the-art sparse neural network accelerators (e.g., EIE), [MASR]{}provides 2$\times$ area 3$\times$ energy, and 1.6$\times$ performance benefits. The modular nature of [MASR]{}enables designs that efficiently scale from resource-constrained low-power IoT applications to large-scale, highly parallel datacenter deployments.' author: - | Udit Gupta, Brandon Reagen, Lillian Pentecost, Marco Donato, Thierry Tambe,\ Alexander M. Rush, Gu-Yeon Wei, David Brooks\ [email protected] bibliography: - 'main.bib' title: '[MASR]{}: A Modular Accelerator for Sparse RNNs [^1]' --- [^1]: To appear in 28$^{th}$ International Conference on Parallel Architecture and Compilation Techniques (PACT 2019)
=10000 0.9truecm Introduction ============ It has been appreciated for many years that enlarging the symmetry group of a given model can yield both new physical insights, and the possibility of exact calculations. A familiar example is the generalization of the Heisenberg model of ferromagnetism to the $O(N)$ model[@ma], which allows an exact analysis of the critical properties for $N \rightarrow \infty$. The large-$N$ extension of the non-linear $\sigma$-model[@sig] has been similarly fruitful[@signl]. The idea has also been applied recently to non-equilibrium problems in turbulence[@wei], phase ordering[@po] and interface roughening[@ir]. The implicit assumption involved is that the gross physical features are insensitive to the size of the symmetry group – an assumption which is largely borne out in the above applications. The first main point of this article is to emphasize that there is a wide class of models whose physics does not change smoothly on enlarging the symmetry group – in the language of critical phenomena, these are systems whose ordered phase is [*spatially inhomogeneous*]{}. We shall concentrate exclusively on one such system; namely, the type-II superconductor, which condenses into the so-called mixed-state – a periodic array of flux lines[@ab]. However, non-trivial effects will be apparent in the other systems within this class (which may be fairly well characterized by having a wavelength selection mechanism in their ordered phase.) It is our intention to highlight the main physical changes which occur when one generalizes the Landau-Ginzburg theory of superconductors to a $U(N)$ theory involving $N$ complex order parameters (OP). The model is not new – in fact it has been studied many times in the past two decades or so[@hlm; @bnt; @ba; @lr; @c1; @c2; @mn] – however, the physics we shall discuss here appears to have been overlooked in previous studies[@ba; @lr; @c1; @c2]. Although many of our conclusions apply to arbitrary $N$, such is the richness of these systems, we shall discuss in detail only the cases $N=2$ and $N \rightarrow \infty$. Our use of the $U(N)$ symmetry is for convenience only, as it is the smoothest continuation of the model away from $N=1$. Clearly for higher values of $N$ there are an increasingly large number of possible model Hamiltonians which one may construct from the $N$ OP’s. Within the field of unconventional superconductivity (as evidenced in the heavy-Fermion system UPt$_{3}$), the commonly adopted model is an $N=2$ Ginzburg-Landau theory with a reduced symmetry[@rev]. The richness of the phase diagram is well-documented, with the Abrikosov lattice giving way to exotic structures such as fractionally quantized vortices, and flux textures. An application for $N=3$ can be found in rotating He$_{3}$. Near to the normal-A1 transition, the (18 component) OP may be reduced to three coupled complex fields[@he3], with the external angular velocity playing the role of an applied magnetic field. These examples serve to show that the strategy of enlarging the symmetry group of systems whose condensate is spatially inhomogeneous is a ‘double-edged sword’. The advantage is that one finds an increasingly rich variety of phases as one increases $N$. The disadvantage is that one thereby loses immediate physical contact with the original model of interest ($N=1$). The remainder of this article is dedicated to exhibiting the strengths and weaknesses of the $U(N)$ model, in terms of its relation to conventional superconductivity. 
In section II, we define the model via the Landau-Ginzburg energy functional, and derive the mean field theory for arbitrary $N$. In section III we study in detail the mean-field solution for $N=2$ and demonstrate that the Abrikosov state is unstable, giving way to a state of interlocked centered rectangular lattices. In section IV we review the large-$N$ limit for this model. All previous studies[@ba; @lr; @c1; @c2] have assumed an Abrikosov state for one condensed mode, and treated the remaining $(N-1)$ modes as massless. We show that this state is unstable, and that the true ordered state corresponds to a complicated structure in which each OP condenses into a periodic state, such that the overall condensate density is spatially constant. This leads to an identification of the transition with that of the $O(2N)$ model of ferromagnetism (in the large-$N$ limit) in two fewer dimensions. An immediate consequence of this is that the transition is continuous and the lower critical dimension of the system is $d_{l}=4$. We also discuss the subtleties of commuting the limit $N \rightarrow \infty$ with the thermodynamic limit. It is of note that the large-$N$ limit may be solved with no approximations – we adopt neither the London limit (gauge field fluctuations only) nor the lowest Landau level (LLL) approximation (OP fluctuations only). We end with our conclusions in section V. Formulation of the model and mean field theory ============================================== We define the model via the Landau-Ginzburg energy functional[@fh], which has the generic form (setting $\hbar = c = 1$) $$\begin{aligned} \label{lg} \nonumber {\cal H}[\psi_{i} & , & {\bf A}] = \int d^{3}x \Bigl \lbrace \sum_{i}[(1/2m^{*})|{\bf D}\psi_i|^{2} + \alpha|\psi_i|^{2}]\\ + ( & \beta & /2)\sum_{i,j}|\psi_i|^{2}|\psi_j|^{2} + (1/2\mu_{0})({\bf H}-\nabla \times {\bf A})^{2} \Bigr \rbrace \ ,\end{aligned}$$ where ${\bf D}=-i\nabla - e^{*}{\bf A}$, $\lbrace \psi_{i} \rbrace$ are a set of $N$ complex order parameters, ${\bf A}$ is the vector potential, ${\bf H}$ is the external magnetic field, $e^{*}$ is the effective charge, and $m^{*}$, $\alpha$ and $\beta $ are phenomenological constants. It is useful to introduce dimensionless units[@fh], such that the energy functional (in the ordered phase) takes the form $$\begin{aligned} \label{lgdless} \nonumber {\cal H}[\psi_{i},{\bf A}] & = & \int d^{3}x \Bigl \lbrace \sum_{i}[|{\bf D}\psi_i|^{2} - |\psi_i|^{2}]\\ & & +(1/2)\sum _{i,j}|\psi_i|^{2}|\psi_j|^{2} + ({\bf H}-\nabla \times {\bf A})^{2} \Bigr \rbrace \ ,\end{aligned}$$ where ${\bf D}=(1/i\kappa) \nabla - {\bf A}$. The parameter $\kappa>1/\surd 2$ is the ratio of the London penetration depth to the coherence length. In these units the critical external field $H_{c2} = \kappa $. When generalizing the above expressions to $d$ dimensions, one should imagine the external field directed along $(d-2)$ dimensions, such that it is transverse to the $(x,y)$ plane in which an Abrikosov-like lattice structure may form. The simplest analysis one can make of this system is mean field theory which amounts to approximating the free energy by the saddle point value of ${\cal H}$. 
This may be obtained by solving explicitly for the classical fields which are solutions of the Landau-Ginzburg equations $$\begin{aligned} \label{lge1} {\bf D}^{2}\psi_i & = & \Bigl ( 1- \sum _{j}|\psi_j|^{2} \Bigr )\psi_{i} \\ \label{lge2} \nabla \times {\bf b} & = & (1/2)\Bigl ( \psi^{*} {\bf D} \psi + \psi {\bf D}^{*} \psi ^{*} \Bigr ) \ ,\end{aligned}$$ where ${\bf b}={\bf H}-\nabla \times {\bf A}$ is the microscopic magnetic field. Although it is not known how to solve these equations in general, an exact analysis is possible very close to the critical field $H_{c2}$. We shall briefly outline the solution in the case of $N=1$, and then generalize the solution to arbitrary $N$. Further details may be found in refs.[@ab; @fh]. Near to $H_{c2}$ the OP is small, so that the first equation may be linearized, and at this order the gauge field is just given by ${\bf A_{0}} = H_{c2}(0,x,{\bf 0}_{\perp})$. Thus, the OP may be expressed in terms of Landau levels[@llqm]. In fact, the value of $H_{c2}$ itself is determined by associating criticality with the LLL eigenvalue. For our purposes, the key point is that the first equation may be rearranged in the form $$\label{oeq} D_{+}D_{-} \psi = 0 \ ,$$ where $$\label{dpm} D_{\pm} = {\bf D}_{x} \mp i{\bf D}_{y} \ ,$$ and we have used the fact that the OP components are constant in the $(d-2)$ other directions. Thus the ground state ‘wavefunctions’ are characterized by the identity $$\label{a0fi} D_{-}\psi = 0 \ .$$ These functions are the LLL, and have a degeneracy proportional to the area of the system (transverse to the applied magnetic field). One is free to construct different sets of functions from the LLL’s. The most elegant is due to Eilenberger[@ge]. The Eilenberger basis functions span the space of the LLL, yet each basis function is a doubly periodic function (essentially a particular Abrikosov lattice). Given the above properties of the OP near the upper critical field, it is possible to manipulate the second mean field equation such that the r.h.s. is the curl of a vector. In this way, one may integrate the equation to obtain Abrikosov’s first fundamental identity $$\label{a1fi} b({\bf r}) = H - (1/2\kappa) |\psi ({\bf r})|^{2} \ .$$ We see that the magnetic field is reduced within the sample by an amount proportional to the condensate density. The scale of the condensate is set by Abrikosov’s second identity. This is a little more difficult to prove[@fh], but is essentially obtained by spatially averaging the first equation and calculating the first non-trivial corrections to eq.(\[a0fi\]). The result is $$\label{a2fi} (1/\kappa)(\kappa - H)\langle |\psi |^{2} \rangle + ((1/2\kappa ^{2})-1)\langle |\psi |^{4} \rangle = 0 \ ,$$ where $\langle ... \rangle $ denotes a spatial average. Substituting this last relation into eq.(\[a1fi\]) and spatially averaging, we find $$\label{magf} B = \langle b \rangle = H - {(\kappa - H)\over (2\kappa ^{2}-1)\beta _{A} } \ ,$$ where the Abrikosov ratio is given by $$\label{abrat} \beta _{A} = \langle |\psi |^{4} \rangle / \langle |\psi |^{2} \rangle ^{2} \ge 1 \ .$$ One can translate this result in terms of the free energy, which one finds to be proportional to $-1/\beta _{A}$, indicating that the system condenses into a state which minimizes $\beta _{A}$ under the constraint that it is an Eilenberger function. This state turns out to be the well-known triangular lattice, for which $\beta _{A} \simeq 1.1596$. Now we come to the general $U(N)$ case as described by eqs.(\[lge1\]) and (\[lge2\]). 
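Before doing so, it is worth noting how the $N=1$ Abrikosov ratio quoted above can be checked numerically. One compact route, not spelled out in the text and recalled here as a standard result for lowest-Landau-level lattice states, is the reciprocal-lattice sum $\beta_A = \sum_{\bf G} \exp(-|{\bf G}|^2 \ell^2/2)$, where ${\bf G}$ runs over the reciprocal lattice of the vortex lattice and the unit cell carries one flux quantum (cell area $2\pi\ell^2$). The sketch below (illustrative, with $\ell=1$) recovers $\beta_A \simeq 1.1596$ for the triangular lattice and $\simeq 1.1803$ for the square lattice.

```python
import numpy as np

def abrikosov_beta(theta, M=8):
    """beta_A for a vortex lattice with equal-length primitive vectors and
    opening angle theta, via beta_A = sum_G exp(-|G|^2/2) with cell area 2*pi."""
    L = np.sqrt(2.0 * np.pi / np.sin(theta))          # |a1| = |a2| from the area constraint
    a1 = L * np.array([1.0, 0.0])
    a2 = L * np.array([np.cos(theta), np.sin(theta)])
    area = abs(a1[0] * a2[1] - a1[1] * a2[0])
    b1 = 2.0 * np.pi / area * np.array([ a2[1], -a2[0]])   # reciprocal lattice vectors
    b2 = 2.0 * np.pi / area * np.array([-a1[1],  a1[0]])
    s = 0.0
    for m in range(-M, M + 1):
        for n in range(-M, M + 1):
            G = m * b1 + n * b2
            s += np.exp(-0.5 * (G @ G))
    return s

print(abrikosov_beta(np.pi / 3))   # triangular lattice: ~1.1596
print(abrikosov_beta(np.pi / 2))   # square lattice:     ~1.1803
```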
It is easy to show that most of the previous analysis follows through in a trivial way. Each OP component must satisfy $D_{-}\psi_{i} = 0$ (although it is important to remember that a given component may be zero). The first Abrikosov identity is generalized to $$\label{ga1fi} b({\bf r}) = H - (1/2\kappa)\sum _{i} |\psi _{i}({\bf r})|^{2} \ .$$ There also exist $N$ relations setting the relative scales of the OP components. These have the form $$\label{ga2fi} (1/\kappa)(\kappa - H)\langle |\psi_{i}|^{2} \rangle + ((1/2\kappa ^{2})-1)\sum _{j}\langle |\psi_{i}|^{2} |\psi_{j}|^{2}\rangle = 0 \ .$$ This allows us to spatially average eq.(\[ga1fi\]) to find the relation between the magnetic flux density and the magnetic field: $$\label{gmagf} B = H - {(\kappa - H)\over (2\kappa ^{2}-1)\beta _{g}(N) } \ ,$$ where the generalized Abrikosov ratio is given by $$\label{gabrat} \beta _{g}(N) = { \sum_{i,j}\langle |\psi _{i}|^{2} |\psi _{j}|^{2} \rangle \over \left (\sum_{i} \langle |\psi_{i}|^{2} \rangle \right ) ^{2}} \ge 1 \ .$$ Again, one may show that the minimal free energy is to be found by minimizing $\beta _{g}$, with the constraint that each condensed OP component is either zero, or an Eilenberger function. For general $N$ one soon appreciates that such a minimization is non-trivial as the $N$ components jostle for favorable positions and lattice structures within the ‘primitive cell’. For this reason we concentrate in the next section on the simplest case. Analysis of the case $N=2$ ========================== We are now only concerned with two OP’s $\psi _{1}$ and $\psi _{2}$. By reducing the $U(2)$ symmetry we could make contact with the two-OP models of UPt$_{3}$[@rev] for which a similar mean field analysis has been performed[@mf2op]. Our purpose here is to exemplify the physics of these multi-component systems – this will be of great benefit in our analysis of the large-$N$ limit in the following section. A few words concerning the Eilenberger basis[@ge] are required at this point. As mentioned before, the Eilenberger functions $\phi ({\bf r}|{\bf r}_{0})$ satisfy $D_{-}\phi = 0$. The amplitude of $\phi$ is a doubly periodic function, with a fundamental cell scaled so as to have unit length in the $x$ direction, and a periodicity vector $(\zeta,\eta)$, where $\eta $ is fixed by the condition of flux quantization. The label ${\bf r}_{0}$ simply fixes the lattice position in space. The functions are normalized such that $\langle |\phi|^{2} \rangle = 1$. The functions also have the symmetry property $\phi ({\bf r}|{\bf r}_{0}) = \exp[2\pi i(y_{0}/\eta)x]\phi({\bf r}+ {\bf r}_{0}|{\bf 0})$. This allows one to recast the normalization condition as a completeness relation: $$\label{com} \int \limits _{\rm cell} d^{2}r_{0} |\phi ({\bf r}|{\bf r}_{0})|^{2} = \eta \ .$$ It is useful to define the integrals $$\label{ints} I({\bf r}_{1},{\bf r}_{2}) = \int \limits _{\rm cell} d^{2}r |\phi ({\bf r}|{\bf r}_{1})|^{2}|\phi ({\bf r}|{\bf r}_{2})|^{2} \ .$$ Returning to our expression for $\beta _{g}$ in the case $N=2$, we have to minimize the expression $$\label{betag2} \beta _{g}(2) = { \langle |\psi _{1}|^{4}\rangle + \langle |\psi _{2}|^{4} \rangle + 2 \langle |\psi _{1}|^{2}|\psi _{2}|^{2}\rangle \over \left ( \langle |\psi_{1}|^{2}\rangle + \langle |\psi_{2}|^{2} \rangle \right ) ^{2}} \ .$$ One may show that a minimum may only exist for either one of the OP’s being zero, or else for each OP to be equivalent (up to a relative spatial shift). 
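The minimization of eq.(\[betag2\]) over the lattice geometry and the relative shift of the two components lends itself to a direct numerical treatment, since only the condensate densities enter. The following minimal sketch is our own illustration rather than the computation behind Figs. 1–7: the functions `lll_density` and `beta_g2`, the grid sizes, and the cell-angle convention are ours, and the coarse scan is not meant to reproduce the quoted values of $\beta_{g}(2)$ to high accuracy. It builds a lowest-Landau-level lattice function as a sum of Gaussians in Landau gauge and evaluates the spatial averages entering eq.(\[gabrat\]) over one unit cell.

```python
import numpy as np

def lll_density(X, Y, Lx, Ly, zeta=0.5, nmax=12):
    """Condensate density |phi(x,y)|^2 of a lowest-Landau-level lattice
    function, built in Landau gauge as a sum of shifted Gaussians:
      phi = sum_n exp(i*pi*zeta*n^2) * exp(i*n*q*y) * exp(-(x - n*q*l^2)^2/(2*l^2)),
    with q = 2*pi/Ly and l^2 = Lx*Ly/(2*pi), i.e. one flux quantum per cell.
    zeta = 1/2 generates the centered-rectangular family considered here."""
    q = 2.0 * np.pi / Ly
    l2 = Lx * Ly / (2.0 * np.pi)
    psi = np.zeros(np.shape(X), dtype=complex)
    for n in range(-nmax, nmax + 1):
        psi = psi + (np.exp(1j * np.pi * zeta * n * n)
                     * np.exp(1j * n * q * Y)
                     * np.exp(-((X - n * q * l2) ** 2) / (2.0 * l2)))
    return np.abs(psi) ** 2

def beta_g2(theta_deg, sx, sy, ngrid=40):
    """Generalized Abrikosov ratio beta_g(2) for two equal-amplitude
    components, the second shifted by (sx, sy) in units of the cell.
    The cell shape is parameterized by tan(theta) = (Ly/2)/Lx; in this
    (assumed) convention the square lattice sits at 45 degrees and the
    triangular lattice at 30 degrees."""
    Ly = 1.0
    Lx = 0.5 * Ly / np.tan(np.radians(theta_deg))
    x = (np.arange(ngrid) + 0.5) * Lx / ngrid
    y = (np.arange(ngrid) + 0.5) * Ly / ngrid
    X, Y = np.meshgrid(x, y, indexing="ij")
    total = (lll_density(X, Y, Lx, Ly)
             + lll_density(X + sx * Lx, Y + sy * Ly, Lx, Ly))
    return float((total ** 2).mean() / total.mean() ** 2)

if __name__ == "__main__":
    # zero shift makes the two components coincide, so beta_g(2) reduces to
    # the one-component Abrikosov ratio (about 1.16 at the triangular point)
    print("one-component check (triangular):", round(beta_g2(30.0, 0.0, 0.0), 4))
    # coarse scan over the cell angle and the relative shift of component 2
    best = min(
        ((beta_g2(th, sx, sy), th, sx, sy)
         for th in np.arange(10.0, 47.5, 2.5)
         for sx in np.linspace(0.0, 0.875, 8)
         for sy in np.linspace(0.0, 0.875, 8)),
        key=lambda item: item[0])
    print("coarse minimum: beta_g(2) = %.4f at theta = %.1f deg, "
          "shift = (%.3f, %.3f) cell units" % best)
```

In this parameterization the square lattice corresponds to $\theta = 45^{\rm o}$ and the triangular lattice to $\theta = 30^{\rm o}$, and the zero-shift value at the triangular point provides a check against the Abrikosov value $\simeq 1.1596$.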
In the former case (one OP identically zero) the value of $\beta _{g}$ will be the Abrikosov value ($\simeq 1.1596$). To investigate the latter case, we set $\psi _{1} = A \phi ({\bf r}|{\bf 0})$ and $\psi _{2} = A \phi ({\bf r}|{\bf r}_{0})$, which reduces the task of minimizing $\beta _{g}$ to that of finding the lattice type and the relative shift ${\bf r}_{0}$ which minimize $$\label{nbetag2} \beta _{g}(2) = I({\bf 0},{\bf 0}) + I({\bf 0},{\bf r}_{0}) \ .$$ We have numerically investigated the above expression, restricting our search to lattices within the class of centered rectangular structures (which corresponds to choosing $\zeta=1/2$). This class includes square and triangular lattices. Somewhat surprisingly, the minimal-energy solution corresponds to each OP component adopting a lattice with a primitive cell with opening angle $\theta $ equal to $15^{\rm o}$. Figures 1 and 2 show contour plots of the two OP components. Regions of lighter shade correspond to higher values of the OP. It is more illuminating to plot the overall condensate $|\phi _{1}|^{2} + |\phi_{2}|^{2}$, as is shown in Fig. 3. We then see a surprisingly rich structure. Maxima of the condensate correspond to regions of minimal magnetic flux and [*vice versa*]{}. The energy for this arrangement corresponds to the value $\beta _{g}(2) \simeq 1.0062$, which is very close to the lower bound of unity. Our initial guess was that the OP components would adopt square lattices ($\theta = 45^{\rm o}$), as this arrangement has a higher symmetry. We plot the components and the overall condensate in Figs. 4, 5 and 6 for this case. The energy of the square lattice arrangement is only fractionally higher, with $\beta _{g}(2) \simeq 1.0075$. In Fig. 7 we show the generalized Abrikosov number as a function of angle. Interestingly, the triangular lattices correspond to maximal values of $\beta _{g}$. In principle, there may be an even lower energy OP arrangement corresponding to a lattice with an oblique primitive cell lying outside the class of centered rectangular structures. This is a large space within which to search for minima, and we have not pursued this possibility. From this analysis, we see that in general the $U(N)$ models will condense into complicated structures in which each component adopts a particular lattice configuration, such that the overall condensate density is as smoothly varying as possible. The calculation of these structures becomes increasingly difficult as one increases $N$, and we shall desist from any explicit calculations. In the next section we turn to the most important part of the article – namely the explicit solution of the $U(N)$ model in the large-$N$ limit. The large-$N$ limit =================== As hinted at in the Introduction, one of the more compelling reasons for generalizing models to higher symmetry groups is to allow exact solutions in the large-$N$ limit. This has proven to be a very useful tool in many contexts[@ma; @signl; @wei; @po; @ir]. In the present case, the large-$N$ limit of the $U(N)$ model of type-II superconductors (in an external field) has been examined by several authors[@ba; @lr; @c1; @c2].
\[One should draw a clear line between these calculations, and those concentrating on the zero field case[@hlm], which is of interest in liquid crystals and also as an exemplification of a fluctuation induced first-order phase transition[@cw].\] In the original study of 1985[@ba] it was found that below six dimensions the mean-field continuous transition of Abrikosov becomes first-order – all the way down to a lower critical dimension of $d_{l}=2$. This calculation relied upon a proof by contradiction, as an explicit solution of the model is extremely difficult. An independent study[@lr] was made in 1995 using an Ansatz to solve the model. It was found that the transition was continuous below six dimensions, and that $d_{l}=4$. This result was challenged[@c1] on the grounds that the Ansatz was physically unreasonable and that the original calculation[@ba] was definitely correct. A spirited response[@c2] was made, defending the Ansatz, but admitting that there was no obvious flaw in the original proof by contradiction[@ba]. This is quite an extraordinary situation regarding a model whose [*raison d’être*]{} is its solvability! We shall resolve this state of confusion using a very simple physical idea that has emerged from the previous two sections. Each of the previous studies follows the conventional route of integrating out $(N-1)$ of the OP components, regarding them as Goldstone modes. This is precisely what one does in the conventional $O(N)$ model of ferromagnetism for instance[@ma; @amit]. The reason it is done there is that the condensed OP is spatially homogeneous, and it makes good sense to globally rotate the OP’s such that the condensate exists exclusively in one component (denoted as longitudinal), and to treat the transverse components as a source of massless fluctuations. Such a course of action is inadmissible in the present model of $U(N)$ superconductivity. Each component of the condensate is spatially inhomogeneous and no global rotation is possible to make $(N-1)$ Goldstone modes. One has to treat all the modes as potentially condensed. For finite $N$ this would lead to a totally intractable problem. However, for $N \rightarrow \infty$ we can take advantage of the completeness relation of Eilenberger functions to demonstrate that although each OP component is a doubly periodic function, the overall condensate is a constant. This drastically simplifies the analysis, and one may show that the solution is self-consistent when one takes into account both higher Landau levels and gauge field fluctuations. Before turning to the self-consistent large-$N$ limit, we shall obtain the main idea by studying the limit of large $N$ at mean field level. The task is to minimize $\beta _{g}(N)$ as given in eq.(\[gabrat\]). We see that by choosing each component to lie in a different Eilenberger state, the sums over components tend (in the limit of large $N$) to integrals over the basis label ${\bf r}_{0}$. We may then invoke the completeness relation given in eq.(\[com\]) which reduces these component sums to constants. Hence the spatial averages are trivial, and we see that $\beta _{g}$ is saturated at its lower bound of unity. It is a remarkable fact that this particular solution actually solves the mean field equations throughout the mixed phase, and not just in the vicinity of the $H_{c2}$ line. One can see this straightforwardly: since $\sum _{i} |\psi_{i}|^{2}$ is a constant, the first mean-field equation is exactly solved by the Eilenberger functions.
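To restate the last step explicitly (this is simply the completeness argument written out): if the condensed components occupy Eilenberger states $\phi ({\bf r}|{\bf r}_{i})$ whose labels $\{{\bf r}_{i}\}$ sample the primitive cell uniformly, then for $N \rightarrow \infty$ $$\sum_{i=1}^{N}|\phi ({\bf r}|{\bf r}_{i})|^{2} \simeq {N\over \eta}\int \limits _{\rm cell} d^{2}r_{0}\, |\phi ({\bf r}|{\bf r}_{0})|^{2} = N \ ,$$ independently of ${\bf r}$, where we have used eq.(\[com\]) and the fact that the cell area equals $\eta$. The overall condensate is therefore spatially constant, as claimed.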
The higher Landau levels always remain hard modes as one decreases the temperature, so never contribute to the condensed OP. At low enough temperatures, the Meissner transition will occur, and the condensate $\sum _{i} |\psi_{i}|^{2}$ will saturate at the value $2\kappa H$. Two interesting points follow. In the previous section we found that even for two OP components, the condensed state was very smooth (in terms of the overall condensate). In that sense it is already very similar to the large-$N$ limit in which the condensate is exactly constant. Thus we expect very similar quantitative physics as one increases $N$ in the range $[2,\infty]$. Note also the huge degeneracy of lattice structures underlying this solution. Although each OP component is restricted to be an Eilenberger function with a particular shape of unit cell, there is no energetic selection of that unit cell for $N \rightarrow \infty$, since the completeness relation is independent of this. Presumably the ground-state OP configurations for large but finite $N$ provide a means of determining the large-$N$ configurations, by smoothly continuing $N \rightarrow \infty$. The self-consistent large-$N$ treatment is a non-trivial extension of the above calculation, as it takes into account fluctuations about mean-field theory (albeit in a rather crude fashion.) We shall find that the main characteristics of the mean field state are stable to these fluctuations for $d>d_{l}=4$. For $4<d<6$ the fluctuations give rise to $d$-dependent exponents, which cross over to mean-field values above six dimensions. In fact, from the structure of the theory, one sees that the results are identical to those of the large-$N$ limit of the $O(2N)$ model of ferromagnetism in two dimensions fewer. We can understand this as the correlations of the OP’s are frozen in the $(x,y)$ plane due to each OP component having formed a lattice state, so only transverse fluctuations (in the remaining $(d-2)$ dimensions) can become critical. \[We note here that these results are similar to those of ref.[@lr], but we must stress that the physical condensed state is completely different in the two cases. In ref.[@lr] there is no explicit transverse scale as only one OP component is condensed, and is taken [*ad hoc*]{} to be constant (which is not even a solution of the saddle-point equations[@c1]); whereas in our solution, each OP component has condensed into a lattice state with its own magnetic length scale built in. Real thermal fluctuations about these two states will be of totally different natures.\] The large-$N$ limit is often derived diagrammatically[@ma; @amit]. When one can describe the physics in terms of a longitudinal mode and $(N-1)$ transverse modes, this approach is particularly transparent. However, in the present case we have $N$ condensed modes, so it is better to use an alternative method; namely to introduce an auxiliary field $\chi $ (via a Hubbard-Stratonovich transformation) which will allow us to make the large-$N$ limit explicit. In the presence of fluctuations, we must take a step back from eq.(\[lg\]) and consider the partition function $Z = \int {\cal D}{\bf A} {\cal D}\psi_{i} \exp [-{\cal H}']$, where $$\label{newen} {\cal H}' = {\cal H} - \int d^{d}r \sum _{i} (J_{i}^{*}\psi _{i} + {\rm c.c}) \ .$$ The source terms $J_{i}$ are added in order to derive the equation of state in the ordered phase. It will turn out that each $J_{i}$ is proportional to an Eilenberger function. 
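For the reader's convenience we record the elementary Gaussian identity which effects this transformation (written with the rescaled coupling $\beta \rightarrow \beta /N$ introduced below): $$\exp \Bigl [ -\int d^{d}r\, {\beta \over 2N}\Bigl (\sum _{i}|\psi _{i}|^{2}\Bigr )^{2} \Bigr ] \ \propto\ \int {\cal D}\chi \, \exp \Bigl [ -\int d^{d}r\, \Bigl ( {N\chi ^{2}\over 2\beta } + i\chi \sum _{i}|\psi _{i}|^{2}\Bigr ) \Bigr ] \ ,$$ so that the quartic interaction is traded for a Gaussian weight in $\chi$ together with the shift $\alpha \rightarrow \alpha + i\chi$ of the quadratic term, as in eq.(\[neweren\]) below.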
Introducing the field $\chi $ allows us to rewrite the partition function in the form $$\label{npf} Z[J] = \int {\cal D}{\bf A} {\cal D}\chi {\cal D}\psi_{i} \exp [-{\cal H}''] \ ,$$ where $$\begin{aligned} \label{neweren} \nonumber & {\cal H}'' & = \int d^{d}r \bigl \lbrace N\chi ^{2}/2\beta + N({\bf H}-\nabla \times {\bf A})^{2} \\ & + & \sum_{i} \bigl [ (\alpha+i\chi) |\psi_i|^{2} + (1/2m^{*})|{\bf D}\psi_i|^{2} - J_{i}^{*}\psi _{i} - J_{i}\psi _{i}^{*} \bigr ] \bigr \rbrace \ .\end{aligned}$$ We have scaled the magnetic field and the vector potential so as to extract a clean factor of $N$ in the first two terms[@ba]. (This entails the rescalings $e^{*} \rightarrow e^{*}/ \surd N$, and $\beta \rightarrow \beta /N$ for consistency). As we have already indicated, the self-consistent large-$N$ limit for this problem constitutes a formidable analytic challenge. In fact, it is not possible to solve the problem without resort to some external resource, whether it be an Ansatz, or a piece of physical insight. We shall utilize the latter, thanks to the lessons we have learned both in the $N=2$ case, and also in the mean field analysis of the large-$N$ limit. Just to reiterate, at mean field level the OP components each condense into an Eilenberger state such that the overall condensate is spatially homogeneous, and consequently the magnetic flux is spatially homogeneous also. To proceed we take the simplest possible line. Namely, that the self-consistent treatment retains these features of spatial homogeneity, but that the fluctuations renormalize the mass $\alpha $, resulting in a shift of $T_{c}$. Our task is to show that this is a consistent solution of the problem. The alternative Ansatz is to condense only one OP component[@ba; @lr; @c1; @c2]. Rewriting the energy functional in terms of Eilenberger functions allows one to prove that such a state is energetically unstable[@mn1]. To select the physically motivated condensed state we must choose the source terms to force each component into a (spatially shifted) Eilenberger function. Thus we write $J_{i} = u\phi_{i} = u\phi ({\bf r}|{\bf r}_{i})$, where $u$ is complex. The homogeneity of the magnetic field allows us to write ${\bf B}=B(0,0,{\hat {\bf r}}_{\perp})$. Also it is convenient to set $t=r+i\chi $. The saddle point value of $\chi $ is purely imaginary, such that the effective mass $t$ is purely real. The energy function now takes the explicit form $$\begin{aligned} \label{newesten} \nonumber {\cal H}'' & = & \int d^{d}r \bigl \lbrace - N(t-r)^{2}/2\beta + N(H-B)^{2} \\ & + & \sum_{i} \bigl [ t|\psi_i|^{2} + (1/2m^{*})|{\bf D}\psi_i|^{2} - u\phi_{i}^{*}\psi _{i} - u\phi_{i}\psi _{i}^{*} \bigr ] \bigr \rbrace \ ,\end{aligned}$$ where now ${\bf A} = B(0,x,{\bf 0}_{\perp})$. Each individual OP component is decoupled, and may be associated with a partition function $z(t,u) = \int {\cal D}\psi _{i} \exp(-f)$ where $$\label{indiv} f[\psi _{i}] = \int d^{d}r \bigl [ \psi_{i}^{*}{\hat M}\psi_{i} - u\phi_{i}^{*}\psi _{i} - u\phi_{i}\psi _{i}^{*} \bigr ] \ ,$$ where ${\hat M} = -(1/2m^{*}){\bf D}^{2}+t$, which has eigenvalues $$\label{eigenval} \lambda _{{\bf k},n} = {k^{2}\over 2m^{*}} + t + \left (n+{1\over 2} \right ){e^{*}B\over m^{*}} \ ,$$ where the momentum ${\bf k}$ exists in the $(d-2)$ dimensions transverse to the $(x,y)$ plane, and $n \in [0,\infty]$ labels the Landau levels. We now write the condensed part of the OP explicitly, along with its fluctuation: $\psi _{i} = w\phi _{i} + {\tilde \psi}_{i}$. 
The prefactor $w$ is chosen to ensure that the fluctuation piece ${\tilde \psi}_{i}$ has zero mean. Substituting this into eq.(\[indiv\]) and performing the integrals over the Eilenberger functions, one finds that the terms linear in ${\tilde \psi}_{i}$ vanish so long as one chooses $w=u/(t+e^{*}B/2m^{*})$. In this case the energy functional for each component becomes $$\label{indiv1} f[\psi _{i}] = -{V|u|^{2}\over 2(t+e^{*}B/2m^{*})} + \int d^{d}r \ {\tilde \psi}_{i}^{*}{\hat M} {\tilde \psi}_{i} \ ,$$ where $V$ is the volume of the system. Returning now to the energy functional, we have from eqs.(\[newesten\]) and (\[indiv1\]), along with assumed spatial constancy of $\chi $ and $B$, $$\begin{aligned} \label{energyfun} \nonumber {\cal H}'' = & - & NV \Biggl [ {(t-r)^{2}\over 2\beta} - (H-B)^{2} + {|u|^{2}\over 2(t+e^{*}B/2m^{*})}\Biggr ] \\ & + & \int d^{d}r \ {\tilde \psi}_{i}^{*}{\hat M} {\tilde \psi}_{i} \ .\end{aligned}$$ The integrals over the fluctuation fields $\lbrace {\tilde \psi}_i \rbrace$ are easily done, and we may re-exponentiate the resulting determinant to give the final result $$\begin{aligned} \label{energyfinal} \nonumber {\cal H}'' = & - & NV \Biggl [ {(t-r)^{2}\over 2\beta} - (H-B)^{2} + {|u|^{2}\over 2(t+e^{*}B/2m^{*})} \\ & - & (D_{L}/A) \int {d^{d-2}k\over (2\pi)^{d-2}}\sum _{n} \log (\lambda _{{\bf k},n})\Biggr ] \ ,\end{aligned}$$ where $D_{L}$ is the Landau degeneracy of each level, which is equal to $e^{*}BA/2\pi$, where $A$ is the area of the system in the $(x,y)$ plane. Now that we have a clean factor of $N$ throughout, we may use steepest descents to determine the self-consistent values of the auxiliary variable $t$ , and also the magnetic field $B$. Differentiating ${\cal H}''$ with respect to $t$ yields $$\begin{aligned} \label{saddle} \nonumber {(t-r)\over \beta} & - & { |u|^{2} \over 2( t+e^{*}B/2m^{*})^{2}}\\ & & \ \ \ \ - {e^{*}B\over 2\pi}\int {d^{d-2}k\over (2\pi)^{d-2}}\sum _{n} {1\over \lambda _{{\bf k},n}} = 0 \ .\end{aligned}$$ It is convenient to define $\xi ^{-2} = t+e^{*}B/2m^{*}$, since $\xi $ can be seen to play the role of the correlation length of the system. The relation between $w$ and $u$ then assumes the form $w=u\xi^{2}$. One may then rewrite eq.(\[saddle\]) as $$\begin{aligned} \label{self-conxi} \nonumber & \xi ^{-2} & = r + e^{*}B/2m^{*} + \beta |w|^{2}/2 \\ + & \beta & {e^{*}B\over 2\pi}\int {d^{d-2}k\over (2\pi)^{d-2}}\sum _{n} {1\over [ k^{2}/2m^{*} + ne^{*}B/m^{*} + \xi ^{-2} ]} \ .\end{aligned}$$ This equation encapsulates most of the information about the phase transition. Above the transition, we can set the condensate ‘amplitude’ $w$ to zero in (\[self-conxi\]). The resulting equation defines the renormalized critical temperature $T_{c}$ through the defining condition of criticality $\xi \rightarrow \infty$. At this point the bare quantity $r=T-T_{c}^{0}$ is equal to $T_{c}-T_{c}^{0}$. We therefore have the explicit shift as $$\begin{aligned} \label{tcshift} \nonumber T_{c} & = & T_{c}^{0} - e^{*}B/2m^{*} \\ - & \beta & {e^{*}B\over 2\pi}\int {d^{d-2}k\over (2\pi)^{d-2}}\sum _{n} {1\over [ k^{2}/2m^{*} + ne^{*}B/m^{*} ]} \ .\end{aligned}$$ As usual, the fluctuations drive the critical temperature to a lower value. The $T_{c}$ shift diverges for $d<4$, suggesting the identification of the lower critical dimension as $d_{l}=4$. On the low temperature side of the transition we can remove the source field $u$, which leaves a non-zero condensate $w$ only if the correlation length is infinite. 
We therefore have from (\[self-conxi\]) an equation for the condensate amplitude: $$\begin{aligned} \label{amplitude} \nonumber 0 & = & r + e^{*}B/2m^{*} + \beta |w|^{2}/2 \\ + & \beta & {e^{*}B\over 2\pi}\int {d^{d-2}k\over (2\pi)^{d-2}}\sum _{n} {1\over [ k^{2}/2m^{*} + ne^{*}B/m^{*} ]} \ .\end{aligned}$$ Comparing eqs.(\[tcshift\]) and (\[amplitude\]) we find the exact relation $|w|^{2} = 2(T_{c}-T)/\beta$, which immediately yields the OP exponent $\beta=1/2$, and self-consistently confirms the existence of a continuous transition. We can also identify the correlation length exponent by examining (\[self-conxi\]) as $T \searrow T_{c}$. Eliminating the bare critical temperature from (\[self-conxi\]) using (\[tcshift\]), and evaluating the resulting integral for large $\xi $ leads to the expression $$\label{corrlen} c_{1}\xi ^{-2} + \beta e^{*}B c_{2} \xi ^{4-d} = T - T_{c} \ .$$ The constant $c_{1}$ has a contribution from all Landau levels bar the lowest. The fluctuation-dominated term $\sim \xi ^{4-d}$ arises solely from the LLL. As the correlation length diverges for $T \searrow T_{c}$, we see that the first term on the left hand side dominates for $d>6$, whereas the second term dominates for $4<d<6$. This leads to the result $\xi \sim (T-T_{c})^{-\nu}$, with $\nu = 1/2$ for $d>6$ (the mean field result), and $\nu = 1/(d-4)$ for $4<d<6$ (confirming $d_{l}=4$). These results are identical to those obtained for the $O(2N)$ model of ferromagnetism, but in two fewer dimensions. This may be understood from examination of the self-consistent relation for $\xi$ given in (\[self-conxi\]). Apart from the sum over Landau levels, this equation is exactly that which would be obtained for a $(d-2)$-dimensional $O(2N)$ model. The critical modes only exist in the $(d-2)$ dimensions transverse to the Landau levels. The modes in the $(x,y)$ plane contain the frozen length scale associated with the formation of the underlying lattice structure of the OP’s. In the above expressions, we have left the value of the magnetic field $B$ undetermined. However, this is given self-consistently by minimizing the energy functional in (\[energyfinal\]) with respect to $B$. We shall not write the expression explicitly, but it is noteworthy that the integral appearing from the fluctuations is strongly divergent and must be regularized by introducing some microscopic cut-off procedure (for $d>4$), such as adding higher derivative terms not present in our original Landau-Ginzburg energy functional. As a final remark in this section, we should point out that these results are insensitive to the order in which one takes the thermodynamic limit, and the limit $N \rightarrow \infty$. The above analysis has implicitly assumed the thermodynamic limit, which is the correct ‘physical’ choice. However, had one taken the thermodynamic limit second, by fixing the number of vortices and then taking $N \rightarrow \infty$, the calculation would have proceeded as before with one difference. The fluctuations in this case would originate overwhelmingly from the transverse modes, since the number of distinct longitudinal modes is limited to the number of vortices. However the completeness relation still holds, and the condensed state may be taken as spatially homogeneous, thereby enabling a calculation in the same spirit as that above. Conclusions =========== The main point of this article is that special care must be taken when expanding the symmetry group of systems with spatially varying structures.
We have concentrated on one class of such systems, namely type-II superconductors in an external magnetic field. As mentioned in the Introduction, there are applications of such models to heavy-Fermion superconductors, and also to rotating superfluid He$_{3}$. We have found a number of interesting results connected with $N$-component superconductors, whose free energy functional maintains a $U(N)$ symmetry. In general we have seen that these systems adopt low-temperature configurations in which many OP components contribute by condensing into periodic structures – leading to a very rich structure for the overall condensate (and hence the magnetic flux). We have examined the mean-field theory for the case $N=2$ in some detail. The two OP components were found to condense into centered rectangular structures, with an opening angle of $15^{\rm o}$. The two structures are shifted relative to one another in such a way that the overall condensate $|\phi_{1}|^{2}+|\phi_{2}|^{2}$ has a surprisingly rich structure, as shown in fig.3. The generalized Abrikosov ratio for this configuration is $\beta_{g}(2) \simeq 1.0062$, almost saturating the lower bound of unity. Although systems with higher values of $N$ will adopt ever more complex structures, we were able to show in section IV that the mean-field theory in the limit $N \rightarrow \infty$ has a simplifying feature, due to the completeness relation of the periodic Eilenberger functions. Each OP component adopts an Eilenberger function, but the overall condensate has no traces of the periodicity, and is spatially a constant. This structure saturates the lower bound of $\beta_{g}(\infty)$. In the remainder of section IV we examined the large-$N$ limit in more detail, by considering a treatment which includes fluctuations self-consistently. This calculation has been attempted several times in the past[@ba; @lr; @c1; @c2], but the previous authors have always assumed that the system allows only one OP component to condense, which we have seen is generically false. Our main finding is that for $d>4$, fluctuations do not disturb the main characteristics found in mean-field theory – namely a continuous transition into a spatially homogeneous condensate, composed of infinitely many OP components having condensed into Eilenberger functions. \[For $d<4$, the mixed phase is destroyed entirely.\] The system is found to have the critical properties of the $O(2N)$ model of ferromagnetism[@ma; @amit], but in two fewer dimensions. Thus, exponents maintain their mean-field values above $d=6$, but take the values $\beta=1/2$ and $\nu = 1/(d-4)$ for $4<d<6$. The solution we have found does not rely upon making the LLL approximation, or upon neglecting gauge field fluctuations. In this paper we have found that within mean-field theory, and also in the large-$N$ limit, the $U(N)$ model undergoes a continuous transition from the normal to the mixed phase. This is in contradiction to the results of some past works[@bnt; @ba; @c1], the latter two of which contain errors of principle. However, one may find a precedent for a continuous transition in our recent study[@mn], in which a functional renormalization group (FRG) approach was applied to the $U(N)$ model via an expansion in $\epsilon = 6-d$. In fact, the FRG study also predicted a mapping from the $U(N)$ model to the $O(2N)$ model of ferromagnetism in two fewer dimensions, for $N \ge 2$. 
It would be interesting to probe this relationship further by extending the present self-consistent analysis of the $U(N)$ model to finite $N$. One of the more sophisticated means of achieving this would be via the use of the parquet approximation[@ym], which includes corrections far beyond those of $O(1/N)$. Finally we would like to draw the reader’s attention to the fact that in the exactly solvable large-$N$ limit, the mechanism for the transition is the growing (phase) coherence in the direction of the applied field, as the temperature is lowered. A theory of the “melting” transition seen in high-temperature superconductors ($N=1$) has recently been given by one of us[@moore], based on the idea that the apparent melting is just a consequence of crossover effects, when this phase coherence length scale (which is very rapidly growing in three dimensions) becomes comparable to the dimensions of the system.

TJN acknowledges financial support from the Engineering and Physical Sciences Research Council.
Address from 1st August 1997: Department of Physics, Virginia Tech, Blacksburg, VA 24061, USA.

S-K. Ma, Phys. Rev. Lett. [**29**]{}, 1311 (1972).
M. Gell-Mann and M. Lévy, Nuovo Cim. [**16**]{}, 705 (1960).
S. Coleman, R. Jackiw and H. D. Politzer, Phys. Rev. D [**10**]{}, 2491 (1974).
C. Y. Mou and P. Weichman, Phys. Rev. Lett. [**70**]{}, 1101 (1993).
G. F. Mazenko and O. T. Valls, Phys. Rev. Lett. [**51**]{}, 2044 (1983); A. J. Bray, Adv. Phys. [**43**]{}, 357 (1994).
J. P. Doherty et al., Phys. Rev. Lett. [**72**]{}, 2041 (1994).
A. A. Abrikosov, Zh. Eksp. Teor. Fiz. [**32**]{}, 1442 (1957) \[Sov. Phys. JETP [**5**]{}, 1174 (1957)\].
B. I. Halperin, T. C. Lubensky and S-K. Ma, Phys. Rev. Lett. [**32**]{}, 292 (1974).
E. Brézin, D. R. Nelson and A. Thiaville, Phys. Rev. B [**31**]{}, 7124 (1985).
I. Affleck and E. Brézin, Nucl. Phys. B [**257**]{}, 451 (1985).
L. Radzihovsky, Phys. Rev. Lett. [**74**]{}, 4722 (1995).
I. F. Herbut and Z. Tešanović, Phys. Rev. Lett. [**76**]{}, 4450 (1996) (Comment).
L. Radzihovsky, Phys. Rev. Lett. [**76**]{}, 4451 (1996) (Reply).
M. A. Moore and T. J. Newman, Phys. Rev. Lett. [**75**]{}, 533 (1995); T. J. Newman and M. A. Moore, Phys. Rev. B [**54**]{}, 6661 (1996).
M. Sigrist and K. Ueda, Rev. Mod. Phys. [**63**]{}, 239 (1991); J. A. Sauls, Adv. Phys. [**43**]{}, 113 (1994) and references therein.
D. R. T. Jones, A. Love and M. A. Moore, J. Phys. C [**9**]{}, 743 (1976); D. Bailin, A. Love and M. A. Moore, J. Phys. C [**10**]{}, 1159 (1977).
A. L. Fetter and P. C. Hohenberg, in [*Superconductivity*]{}, ed. R. D. Parks (Dekker, New York, 1969), Vol. 2.
L. D. Landau and E. M. Lifshitz, [*Quantum Mechanics*]{} (Pergamon, London, 1958).
G. Eilenberger, Phys. Rev. [**164**]{}, 628 (1967).
A. Garg and D-C. Chen, Phys. Rev. B [**49**]{}, 479 (1994).
S. Coleman and E. Weinberg, Phys. Rev. D [**7**]{}, 1888 (1973).
D. J. Amit, [*Field Theory, the Renormalization Group and Critical Phenomena*]{}, 2nd edition (McGraw-Hill, New York, 1988).
M. A. Moore and T. J. Newman, unpublished.
J. Yeo and M. A. Moore, Phys. Rev. Lett. [**76**]{}, 1142 (1996); Phys. Rev. B [**54**]{}, 4218 (1996).
M. A. Moore, Phys. Rev. B [**55**]{}, 14136 (1997).
--- abstract: 'We show convergence of solutions to equilibria for quasilinear and fully nonlinear parabolic evolution equations in situations where the set of equilibria is non-discrete, but forms a finite-dimensional $C^1$-manifold which is normally stable.' title: On normal stability for nonlinear parabolic equations ---

Jan Prüss
Gieri Simonett
Rico Zacher

Introduction ============ In this short note we consider quasilinear as well as fully nonlinear parabolic equations and we study convergence of solutions towards equilibria in situations where the set of equilibria forms a $C^1$-manifold. Our main result can be summarized as follows: suppose that for a nonlinear evolution equation we have a $C^1$-[*manifold of equilibria*]{} ${\mathcal E}$ such that at a point $u_*\in{\mathcal E}$, the kernel $N(A)$ of the linearization $A$ is isomorphic to the tangent space of ${\mathcal E}$ at $u_*$, the eigenvalue $0$ of $A$ is semi-simple, and the remaining spectral part of the linearization $A$ is stable. Then solutions starting near $u_*$ exist globally and converge to some point on ${\mathcal E}$. This situation occurs frequently in applications. We call it the [*generalized principle of linearized stability*]{}, and the equilibrium $u_*$ is then termed [*normally stable*]{}. A typical situation in which this occurs is when the equations under consideration involve symmetries, i.e. are invariant under the action of a Lie group. The situation where the set of equilibria forms a $C^1$-manifold occurs for instance in phase transitions [@ES98; @PrSi06], geometric evolution equations [@EMS98; @ES99], free boundary problems in fluid dynamics [@FR02; @GP97], stability of traveling waves [@PSZ08], and models of tumor growth, to mention just a few. A standard method to handle situations as described above is to refer to [*center manifold theory*]{}. In fact, in that situation the center manifold of the problem in question will be unique, and it coincides with ${\mathcal E}$ near $u_*$. Thus the so-called [*shadowing lemma*]{} in center manifold theory implies the result. Center manifolds are well-studied objects in the theory of nonlinear evolution equations. For the parabolic case we refer to the monographs [@Hen81; @Lun95], and to the publications [@BaJo89; @BJL00; @DPLu88; @LPS08; @Mie91; @Sim95; @VI92]. However, the theory of center manifolds is a technically difficult matter. Therefore it seems desirable to have a simpler, direct approach to the generalized principle of linearized stability which avoids the technicalities of center manifold theory. Such an approach has been introduced in [@PSZ08] in the framework of $L_p$-maximal regularity. It turns out that within this approach the effort to prove convergence towards equilibria in the normally stable case is only slightly larger than that for the proof of the standard linearized stability result, which is simple. The purpose of this paper is to extend the approach given in [@PSZ08] to cover a broader setting and a broader class of nonlinear parabolic equations, including fully nonlinear equations. This approach is flexible and general enough to reproduce the results contained in [@Cui07; @EMS98; @ES98; @ES99; @FR02; @GP97; @PrSi06; @PSZ08], and it will have applications to many other problems. Our approach makes use of the concept of maximal regularity in an essential way.
As general references for this theory we refer to the monographs [@Ama95; @DHP03; @Lun95]. Abstract nonlinear problems in a general setting ================================================ Let $X_0$ and $X_1$ be Banach spaces, and suppose that $X_1$ is densely embedded in $X_0$. Suppose that $F:U_1\subset X_1\to X_0$ satisfies $$\label{F} F\in C^k(U_1,X_0),\quad k\in {\mathbb N},\ k\ge 1,$$ where $U_1$ is an open subset of $X_1$. Then we consider the autonomous (fully) nonlinear problem $$\label{FN1} \dot{u}(t)+F(u(t))=0,\quad t>0, \quad u(0)=u_0,$$ for $u_0\in U_1$. In the sequel we use the notation $|\cdot|_j$ to denote the norm in the respective spaces $X_j$ for $j=0,1$. Moreover, for any normed space $X$, $B_X(u,r)$ denotes the open ball in $X$ with radius $r>0$ around $u\in X$. Let $ {\mathcal E}\subset U_1$ denote the set of equilibrium solutions of (\[FN1\]), which means that $$u_\ast\in{\mathcal E}\quad \mbox{ if and only if }\quad F(u_\ast)=0.$$ Given an element $u_*\in{\mathcal E}$, we assume that $u_*$ is contained in an $m$-dimensional manifold of equilibria. This means that there is an open subset $U\subset{\mathbb R}^m$, $0\in U$, and a $C^1$-function $\Psi:U\rightarrow X_1$ such that $$\label{manifold} \begin{aligned} & \bullet\ \text{$\Psi(U)\subset {\mathcal E}$ and $\Psi(0)=u_*$,} \\ & \bullet\ \text{the rank of $\Psi^\prime(0)$ equals $m$, and} \\ & \bullet\ \text{$F(\Psi(\zeta))=0,\quad \zeta\in U.$} \end{aligned}$$ We assume further that near $u_*$ there are no other equilibria than those given by $\Psi(U)$, i.e. ${\mathcal E}\cap B_{X_1}(u_*,{r_1})=\Psi(U)$, for some $r_1>0$. Let $u_\ast\in{\mathcal E}$ be given and set $A:=F^\prime (u_\ast)$. Then we assume that $A\in{\mathcal H}(X_1,X_0)$, by which we mean that $-A$, considered as a linear operator in $X_0$ with domain $X_1$, generates a strongly continuous analytic semigroup $\{e^{-At};\,t\ge 0\}$ on $X_0$. In particular we may take the graph norm of $A$ as the norm in $X_1$. For the deviation $v:=u-u_*$ from $u_*$, equation (\[FN1\]) can be restated as $$\label{FN2} \dot{v}(t)+Av(t)=G(v(t)),\quad t>0, \quad v(0)=v_0,$$ where $v_0=u_0-u_*$, and $G(z):=Az-F(z+u_*)$, $z\in V_1:=U_1-u_\ast$. It follows from (\[F\]) that $G\in C^k(V_1,X_0)$. Moreover, we have $G(0)=0$ and $G^\prime(0)=0$. Setting $\psi(\zeta)=\Psi(\zeta)-u_*$ results in the following equilibrium equation for problem (\[FN2\]): $$\label{equilibrium-psi} A\psi(\zeta)=G(\psi(\zeta)),\quad \mbox{ for all }\;\zeta\in U.$$ Taking the derivative with respect to $\zeta$ and using the fact that $G^\prime(0)=0$ we conclude that $A\psi^\prime(0)=0$, and this implies that the tangent space of ${\mathcal E}$ at $u_\ast$ is contained in $N(A)$, the kernel of $A$. For $J=[0,a)$, $a\in (0,\infty]$, we consider a pair of Banach spaces $({\mathbb E}_0(J),{\mathbb E}_1(J))$ such that ${\mathbb E}_0(J)\hookrightarrow L_{1,{\rm loc}}(J;X_0)$ and $${\mathbb E}_1(J)\hookrightarrow H^1_{1,{\rm loc}}(J;X_0)\cap L_{1,{\rm loc}}(J;X_1),$$ respectively. Denoting by $X_\gamma=\gamma{\mathbb E}_1$ the trace space of ${\mathbb E}_1(J)$ we assume that - (A1) $\gamma{\mathbb E}_1$ is independent of $J$, and the embedding ${\mathbb E}_1(J)\hookrightarrow BU\!C(J;X_\gamma)$ holds. In addition, we assume that there is a constant $c_0> 0$ independent of $J=[0,a)$, $a\in (0,\infty]$, such that $$\label{trace-0} \sup_{t\in J} \no w(t)\no_{\gamma}\le c_0\no w\no_{{\mathbb E}_1(J)},\quad \mbox{for all}\; w\in {\mathbb E}_1(J),\; w(0)=0.$$ We refer to [@Ama95 Section III.1.4] for further information on trace spaces.
Moreover, we assume that - (A2) $\tilde{w}\in {\mathbb E}_1(J)$ and $|w(t)|_0\le|\tilde{w}(t)|_1$, $t\in J$, imply $\no w\no_{{\mathbb E}_0(J)}\le \no\tilde{w}\no_{{\mathbb E}_1(J)}$; for $\omega>0$ fixed, there exists a constant $c_1>0$ not depending on $J$ and such that $$\label{FNR} \begin{split} &\int_J e^{-\omega s}|w(s)|_1\,ds \le c_1\no w\no_{{\mathbb E}_1(J)},\quad \mbox{for all}\;w\in {\mathbb E}_1(J),\\ &\int_t^\infty e^{-\omega s}|w(s)|_1\,ds \le c_1 e^{-\omega t}\no w\no_{{\mathbb E}_1({\mathbb R}_+)},\quad \mbox{for all}\;w\in {\mathbb E}_1({\mathbb R}_+)\text{ and } t\ge 0. \end{split}$$ Our [*key assumption*]{} is that $({\mathbb E}_0(J),{\mathbb E}_1(J))$ is a pair of maximal regularity for $A$. To be more precise we assume that - (A3) the linear Cauchy problem $\dot w +Aw=g,\ w(0)=w_0 $ has for each $(g,w_0)\in {\mathbb E}_0(I)\times \gamma{\mathbb E}_1(I)$ a unique solution $w\in{\mathbb E}_1(I)$, where $I=[0,T]$ is a finite interval. We impose the following assumption for the sake of convenience. For all examples that we have in mind the condition can be derived from (A3). Suppose that $\sigma(A)$, the spectrum of $A$, admits a decomposition $\sigma(A)=\sigma_s\cup \sigma^\prime $, where $\sigma_s\subset\{z\in{\mathbb C}:{\rm Re}\, z>\omega\}$ for some $\omega>0$ and $\sigma^\prime\subset\{z\in{\mathbb C}:{\rm Re}\, z\le 0\}$. Let $P_s$ denote the spectral projection corresponding to the spectral set $\sigma_s$. Then we assume that - (A4) there exists a constant $M_0>0$ such that for any $J=[0,a)$, $a\in (0,\infty]$, any $\sigma\in [0,\omega]$, and any function $g$ with $e^{\sigma t}P_s g\in {\mathbb E}_0(J)$ there is a unique solution $w$ of $\dot{w}+A_s w=P_s g$, $t\in J$, $w(0)=0$, satisfying $$\no e^{\sigma t}w\no_{{\mathbb E}_1(J)}\le M_0\no e^{\sigma t}P_s g\no_{{\mathbb E}_0(J)};$$ there exists a constant $M_1>0$ such that for any $J=[0,a)$, $a\in (0,\infty]$, and for any $z\in X_\gamma$ there holds $$\no e^{\sigma t}e^{-A_s t}P_s z\no_{{\mathbb E}_1(J)} +\sup_{t\in J}|e^{\sigma t}e^{-A_s t}P_s z|_\gamma \le M_1 |P_s z|_{\gamma},\quad \sigma\in [0,\omega].$$ We again refer to [@Ama95 Chapter III] for more background information on the notion of maximal regularity. In order to cover the case $X_\gamma\neq X_1$ we assume the following [*structure condition*]{} on the nonlinearity $G$: - (A5) there exists a uniform constant $C_1$ such that for any $\eta>0$ there is $r>0$ such that $$\hspace{1cm} |G(z_1)-G(z_2)|_0\le C_1(\eta+|z_2|_1)|z_1-z_2|_1,\quad z_1,\,z_2\in X_1\cap B_{X_\gamma}(0,r).$$ Observe that condition (A5) trivially holds in the case $X_\gamma= X_1$, since $G'(0)=0$. A short computation shows that condition (A5) is also satisfied if $F$ has a quasilinear structure, i.e. if $$\label{quasilinear} F(u)=B(u)u+f(u)\quad\text{for $u\in U_\gamma$},\quad (B,f)\in C^1(U_\gamma,{\mathcal B}(X_1,X_0)\times X_0),$$ where $U_\gamma\subset X_\gamma$ is an open set. Lastly, concerning [*solvability*]{} of the nonlinear problem (\[FN2\]) we will assume that - (A6) given $b>0$ there exists $r_2>0$ such that for any $v_0\in B_{X_\gamma}(0,r_2)$ problem (\[FN2\]) admits a unique solution $v\in {\mathbb E}_1([0,b])$.
Note that since $v=0$ is an equilibrium of (\[FN2\]), condition (A6) is satisfied whenever one has existence and uniqueness of local solutions in the described class as well as continuous dependence of the maximal time of existence on the initial data. We conclude this section by describing three important examples of admissible pairs $({\mathbb E}_0(J),{\mathbb E}_1(J))$. [**Example 1:**]{} ($L_p$-maximal regularity.) In our first example, the spaces $({\mathbb E}_0(J),{\mathbb E}_1(J))$ are given by $${\mathbb E}_0(J):=L_p(J;X_0), \quad {\mathbb E}_1(J):=H^1_p(J;X_0)\cap L_p(J;X_1).$$ The trace space is a real interpolation space given by $\gamma{\mathbb E}_1=X_\gamma=(X_0,X_1)_{1-1/p,p}$ and we have ${\mathbb E}_1(J)\hookrightarrow BU\!C(J;X_\gamma)$, see for instance [@Ama95 Theorem III.4.10.2]. For a proof of (\[trace-0\]) we refer to [@PSS07 Proposition 6.2]. This yields Assumption (A1). For Assumption (A2) we note that $$\begin{split} \int_J e^{-\omega s}|w(s)|_1\,ds \le c_1\big(\int_J |w(s)|_1^p\,ds\big)^{1/p} \le c_1 \no w\no_{{\mathbb E}_1(J)} \end{split}$$ for all $w\in {\mathbb E}_1(J)$ by Hölder’s inequality. Moreover, $$\begin{split} \int_t^\infty e^{-\omega s}|w(s)|_1\,ds \le \big(\int_t^\infty e^{-\omega s p^\prime}\,ds\big)^{1/p^\prime} \big(\int_t^\infty |w(s)|_1^p\,ds\big)^{1/p} \le c_1 e^{-\omega t}\no w\no_{{\mathbb E}_1({\mathbb R}_+)} \end{split}$$ for $t\ge 0$ and $w\in {\mathbb E}_1({\mathbb R}_+)$. We refer to [@DHP03; @KW04; @PrSi07], [@Ama95 Section III.4.10] and the references therein for conditions guaranteeing that the crucial Assumption (A3) on maximal regularity is satisfied. It is clear that the property of maximal regularity is passed on from $A$ to $A_s$ in the spaces ${\mathbb E}^s_0(J):=L_p(J;X_0^s)$, ${\mathbb E}^s_1(J):=H^1_p(J;X_0^s)\cap L_p(J;X_1^s)$, and this implies Assumption (A4), see for instance [@Ama95 Remark III.4.10.9(a)]. Assumption (A5) is satisfied in case that the nonlinear mapping $F$ has a [*quasilinear structure*]{}, see [@PSZ08]. In case that $F$ has a quasilinear structure, Assumption (A6) follows from (A3) and [@Pru03 Theorem 3.1], see also [@Am05 Theorem 2.1, Corollary 3.3]. We remark that the case of $L_p$-maximal regularity has been considered in detail in [@PSZ08]. [**Example 2:**]{} (Continuous maximal regularity.) Let $J=[0,a)$ with $0<a\leq\infty$ and set $\dot J:=(0,a)$. For $\mu\in (0,1)$ and $X$ a Banach space we set $$\begin{split} &BU\!C_{1-\mu}(J;X):=\big\{u\in C(\dot{J};X):[t\mapsto t^{1-\mu}u]\in BU\!C(\dot{J};X),\\ & \hspace{6cm}\lim_{t\to 0^+} t^{1-\mu}|u(t)|_X=0\big\},\\ &BU\!C_0(J;X):=BU\!C(J;X). \\ \end{split}$$ $BU\!C_{1-\mu}(J;X)$ is turned into a Banach space by the norm $$\no u\no_{C_{1-\mu}(J;X)}:=\sup_{t\in \dot{J}} t^{1-\mu}|u(t)|_X, \quad \mu\in (0,1].$$ Finally, we set $ BU\!C_{1-\mu}^1(J;X):=\{u\in C^1(\dot{J};X):u,\,\dot{u}\in BUC_{1-\mu}(J;X)\}.$ With these preparations we define $$\label{continuous} \begin{split} {\mathbb E}_0(J):&=BU\!C_{1-\mu}(J;X_0), \\ {\mathbb E}_1(J):&=BU\!C_{1-\mu}^1(J;X_0)\cap BU\!C_{1-\mu}(J;X_1) \end{split}$$ endowed with the canonical norms. Supposing that ${\mathcal H}(X_1,X_0)\neq\emptyset$ the trace space $\gamma{\mathbb E}_1$ is the continuous interpolation space $\gamma{\mathbb E}_1=(X_0,X_1)_{\mu,\infty}^0=:D_A(\mu)$, and we have the embedding ${\mathbb E}_1(J)\hookrightarrow BU\!C(J;\gamma{\mathbb E}_1)$, see [@Ama95 Theorem III.2.3.3]. A proof for estimate (\[trace-0\]) can be found in [@ClSi01 Lemma 2.2(c)], and this shows that Assumption (A1) is satisfied.
Assumption (A2) holds as $$\begin{split} \int_J e^{-\omega s}|w(s)|_1\,ds= \int_J \frac{e^{-\omega s}}{s^{1-\mu}}s^{1-\mu}|w(s)|_1\,ds \le c_1 \no w\no_{C_{1-\mu}(J;X_1)} \le c_1 \no w\no_{{\mathbb E}_1(J)} \end{split}$$ for all $w\in {\mathbb E}_1(J)$, and $$\begin{split} \int_t^\infty e^{-\omega s}|w(s)|_1\,ds= \int_t^\infty \frac{e^{-\omega s}}{s^{1-\mu}}s^{1-\mu}|w(s)|_1\,ds \le c_1 e^{-\omega t}\no w\no_{{\mathbb E}_1({\mathbb R}_+)} \end{split}$$ for $t\ge 0$ and $w\in {\mathbb E}_1({\mathbb R}_+)$. It turns out that maximal regularity cannot hold in the class (\[continuous\]) if $X_1\neq X_0$ and $X_0$ is reflexive. On the other hand, there is an interesting class of spaces $(X_0, X_1)$ where Assumption (A3) is indeed satisfied for the pair $({\mathbb E}_0(J),{\mathbb E}_1(J))$ given in (\[continuous\]), see [@Ang90; @ClSi01; @DPrGr79; @Lun95] and [@Ama95 Theorem III.3.4.1]. $A_s$ inherits the property of maximal regularity from $A$, and this implies Assumption (A4), see [@Ama95 Remark III.3.4.2(b)]. Assumption (A5) holds in the case $\mu=1$ for any function $G\in C^1(U_1,X_0)$ with $G(0)=G^\prime(0)=0$. It also holds for $\mu\in (0,1)$ if the nonlinear function $F$ given in (\[F\]) satisfies (\[quasilinear\]). If $\mu=1$ and $k\ge 1$ then it follows from (A3) and [@Ang90 Theorem 2.7, Corollary 2.9], see also [@Lun95 Section 8.4], that Assumption (A6) is satisfied. If $\mu\in (0,1)$, $k\ge 1$ and $F$ has a [*quasilinear*]{} structure, see (\[quasilinear\]), then Assumption (A6) follows from (A3) and [@ClSi01 Theorem 5.1], see also [@ClSi01 Theorem 6.1]. [**Example 3:**]{} (Hölder maximal regularity.) Suppose $\rho\in (0,1)$, $I\subset{\mathbb R}_+$, $J\subset{\mathbb R}_+ $ are intervals with $0\in J$. Then we set $$\begin{split} [u]_{C^\rho(I;X)}&:= \sup\Big\{\frac{|u(t)-u(s)|}{|t-s|^\rho}: s,t\in I,\ s\neq t\Big\},\\ [\![u]\!]_{C^\rho_\rho(J;X)} &:=\sup_{2\varepsilon\in\dot J} \varepsilon^\rho[u]_{C^\rho([\varepsilon,2\varepsilon];X)}, \end{split}$$ and $$\begin{split} \no u\no_{C^\rho_\rho(J;X)}&:=\no u\no_{BC(J;X)} +[\![u]\!]_{C^\rho_\rho(J;X)}, \\ BC^\rho_{\rho}(J;X)&:=\{u\in C^\rho(J;X): \no u\no_{C^\rho_\rho(J;X)}<\infty\}. \end{split}$$ Moreover, we set $$\begin{split} BU\!C^\rho_\rho(J;X):= \{u\in BU\!C(J;X)\cap BC^\rho_\rho(J;X): \lim_{\varepsilon\to 0^+} \varepsilon^\rho [u]_{C^\rho_\rho([\varepsilon,2\varepsilon];X)}=0\} \end{split}$$ and equip it with the norm $\no \cdot \no_{C^\rho_\rho(J;X)}$. For the pair $({\mathbb E}_0(J),{\mathbb E}_1(J))$ we take $$\begin{split} \label{H1} &{\mathbb E}_0(J):=BU\!C^\rho_\rho(J;X_0), \\ &{\mathbb E}_1(J):=BU\!C^{1+\rho}_\rho(J;X_0)\cap BU\!C^\rho_\rho(J;X_1), \end{split}$$ where $BU\!C^{1+\rho}_\rho(J;X) :=\{u\in BU\!C^\rho_\rho(J;X): \dot u\in BU\!C^\rho_\rho(J;X)\}$. The spaces in (\[H1\]) are given their canonical norms, turning them into Banach spaces. We have $\gamma{\mathbb E}_1(J)=X_1$ and it is clear from the definition of (the norm of) ${\mathbb E}_1(J)$ that ${\mathbb E}_1(J)\hookrightarrow BU\!C(J,X_1)$, and that (\[trace-0\]) is satisfied for any $w\in{\mathbb E}_1(J)$. This shows that Assumption (A1) holds. By similar arguments as above we see that Assumption (A2) is satisfied as well. For the crucial Assumption (A3) we refer to [@Ama95 Theorem III.2.5.6] with $\mu=1$; see also [@Lun95 Corollary 4.3.6(ii)]. It is worthwhile to mention that this maximal regularity result is true for [*any*]{} $A\in{\mathcal H}(X_1,X_0)$ and any pair $(X_0,X_1)$. Assumption (A4) follows then as above, see [@Ama95 Theorem III.2.5.5].
Assumption (A5) holds for [*any*]{} function $G\in C^1(U_1,X_0)$ with $G(0)=G^\prime(0)=0$. Finally, it follows from Theorem 8.1.1 and Theorem 8.2.3 in [@Lun95] that Assumption (A6) holds for the fully nonlinear problem (\[FN2\]) in case that $k\ge 2$. (In fact, it suffices to require that the derivative $F^\prime$ of $F$ be locally Lipschitz continuous.) The main result =============== In this section we state and prove our main theorem about convergence of solutions of the nonlinear equation (\[FN1\]) towards equilibria. \[th:1\] Let $u_*\in X_1$ be an equilibrium of (\[FN1\]), and assume that the above conditions (A1)-(A6) are satisfied. Suppose that $u_*$ is normally stable, i.e. assume that - (i) near $u_*$ the set of equilibria ${\mathcal E}$ is a $C^1$-manifold in $X_1$ of dimension $m\in{\mathbb N}$, - (ii) the tangent space for ${\mathcal E}$ at $u_*$ is given by $N(A)$, - (iii) $0$ is a semi-simple eigenvalue of $A$, i.e. $ N(A)\oplus R(A)=X_0$, - (iv) $\sigma(A)\setminus\{0\}\subset {\mathbb C}_+=\{z\in{\mathbb C}:\, {\rm Re}\, z>\omega\}$ for some $\omega>0$. Then $u_*$ is stable in $X_\gamma$, and there exists $\delta>0$ such that the unique solution $u(t)$ of (\[FN1\]) with initial value $u_0\in X_\gamma$ satisfying $|u_0-u_*|_\gamma<\delta$ exists on ${\mathbb R}_+$ and converges at an exponential rate to some $u_\infty\in{\mathcal E}$ in $X_\gamma$ as $t\rightarrow\infty$. The proof of Theorem \[th:1\] will be carried out in several steps, as follows. (a) We denote by $P_l$, $l\in\{c,s\}$, the spectral projections corresponding to the spectral sets $\sigma_s$ and $\sigma_c:=\{0\}$, respectively, and let $A_l=P_l A P_l$ be the part of $A$ in $X_0^l=P_l(X_0)$ for $l\in\{c,s\}$. Note that $A_c=0$. We set $X_j^l:=P_l(X_j)$ for $l\in\{c,s\}$ and $j\in\{0,\gamma,1\}$. It follows from our assumptions that $X^c_0=X^c_1$. In the following we set $X^c:=X^c_0$ and equip $X^c$ with the norm of $X_0$. Moreover, we take as a norm on $X_j$ $$\label{norm-decomposition} |v|_j:=|P_c v|_0 + |P_s v|_j\quad\text{for}\quad j=0,\gamma,1.$$ (b) Next we show that the manifold ${\mathcal E}$ can be represented as the (translated) graph of a function $\phi:B_{X^c}(0,\rho_0)\to X_1^s$ in a neighborhood of $u_\ast$. In order to see this we consider the mapping $$g:U\subset {\mathbb R}^m\to X^c,\quad g(\zeta):=P_c\psi(\zeta),\quad \zeta\in U.$$ It follows from our assumptions that $g^\prime(0)=P_c\psi^\prime(0):{\mathbb R}^m\to X^c$ is an isomorphism. By the inverse function theorem, $g$ is a $C^1$-diffeomorphism of a neighborhood of $0$ in ${\mathbb R}^m$ onto a neighborhood, say $B_{X^c}(0,\rho_0)$, of $0$ in $X^c$. Let $g^{-1}:B_{X^c}(0,\rho_0)\to U$ be the inverse mapping. Then $g^{-1}:B_{X^c}(0,\rho_0)\to U$ is $C^1$ and $g^{-1}(0)=0$. Next we set $\Phi(x):=\psi(g^{-1}(x))$ for $x\in B_{X^c}(0,\rho_0)$ and we note that $$\Phi\in C^1(B_{X^c}(0,\rho_0),X_1), \quad \Phi(0)=0, \quad \{u_\ast +\Phi(x) {\,:\,}x\in B_{X^c}(0,\rho_0)\}={\mathcal E}\cap W,$$ where $W$ is an appropriate neighborhood of $u_\ast$ in $X_1$. Clearly, $$P_c \Phi(x)=((P_c\circ \psi)\circ g^{-1})(x)= (g\circ g^{-1})(x)=x,\quad x\in B_{X^c}(0,\rho_0),$$ and this yields $\Phi(x)=P_c\Phi(x)+P_s\Phi(x)=x+P_s\Phi(x)$ for $x\in B_{X^c}(0,\rho_0)$. Setting $\phi(x):=P_s\Phi(x)$ we conclude that $$\label{phi} \phi\in C^1(B_{X^c}(0,\rho_0),X_1^s),\quad \phi(0)=\phi^\prime (0)=0,$$ and that $ \{u_\ast +x+\phi(x) {\,:\,}x\in B_{X^c}(0,\rho_0)\}={\mathcal E}\cap W, $ where $W$ is a neighborhood of $u_\ast$ in $X_1$.
This shows that the manifold ${\mathcal E}$ can be represented as the (translated) graph of the function $\phi$ in a neighborhood of $u_\ast$. Moreover, the tangent space of ${\mathcal E}$ at $u_\ast$ coincides with $N(A)=X^c$. By applying the projections $P_l$, $l\in\{c,s\}$, to equation (\[equilibrium-psi\]) and using that $x+\phi(x)=\psi(g^{-1}(x))$ for $x\in B_{X^c}(0,\rho_0)$, and that $A_c\equiv 0$, we obtain the following equivalent system of equations for the equilibria of (\[FN2\]): $$\label{equilibria-phi} P_cG(x+\phi(x))=0,\quad P_s G(x+\phi(x))=A_s\phi(x), \quad x\in B_{X^c}(0,\rho_0).$$ Finally, let us also agree that $\rho_0$ has already been chosen small enough so that $$\label{estimate-phi} |\phi^\prime(x)|_{{\mathcal B}(X^c,X_1^s)}\le 1 ,\quad |\phi(x)|_1\le |x|,\quad x\in B_{X^c}(0,\rho_0).$$ This can always be achieved, thanks to (\[phi\]). (c) Introducing the new variables $$\begin{aligned} &x=P_c v=P_c (u-u_*), \\ &y=P_sv-\phi(P_cv)=P_s(u-u_*)-\phi(P_c (u-u_*)) \end{aligned}$$ we then obtain the following system of evolution equations in $X^c\times X^s_0 $ $$\label{system} \left\{ \begin{aligned} \dot{x}=T(x,y), \quad &x(0)=x_0, \\ \dot{y}+A_sy=R(x,y), \quad &y(0)=y_0,\\ \end{aligned} \right.$$ with $x_0=P_cv_0$ and $y_0=P_sv_0-\phi(P_cv_0)$, where the functions $T$ and $R$ are given by $$\begin{aligned} &T(x,y)=P_c G(x+\phi(x)+y), \\ &R(x,y)=P_sG(x+\phi(x)+y)-A_s\phi(x)-\phi^\prime(x)T(x,y). \end{aligned}$$ Using the equilibrium equations (\[equilibria-phi\]), the expressions for $R$ and $T$ can be rewritten as $$\label{R-T} \begin{aligned} &T(x,y)=P_c \big(G(x+\phi(x)+y)-G(x+\phi(x))\big), \\ &R(x,y)=P_s \big(G(x+\phi(x)+y)-G(x+\phi(x))\big)-\phi^\prime(x)T(x,y). \end{aligned}$$ Equation (\[R-T\]) immediately yields $$\label{R=T=0} T(x,0)=R(x,0)=0\quad\text{for all }\ x\in B_{X^c}(0,\rho_0),$$ showing that the equilibrium set ${\mathcal E}$ of (\[FN1\]) near $u_*$ has been reduced to the set $ B_{X^c}(0,\rho_0)\times \{0\}\subset X^c\times X^s_1$. Observe also that there is a unique correspondence between the solutions of (\[FN1\]) close to $u_*$ in $X_\gamma$ and those of (\[system\]) close to $0$. We call system (\[system\]) the [*normal form*]{} of (\[FN1\]) near its normally stable equilibrium $u_*$. (d) Taking $z_1=x+\phi(x)+y$ and $z_2=x+\phi(x)$ it follows from (A5), (\[R-T\]) and (\[estimate-phi\]) that $$\label{estimate-R-T} \begin{aligned} |T(x,y)|,\ |R(x,y)|_0 \le C_1\big(\eta +|x+\phi(x)|_1\big)|y|_1 \le \beta |y|_1, \end{aligned}$$ with $\beta:=C_2(\eta+r)$, where the constants $C_1$ and $C_2$ are independent of $\eta,r$ and $x,y$, provided that $x\in \bar B_{X^c}(0,\rho)$, $y\in \bar B_{X^s_\gamma}(0,\rho)\cap X_1$ and $\rho\in (0,r/3]$ with $r<3\rho_0$. Suppose that $\eta$ and, accordingly, $r$ were already chosen small enough so that $$\label{beta} M_0\beta= M_0C_2(\eta +r)\le 1/2.$$ (e) Suppose now that $v_0\in B_{X_\gamma}(0,\delta)$, where $\delta<r_2$ will be chosen later. By (A6), problem (\[FN2\]) has a unique solution on some maximal interval of existence $[0,t_*)$. Let $\eta$ and $r$ be fixed so that (\[beta\]) holds and set $\rho=r/3$. Let then $t_1$ be the exit time for the ball $\bar B_{X_\gamma}(0,\rho)$, that is $$t_1:=\sup\{t\in(0,t_*):|v(\tau)|_\gamma\le \rho,\,\tau\in[0,t]\}.$$ Suppose $t_1<t_*$ and set $J_1=[0,t_1)$. The definition of $t_1$ implies that $|x(t)|\le \rho $ for all $t\in J_1$.
Assuming without loss of generality that the embedding constant of $X_1\hookrightarrow X_\gamma$ is less than or equal to one, we obtain from (\[norm-decomposition\]) $$\begin{split} \rho\ge |v(t)|_\gamma &=|x(t)+\phi(x(t))+y(t)|_\gamma =|x(t)|+|\phi(x(t))+y(t)|_\gamma \\ &\ge |x(t)|+|y(t)|_\gamma-|\phi(x(t))|_\gamma \ge |y(t)|_\gamma \end{split}$$ for $t\in J_1$, since $\phi(x)$ is non-expansive for $|x|\le \rho_0$. In conclusion we have shown that $|x(t)|,\,|y(t)|_\gamma\le \rho$ for all $t\in J_1$, so that the estimate (\[estimate-R-T\]) holds for $(x(t),y(t))$, $t\in J_1$. Then, by (A4) and (\[estimate-R-T\]), we have for $\sigma\in[0,\omega]$ $$\begin{split} \no e^{\sigma t}y\no_{{\mathbb E}_1(J_1)} &\le \no e^{\sigma t}e^{-A_s t}y_0\no_{{\mathbb E}_1(J_1)} +M_0\no e^{\sigma t}R(x,y)\no_{{\mathbb E}_0(J_1)} \\ & \le M_1 |y_0|_\gamma+M_0\beta\no e^{\sigma t}y\no_{{\mathbb E}_1(J_1)}, \end{split}$$ which implies $$\label{FN4} \no e^{\sigma t}y\no_{{\mathbb E}_1(J_1)}\le 2M_1|y_0|_\gamma,\quad \sigma\in[0,\omega],$$ thanks to (\[beta\]). Using (A1), (A4) and (\[FN4\]) we then have for $t\in J_1$, $$\begin{split} |e^{\omega t}y(t)|_\gamma &\le |e^{\omega t}y(t)-e^{\omega t}e^{-A_s t}y_0|_\gamma +|e^{\omega t}e^{-A_s t}y_0|_\gamma\\ &\le c_0\no e^{\omega t}y-e^{\omega t} e^{-A_st}y_0\no_{{\mathbb E}_1(J_1)}+M_1|y_0|_\gamma\\ & \le (3c_0M_1+M_1)|y_0|_\gamma, \end{split}$$ which yields with $M_2=3c_0 M_1+M_1$, $$|y(t)|_\gamma\le M_2e^{-\omega t}|y_0|_\gamma,\quad t\in J_1.$$ Using (\[FNR\]) we deduce further from the equation for $x$ and the estimate for $T$ in (\[estimate-R-T\]), and from (\[beta\])–(\[FN4\]) that $$\begin{split} |x(t)|\,& \le |x_0|+\int_0^t|T(x(s),y(s))|\,ds \le |x_0|+\beta \int_0^t |y(s)|_1\,ds \\ & \le |x_0|+\beta c_1 \no e^{\omega t}y\no_{{\mathbb E}_1(J_1)} \le |x_0|+M_3|y_0|_\gamma,\quad t\in J_1, \end{split}$$ where $M_3=M_1 c_1/M_0$. Since $v(t)=x(t)+\phi(x(t))+y(t)$, the previous estimates and (\[estimate-phi\]) imply that for some constant $M_4\ge 1$, $$|v(t)|_\gamma\le M_4|v_0|_\gamma, \quad t\in J_1.$$ Choosing $\delta=\min\{\rho,r_2\}/(2M_4)$, we have $|v(t_1)|_\gamma\le \min\{\rho,r_2\}/2$, a contradiction to the definition of $t_1$, and hence $t_1=t_*$. The above argument then yields uniform bounds $\no v\no_{{\mathbb E}_1(J)}\le C$ and $\sup_{t\in J}|v(t)|_\gamma\le r_2/2$ for all $J=[0,a)$ with $a<t_*$. In view of (A6), it follows that $t_*=\infty$. (f) Repeating the above estimates on the interval $[0,\infty)$ we obtain $$\label{y-gamma-infty} |x(t)|\le |x_0|+M_3|y_0|_\gamma,\quad |y(t)|_\gamma\le M_2 e^{-\omega t} |y_0|_\gamma, \quad t\in [0,\infty),$$ for $v_0\in B_{X_\gamma}(0,\delta)$. Moreover, $ \lim_{t{\rightarrow}\infty} x(t)= x_0 +\int_0^\infty T(x(s),y(s))ds=:x_\infty $ exists since the integral is absolutely convergent. This yields existence of $$v_\infty:=\lim_{t{\rightarrow}\infty} v(t)=\lim_{t{\rightarrow}\infty} \big(x(t)+\phi(x(t))+y(t)\big)=x_\infty+\phi(x_\infty).$$ Clearly, $v_\infty$ is an equilibrium for equation (\[FN2\]), and $u_\infty:=u_\ast+ v_\infty\in{\mathcal E}$ is an equilibrium for (\[FN1\]). It follows from (A2), the estimate for $T$ in (\[estimate-R-T\]), and from (\[FN4\]) that $$\begin{aligned} |x(t)-x_\infty|=\Big|\int_t^\infty T(x(s),y(s))\,ds\Big| &\le \beta \int_t^\infty |y(s)|_1\,ds \\ &\le \beta c_1 e^{-\omega t}\no e^{\omega t}y\no_{{\mathbb E}_1({\mathbb R}_+)} \le M_4 e^{-\omega t} |y_0|_\gamma. \end{aligned}$$ This shows that $x(t)$ converges to $x_\infty$ at an exponential rate.
Due to , and the exponential estimate for $|x(t)-x_\infty|$ we now get for the solution $u(t)=u_\ast+v(t)$ of $$\label{exponential-v} \begin{aligned} |u(t)-u_\infty|_\gamma &=|x(t)+\phi(x(t))+y(t)-v_\infty|_\gamma \\ &\le |x(t)-x_\infty|_\gamma +|\phi(x(t))-\phi(x_\infty)|_\gamma +|y(t)|_\gamma \\ &\le (2M_4+M_2)e^{-\omega t}|y_0|_\gamma\\ &\le Me^{-\omega t}|P_sv_0-\phi(P_cv_0)|_\gamma\,, \end{aligned}$$ thereby completing the proof of the second part of Theorem \[th:1\]. Concerning stability, note that given $r>0$ small enough we may choose $0<\delta\le r$ such that the solution starting in $B_{X_\gamma}(u_*,\delta)$ exists on ${\mathbb R}_+$ and stays within $B_{X_\gamma}(u_*,r)$. **Remarks:** (a) Theorem \[th:1\] has been proved in [@PSZ08] in the setting of $L_p$-maximal regularity, and applications to quasilinear parabolic problems with nonlinear boundary conditions, to the Mullins-Sekerka problem, and to the stability of travelling waves for a quasilinear parabolic equation have been given.\ (b) It has been shown in [@PSZ08] by means of examples that conditions (i)–(iii) in Theorem \[th:1\] are also necessary in order to get convergence of solutions towards equilibria $u_\infty\in{\mathcal E}$. [99]{} H. Amann, “Linear and Quasilinear Parabolic Problems. Vol. I: Abstract Linear Theory," Monographs in Mathematics 89, Birkhäuser, Boston (MA), 1995. H. Amann, *Quasilinear parabolic problems via maximal regularity,* Adv. Differential Equations, **10** (2005), 1081–1110. S. Angenent, *Nonlinear analytic semiflows,* Proc. Roy. Soc. Edinburgh Sect. A, **115** (1990), 91–107. B. Aulbach, “Continuous and discrete dynamics near manifolds of equilibria," Lecture Notes in Math. [1058]{}, Springer-Verlag, Berlin, 1984. P. Bates and C. Jones, *Invariant manifolds for semilinear partial differential equations,* in “Dynamics Reported," Vol. 2, Wiley, Chichester, (1989), 1–38. C.-M. Brauner, J. Hulshof and A. Lunardi, *A general approach to stability in free boundary problems,* J. Differential Equations, **164** (2000), 16–48. S. Cui, *Lie group action and stability analysis of stationary solutions for a free boundary problem modeling tumor growth,* preprint, arXiv:0712.2483v1. Ph. Cl[é]{}ment and G. Simonett, *Maximal regularity in continuous interpolation spaces and quasilinear parabolic equations,* J. Evol. Equ., **1** (2001), 39–67. G. Da Prato and P. Grisvard, *Equations d’évolution abstraites non linéaires de type parabolique*, Ann. Mat. Pura Appl., **120** (1979), 329–396. G. Da Prato and A. Lunardi, *Stability, instability and center manifold theorem for fully nonlinear autonomous parabolic equations in Banach space*, Arch. Rational Mech. Anal., **101** (1988), 115–141. R. Denk, M. Hieber and J. Pr[ü]{}ss, “${\mathcal R}$-boundedness and problems of elliptic and parabolic type," Mem. Amer. Math. Soc. 166 (2003), no. 788. J. Escher, U.F. Mayer and G. Simonett, *The surface diffusion flow for immersed hypersurfaces*, SIAM J. Math. Anal., **29** (1998), 1419–1433. J. Escher and G. Simonett, *A center manifold analysis for the Mullins-Sekerka model*, J. Differential Equations, **143** (1998), 267–292. J. Escher and G. Simonett, *Moving surfaces and abstract parabolic evolution equations*, in “Topics in nonlinear analysis," Progr. Nonlinear Differential Equations Appl., 35, Birkhäuser, Basel, (1999), 183–212. A. Friedman and F. Reitich, *Quasi-static motion of a capillary drop. II. The three-dimensional case*, J. Differential Equations, **186** (2002), 509–557. M. Günther and G. Prokert, *Existence results for the quasistationary motion of a free capillary liquid drop*, Z.
Anal. Anwendungen, **16** (1997), 311–348. D. Henry, “Geometric theory of semilinear parabolic equations," Lecture Notes in Math. [840]{}, Springer-Verlag, Berlin, 1981. P.C. Kunstmann and L. Weis, *Maximal $L\sb p$-regularity for parabolic equations, Fourier multiplier theorems and $H\sp \infty$-functional calculus*, in “Functional analytic methods for evolution equations," Lecture Notes in Math., 1855, Springer-Verlag, Berlin, (2004), 65–311. Y. Latushkin, J. Prüss, and R. Schnaubelt, *Center manifolds and dynamics near equilibria for quasilinear parabolic systems with fully nonlinear boundary conditions*, Discrete Contin. Dyn. Syst. Ser. B, **9** (2008), 595–633. A. Lunardi, “Analytic Semigroups and Optimal Regularity in Parabolic Problems," Progress in Nonlinear Differential Equations and their Applications [16]{}, Birkhäuser, Basel, 1995. A. Mielke, *Locally invariant manifolds for quasilinear parabolic equations*, Rocky Mountain J. Math., **21** (1991), 707–714. J. Prüss, *Maximal regularity for evolution equations in $L_p$-spaces*, Conf. Semin. Mat. Univ. Bari, **285** (2003), 1–39. J. Prüss, J. Saal, and G. Simonett, *Existence of analytic solutions for the classical Stefan problem*, Math. Ann., **338** (2007), 703–755. J. Prüss and G. Simonett, *$H\sp \infty$-calculus for the sum of non-commuting operators*, Trans. Amer. Math. Soc. **359** (2007), 3549–3565. J. Prüss and G. Simonett, *Stability of equilibria for the Stefan problem with surface tension*, SIAM J. Math. Anal., to appear. J. Prüss, G. Simonett, and R. Zacher, *On convergence of solutions to equilibria for quasilinear parabolic problems*, preprint. G. Simonett, *Center manifolds for quasilinear reaction-diffusion systems*, Differential Integral Equations, **8** (1995), 753–796. A. Vanderbauwhede and G. Iooss, *Center manifold theory in infinite dimensions*, in “Dynam. Report. Expositions Dynam. Systems," Vol. [1]{}, Springer-Verlag, Berlin, (1992), 125–163.
--- abstract: 'We present a new non-abelian generalization of the Born-Infeld Lagrangian. It is based on the observation that the basic quantity defining it is the generalized volume element, computed as the determinant of a linear combination of metric and Maxwell tensors. We propose to extend the notion of determinant to the tensor product of space-time and a matrix representation of the gauge group. We compute such a Lagrangian explicitly in the case of the $SU(2)$ gauge group and then explore the properties of static, spherically symmetric solutions in this model. We have found a one-parameter family of finite energy solutions. In the last section, the main properties of these solutions are displayed and discussed.' author: - 'Emmanuel Serié$^{\, *, **}$' - 'Thierry Masson$^{\, *}$' - 'Richard Kerner$^{\, **}$' bibliography: - 'biblio\_articles.bib' title: | Non-Abelian generalization of Born-Infeld theory\ inspired by non-commutative geometry --- $^{*}$ Laboratoire de Physique Théorique (UMR 8627)\ Université Paris XI,\ Bâtiment 210, 91405 Orsay Cedex, France\ $^{**}$ Laboratoire de Physique Théorique des Liquides,\ Université Pierre-et-Marie-Curie - CNRS UMR 7600\ Tour 22, 4-ème étage, Boîte 142,\ 4, Place Jussieu, 75005 Paris, France\ PACS numbers: 11.10.St, 11.15.-q, 11.27.+d, 12.38.Lg, 14.80.Hv LPT-Orsay 03-51 Introduction ============ Recently there has been rising interest in the Born-Infeld nonlinear theory of electromagnetism ([@born_infeld:34], [@mie:12]) and more general Lagrangians of this type, which appear quite naturally in string theories. Non-Abelian generalizations of a Born-Infeld type Lagrangian were proposed by T. Hagiwara in $1981$, ([@hagiwara:81]), and more recently, including a supersymmetric version, by Schaposnik [*[et al]{}*]{} (see ([@gonorazky:98], [@christiansen:98], [@grandi:00]) and the references within). In ([@gal'tsov:00]) we analyzed one of the possible non-abelian generalizations of the Born-Infeld Lagrangian, and showed the existence of sphaleron-like solutions with a qualitative behavior similar to the solutions of the combined Einstein-Yang-Mills field equations found by Bartnik and McKinnon ([@bartnik:88]). The non-abelian generalization proposed in ([@gal'tsov:00]) was quite straightforward indeed: it consisted in the replacement of the electromagnetic field invariants $$F^{\mu \nu} \, F_{\mu \nu} \, \, \, \ \ {\rm and} \, \ \ \, \ \ ^* F^{\lambda \rho} \, F_{\lambda \rho} = \frac{1}{2}\epsilon^{\mu \nu \lambda \rho} \, F_{\mu \nu} F_{\lambda \rho}$$ by similar expressions formed by taking the traces of corresponding Lorentz invariants in the Lie algebra space: $$g_{ab} \, F^{a \, \mu \nu} \, F^{b}_{\mu \nu} \, \, \, \ \ { \rm and } \, \ \ \, \ \ g_{ab} \, ^* F^{a \, \lambda \rho} \, F^{b}_{\lambda \rho} =\frac{1}{2} g_{ab} \, \epsilon^{\mu \nu \lambda \rho} \, F^a_{\mu \nu} F^{b}_{\lambda \rho}$$ However, except for a straightforward analogy, this expression does not seem to come from any more fundamental theory. In addition, this generalization still keeps a particular dependence on second-order invariants of the field tensor, characteristic for a [*four-dimensional*]{} manifold only; in higher dimensions the determinant would lead quite naturally to Lagrangians depending on higher-order invariants of the field tensor, too. 
On the other hand, it is well known that a correct mathematical formulation of gauge theories considers the gauge field tensor associated with a compact and semi-simple gauge group $G$ as a connection one-form in a principal fibre bundle over Minkowskian space-time, with values in ${\cal{A}}_G$, the Lie algebra of $G$. In local coordinates we have $$A = A^a_{\mu}\, dx^{\mu} \, L_a$$ where $L_a , \, \ \ a = 1,2,...N=\dim(G)$ is the basis of the adjoint representation of ${\cal{A}}_G$. In many cases another representation must be chosen, especially when the gauge fields are supposed to interact with spinors (cf. ([@domokos:79], [@kerner:81], [@kerner:83])). It is always possible to embed the Lie algebra in an enveloping associative algebra, and to use the tensor product: $$A = A^a_{\mu} \, d x^{\mu} \otimes \, T_a \, ,$$ where $T_a$ is the basis of the matrix representation of ${\cal{A}}_G$, so that now the non-abelian field tensor will have its values in the enveloping algebra: $$F = dA + \frac{1}{2}[A, A] = ( \, F^a_{\mu \nu} \, dx^{\mu} \wedge d x^{\nu} \, ) \otimes \, T_a \, .$$ Now, in order to reproduce as closely as possible the classical Born-Infeld Lagrangian, a natural idea is to embed the space-time metric tensor $g_{\mu \nu}$ also into the enveloping algebra, tensoring it simply with the unit element in the appropriate matrix space, i.e. replacing it by $g_{\mu \nu} \otimes {\mathbb 1}_{N}$ ; then we can add up the metric and the field tensors, and take the determinant in the resulting matrix space. This structure of the matrix is similar to the structures found in certain realizations of gauge theory in non-commutative matrix geometries ([@dubois-violette:90]), or in Lagrangians found in matrix theories ([@dubois-violette:90:II], [@dubois-violette:89:II]). Such a Lagrangian has been proposed by Park ([@park:99]) and reads as follows: $$S_{Park}[F,g] = \int_{\mathbb{R}^4} \alpha \left( \left|{\det_{\mathcal{M}\otimes R}}\left( g_{\mu \nu} \otimes \mathbb{1}_{d_R}+\beta^{-1}\,F^a_{\mu \nu} \otimes T_a \, \right)\right|^{\frac{1}{2 d_R}} -\sqrt{|g|} \right) \ , \label{park:BI}$$ where $\alpha $ and $\beta$ are real positive constants. The $2 d_R$-order root is introduced to ensure the invariance of the resulting action under the diffeomorphisms. As a matter of fact, with the root of this order we are able to factorize out the usual four-dimensional volume element $\sqrt{|g|} d^4x$ and rewrite the action principle with the subsequent scalar quantity: $$\begin{aligned} L_{Park}(F,g) & = \alpha \left(\left| {\det_{\mathcal{M}\otimes R}}\left( \mathbb{1}_{4 \times d_R} + \beta^{-1}\,\hat{F}\,\right)\right|^{\frac{1}{2 d_R}} \, -1\right) \label{park:operateur} \end{aligned}$$ and, $$\begin{aligned} S_{Park}[F,g] &= \int_{\mathbb{R}^4} L_{Park}(F,g) \sqrt{|g|}d^4x \ ,\end{aligned}$$ where $$\begin{aligned} \hat{F} = \ \frac{1}{2} F^a_{\mu \nu} \hat{M}^{\mu \nu} \otimes T_a \, \ \ \, \ \ \, \ \ \, (\hat{M}^{\mu \nu})^{\rho}_{\sigma} = \ g^{\rho \rho'} \delta^{\mu \nu}_{\rho' \sigma} \ ,\end{aligned}$$ $\hat{F}$ is an endomorphism of $\mathbb{R}^4 \otimes \mathbb{C}^{d_R}$, and $M_{\mu \nu}$ denote the generators of the Lorentz group (in the defining representation). 
It is also useful to introduce the notation: $$\begin{aligned} \hat{F}^a &= \frac{1}{2} F^a_{\mu \nu} \hat{M}^{\mu \nu} \ .\end{aligned}$$ The generalization of the Born-Infeld (BI) Lagrangian proposed in this paper results in a variational principle that leads to a highly nonlinear system of field equations, whose general properties can be analyzed using standard techniques ([@abrahams:95], [@lemos:00], [@gibbons:01:II]). Our aim in this article is to check whether stationary regular solutions with finite energy can be found as in ([@gal'tsov:00]). We consider the standard ’t Hooft’s monopole ansatz which in this particular case leads to one ordinary differential equation for a single function $k(r)$ of radial coordinate $r$. The structure of this equation is similar to the one found in ([@gal'tsov:00]), with a more complicated term corresponding to friction. Nevertheless, the structure of solutions and their energy spectrum are very different, as shown in the last sections of our article. We have not found solutions joining together two different vacuum configurations (called BI [*sphalerons*]{}), as in ([@gal'tsov:00]). We find instead a family of solutions labeled by integer winding number $n$, and a real parameter bounded from below. The energy integral tends with $n \rightarrow \infty$ to the energy of the BI magnetic monopole obtained in ([@gal'tsov:00]). Non-abelian generalization of Born-Infeld Lagrangian ===================================================== Basic properties of the abelian case ------------------------------------ Let us recall several basic properties of the abelian Born-Infeld Lagrangian, which we would like to reproduce in the proposed non-abelian generalization. In their original paper ([@born_infeld:34]) Born and Infeld considered the now famous least action principle: $$\label{BI} \begin{split} S_{BI}[g,F] &= \int_{\mathbb{R}^4} {\cal{L}_{BI}}(g,F) = \int_{\mathbb{R}^4}{L_{BI}(g,F)} \sqrt{|g|}d^4x \\ &= \int_{\mathbb{R}^4} \beta^2 \left( \sqrt{|\det(g_{\mu \nu}) |} - \sqrt{|\det(g_{\mu \nu}+\beta ^{-1} \, F_{\mu \nu}\,)\,|} \right) d^4x \\ &= \int_{\mathbb{R}^4} \beta^2 \left(1-\sqrt{ 1+ \frac{1}{\beta^2} (F,F) - \frac{1}{4 \beta^4}(F,{\star F})^2}\right) \sqrt{|g|}d^4x \\ & = \int_{\mathbb{R}^4} \beta^2\left(1-\sqrt{1+ \frac{1}{\beta^{2}} (\vec{B}^2 - \vec{E}^2) - \frac{1}{\beta^{4}}(\vec{E}.\vec{B})^2}\right) \sqrt{|g|}d^4x \end{split}$$ where $d^4x= dx^0 \wedge dx^1 \wedge dx^2 \wedge dx^3$ , $\vec{B}$ is the magnetic field, $\vec{E}$ is the electric field, $(F,F)=\frac{1}{2} F_{\mu \nu} F^{\mu \nu}$, $(F,{\star F})=\frac{1}{4} \epsilon^{\mu \nu \rho \sigma} F_{\mu \nu} F_{\rho \sigma}$ and $\epsilon^{\mu \nu \rho \sigma}=\frac{1}{\sqrt{|g|}} \delta^{\mu \nu \rho \sigma}_{0123}$. This action can be defined not only on the Minkowskian space-time but also on any locally Lorentzian curved manifold, as in the original case. It is useful to recall here three important properties of the Born-Infeld Lagrangian, which we want to maintain in the case of the non-abelian generalization, also valid in any finite dimension of space-time. 
1\) Maxwell’s theory (or, respectively, the usual gauge theory with quadratic Lagrangian density) should be found in the limit $\beta \to \infty$: $$\label{limite} \begin{split} {S_{BI}} &= - \int_{\mathbb{R}^4} \frac{1}{2} (F,F) \sqrt{|g|}d^4x + o(\frac{1}{\beta ^2})\\ &= - \frac{1}{2}\int_{\mathbb{R}^4} F\wedge {\star F} + o(\frac{1}{\beta ^2})\\ &= - \int_{\mathbb{R}^4}\frac{1}{2}(\vec{B}^2 - \vec{E}^2) \sqrt{|g|}d^4x + o(\frac{1}{\beta ^2}) \ . \end{split}$$ 2) There exists an upper limit for the electric field intensity, equal to $\beta$, attained when the magnetic component of the field vanishes: $$\label{Mie} {L_{BI}}|_{B=0} =\beta^2 \left(1-\sqrt{1- \beta^{-2} \vec{E}^2}\right) \ .$$ Due to this fact, the energy of a pointlike charge is finite, and the field remains finite even at the origin. This was the main goal pursued by Mie ([@mie:12]), suggesting the choice of nonlinear generalization of Maxwell’s theory. Indeed, one has for a point charge $e$: $$\vec{E}=\frac{e \hat{r}}{\sqrt{e^2+r^4}} \, \ \ \, \ \ \text{Energy} = \int_0^{\infty} \left(\frac{\sqrt{e^2+r^4}}{r^2} -1 \right)r^2 dr < \infty \ .$$ 3) The Born-Infeld action principle is invariant under the diffeomorphisms of $\mathbb{R}^4$. In this respect, this theory can be viewed as a covariant generalization (in the sense of General Relativity) of Mie’s theory, as well as an extension of the usual volume element $\sqrt{|g|} d^4 x$. It is also well known that Born-Infeld electromagnetism has good causality properties (no birefringence and no shock waves) as well as interesting dual symmetries (electric-magnetic duality, Legendre duality, cf. [@boillat:70] [@plebanski:70] [@bialynicki-birula] [@gibbons:01:II] [@gibbons:95:II]). Here we shall not consider these aspects of the theory, our main interest being focused on static solutions.
The new non-abelian generalization
----------------------------------
Our starting point is the gauge field tensor associated with a compact and semisimple gauge group $G$, defined as a connection one-form in the principal fibre bundle over Minkowskian space-time, with its values in ${\cal{A}}_G$, the Lie algebra of $G$. As explained in the Introduction, we chose the representation of the connection in the tensorial product of a matrix representation of the Lie algebra ${\cal{A}}_G$ and the Grassmann algebra of forms over $M_4$: $$A = A^a_{\mu}\, dx^{\mu} \otimes \, T_a \label{connection}$$ where $T_a , \, \ \ a= 1,2,...N=\dim(G)$ are anti-hermitian matrices which form a basis of the particular representation $R$ of dimension $d_R$ of $\cal{A}_G$, specified later on. By analogy with the abelian case, we want the Lagrangian to satisfy the following properties:
1) One should find the usual Yang-Mills theory in the limit $\beta \to \infty$.
2) The (non-abelian) analogue of the electric field strength should be bounded from above when the magnetic components vanish. \[ To satisfy this particular constraint, we must ensure that the polynomial expression under the root should start with terms $1-\beta^{-2}(\vec{E}^a)^2 + ...$ when $\vec{B}^a =0$ \].
3) The action should be invariant under the diffeomorphisms of $\mathbb{R}^4$.
4) The action has to be real.
This enables us to introduce the following generalization of the Born-Infeld Lagrangian density for a non-abelian gauge field: $$\begin{aligned} {\cal{L}} =\sqrt{g} \,L = \sqrt{|g|} - \left| \, \det\left( \mathbb{1}_2 \otimes g_{\mu \nu} \otimes \mathbb{1}_{d_R} +\beta ^{-1} \, {J}\otimes F_{\mu \nu}^a \otimes T_a \,\right)\, \right|^{\frac{1 }{4 d_R}} \end{aligned}$$ In the expression above, $J$ denotes a $SL(2, \mathbb{C})$ matrix satisfying $J^2 = -\mathbb{1}_2$, thus introducing a quasicomplex structure. This extra doubling of tensor space is necessary in order to ensure that the resulting Lagrangian is real. We are left with the root of order $4d_R$, so that the invariance of our action under the space-time diffeomorphism is preserved. Let us recall a few arguments in favor of this construction: The simplest way to generalize the Born-Infeld action principle to the non-abelian case seems at first glance the substitution of real numbers by corresponding hermitian operators, like in quantum mechanics or in non-commutative geometry. Then one would arrive at the following expression: $$\left\{\begin{array}{lcl} U(1)& \leftrightsquigarrow & G \\ i F_{\mu \nu} & \leftrightsquigarrow & F_{\mu \nu}^a \otimes T_a \\ g_{\mu \nu} & \leftrightsquigarrow & g_{\mu \nu} \otimes \mathbb{1}_{d_R} \ ,\\ \end{array} \right. \label{correspondance}$$ where $\mathbb{1}_{d_R}$ and $ iT_a$ are hermitian matrices. What remains now to make the generalization complete is to extend the notion of the determinant taken over the space-time indices in the usual case. We propose to replace the determinant of a $4 \times 4$ matrix (denoted hereafter ${\det_{\mathcal{M}}}$) by a determinant taken in the tensor product of space-time and matrix indices of the representation $R$ (denoted hereafter ${\det_{\mathcal{M}\otimes R}}$). Notice that this kind of tensor product of algebras appears in the context of the noncommutative geometry of matrices (see ([@dubois-violette:90] [@dubois-violette:90:II] [@dubois-violette:89:II])). Indeed, the general structure of the connection one-form in these noncommutative geometries is very similar to the one in (\[connection\]). In this kind of generalization, one would replace the objects in (\[BI\]) following the procedures in (\[correspondance\]). This leads to a complex Lagrangian in the case of a non-abelian structure group. Indeed, the determinant $ {\det_{\mathcal{M}\otimes R}}( g_{\mu \nu} \otimes \mathbb{1}_{d_R}+\beta^{-1}\,F^a_{\mu \nu} \otimes iT_a \, )$ is not real when $dim({\cal{A}}_G) >1$. Therefore we must find a different generalization. Another possibility would consist of taking anti-hermitian generators tensorized with the field $F$. This was proposed by Hagiwara ([@hagiwara:81]) and studied in more detail by Park ([@park:99]) for the Euclidean case. This substitution leads to a Lagrangian satisfying the requirements 1), 3), and 4), but not 2) (for details, see the article by J.-H. Park ([@park:99])). Moreover, Lagrangians obtained with the above choices display invariants of order $3$ in the field $F$, destroying the charge conjugation invariance of the theory, $F \mapsto -F$, and possibly leading to indefinite energy densities. This is why we propose a third choice. We start from an alternative formulation of the abelian version.
As a matter of fact, one can write the abelian Born-Infeld Lagrangian in the following alternative form: $$\label{BI:J} S_{BI}[F,g] =\int_{\mathbb{R}^4} \beta^2 \left( \sqrt{|g|} - \left|{\det_{\mathbb{C}^2\otimes \mathcal{M}}}\left(\mathbb{1}_2 \otimes g_{\mu \nu} +\beta^{-1} \, {J}\otimes i F_{\mu \nu} \,\right)\,\right|^{\frac{1}{4}} \right) d^4x \ ,$$ where ${J}$ is a $2 \times 2 $ complex matrix whose square is equal to $-\mathbb{1}_2$. The Lagrangian is independent of the choice of ${J}$ as can be easily seen. In (\[BI:J\]) (see also (\[correspondance\])), the imaginary unit $i$ can be considered as the anti-hermitian generator of $\mathfrak{u}(1)$. In our formula (\[BI:J\]), we use an obvious notation for the space on which the determinant is defined. With the correspondence displayed in (\[correspondance\]), we end up with the following action principle: $$\begin{aligned} \label{notreBI} S[F,g] &=\int_{\mathbb{R}^4} \alpha \left( \sqrt{|g|} - \left| \, {\det_{\mathbb{C}^2 \otimes \mathcal{M} \otimes R}}\left( \mathbb{1}_2 \otimes g_{\mu \nu} \otimes \mathbb{1}_{d_R} +\beta ^{-1} \, {J}\otimes F_{\mu \nu}^a \otimes T_a \,\right)\,\right|^{\frac{1 }{4 d_R}} \right) d^4x \ ,\end{aligned}$$ satisfying all the requirements we asked for, 1), 2), 3), and 4), by taking ${J}$ in $SL(2, \mathbb{C})$. The Lagrangian is again independent of the choice of ${J}$. In particular, we find the usual abelian Lagrangian if we replace ${T_a}$ by ${i}$ and set $d_R=1$.\ It was supposed in (\[notreBI\]) that $\alpha$ and $\beta$ are real positive constants. It is clear that only the root of degree $4 d_R$ will lead to an expression where $\sqrt{g}$ can be factorized out as an overall factor. This enables one to rewrite the action using a purely scalar quantity as follows: $$\begin{aligned} L(g,F) &= \alpha \left( 1 - \left|{\det_{\mathbb{C}^2 \otimes \mathcal{M} \otimes R}}\left(\mathbb{1}_2 \otimes \mathbb{1}_{4 \times d_R} +\beta ^{-1} \, {J}\otimes \hat{F} \right) \,\right|^{\frac{1}{4 d_R}}\, \right) , \label{notreBI:2}\end{aligned}$$ so that $$\begin{aligned} S[g,F] &= \int_{\mathbb{R}^4} L(g,F) \sqrt{|g|}d^4x \ ,\end{aligned}$$ where $\hat{F}=\frac{1}{2} F_{\mu \nu}^{a} \hat{M}^{\mu \nu} \otimes T_a $ as defined in the introductory section. Explicit computation of the determinant ========================================= General remarks --------------- The determinant defined in (\[notreBI:2\]) can be written in several equivalent forms: \[determinants\] $$\begin{aligned} & {\det_{\mathbb{C}^2 \otimes \mathcal{M} \otimes R}}\left(\mathbb{1}_2 \otimes \mathbb{1} + \beta^{-1} \, {J}\otimes \hat{F}\right)\label{det1}\\ &= {\det_{\mathbb{C}^2 \otimes \mathcal{M} \otimes R}}\left( s \otimes \mathbb{1} + \beta^{-1}\, s {J}\otimes \hat{F}\right) \label{det2}\\ &= {\det_{\mathcal{M}\otimes R}}\left(\mathbb{1} + \beta^{-2} \, \hat{F}^2\right)\label{det3} \ , \end{aligned}$$ where $s$ and ${J}$ are elements of $SL(2, \mathbb{C})$, ${J}$ satisfying $J^2=-\mathbb{1}$. For example, choosing $s=i\sigma_2$ and $ s {J}= -i\sigma_3$ in (\[det2\]), we get the following determinant: $$\label{matrice_de_commutation} \begin{vmatrix} -i \beta^{-1} \hat{F} & \mathbb{1} \\ -\mathbb{1} & i \beta^{-1} \hat{F} \end{vmatrix} = |g|^{-2 d_{R}} \begin{vmatrix} -i \beta^{-1} F^a_{\mu \nu} \otimes T_a & g_{\mu \nu} \otimes \mathbb{1} \\ - g_{\mu \nu} \otimes \mathbb{1} & i \beta^{-1} F^a_{\mu \nu} \otimes T_ a \end{vmatrix} \,$$ which is a straightforward generalization of the determinant considered by Schuller ([@schuller:02]). 
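The equivalence of the forms (\[det1\])–(\[det3\]), and in particular the independence of the result from the choice of ${J}$, is easy to verify numerically. The short sketch below is ours and purely illustrative: it uses a random complex matrix as a stand-in for $\beta^{-1}\hat{F}$ acting on $\mathbb{R}^4 \otimes \mathbb{C}^{2}$, together with two admissible choices of ${J}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # dim of R^4 (x) C^2, i.e. space-time tensored with the fundamental rep of SU(2)
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # stand-in for beta^{-1} F-hat

I2, In = np.eye(2), np.eye(n)
J1 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # a real choice of J with J^2 = -1
J2 = np.diag([1j, -1j])                    # another admissible choice, also J^2 = -1

lhs1 = np.linalg.det(np.kron(I2, In) + np.kron(J1, M))
lhs2 = np.linalg.det(np.kron(I2, In) + np.kron(J2, M))
rhs = np.linalg.det(In + M @ M)            # the form (det3)

print(np.allclose(lhs1, rhs), np.allclose(lhs2, rhs))   # True True
```

The random matrix only keeps the sketch short; the same check goes through with the actual field matrix $\hat{F}=\frac{1}{2} F^a_{\mu \nu} \hat{M}^{\mu \nu} \otimes T_a$.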
Following Schuller’s idea, the matrix  (\[matrice\_de\_commutation\]) in the abelian case can be interpreted as the matrix defining commutation relations between the coordinates in the phase space of a relativistic point particle minimally coupled to the Born-Infeld field. Similarly, we can extend this interpretation to the case of coordinates taking their values in an appropriate Lie algebra, i.e., by imposing the following relations: $$\begin{split} [ X_\mu,X_\nu ] &= - \frac{1}{e \beta^2} F_{\mu \nu}^a \otimes T_a\\ [ X_\mu,P_\nu ] &= -i g_{\mu \nu} \otimes \mathbb{1}\\ [ P_\mu,P_\nu ] &= e F_{\mu \nu}^a \otimes T_a \ , \end{split}$$ with $$X_\mu := X^a_\mu \otimes -i T_a \, , \ \ \, \ \ \, \ \ P_\mu := P^a_\mu \otimes -i T_a \, .$$ On the other hand, the particular form (\[det3\]) enables us to check that the Lagrangian is indeed real, and at the same time it represents an obvious generalization of the abelian Born-Infeld action in the form given in ([@schuller:02], and the references therein). It is worthwhile to note that if one chooses ${J}= -i \sigma_3$ in (\[det1\]) the determinant can be written as an absolute value of a complex number. Indeed, one has $$\begin{vmatrix} \mathbb{1} -i \beta^{-1} \hat{F} & 0 \\ 0 & \mathbb{1}+ i \beta^{-1} \hat{F} \end{vmatrix} = |{\det_{\mathcal{M}\otimes R}}\left(\mathbb{1} -i \beta^{-1} \hat{F}\right)|^2 \label{module} \ .$$ We shall use this particular form of the determinant in the subsequent computations.\ Comparison with the symmetric trace prescription ------------------------------------------------ Let us recall a useful formula relating the determinant of a linear operator $M$ to traces: $$\label{trace} \begin{split} \left(\det(1+M)\right)^\beta & = \exp\left({\beta \,{\mathrm{tr}}(\, \log(1+M)\,)}\right)\\ & = \sum_{n=0}^{\infty} \ \sum_{\substack{ \underline{\alpha}=(\alpha_1, \cdots , \alpha_n) \\ \in [S_n]}} (-1)^n \prod_{p=1}^{n} \frac{1}{\alpha_p !} \left(- \frac{\beta \, {\mathrm{tr}}(M^p)}{p} \right)^{\alpha_p} \ , \end{split}$$ where $\underline{\alpha} \in [S_n]$ and $[S_n]$ is the set of equivalence classes of the permutation group of order $n$. The multi-index $\underline{\alpha}$ is given by a Ferrer-Young diagram or equivalently satisfies the relation, $$\begin{aligned} \sum_{p=1}^{n}\, p \, \alpha_p & = n \ , \ \alpha_p \geqslant 0 \ .\end{aligned}$$ Using this trace formula, we can develop our Lagrangian up to any order in $F$. In order to avoid ambiguities, we shall denote by $ {{\mathrm{tr}}_{\mathcal{M}}}$ the trace taken over the space-time indices, by $ {{\mathrm{tr}}_{R}}$ the trace over the representation indices, and by ${{\mathrm{tr}}_{\otimes}}$ the trace over the tensor product of these two spaces. For the sake of simplicity, we have absorbed the scale factor $\beta^{-1}$ in the definition of the field tensor $F$. When needed, the appropriate powers of $\beta^{-1}$ can easily be recovered. 
Following (\[det3\]), we have, $$\label{traces} \begin{split} & \left( {\det_{\mathcal{M}\otimes R}}\left(1+\hat{F}^2\right) \right)^{\frac{1}{4d_R}}\\ & =\sum_{n =0}^{\infty} \sum_{\underline{\alpha}=(\alpha_1, \cdots , \alpha_n)} (-1)^{n} \prod_{k=1}^{n} \frac{1}{\alpha_k !} \left(\frac{- {{\mathrm{tr}}_{\otimes}}(\hat{F}^{2k})}{4 d_R \times k} \right)^{\alpha_k} \\ & =\sum_{n =0}^{\infty} \sum_{\underline{\alpha}=(\alpha_1, \cdots , \alpha_n)} (-1)^{n} \prod_{k=1}^{n} \frac{1}{\alpha_k !} \prod_{\substack{m=1 \\ \alpha_k \neq 0}}^{\alpha_k} \left( - \frac{ {{\mathrm{tr}}_{\mathcal{M}}}(\hat{F}^{a^m_1} \cdots\hat{F}^{a^m_{2 k }} )}{4 k} \times \frac{{{\mathrm{tr}}_{R}}( T_{a^m_1} \cdots T_{a^m_{2 k}})}{d_R} \right) \ , \end{split}$$ where $ \underline{\alpha} \in [S_n] $ satisfies $ \sum_{k=1}^{n}k\alpha_k = n $ .\ We can compare the resulting expansion with the symmetrized trace prescription given by Tseytlin in ([@tseytlin:97]). With the notation adopted above, we have $$\begin{gathered} \frac{1}{d_R}{\mathcal{S}{{\mathrm{tr}}_{R}}}\left( {\det_{\mathcal{M}}}\left(1+i\hat{F}^a T_a\right) \right)^{\frac{1}{2}} = \frac{1}{d_R}{\mathcal{S}{{\mathrm{tr}}_{R}}}\left( {\det_{\mathcal{M}}}\left(1+\hat{F}^a\hat{F}^b T_a T_b\right) \right)^{\frac{1}{4}}\\ \shoveleft{ \quad = \frac{1}{d_R}{\mathcal{S}{{\mathrm{tr}}_{R}}}\sum_{n =0}^{\infty} \sum_{ \underline{\alpha}=(\alpha_1, \cdots , \alpha_{n})} (-1)^n \prod_{ k=1 }^{n} \frac{1}{\alpha_k !} \left(- \frac{ {{\mathrm{tr}}_{\mathcal{M}}}(\hat{F}^{a_1} \cdots\hat{F}^{a_{2 k}} )}{4 k} T_{a_1} \cdots T_{ a_{2 k} } \right)^{\alpha_k} }\\ \shoveleft{ \quad =\sum_{n =0}^{\infty} \sum_{ \underline{\alpha}=(\alpha_1, \cdots , \alpha_{n})} (-1)^n \Biggl( \prod_{k=1}^{n} \frac{1}{\alpha_k !} \prod_{\substack{m=1 \\ \alpha_k \neq 0}}^{\alpha_k} \left(- \frac{{{\mathrm{tr}}_{\mathcal{M}}}(\hat{F}^{a^m_1} \cdots\hat{F}^{a^m_{2 k}}) }{4 k } \right) \times }\\ \times \frac{1}{d_R} {\mathcal{S}{{\mathrm{tr}}_{R}}}\left( \prod_{k=1}^{n} \prod_{m=1}^{ \alpha_k} T_{a^m_1} \cdots T_{a^m_{2 k}} \right) \Biggr) \, \label{Str:2}\end{gathered}$$ Now we can easily compare the series resulting from similar expansions of two different Lagrangians: the symmetrized trace prescription, and the generalized determinant prescription, i.e. comparing (\[traces\]) and (\[Str:2\]) with the corresponding expansion of the abelian version of Born-Infeld electrodynamics. In both cases, the third-order and higher odd-order invariants that are possible in a non-abelian case, do not appear (as they are absent in the abelian version, of course) . Let us compare, up to the fourth order, the expansion in powers of $F$ of the two Lagrangians. 
Our Lagrangian (\[notreBI\]) yields the following series: $$\begin{gathered} L[F,g] \simeq -\frac{1}{4{d_R}} {{\mathrm{tr}}_{\otimes}}\hat{F}^2 + \frac{1}{8 {d_R}} {{\mathrm{tr}}_{\otimes}}\hat{F}^4 -\frac{1}{32 {d_R}^2}({{\mathrm{tr}}_{\otimes}}\hat{F}^2)^2\\ \shoveleft{ \qquad \quad \simeq -\frac{1}{2} (F^a,F^b)K_{ab} +\frac{1}{8}(F^a,F^b)(F^c,F^{d}) (- K_{ab}K_{cd} +K_{abcd}+K_{acbd})} \\ +\frac{1}{8} (F^a,{\star F}^b)(F^c,{\star F}^d)K_{acbd} \ ,\end{gathered}$$ whereas the symmetrized trace prescription of ([@tseytlin:97]) gives: $$\begin{split} L_{Sym}[F,g] & = \frac{1}{{d_R}}{\mathcal{S}{{\mathrm{tr}}_{R}}}(\mathbb{1} - \sqrt{{\det_{\mathcal{M}}}(\mathbb{1} + i \hat{F})})\\ & \simeq \frac{1}{d_R} {\mathcal{S}{{\mathrm{tr}}_{R}}}( -\frac{1}{4} {{\mathrm{tr}}_{\mathcal{M}}}\hat{F}^2 + \frac{1}{8} {{\mathrm{tr}}_{\mathcal{M}}}{F}^4 - \frac{1}{32} ({{\mathrm{tr}}_{\mathcal{M}}}\hat{F}^2)^2 )\\ & \simeq - \frac{1}{2} (F^a,F^b) K_{ab} + \frac{1}{8} \left((F^a, F^b)(F^c,F^d)+(F^a,{\star F}^b)(F^c,{\star F}^d)\right) K_{\{abcd\}} \ , \end{split}$$ with $ K_{\{abcd\}} =\frac{1}{3} (K_{ab}K_{cd}+K_{ac}K_{bd}+K_{ad}K_{bc} + \frac{1}{4} S^e_{ab} S_{cde} + \frac{1}{4} S^e_{ac} S_{bde} + \frac{1}{4} S^e_{ad} S_{bce})$. As usual we note $$T_a T_b =- g_{ab} \mathbb{1} + \frac{1}{2} C_{ab}^{c} T_{c}+ \frac{i}{2} S_{ab}^{c} T_{c} \ ,$$ where $g_{ab}=\frac{c_R}{d_R} \delta_{ab}$ , $S_{cab}=g_{cd} S_{ab}^{d}$ is completely symmetric and real, $C_{cab}=g_{cd} C_{ab}^{d}$ is completely antisymmetric and real, and, $$\begin{aligned} K_{a_1 \cdots \ a_n} = \frac{(-1)^{[\frac{n}{2}]}}{d_R}{{\mathrm{tr}}_{R}}(T_{a_1} \cdots T_{a_n}) \ .\end{aligned}$$ Explicit calculus for G=$SU(2)$ ------------------------------- We use the fundamental representation of $G=SU(2)$, with generators defined by $T_a=-\frac{i}{2}\sigma_{a}$. In order to simplify the calculus, we have rescaled the formula (\[module\]) replacing $\beta$ by $1/2$, so that it compensates the factor $1/2$ in the definition of $T_a$. It is useful to note that in the formula (\[module\]), the expression ${\det_{\mathcal{M}\otimes R}}(\mathbb{1} -i \beta^{-1} \hat{F})$ is a perfect square (as noticed already in ([@park:99])). As a matter of fact, one can multiply this determinant by $ 1 ={\det_{\mathcal{M}\otimes R}}( \mathbb{1} \otimes -i\sigma_{2}) $, to obtain $$\begin{aligned} {\det_{\mathcal{M}\otimes R}}\left(\mathbb{1} - 2 i \hat{F}\right) &= {\det_{\mathcal{M}\otimes R}}\left( \mathbb{1} \otimes (-i \sigma_2) + \hat{F}^a \otimes (i\sigma_2 \sigma_a)\right) \\ &= |g|^{-2} {\det_{\mathcal{M}\otimes R}}\left( g_{\mu \nu} \otimes (-i\sigma_2) + F^a_{\mu \nu} \otimes (i\sigma_2 \sigma_a)\right) \ .\end{aligned}$$ It is easily seen that the matrix in the last expression is antisymmetric, so its determinant is a perfect square. 
This implies that the highest power of $F$ in the expansion of $\exp(\frac{1}{2} {\mathrm{tr}}\log(1+ 2 i \hat{F}))$ is $4 $ ; therefore $$\begin{aligned} {\det_{\mathcal{M}\otimes R}}( \mathbb{1} + 2 i \hat{F}) &= \left( \exp\left({\frac{1}{2} {\mathrm{tr}}\log ( 1 + i \hat{F} )}\right) \right)^2\\ & = \left( 1 + {\frac{1}{2}{{\mathrm{tr}}_{\otimes}}\left(\frac{\hat{F}^{2}}{2}\right)} - i {\frac{1}{2}{{\mathrm{tr}}_{\otimes}}\left(\frac{\hat{F}^{3}}{3}\right)} - {\frac{1}{2}{{\mathrm{tr}}_{\otimes}}\left(\frac{\hat{F}^{4}}{4}\right)} + {\frac{1}{2 !}\left(\frac{1}{2} {{\mathrm{tr}}_{\otimes}}\left(\frac{\hat{F}^{2}}{2}\right) \right)^{2}} \right)^2 \\ &= \left( 1 + \frac{{t_{2}}}{4} - i \frac{{t_{3}}}{6} - \frac{{t_{4}}}{8} +\frac{{t_{2}}^2}{32} \right)^2 \ ,\end{aligned}$$ where ${t_{i}}= {{\mathrm{tr}}_{\otimes}}\left( \left(\hat{F} \right)^i\right)$.\ Using formula (\[module\]), we get $$\label{bina:su2} L= 1-\sqrt[4]{(1+2P-Q^2)^2 + (2K_3)^2} \ ,$$ where $$\left\{\begin{array}{lclcl} 2P & =& \frac{1}{4} {t_{2}} &=& (F^a,F_a)\\ Q^2 & = &\frac{1}{8}{t_{4}}-\frac{1}{32}{t_{2}}^2 &=& \frac{1}{4}(F^a,{\star F}^b) (F^c,{\star F}^d) K_{acbd}\\ K_3 & =& - \frac{1}{12}{t_{3}} &=& \frac{1}{6}\epsilon_{abc} {{\mathrm{tr}}_{\mathcal{M}}}(\hat{F}^a \hat{F}^b \hat{F}^c) \end{array} \right. \ .$$ It is also interesting to note that our Lagrangian depends exclusively on three invariants of $F$ (the third-order invariant entering via its square), although the determinant can lead to expressions up to the eighth order in $F$. In this particular case, there exist many relations between the traces, so that the complicated expressions can finally be simplified and expressed as functions of three invariants only, even though there are eight for a general $SU(2)$ Lagrangian (cf. ([@roskies:77], [@anandan:78])).
Spherically symmetric static configurations
===========================================
The magnetic ansatz and equations of motion
-------------------------------------------
Our aim now is to study static, spherically symmetric solutions of purely “magnetic” type. They are given by the so-called ’t Hooft-Polyakov ansatz ([@'thooft:74]): $$\begin{split} A & = \frac{1-k(r)}{2 } UdU^{-1} \ \ \text{with} \ U = e^{i \pi T_r} \\ &= (1-k(r)) [ T_r, dT_r] = (1-k(r)) \ (T_{\theta} \sin\theta d\varphi - T_{\varphi} d\theta) \\ &= \frac{1-k(r)}{ r^2} \ ( \vec{r} \wedge \vec{T} ) \cdot \vec{dx} \ , \end{split}$$ where the usual notation is used. When expressed in components, the same formula becomes: $$A^a_k = \frac{(1 - k(r))\,}{r^2} \ \epsilon^a{}_{km} \, x^{m} \, ,$$ where $$a,b,c... = 1,2,3 \, ; \, \ \ i,j,k...= 1,2,3 \, ; \, \ \ \epsilon^{a}{}_{km} = \epsilon^{aij} \, g_{ik} \, g_{jm} \, .$$ The notion of spherical symmetry for gauge potentials in Yang-Mills theory has been analyzed by P. Forgacs and N.S. Manton in ([@forgacs:80]); see also ([@bertrand:92]). The most general form for a spherically symmetric $SU(2)$ gauge potential is often called “the Witten ansatz” (cf [@witten:77]); an exhaustive discussion of its properties can be found in ([@volkov:98]). When this form of potential is chosen, there remains a residual $U(1)$ symmetry preserving the field, and the situation can be interpreted as an abelian gauge theory on two-dimensional de Sitter space, coupled to a complex scalar field $w$ with a Higgs-like potential. Then the problem is parametrized by four real functions $a_0$, $a_1$, $Re(w)$, and $Im(w)$ (we use the notation introduced in ([@volkov:98])).
Fixing the gauge enables one to set $a_1=0$. Next, one can eliminate $a_0$ if one restricts the solutions to the “magnetic” type only. In the static case, the remaining equations of motion possess a first integral (due to the residual global $U(1)$ symmetry). The condition that the energy must be finite at infinity forces it to vanish in this case. This means that we can choose the phase of the function $w$ at will, thus reducing the form of the potential to the one proposed by ’t Hooft in 1974 ([@'thooft:74]). Therefore, the only nonvanishing components of the curvature $F$ can be identified as the “magnetic” components of the Yang-Mills field: $$\begin{aligned} B^a_i &=\frac{1}{e r^2} \left[ \hat{r}_i \hat{r}^a (1-k^2) - r k' \, P^a_i \, \right] \ ,\end{aligned}$$ where $\hat{r}_i = \frac{x_i}{r}$ and $P^a_i=\delta^a_i - \hat{r}^a \hat{r}_i$ is the projection operator onto the subspace perpendicular to the radial direction. The only nonvanishing invariants of the field appearing in the Lagrangian density can now be expressed by means of the spherical variable $r$, one unknown real function $k(r)$, and its first derivative $k'(r)$: $$\begin{split} 2P &= \frac{1}{ r^4} \left[ (1-k^2)^2 + 2 (r k')^2 \right]\\ K_3 &=\frac{1}{ r^6} \left( (1-k^2) (r k')^2 \right)\\ Q^2 &= 0 \end{split}$$ Then the action takes on the following form: $$S =\int \, \left( 1 - \left\{\left( 1 + \frac{( 1-k^2)^2 + 2 ( r k')^2 }{r ^4} \right)^2 + \frac{4}{r^{12}} (1- k^2)^2 (r k')^4 \right\}^{1/4} \ \right) r^2 d r \ .$$ For the subsequent analysis, it is very useful to change the independent variable by introducing its logarithm $\tau = \log(r)$. Then the action can be expressed as follows: $$S =\int (1 - \sqrt[4]{A} ) \, e^{3 \tau} \, d \tau \ ,$$ where $$\left\{ \begin{array}{ll} A & =(1+a^2+2b)^2+4a^2 b^2 = (1+a^2)((1+2b)^2+a^2)\\ a & =(1-k^2) /r^2\\ b &=\dot{k}^2 /r^4 \end{array} \right.$$ Now the equation of motion can be written as: $$A_{k} + A_{\dot{k}} ( \, \frac{3}{4} \frac{\dot{A}}{A}-3 \, ) - \frac{d}{d \tau} A_{\dot{k}} = 0 \, ,$$ or equivalently, in a more standard form: $$\left\{ \begin{array}{ll} \dot{k} &= u \\ \dot{u} &= \gamma(k,u,\tau) u + k (k^2-1) \end{array} \right. \label{systeme_dynamique}$$ with $$\begin{split} \gamma (k,u,\tau) &= 1 - 2 \ \frac{ u^2 + 2u k (1- k^2) + (1- k^2 )^2}{ r^4 + (1- k^2)^2} \\ & + \ \frac{ 6 u (1- k^2)\left[ k u^2 + 2u (1-k^2) + k ( 1- k^2)^2 \right] \left[ r^4 + 2 u^2 + (1- k^2)^2\right]}{\left[ r^4 + (1- k^2)^2 \right]\left[ (r^4+2 u^2)^2 + (1-k^2)^2(r^4+6 u^2) \right]} \ . \end{split}$$ The coefficient $\gamma$, which plays the role of dynamic friction, is quite similar to the one found in ([@gal'tsov:00]) (except for a missing factor 2, due to a printing error). In the usual Yang-Mills theory with the same ansatz, the corresponding factor is just $\gamma_{YM} = 1$. The system (\[systeme\_dynamique\]) is not autonomous (i.e., some of the coefficients depend explicitly on the variable $\tau$), so that the qualitative analysis of solutions should be performed in an extended three-dimensional phase space $(\tau,k,u)$ (see for example ([@chernavsky:78])). Of course, one cannot expect to find true singular points, because the “time” variable $\tau$ never stands still. Instead, one can find asymptotic behaviors of function $k$ whose dominant terms for $\tau \rightarrow - \infty$ ($r \rightarrow 0$) or for $\tau \rightarrow \infty$ ($r \rightarrow \infty$) satisfy the equations of motion up to a required order, neglecting infinitely small terms.
However, for $r \rightarrow \infty$ there are two genuine fixed points $(k=1,u = 0)$ and $(k=-1,u = 0)$. Having found these asymptotic expansions, we then try to extend them from both sides so that they can meet and produce a regular solution valid for all values of $\tau$. Although our equations display asymptotic expansions analogous to those found in ([@donets:97], [@dyadichev:00], [@gal'tsov:00]), careful analysis shows that solutions of the Bartnik-McKinnon type ([@bartnik:88]) are excluded here. Asymptotic expansions --------------------- We have found two expansions in positive powers of $r$ which satisfy the equations of motion up to a certain finite order in $r$ near $r=0$. The first one depends on two free parameters $k_0$ and $a$, and starts with the following expressions: $$\begin{aligned} \label{dev:r=0:1} k= k_0+ a r - k_0 \left(\frac{ 5 a^2 }{6 g}+ \frac{g}{12 a^2}\right) r^2 + \frac{a^8 (52-70 g) - 9 a^4 g^3 + (g-1) g^4}{108 a^5 g^2} r^3+ O(r^4) \ ,\end{aligned}$$ where $g=1-k_0^2$, $a \neq 0$ and $ g \neq 0$. This expansion displays a certain similarity with the expressions found in ([@donets:97], [@dyadichev:00]), which depend on the same parameter $k_0$. The second one depends on only one free parameter $b$, and starts as follows: $$\begin{aligned} \label{dev:r=0:2} k=\pm \left( 1- b r^2 + \frac{3 b^2 + 92 b^4 + 608 b^6}{10+200 b^2 + 1600 b^4}\,r^4 + O(r^6) \right)\end{aligned}$$ Near $r=\infty$, the Taylor expansion can be made with respect to $r^{-1}$. It depends on one free parameter, denoted by $c$: $$\begin{aligned} \label{dev:inf} k=\pm \left( 1- \frac{c}{r} + \frac{3 c^2}{4 r^2} + O(\frac{1}{r^3}) \right) \ .\end{aligned}$$ It is remarkable that the asymptotic behavior at $r = \infty$ is the same here (up to the order $O(r^{-7}$)) as the corresponding behavior of the spherically symmetric static ansatz in the usual Yang-Mills theory, which makes it easier to interpret the characteristic integrals as magnetic charge, energy, etc. Taking these expansions as the first approximation either at $r=0$ or at $r = \infty$, we then use standard techniques in order to generate solutions valid everywhere. It is interesting to note that, when we started from infinity, no fine-tuning was necessary, and an arbitrarily fixed constant $c$ would lead to a solution which, when extrapolated to $r=0$, would define a particular pair of values of constants $k_0$ and $a$. On the contrary, starting from $r=0$, arbitrarily chosen values of $k_0$ and $a$ would not necessarily lead to good extrapolation at $r= \infty$. We shall discuss the properties of numerical solutions so obtained in the following subsection. Numerical solutions ------------------- The search for numerical solutions was based on the same method as in ([@donets:97], [@dyadichev:00], [@gal'tsov:00]). With the expansions (\[dev:r=0:1\]) and (\[dev:inf\]), we evaluate the initial conditions used as starting point for the numerical integration of equation (\[systeme\_dynamique\]). The three parameters occurring in the asymptotic expansions (two at $r=0$ and one at $r= \infty$) are interrelated by two constraint equalities, therefore the solutions can be labeled by only one real parameter. We chose to index the solutions with the parameter $c$ of (\[dev:inf\]), with $c>0$, or its logarithm $\tau_c=\log(c)$. 
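For readers who wish to reproduce this procedure, a minimal numerical sketch is given below. It integrates the system (\[systeme\_dynamique\]) in the variable $\tau=\log r$, starting from the expansion (\[dev:inf\]) at a large radius and marching inwards; the value of $\tau_c$, the radii and the tolerances are illustrative choices of ours and are not tuned to reproduce the published figures exactly.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(tau, z):
    # System (systeme_dynamique) in tau = log(r):  k' = u,  u' = gamma(k,u,tau)*u + k*(k^2 - 1)
    k, u = z
    r4 = np.exp(4.0 * tau)
    g = 1.0 - k**2
    num1 = u**2 + 2.0 * u * k * g + g**2
    den1 = r4 + g**2
    num2 = 6.0 * u * g * (k * u**2 + 2.0 * u * g + k * g**2) * (r4 + 2.0 * u**2 + g**2)
    den2 = den1 * ((r4 + 2.0 * u**2)**2 + g**2 * (r4 + 6.0 * u**2))
    gamma = 1.0 - 2.0 * num1 / den1 + num2 / den2
    return [u, gamma * u + k * (k**2 - 1.0)]

# Initial data from (dev:inf): k = 1 - c/r + 3c^2/(4r^2), and u = r dk/dr = c/r - 3c^2/(2r^2)
tau_c = 1.2                         # one of the values plotted in the next subsection
c, r_max, r_min = np.exp(tau_c), 1.0e4, 1.0e-4
k0 = 1.0 - c / r_max + 0.75 * c**2 / r_max**2
u0 = c / r_max - 1.5 * c**2 / r_max**2
sol = solve_ivp(rhs, [np.log(r_max), np.log(r_min)], [k0, u0],
                rtol=1e-10, atol=1e-12, dense_output=True)
print("k_0 estimate at r -> 0:", sol.y[0, -1])
```

Starting from infinity in this way, each value of $c$ produces a solution whose extrapolation towards $r=0$ yields the corresponding pair $(k_0, a)$, in line with the remark above.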
As in the Bartnik-McKinnon case, we can assign to each solution an integer $n$, with $n-1$ denoting the number of zeros of the function $u$ or the winding number of the corresponding trajectory in the phase plane $(k,u)$, as seen in Fig. \[some\_plots\], where a few solutions are plotted. When the parameter $\tau_c$ goes from $- \infty$ to $+ \infty$, we observe that this integer $n$ grows from $1$ to $\infty$. At certain special values of the parameter $\tau_c$, this integer increases by $1$. Here are the first critical values of $\tau_c$:

  $\tau_c$   1.658   4.781   7.510   10.092   13.218   16.530   19.813
  ---------- ------- ------- ------- -------- -------- -------- --------

![Plots of solutions for the parameters $\tau_c=-3, 1.2, 4, 7, 10$.[]{data-label="some_plots"}](SomePlots_k.eps "fig:"){width="90.00000%"}
![Plots of solutions for the parameters $\tau_c=-3, 1.2, 4, 7, 10$.[]{data-label="some_plots"}](SomePlots_ku.eps "fig:"){width="90.00000%"}

The two graphs in Fig. \[some\_plots\] should be combined in order to give a correct representation of the trajectories as they appear in the extended three-dimensional phase space including the variable $\tau = \log r$. The graph on the left represents the cut $k, \tau$, and the graph on the right represents the cut $k,u$, i.e., the usual phase space of the function $k(r)$ and its first derivative $u = \dot{k}$. One can see some trajectories on the plane $k,u$ with various winding numbers. Our solutions do not interpolate between the two singular points at $k=1$ and $k= -1$, but between the singular point at $k=1$ for $r = \infty$ and a certain value $k_0$ (related to $\tau_c$) which is always lower than $1$ and bigger than $-1$ (as a matter of fact $k_0=0$ is a solution). This is radically different from the sphaleronlike solutions or solutions of Bartnik-McKinnon type found in ([@bartnik:88], [@gal'tsov:00]). The two parameters $k_0$ and $a$ of (\[dev:r=0:1\]) are functions of the parameter $\tau_c$. We have evaluated the energy $E$ of the solutions and the values of the parameter $k_0$ for $\tau_c$ varying from $-10$ to $20$. The energy $E$ is represented as a function of the parameter $\tau_c$ in Fig. \[fig:E\]. This figure represents two enlargements of the upper graph with the precision of $10^{-2}$ in order to show local minima of the energy curve. The energy minima of each class of solutions are found near the critical values of the parameter $\tau_c$, and as far as we can judge, given the precision of the numerical calculations employed, coincide with their positions on the $\tau_c$-axis. Supposing that the solutions attaining local minima of energy are stable, we conjecture that these most stable solutions can be grouped in couples, with winding numbers $n$ and $n+1$, starting with the couple $n=1, n=2$. The energies converge to the limit $E_{\tau_c=\infty}= E_{n=\infty} = 1.23605...$, which coincides with the energy of the pointlike magnetic Born-Infeld monopole computed in ([@gal'tsov:00]).
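The limiting value quoted above can be checked directly against the abelian point-charge energy integral recalled in the first section. A quick numerical evaluation (with the charge and $\beta$ set to one, which is our choice of units for this check) recovers the same constant:

```python
import numpy as np
from scipy.integrate import quad

# E = int_0^inf ( sqrt(1 + r^4) - r^2 ) dr, the abelian Born-Infeld point-charge energy
val, err = quad(lambda r: np.sqrt(1.0 + r**4) - r**2, 0.0, np.inf)
print(val)   # ~ 1.23605, matching the quoted limit E_{n = infinity}
```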
![Energy as function of the parameter $ \tau_c$, with local minima visible (the magnification is 100 times higher for the second minimum display).[]{data-label="fig:E"}](E.eps "fig:"){width="90.00000%"}
![Energy as function of the parameter $ \tau_c$, with local minima visible (the magnification is 100 times higher for the second minimum display).[]{data-label="fig:E"}](zE.eps "fig:"){width="40.00000%"}
![Energy as function of the parameter $ \tau_c$, with local minima visible (the magnification is 100 times higher for the second minimum display).[]{data-label="fig:E"}](zE2.eps "fig:"){width="40.00000%"}

The last two graphs in Fig. \[fig:ko\] show the specific features of the dependence of the parameter $k_0$ (the initial value of function $k$ at $r=0$) with respect to $\tau_c$. The dependence is smooth only between the critical values of parameter $\tau_c$, at which the change of winding number $n$ occurs, as can be viewed in the second graph where the second derivative of $k_0$ with respect to $\tau_c$ is plotted.

![$k_0$, function of $\tau_c$, and its second derivative. The singularities of second derivative $d^2k_0/d\tau_c^2$ occur at values of $\tau_c$ which coincide with the change of winding number $n$. []{data-label="fig:ko"}](Ko.eps "fig:"){width="90.00000%"}
![$k_0$, function of $\tau_c$, and its second derivative. The singularities of second derivative $d^2k_0/d\tau_c^2$ occur at values of $\tau_c$ which coincide with the change of winding number $n$. []{data-label="fig:ko"}](ddKo.eps "fig:"){width="90.00000%"}

It is important to notice that our version of generalized non-abelian Born-Infeld theory is quite different from the symmetrized trace prescription. Nevertheless, the nonpolynomial character of the Lagrangian, common to all generalizations, still ensures a very rich spectrum of solutions, although very different and specific to the choice of the Lagrangian. All our solutions tend to the genuine vacuum configuration at $r \rightarrow \infty$, but their behavior near the origin $r = 0$ is very different from the sphaleronlike solutions. At the origin, our solutions look like monopole configurations whose magnetic charge has been renormalized, as suggested in ([@donets:97]), where the constant $1- k_0^2$ is also integrated in this manner.
Acknowledgments {#acknowledgments .unnumbered}
===============
We wish to express our thanks to M. Dubois-Violette, D.V. Gal’tsov, Y. Georgelin and C. Schmit for many enlightening comments.
--- abstract: 'The Information Bottleneck (IB) is a conceptual method for extracting the most compact, yet informative, representation of a set of variables, with respect to the target. It generalizes the notion of minimal sufficient statistics from classical parametric statistics to a broader information-theoretic sense. The IB curve defines the optimal trade-off between representation complexity and its predictive power. Specifically, it is achieved by minimizing the level of mutual information (MI) between the representation and the original variables, subject to a minimal level of MI between the representation and the target. This problem is shown to be in general NP hard. One important exception is the multivariate Gaussian case, for which the Gaussian IB (GIB) is known to obtain an analytical closed form solution, similar to Canonical Correlation Analysis (CCA). In this work we introduce a Gaussian lower bound to the IB curve; we find an embedding of the data which maximizes its “Gaussian part", on which we apply the GIB. This embedding provides an efficient (and practical) representation of any arbitrary data-set (in the IB sense), which in addition holds the favorable properties of a Gaussian distribution. Importantly, we show that the optimal Gaussian embedding is bounded from above by non-linear CCA. This allows a fundamental limit for our ability to Gaussianize arbitrary data-sets and solve complex problems by linear methods.' author: - | Amichai Painsky [email protected] Naftali Tishby [email protected]\ School of Computer Science and Engineering and\ The Interdisciplinary Center for Neural Computation\ The Hebrew University of Jerusalem\ Givat Ram, Jerusalem 91904, Israel bibliography: - 'sigproc.bib' title: Gaussian Lower Bound for the Information Bottleneck Limit --- Information Bottleneck, Canonical Correlations, ACE, Gaussianization, Mutual Information Maximization, Infomax Introduction {#intro} ============ The problem of extracting the relevant aspects of complex data is a long standing staple in statistics and machine learning. The Information Bottleneck (IB) method, presented by [@tishby1999information], approaches this problem by extending its classical notion to a broader information-theoretic setup. Specifically, given the joint distribution of a set of explanatory variables $\underline{X}$ and a target variable $\underline{Y}$ (which may also be of a higher dimension), the IB method strives to find the most compressed representation of $\underline{X}$, while preserving information about $\underline{Y}$. Thus, $\underline{Y}$ implicitly regulates the compression of $\underline{X}$, so that its compressed representation maintains a level of relevance as explanatory variables with regards to $\underline{Y}$. The IB problem is formally defined as follows: $$\label{IB problem} \begin{aligned} & {\min_{P(\underline{T}|\underline{X})}} & & I(\underline{X};\underline{T}) \\ & \text{subject to} & & I(\underline{T};\underline{Y}) \geq I_Y \end{aligned}$$ where $\underline{T}$ is the compressed representation of $\underline{X}$ and the minimization is over the mapping of $\underline{X}$ to $\underline{T}$, defined by the conditional probability $P(\underline{T}|\underline{X})$. Here, $I_Y$ is a constant parameter that sets the level of information to be preserved between the compressed representation and the target. 
Solving this problem for a range of $I_Y$ values defines the *IB curve* – a continuous concave curve which demonstrates the optimal trade-off between representation complexity (regarded as $I(\underline{X};\underline{T})$) and predictive power ($I(\underline{T};\underline{Y})$). The IB method has proved to be a powerful tool in a variety of machine learning domains and related areas [@slonim2000document; @friedman2001multivariate; @sinkkonen2002clustering; @slonim2005information; @hecht2009speaker]. It is also applicable to other fields such as neuroscience [@schneidman2001analyzing] and optimal control [@tishby2011information]. Recently, [@tishby2015deep] and [@shwartz2017opening] demonstrated its abilities in analyzing and optimizing the performance of deep neural networks. Generally speaking, solving the IB problem (\[IB problem\]) for an arbitrary joint distribution is not a simple task. In the introduction of the IB method, [@tishby1999information] defined a set of self-consistent equations which formulate the necessary conditions for the optimal solution of (\[IB problem\]). Further, they provide an iterative Arimoto–Blahut-like algorithm which is shown to converge to a local optimum. In general, these equations do not admit a tractable solution and are usually approximated by different means [@slonim2002information]. Extensive attention has been given to the simpler categorical setup, where the IB curve is somewhat easier to approximate. Here, $\underline{X}$ and $\underline{Y}$ take values on a finite set and $\underline{T}$ represents (soft and informative) clusters of $\underline{X}$ (REF). Naturally, the IB problem also applies to continuous variables. In this case, approximating the solution to the self-consistent equations is even more involved. A special exception is the Gaussian case, where $\underline{X}$ and $\underline{Y}$ are assumed to follow a jointly normal distribution and the *Gaussian IB* problem (GIB) is analytically solved by linear projections to the canonical correlation vector space [@chechik2005information]. However, evaluating the IB curve for arbitrary continuous random variables is still considered a highly complicated task where most attempts focus on approximating or bounding it [@rey2012meta; @chalk2016relevant]. A detailed discussion regarding currently known methods is provided in the following section. In this work we present a novel Gaussian lower bound to the IB curve, which applies to all types of random variables (continuous, nominal and categorical). Our bound strives to maximize the “jointly Gaussian part" of the data and apply the analytical GIB to it. Specifically, we seek two transformations, $\underline{U}=\phi(\underline{X})$ and $\underline{V}=\psi(\underline{Y})$ so that $\underline{U}$ and $\underline{V}$ are highly correlated and “as jointly Gaussian as possible". In addition, we ask that the transformations preserve as much information as possible between $\underline{X}$ and $\underline{Y}$. This way, we maximize the portion of the data that can be explained by linear means, $I(\underline{U};\underline{V})\leq I(\underline{X};\underline{Y})$, specifically using the GIB. In fact, our results go beyond the specific context of the information bottleneck. In this work we tackle the fundamental question of linearizing non-linear problems. Specifically, we ask ourselves whether it is possible to “push" all the information in the data to its second moments. This problem has received a great amount of attention over the years.
For example, [@schneidman2006weak] discuss this problem in the context of neural networks; they provide preliminary evidence that in the vertebrate retina, weak pairwise correlations may describe the collective (non-linear) behavior of neurons. In this work, we provide both fundamental limits and constructive algorithms for maximizing the part of the data that can be optimally analyzed by linear means. This basic property holds both theoretical and practical implications, as it defines the maximal portion which allows favorable analytical properties in many applications. Interestingly, we show that even if we allow the transformations $\underline{U}=\phi(\underline{X})$ and $\underline{V}=\psi(\underline{Y})$ to increase the dimensions of $\underline{X}$ and $\underline{Y}$, our ability to linearize the problem is still limited, and governed by the non-linear canonical correlations [@breiman1985estimating] of the original variables. Our suggested approach may also be viewed as an extension of the *Shannon lower bound* [@cover2012elements], for evaluating the mutual information. In his seminal work, Shannon provided an analytical Gaussian lower bound for the generally involved rate distortion function. He showed that the rate distortion function $R(D)$ can be bounded from below by $h(X)-\frac{1}{2}\log(2\pi e D)$ where $X$ is the compressed source, $h(X)$ is its corresponding deferential entropy and $\frac{1}{2}\log(2\pi e D)$ is the differential entropy of an independent Gaussian noise with a maximal distortion level $D$. This bound holds some favorable theoretical properties [@cover2012elements] and serves as one of the most basic tools for approximating the rate distortion function to this very day. In this work, we use a somewhat similar rationale and derive a Gaussian lower bound for the mutual information of two random variables, which holds an analytical expression just like the Shannon’s bound. We then extend our result to the entire IB curve and discuss its theoretical properties and practical considerations. The rest of this manuscript is organized as follows: In Section \[Related work\] we review previous work on the IB method for continuous random variables. Section \[Problem Formulation\] defines our suggested lower bound and formulates it as an optimization problem. We then propose a set of solutions and bounds to this problem, as we distinguish between the easier univariate case (Section \[The 1-D case\]) and the more involved multivariate case (Section \[multivariate case\]). Finally, in Section \[Gaussian lower bound for the Information Bottleneck Curve\] we extend our results to the entire IB curve. Related work {#Related work} ============ As discussed in the previous section, solving the IB problem for continuous variables is in general a difficult task. A special exception is where $\underline{X}$ and $\underline{Y}$ follow a jointly normal distribution. [@chechik2005information] show that in this case, the Gaussian IB problem (GIB) is solved by a noisy linear projection, $T=A\underline{X} + \underline{\zeta}$. Specifically, assume that $\underline{X}$ and $\underline{Y}$ are of dimensions $n_X$ and $n_Y$ respectively and denote the covariance matrix of $\underline{X}$ as $C_{\underline{X}}$ while the conditional covariance matrix of $\underline{X}|\underline{Y}$ is $C_{\underline{X}|\underline{Y}}$. Then, $\underline{\zeta}$ is a Gaussian random vector with a zero mean and a unit covariance matrix, independent of $\underline{X}$. 
The matrix $A$ is defined as follows: $$A=\left\{ \begin{array}{cc} [0^T; \dots; 0^T] & 0 \leq \beta \leq \beta_1^C\\ {}[a_1v_1^T ; 0^T; \dots ; 0^T] & \beta_1^C \leq \beta \leq \beta_2^C\\ {}[a_1v_1^T ; a_2v_2^T; 0^T; \dots ; 0^T] & \beta_2^C \leq \beta \leq \beta_3^C\\ \vdots & \vdots \end{array}\right\} .$$ where $\{v_1^T,v_2^T,\dots,v_{n_x}^T \}$ are the left eigenvectors of $C_{\underline{X}|\underline{Y}} C_{\underline{X}}^{-1}$, sorted by their corresponding ascending eigenvalues $\lambda_1, \dots , \lambda_{n_X}$, $\beta_i^C=\frac{1}{1-\lambda_i}$ are the critical $\beta$ values, the coefficients $a_i$ are defined by $a_i=\sqrt{\frac{\beta(1-\lambda_i)-1}{\lambda_i r_i}}$, where $r_i=v_i^T C_{\underline{X}} v_i$, and $0^T$ is an $n_X$-dimensional row vector of zeros. Notice that the critical values $\beta_i^C$ correspond to the slope of the IB curve, as they represent the Lagrange multipliers of the IB problem. Unfortunately, this solution is limited to jointly Gaussian random variables. In fact, it can be shown that a closed form analytical solution (for continuous random variables) may only exist under quite restrictive assumptions on the underlying distribution. Moreover, as the IB curve is so challenging to evaluate in the general case, most known attempts either focus on extending the GIB to other distributions under varying assumptions, or approximate the IB curve by different means. [@rey2012meta] reformulate the IB problem in terms of probabilistic copulas. They show that under a Gaussian copula assumption, an analytical solution (which extends the GIB) applies to joint distributions with arbitrary marginals. This formulation provides several interesting insights on the IB problem. However, its practical implications are quite limited as the Gaussian copula assumption is very restrictive. In fact, it implicitly requires that the joint distribution maintain a Gaussian structure. As we show in the following sections, this assumption makes the problem significantly easier and does not hold in general. [@chalk2016relevant] provide a lower bound to the IB curve by using an approximate variational scheme, analogous to variational expectation maximization. Their method relaxes the IB problem by restricting the class of distributions, $P(\underline{Y}|\underline{T})$ and $P(\underline{T})$, to a set of parametric models. This way, the relaxed IB problem may be solved in EM-like steps; their suggested algorithm iteratively maximizes the objective over the mappings (for fixed parameters) and then maximizes the set of parameters, for fixed mappings. [@chalk2016relevant] show that this method can be effectively applied to “sparse" data in which $\underline{X}$ and $\underline{Y}$ are generated by sparsely occurring latent features. However, in the general case, their suggested bound strongly depends on the assumption that the chosen parametric models provide reasonable approximations for the optimal distributions. This assumption is obviously quite restrictive. Moreover, it is usually difficult to validate, as the optimal distributions are unknown. [@kolchinsky2017nonlinear] take a somewhat similar approach, as they suggest a variational upper bound to the IB curve. The main difference between the two methods lies in the variational approximation of the objective, $I(\underline{X};\underline{Y})$. However, they are both prone to the same difficulties stated above. [@alemi2016deep] propose an additional variational inference method to construct a lower bound to the IB curve.
Here, they re-parameterize the IB problem followed by Monte Carlo sampling, to get an unbiased estimate of the IB objective gradient. This allows them to apply deep neural networks in order to parameterize any given distribution. However, this method fails to provide guarantees on the obtained bound, as a result of the suggested stochastic gradient descent optimization approach. [@achille2016information] relax the bottleneck problem by introducing an additional *total correlation* (TC) regularization term that strives to maximize the independence among the components of the representation $T$. They show that under the assumption that the Lagrange multipliers of the TC and MI constraints are identical, the relaxed problem may be solved by adding auxiliary variables. However, this assumption is usually invalid, and the suggested method fails to provide guarantees on the difference between the obtained objective and the original IB formulation. In this work we suggest a novel lower bound to the IB curve which provides both theoretical and practical guarantees. In addition, we introduce upper and lower bounds for our suggested solution that are very easy to attain. This way we allow immediate benchmarks to the IB curve using common off-the-shelf methods. Problem formulation {#Problem Formulation} =================== Throughout this manuscript we use the following standard notation: underlines denote vector quantities, where their respective components are written without underlines but with an index. For example, the components of the $n$-dimensional vector $\underline{X}$ are $X_1, X_2, \dots X_n$. Random variables are denoted with capital letters while their realizations are denoted with the respective lower-case letters. The mutual information of two random variables is defined as $I(\underline{X};\underline{Y})=h(\underline{X})+h(\underline{Y})-h(\underline{X},\underline{Y})$ where $h(\underline{X})=-\int_{\underline{X}} f_{\underline{X}}(\underline{x})\log f_{\underline{X}}(\underline{x}) d\underline{x}$ is the differential entropy of $\underline{X}$ and $f_{\underline{X}}(\underline{x})$ is its probability density function. We begin by introducing a Gaussian lower bound to the mutual information $I(\underline{X};\underline{Y})$. We then extend our result to the entire IB curve. Problem statement ----------------- Let $\underline{X}\in \mathbb{R}^{d_x}, \underline{Y}\in \mathbb{R}^{d_y}$ be two multivariate random vectors with a joint cumulative distribution function (CDF) $F_{\underline{X},\underline{Y}}(\underline{x},\underline{y})$ and mutual information $I(\underline{X}, \underline{Y})$. In the following sections we focus on bounding $I(\underline{X}, \underline{Y})$ from below with an analytical expression. Let $\underline{U}=\phi(\underline{X})$ and $\underline{V}=\psi(\underline{Y})$ be two transformations of $\underline{X}$ and $\underline{Y}$, respectively. Assume that $\underline{U}$ and $\underline{V}$ are *separately normally distributed*. This means that $\underline{U} \sim N\left(\mu_{\underline{U}},C_{\underline{U}}\right)$ and $\underline{V} \sim N\left(\mu_{\underline{V}},C_{\underline{V}}\right)$ but the vector $[\underline{U}, \underline{V}]^T$ is not necessarily normally distributed.
This allows us to derive the following fundamental inequality $$\begin{aligned} \label{basic_inequality} I(\underline{X}, \underline{Y}) \geq &I(\underline{U}, \underline{V})=h(\underline{U})+h(\underline{V})-h(\underline{U},\underline{V}) \geq \\\nonumber &h(\underline{U})+h(\underline{V})-h(\underline{U}_{jg},\underline{V}_{jg}) = \frac{1}{2} \log \left(\frac{|C_{\underline{U}}||C_{\underline{V}}|}{\left|C_{[\underline{U},\underline{V}] }\right|} \right)\end{aligned}$$ where the first inequality follows from the Data Processing lemma [@cover2012elements] and the second inequality follows from $\left[\underline{U}_{jg},\underline{V}_{jg}\right]^T$ being jointly Gaussian (jg) distributed with the same covariance matrix as $\left[\underline{U},\underline{V}\right]^T$ , $C_{[\underline{U}_{jg},\underline{V}_{jg}]}=C_{[\underline{U},\underline{V}]}$, so that $h(\underline{U}_{jg},\underline{V}_{jg})\geq h(\underline{U},\underline{V})$ [@cover2012elements]. Notice that (\[basic\_inequality\]) can also be derived from an *information geometry* (IG) view point, as shown by [@cardoso2003dependence]. Equality is attained in ($\ref{basic_inequality}$) iff $I(\underline{X}, \underline{Y}) = I(\underline{U}, \underline{V})$ (no information is lost in the transformation) and $\underline{U}=\phi(\underline{X})$, $\underline{V}=\psi(\underline{Y})$ are jointly normally distributed. In other words, in order to preserve all the information we must find $\phi$ and $\psi$ that capture all the mutual information, and at the same time make $\underline{U}$ and $\underline{V}$ jointly normal. This is obviously a complicated task as $\phi$ and $\psi$ only operate on $\underline{X}$ and $\underline{Y}$ separately. Therefore, we are interested in maximizing this lower bound as much as possible: $$\label{basic problem} \begin{aligned} & {\max_{\phi,\psi}} & & \log \left(\frac{|C_{\underline{U}}||C_{\underline{V}}|}{\left|C_{[\underline{U},\underline{V}] }\right|} \right) \\ & \text{subject to} & & \underline{U}=\phi(\underline{X}) \sim N\left(0,C_{\underline{U}}\right) \\ & &&\underline{V}=\psi(\underline{Y}) \sim N\left(0,C_{\underline{V}}\right) \end{aligned}$$ In other words, we would like to maximize the IG bound of [@cardoso2003dependence] by applying two transformations, $\phi$ and $\psi$, to the original variables. This would allow us to achieve a tighter result. Notice that our objective is invariant to the means of $\underline{U}, \underline{V}$ so they are chosen to be zero. In addition, it is easy to show that our objective is invariant to linear scaling of $\underline{U}, \underline{V}$. This means we can equivalently assume that $C_{\underline{U}}, C_{\underline{V}}$ are identity covariance matrices. As shown by [@kay1992feature] and others [@klami2005non; @chechik2005information], maximizing the objective of (\[basic problem\]) is equivalent to maximizing the canonical correlations, $\text{cov}(\underline{U}_i,\underline{V}_i)$. Therefore, our problem is closely related to the classical Canonical Correlation Analysis (CCA) problem, which we briefly review. Specifically, let $W_x$ and $W_y$ be linear transformations, applied to the vectors $\underline{X}$ and $\underline{Y}$ respectively. Then, CCA strives to maximize the sum of Pearson correlations between the components of $W_x\underline{X}$ and $W_y\underline{Y}$.
This means that the first canonical variables may be found by solving $$\begin{aligned} &\max_{w_x, w_y} & & E(w_x^T \underline{X} \underline{Y}^T w_y)\\ & \text{subject to} & & w_x^T C_{\underline{X}} w_x = 1\\ & && w_y^T C_{\underline{Y}} w_y = 1\\ \end{aligned}$$ where $w_x, w_y$ are the first columns of $W_x, W_y$ respectively. Then, the second canonical variables are found by solving the same optimization problem, subject to the constraint that they are to be uncorrelated with the first pair of canonical variables. Going back to our problem, we notice that for every $\phi,\psi$ that satisfy the normality constraints, we may always further increase (at least not decrease) our objective by further applying CCA. In other words, we may always find $\phi,\psi$ that achieve some value for our objective while satisfying the normality constraint, and then apply a CCA to further increase the objective while maintaining normality (as the CCA is a linear transformation). This leads to the following problem, equivalent to (\[basic problem\]), $$\begin{aligned} & {\max_{\phi,\psi, W_x, W_y}} & & \sum_{i=1}^k E\left(\underline{U}_{i} \underline{V} _{i}\right) \\ & \text{subject to} & & \underline{U}=W_x\phi(\underline{X}) \sim N\left(0,I\right) \\ & &&\underline{V}=W_y\psi(\underline{Y}) \sim N\left(0,I\right)\\ \end{aligned}$$ Since we can compose $W_x$ and $W_y$ into $\phi$ and $\psi$ respectively while maintaining both the objective and the constraints, our problem can be written more compactly as $$\label{multivariate_problem} \begin{aligned} & {\max_{\phi,\psi}} & & \sum_{i=1}^{k} E\left(\underline{U}_{i} \underline{V} _{i}\right) \\ & \text{subject to} & & \underline{U}=\phi(\underline{X}) \sim N\left(0,I\right) \\ & &&\underline{V}=\psi(\underline{Y}) \sim N\left(0,I\right)\\ \end{aligned}$$ where $k=\min\{k_x,k_y\}$. This problem may also be viewed as a variant of the well-known CCA problem [@hotelling1936relations], where we optimize over nonlinear transformations $\phi$ and $\psi$, and impose additional normality constraints. As in CCA, this problem can be solved iteratively by gradually finding the optimal canonical components in each step (subject to the normality constraint), while maintaining orthogonality with the components that were previously found. For simplicity of the presentation we begin by solving (\[multivariate\_problem\]) in the univariate ($1$-D) case. Then, we generalize to the multivariate case. In each of these setups we present a solution to the problem, followed by simpler upper and lower bounds. The univariate case {#The 1-D case} =================== In the univariate case we assume that $d_x=d_y=k=1$. We would like to find $\phi,\psi$ such that $$\begin{aligned} \label{1-D problem} & {\max_{\phi,\psi}} & & \rho=E(UV) \\ & \text{subject to} & & U=\phi(X) \sim N(0,1) \\ & && V=\psi(Y)\sim N(0,1)\\ \end{aligned}$$ As a first step towards this goal, let us relax our problem by replacing the normality constraint with simpler second order statistics constraints, $$\label{ACE problem} \begin{aligned} & {\max_{\phi,\psi}} & & \rho=E(UV) \\ & \text{subject to} & & U=\phi(X),\; E(U)=0, \;E(U^2)=1 \\ & && V=\psi(Y), \;E(V)=0, \;E(V^2)=1\\ \end{aligned}$$ As mentioned above, this problem is a non-linear extension of CCA, which traces back to early work by [@lancaster1963correlations]. As this problem is also a relaxed version of our original task (\[1-D problem\]), it may serve us as an upper bound.
This means that the optimum of (\[ACE problem\]), denoted as $\rho_{ub}$, necessarily bounds from above $\rho_*$, the optimum of (\[1-D problem\]). Alternating Conditional Expectation (ACE) {#ACE\_section} ----------------------------------------- [@breiman1985estimating] show that the optimal solution to (\[ACE problem\]) is achieved by a simple alternating conditional expectation procedure, named ACE. Assume that $\psi(Y)$ is fixed, known and satisfies the constraints. Then, we optimize (\[ACE problem\]) only over $\phi$ and by the Cauchy–Schwarz inequality, we have that $$E(\phi(X)\psi(Y))=E_x \left(\phi(X)E(\psi(Y)|X)\right) \leq \sqrt{\text{var}(\phi(X))} \sqrt{\text{var}(E(\psi(Y)|X))}$$ with equality iff $\phi(X)=c\cdot E(\psi(Y)|X)$. Therefore, choosing the constant $c$ to satisfy the unit variance constraint we achieve $\phi(X)=\frac{E(\psi(Y)|X)}{\sqrt{var(E(\psi(Y)|X))}}$. In the same manner we may fix $\phi(X)$ and attain $\psi(Y)=\frac{E(\phi(X)|Y)}{\sqrt{var(E(\phi(X)|Y))}}$. These coupled equations are in fact necessary conditions for the optimality of $\phi$ and $\psi$, leading to an alternating procedure in which at each step we fix one transformation and optimize the other. [@breiman1985estimating] prove that this procedure converges to the global optimum using Hilbert space algebra. They show that the transformations $\phi$ and $\psi$ may be represented in a zero-mean and finite variance Hilbert space, while the conditional expectation projection is linear, closed, and shown to be self-adjoint and compact under mild assumptions. Then, the coupled equations may be formulated as an eigenproblem in the Hilbert space, for which there exists a unique and optimal solution. The following lemma defines a strict connection between the non-linear canonical correlations and the Gaussianized IB problem. \[negative\_lemma\] Let $\rho_{ub}$ be the solution to (\[ACE problem\]). If $I(X;Y)>-\frac{1}{2}\log\left(1-\rho_{ub}^2\right)$, then there are no transformations $\phi, \psi$ such that $U=\phi(X)$ and $V=\psi(Y)$ are jointly normally distributed and preserve all of the mutual information, $I(X;Y)$. Let $\rho_{*}$ be the solution to (\[1-D problem\]). As mentioned above, $\rho_{ub}\geq \rho_{*}$. Therefore, $I(X;Y)>-\frac{1}{2}\log\left(1-\rho_{ub}^2\right)\geq-\frac{1}{2}\log\left(1-\rho_{*}^2\right)$. This means that the inequality (\[basic\_inequality\]) cannot be achieved with equality. Hence, there are no transformations $U=\phi(X)$ and $V=\psi(Y)$ so that $U$ and $V$ are jointly normal and preserve all of the mutual information, $I(X;Y)$. Lemma \[negative\_lemma\] suggests that if the optimal transformations of the relaxed problem (which can be obtained by ACE) fail to capture all the mutual information between $X$ and $Y$, then there are no transformations that can project $X$ and $Y$ onto jointly normal variables without losing information. Moreover, notice that the maximal level of correlation $\rho_{ub}$ cannot be further increased, even if we allow $\underline{U}=\phi(X)$ and $\underline{V}=\psi(Y)$ to reside in higher dimensional spaces. This means that Lemma \[negative\_lemma\] holds for any $\phi:R\rightarrow R^{k_u}$ and $\psi:R\rightarrow R^{k_v}$, such that $k_u,k_v \geq 0$. Alternating Gaussianized Conditional Expectations (AGCE) {#AGCE} ------------------------------------------------------- Let us go back to our original problem, which strives to maximize the correlation between $U$ and $V$, subject to marginal normality constraints (\[1-D problem\]).
Here we follow [@breiman1985estimating], and suggest an alternating optimization procedure. Let us fix $\psi(Y)$ and optimize (\[1-D problem\]) with respect to $\phi(X)$. As before, we can write the correlation objective as $E(\phi(X)\psi(Y))=E_x \left(\phi(X)E(\psi(Y)|X)\right)$. Since $E(\phi(X)^2)$ is constrained to be equal to $1$, while $E\left(E(\psi(Y)|X)^2\right)$ is fixed, maximizing $E_x \left(\phi(X)E(\psi(Y)|X)\right)$ is equivalent to minimizing $E_x \left(\phi(X)-E(\psi(Y)|X)\right)^2$. For simplicity, denote $\bar{X} \equiv E(\psi(Y)|X)$. Then, our optimization problem can be reformulated as $$\label{AGCE problem} \begin{aligned} & {\min_{\phi}} & & E \left(\phi(\bar{X})-\bar{X}\right)^2 \\ & \text{subject to} & & \bar{X} \sim F_{\bar{X}}\\ & && \phi(\bar{X}) \sim N(0,1) \\ \end{aligned}$$ where $F_{\bar{X}}$ is the (fixed) CDF of $\bar{X} \equiv E(\psi(Y)|X)$. Notice that $\phi$ is necessarily a function of $\bar{X}$ alone (as opposed to $X$), for simple optimization considerations. Assuming that $\bar{X}$ and $U=\phi(\bar{X})$ take values in two separable metric spaces such that any probability measure on $\bar{X}$ (or $U$) is a Radon measure (i.e. they are Radon spaces), then (\[AGCE problem\]) is simply an optimal transportation problem [@monge1781memoire] with a strictly convex cost function (the mean squared error). We refer to $\phi^*(\bar{X})$ that minimizes (\[AGCE problem\]) as the optimal map. The optimal transportation problem was presented by [@monge1781memoire] and has generated an important branch of mathematics. The problem originally studied by Monge was the following: assume we are given a pile of sand (in $\mathbb{R}^3$) and a hole that we have to completely fill up with that sand. Clearly the pile and the hole must have the same volume and different ways of moving the sand will give different costs of the operation. Monge wanted to minimize the cost of this operation. Formally, the optimal transportation problem is defined as $$\inf \left\{\int_{\bar{X}}c(\bar{X},\phi(\bar{X}))d\mu(\bar{X}) \Big| \phi_*(\mu)=\nu \right\}$$ where $\mu$ and $\nu$ are the probability measures of $\bar{X}$ and $U$ respectively, $c(\cdot,\cdot)$ is some cost function and $\phi_*(\mu)$ denotes the push forward of $\mu$ by the map $\phi$. Clearly, (\[AGCE problem\]) is a special case of the optimal transportation problem where $\mu=F_{\bar{X}}$, $\nu$ is a standard normal distribution and the cost function is the squared Euclidean distance between the two. Assume that $\bar{X} \in \mathbb{R}$ has finite $p^{th}$ moments for $1 \leq p < \infty$ and a continuous, strictly increasing CDF, $F_{\bar{X}}$ (that is, $\bar{X}$ is a continuous random variable). Then, [@rachev1998mass] show that the optimal map (which minimizes (\[AGCE problem\])) is exactly $\phi^*(\bar{X})=\Phi^{-1}_N \circ F_{\bar{X}} (\bar{X})$ where $\Phi^{-1}_N$ is the inverse CDF of a standard normal distribution. As shown by [@rachev1998mass], the optimal map is unique and achieves $$\label{transportation_problem_loss} E \left(\left(\phi^*(\bar{X})-\bar{X}\right)^2\right) =\int_0^1 \left(F^{-1}_{\bar{X}}(s)- \Phi^{-1}_N(s) \right)^2 ds.$$ Notice that the optimal map may be generalized to the multivariate case, as discussed in the next section. The solution to the optimal transportation problem is in fact the “optimal projection" of our problem (\[AGCE problem\]). Further, it allows us to quantify how much we lose from imposing the marginal normality constraint, compared with ACE’s optimal projection.
Notice that the optimal map, $\phi^*(\bar{X})=\Phi^{-1}_N \circ F_{\bar{X}} (\bar{X})$, is simply marginal Gaussianization of $\bar{X}$: applying $\bar{X}$’s CDF to itself results in a uniformly distributed random variable, while $\Phi^{-1}_N$ shapes this uniform distribution into a standard normal. In other words, while the optimal projection of $\psi(Y)$ on $X$ is its conditional expectation, the optimal projection under a normality constraint is simply a Gaussianization of the conditional expectation. The uniqueness of the optimal map leads to the following necessary conditions for an optimal solution to (\[1-D problem\]), $$\begin{aligned} \label{necessary_conditions} &\phi(X)=\Phi^{-1}_N \circ F_{E(\psi(Y)|X)} (E(\psi(Y)|X))\\\nonumber &\psi(Y)=\Phi^{-1}_N \circ F_{E(\phi(X)|Y)} (E(\phi(X)|Y))\end{aligned}$$ As in ACE, these necessary conditions imply an alternating projection algorithm, namely, the Alternating Gaussinized Conditional Expectation (AGCE). Here, we begin by randomly choosing a transformation that only satisfies the normality constraint $\psi(Y) \sim N(0,1)$. Then, we iterate by fixing one of the transformation while optimizing the other, according to (\[necessary\_conditions\]). We terminate once $E(\phi(X)\psi(Y))$ fails to increase, which means that we converged to a set of transformations that satisfy the necessary conditions for optimal solution. Notice that in every step of our procedure, we may either: 1. Increase our objective value, as a result of the optimal map for (\[AGCE problem\]). 2. Maintain with the same objective value and with the same transformation that was found in of the previous iteration, as we converged to (\[necessary\_conditions\]). This means that our alternating method generates a monotonically increasing sequence of objective values. Moreover, as shown in Section \[The 1-D case\], this sequence is bounded from above by the optimal correlation given by ACE. Therefore, according to the monotone convergence theorem, our suggested method converges to a local optimum. Unfortunately, as opposed to ACE, our projection operator is not linear and we cannot claim for global optimality. We see that for different random initializations we converge to (a limited number) of local optima. Yet, AGCE provides an effective tool for finding local maximizers of (\[The 1-D case\]), which together with MCMC [@gilks2005markov] initializations (or any other random search mechanisms) is capable of finding the global optimum. Off-shelf lower bound {#offshelf_lower_bound} --------------------- Although the AGCE method provides a (locally) optimal solution to (\[The 1-D case\]), we would still like to consider a simpler “off-shelf" mechanism that is easier to implement and gives a lower bound to the best we can hope for. Here, we tackle (\[The 1-D case\]) in two phases. In the first phase we would like to maximize the correlation objective, $E(UV)$, subject to the relaxed second order statistics constraints (as defined in (\[ACE problem\])). Then, we enforce the marginal normality constraints by simply applying *separate Gaussianization* to the outcome of the first phase. In other words, we first apply ACE to increase our objective as much as possible, and then separately Gaussianize the results to meet the normality constraints, hoping this process does not reduce our objective “too much". 
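To make this two-phase recipe concrete, the following minimal Python sketch estimates the resulting lower bound from samples. The binned conditional-expectation estimator, the rank-based Gaussianization and all function names are our own illustrative choices (not the smoothers used by [@breiman1985estimating]), and the generative model mimics the Gaussian-mixture example of the next subsection.

```python
import numpy as np
from scipy.stats import norm

def cond_mean(target, given, n_bins=50):
    """Crude estimate of E[target | given] via equal-count binning on `given`."""
    order = np.argsort(given)
    est = np.empty_like(target, dtype=float)
    for idx in np.array_split(order, n_bins):
        est[idx] = target[idx].mean()
    return est

def ace(x, y, n_iter=50):
    """Alternating conditional expectations for scalar x, y (the relaxed problem)."""
    v = (y - y.mean()) / y.std()
    for _ in range(n_iter):
        u = cond_mean(v, x)                      # phi(x) proportional to E[psi(Y)|X]
        u = (u - u.mean()) / u.std()
        v = cond_mean(u, y)                      # psi(y) proportional to E[phi(X)|Y]
        v = (v - v.mean()) / v.std()
    return u, v

def gaussianize(z):
    """Marginal Gaussianization via empirical ranks: Phi_N^{-1}(F_hat(z))."""
    ranks = np.argsort(np.argsort(z)) + 0.5
    return norm.ppf(ranks / len(z))

# Off-the-shelf lower bound: ACE first, then separate (marginal) Gaussianization.
rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
y = np.where(rng.random(n) < 0.5,
             10.0 + rng.standard_normal(n),      # "noise" component (Z, P=1)
             x + 0.1 * rng.standard_normal(n))   # "correlated" component (X+W, P=0)
u, v = ace(x, y)
ug, vg = gaussianize(u), gaussianize(v)
rho = np.corrcoef(ug, vg)[0, 1]
print("rho_lb =", rho, "  I_g [bits] =", -0.5 * np.log2(1 - rho ** 2))
```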
Notice that in this univariate case, separate Gaussianization is achieved according to Theorem \[gaussianization\_theorem\]: \[gaussianization\_theorem\] Let $X$ be any random variable $X \sim F_X (x)$ and $\theta \sim \text{Unif}[0,1]$ be statistically independent of it. In order to shape $X$ to a normal distribution the following applies: 1. Assume $X$ is a non-atomic distribution ($F_X (x)$ is strictly increasing) then $\Phi^{-1}_N \circ F_X(X)\sim N(0,1)$ 2. Assume $X$ is discrete or a mixture probability distribution then $\Phi^{-1}_N \circ \left( F_X(X)-\theta P_X(X)\right) \sim N(0,1)$ The proof of this theorem can be found in Appendix $1$ of [@shayevitz2011optimal]. Theorem \[gaussianization\_theorem\] implies that if $X$ is strictly continuous then we may achieve a normal distribution by applying $\Phi^{-1}_N \circ F_X(X)$ to it, as discussed in the previous section. Otherwise, we shall handle its CDF’s singularity points by randomly scattering them in a uniform manner, followed by applying $\Phi^{-1}_N$ to the resulting random variable. Notice that this process does not allow any flexibility in the Gaussianization procedure. However, we show that in the multivariate case (Section \[multivariate LB\]) the equivalent process is quite flexible and allows us to control the correlation objective. Further, notice that this lower bound is by no means a candidate for an optimal solution to (\[1-D problem\]), as it does not meet the necessary conditions described in (\[necessary\_conditions\]). Yet, by finding both upper and lower bounds (through ACE, and then separately Gaussianizing the result of ACE) we may immediately obtain the range in which the optimal solution necessarily resides. Assuming this range is not too large, one may settle for a sub-optimal solution without a need to apply AGCE at all. Illustrative example {#examples\_1} -------------------- We now demonstrate our suggested methodology with a simple illustrative example. Let $X \sim N(0,1)$, $W \sim N(0,\epsilon^2)$ and $Z\sim N(\mu_z,1)$ be three normally distributed random variables, all independent of each other. Let $P$ be a Bernoulli distributed random variable with a parameter $\frac{1}{2}$, independent of $X,W$ and $Z$. Define $Y$ as: $$\begin{aligned} Y=\left\{\begin{tabular}{ l c } X+W & P=0 \\ Z & P=1 \\ \end{tabular}\right\}.\nonumber\end{aligned}$$ Then, $Y$ is a balanced Gaussian mixture with parameters $$\theta_y=\left\{\mu_1=0, \sigma_1^2=1+\epsilon^2, \mu_2=\mu_z, \sigma_2^2=1\right\}.$$ The joint probability density function of $X$ and $Y$ is also a balanced two-dimensional Gaussian mixture, with parameters $$\theta_{xy}=\left\{\underline{\mu}_1=\begin{bmatrix}0\\0\end{bmatrix},\; C_1=\begin{bmatrix}1 & 1\\ 1 & 1+\epsilon^2\end{bmatrix},\; \underline{\mu}_2=\begin{bmatrix}0\\ \mu_z\end{bmatrix},\; C_2=I\right\}.$$ Let us further assume that $\mu_z$ is large enough, and $\epsilon^2$ is small enough, so that the overlap between the two Gaussians is negligible. For example, we set $\mu_z=10$ and $\epsilon=0.1$. The correlation between $X$ and $Y$ is easily shown to be $\rho_{xy}=\frac{\sfrac{1}{2}}{\sqrt{1+\sfrac{1}{2}\epsilon^2+\sfrac{1}{4}\mu_z^2}}=0.098$.
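Before turning to the mutual information, a quick Monte-Carlo check of this correlation (under the stated parameters) can be useful; the short snippet below only mirrors the generative description above and is not part of the derivation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mu_z, eps = 1_000_000, 10.0, 0.1
x = rng.standard_normal(n)                  # X ~ N(0, 1)
w = eps * rng.standard_normal(n)            # W ~ N(0, eps^2)
z = mu_z + rng.standard_normal(n)           # Z ~ N(mu_z, 1)
p = rng.random(n) < 0.5                     # balanced Bernoulli switch
y = np.where(p, z, x + w)                   # P=1 -> Z, P=0 -> X+W
print(np.corrcoef(x, y)[0, 1])              # approx (1/2)/sqrt(1 + eps^2/2 + mu_z^2/4) = 0.098
```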
The mutual information between $X$ and $Y$ is defined as $$I(X;Y)=h(X)+h(Y)-h(X,Y)$$ Since we assume that the Gaussians in the mixture practically do not overlap, we have that $$\begin{aligned} h(Y)=&-\int f_Y(y)\log f_Y(y)dy \approx\frac{1}{4}\log\left(2\pi e (1+\epsilon^2)\right)+\frac{1}{4}\log\left(2\pi e \right)+1\end{aligned}$$ where logarithms are taken in base $2$ and the additional $1$ bit accounts for the balanced mixture component indicator. In the same manner, $$\begin{aligned} h(X,Y)=&-\int f_{X,Y}(x,y)\log f_{X,Y}(x,y)dxdy \approx\\\nonumber &\frac{1}{4}\log\left((2\pi e)^2 |C_1|\right)+\frac{1}{4}\log\left((2\pi e)^2 |C_2| \right)+1\end{aligned}$$ Plugging $\mu_z=10$ and $\epsilon=0.1$ we have that $$\begin{aligned} I(X;Y)=&h(X)+h(Y)-h(X,Y) \approx 1.66 \,\text{bits}.\end{aligned}$$ The scatter plot on the left of Figure \[one\_d\_example\_1\] illustrates $10,000$ independent draws of $X$ and $Y$, where the red circles correspond to the “noise" samples ($P=1$) while the blue crosses are the “correlated" samples ($P=0$). Before we proceed to apply our suggested methods, let us first examine two benchmark options for separate Gaussianization. As an immediate option, we may always apply separate Gaussianization, directly to $X$ and $Y$, denoted as $U_a$ and $V_a$ respectively. This corresponds to the information geometry bound of [@cardoso2003dependence]. Since $X$ is already normally distributed we may set $U_a=X$ and only apply Gaussianization to $Y$. Let $V_a=\psi(Y)$ be the Gaussianization of $Y$. This means that $$V_a=\Phi_N^{-1}\left(F_{Y}\left(Y\right)\right)= \Phi_N^{-1}\left( \Phi_{GM(\theta_y)}(Y)\right)$$ where $\Phi_{GM(\theta_y)}$ is the cumulative distribution function of a Gaussian mixture with the parameters $\theta_y$ described above. Therefore, $$\rho_{u_a,v_a}=E(XV_a)=\frac{1}{2}E\left(X \Phi_N^{-1}\left( \Phi_{GM(\theta_y)}(X+W)\right)\right).$$ Although it is not possible to obtain a closed form solution to this expectation, it may be numerically evaluated quite easily, as $X$ and $W$ are independent. Assuming $\mu_z=10$ and $\epsilon=0.1$ we get that $\rho_{u_a,v_a} \approx 0.288$ and our lower bound on the mutual information, as appears in (\[basic\_inequality\]), is $I_g \equiv -\frac{1}{2}\log\left(1-\rho_{u_a,v_a}^2\right)\approx 0.0628 \,\text{bits}$. The middle scatter plot of Figure \[one\_d\_example\_1\] presents this separate marginal Gaussianization of the previously drawn $10,000$ samples of $X$ and $Y$. Notice that the marginal Gaussianization is a monotonic transformation, so that the $Y$ samples are not being shuffled and maintain the separation between the two parts of the mixture. While the red circles are now “half Gaussian", the blue crosses are shaped in a curvy manner, so that their marginal distribution (projected on the $y$ axis) is also a “half Gaussian", leading to a normal marginal distribution of $Y$. We notice that while the mutual information between $X$ and $Y$ is $1.66$ bits, the lower bound attained by this naive Gaussianization approach is close to zero. This is obviously an unsatisfactory result. A second benchmark alternative for separate Gaussianization is to take advantage of the Gaussian mixture properties. Since we assume that the two Gaussians of $Y$ are practically separable, we may distinguish between observations from the two Gaussians. Therefore, we can simply subtract $\mu_z$ from the $Z$ samples (the red circles), and normalize the observations of $X+W$. This way the transformed $Y$ becomes a Gaussian mixture of two co-centered standard Gaussians, and no further Gaussianization is necessary.
For $\mu_z=10$ and $\epsilon=0.1$, this leads to a correlation of $$\begin{aligned} \rho_{u_b,v_b}=\frac{1}{2}E\left( \frac{1}{\sqrt{1+\epsilon^2}}(X+W)X\right)=\frac{1}{2} \frac{1}{\sqrt{1+\epsilon^2}}=0.497\end{aligned}$$ and a corresponding mutual information lower bound of $I_g=0.204 \,\text{bits}$. However, notice that the suggested transformation is not invertible and may cause a reduction in mutual information. Specifically, we now have that the joint distribution of $U_b=X$ and $V_b$ follows a Gaussian mixture model with parameters: $$\theta_{u_b,v_b}=\left\{\underline{\mu}_1=\begin{bmatrix}0\\0\end{bmatrix},\; C_1=\begin{bmatrix}1 & \frac{1}{\sqrt{1+\epsilon^2}}\\ \frac{1}{\sqrt{1+\epsilon^2}} & 1\end{bmatrix},\; \underline{\mu}_2=\begin{bmatrix}0\\0\end{bmatrix},\; C_2=I\right\}.$$ Therefore, $$\begin{aligned} h(U_b,V_b)=&-\int f_{U_b,V_b}(u,v)\log \left( f_{U_b,V_b}(u,v)\right)dudv=\\\nonumber &-\int \phi_{GM(\theta_{u_b, v_b})}(u,v)\log \phi_{GM(\theta_{u_b,v_b})}(u,v)dudv \approx 3.1384 \,\text{bits}\end{aligned}$$ where $\phi_{GM(\theta_{u_b,v_b})}(u,v)$ is the probability density function of a Gaussian mixture with the parameters $\theta_{u_b, v_b}$ described above, and the last approximation step is due to numerical integration. This leads to $I(U_b;V_b)=0.95$ bits. To conclude, although the mutual information is reduced from $1.66$ bits to $0.95$ bits, the suggested bound increases quite dramatically, from $0.0628$ bits to $0.204$ bits. The right plot of Figure \[one\_d\_example\_1\] demonstrates this customized separate Gaussianization (which only applies to this specific setup), applied to the previously sampled $X$ and $Y$. Again, we emphasize that this solution is not applicable in general, and is only feasible due to the specific nature of this Gaussian mixture model. Let us now turn to our suggested methods, as described in detail in the previous sections. We begin by applying the ACE procedure (Section \[ACE\_section\]), to attain an upper bound on our problem (\[1-D problem\]). Not surprisingly, ACE converges to a solution in which the samples of $Y$ that are independent of $X$ (the ones that come from $Z$) are set to zero, while the rest are normalized to achieve a unit variance. Therefore, the resulting correlation is $\rho_{ub}=\frac{\sfrac{1}{2}}{\sqrt{\sfrac{1}{2}(1+\epsilon^2)}}=0.703$. This result further implies that we can never find a Gaussianization procedure that will capture all the information between $X$ and $Y$, as $I(X;Y)>-\frac{1}{2}\log\left(1-\rho_{ub}^2\right)=0.4917$ bits, according to Lemma \[negative\_lemma\]. The left scatter plot of Figure \[one\_d\_example\_2\] demonstrates the outcome of the ACE procedure, applied to the drawn $10,000$ samples of $X$ and $Y$. Next, we apply our suggested AGCE routine, described in Section \[AGCE\]. As discussed above, the AGCE only converges to a local optimum. Therefore, we initialize it with several random transformations (including the ACE solution that we just found). We notice that the number of convergence points is very limited and that they result in very similar maxima. The middle scatter plot of Figure \[one\_d\_example\_2\] shows the best result we achieve, leading to a correlation coefficient of $0.66$ and a corresponding Gaussian lower bound (\[basic\_inequality\]) of $0.411$ bits. This result demonstrates the power of our suggested approach, as it significantly improves the benchmarks, even compared with $U_b, V_b$, which exploit the separable Gaussian mixture nature of our samples. Finally, we evaluate a lower bound for (\[1-D problem\]), as described in Section \[offshelf\_lower\_bound\]. Here, we simply apply separate Gaussianization to the outcome of the ACE procedure.
This results in $\rho_{lb}=0.646$ and a corresponding $I_g=0.389$. The right scatter plot of Figure \[one\_d\_example\_2\] shows the Gaussianized samples the we achieve. We notice that this lower bound is not significantly lower than AGCE, suggesting that in some case we may settle for this less involved method. To conclude, our suggested solution surpasses the benchmarks quite easily, as we increase the lower bound from $0.204$ bits using the custom Gaussianization procedure to $0.411$ bits using our general solution. We notice that all of the discussed procedures result in a joint distribution that are quite far from normal. This is not surprising, since $X$ and $Y$ were highly “non-normal" to begin with. Specifically, in all suggested procedures we loose information, compared with the original $I(X;Y)=1.66$. However, our suggested solution minimizes this loss, and may be considered “more jointly normal" than others, in this regards. The multivariate case {#multivariate case} ===================== Let us now consider the multivariate case where both $\underline{X} \in \mathbb{R}^{d_x}$ and $\underline{Y} \in \mathbb{R}^{d_y}$ are random vectors with a joint CDF $F_{\underline{X},\underline{Y}}$. One of the fundamental differences from the univariate case is that Gaussianizing each of these vectors (even separately) is not a simple task. In other words, finding a transformation $\phi: \mathbb{R}^{d_x} \rightarrow \mathbb{R}^{k_x}$ such that $\underline{U}=\phi(\underline{X})$ is normally distributed may be theoretically straight-forward but practically involved. For simplicity of the presentation, assume that $\underline{X}=[X_1, X_2]^T$ is a two dimensional, strictly continuous, random vector. Then, Gaussianization may be achieved in two steps: first, apply marginal Gaussianization to $X_1$, so that $U_1=\Phi^{-1}_N \circ F_{X_1} (X_1)$. Then, apply marginal Gaussianization on $X_2$, conditioned on each possible realization of the previous component, $U_2|u_1 = \Phi^{-1}_N \circ F_{X_2|U_1} (X_2|U_1=u_1)$. This results in a jointly normally distributed vector $\underline{U}=[U_1, U_2]^T$. While this procedure is theoretically simple, it is quite problematic to apply in practice, as it requires Gaussianizing each and every conditional CDF. This is obviously impossible, given a finite number of samples. Yet, it gives us a constructive method, assuming that all the CDF’s are known. In the following sections we shall present several alternatives for Gaussianization in finite sample size setup. Upper bound by ACE {#multivariate_ub} ------------------ As in the univariate case, we begin our analysis by relaxing the normality constraints with softer second order statistics constraints. This leads to a straight forward multivariate generalization of the ACE procedure: We begin by extracting the first canonical pair, which satisfies $U_1=c \cdot E(V_1|\underline{X})$ and $V_1=c \cdot E(U_1|\underline{Y})$. As in the univariate case, $c$ is a normalization coefficient (the square root of the variance of the conditional expectation), and the optimization is done by alternating projections. Then, we shall extract the second pair of canonical components, subject to an orthogonality constraint with the first pair. It is easy to show that if $V_2$ is orthogonal to $V_1$, then $U_2= c\cdot E(V_2|\underline{X})$ is orthogonal to $U_1$, and obviously maximizes the correlation with $V_2$. 
Therefore, we may extract the second canonical pair by first randomly assigning a zero-mean and unit variance $V_2$ that is also orthogonal to $V_1$ (by a Gram–Schmidt procedure, for example), followed by alternating conditional expectations with respect to $V_2$ and $U_2$, in the same manner as we did with the first pair. We continue this way for the rest of the canonical pairs. As in the univariate case, convergence to a global maximum is guaranteed by the same Hilbert space arguments. As before, the multivariate ACE sets an upper bound to (\[multivariate\_problem\]) as it maximizes a relaxed version of this problem. \[negative\_lemma\_2\] Let $\underline{U}_*,\underline{V}_*$ be the outcome of the multivariate ACE procedure (the canonical vectors). Assuming that $I(\underline{X};\underline{Y})>-\frac{1}{2}\log \left|C_{[\underline{U}_*,\underline{V}_*] }\right|$, there are no transformations such that $\underline{U}=\phi(\underline{X})$ and $\underline{V}=\psi(\underline{Y})$ follow a jointly normal distribution and preserve all of the mutual information, $I(\underline{X};\underline{Y})$. The proof of Lemma \[negative\_lemma\_2\] follows exactly the proof of Lemma \[negative\_lemma\]. Here again, the multivariate ACE objective, $-\frac{1}{2}\log \left|C_{[\underline{U}_*,\underline{V}_*] }\right|$, cannot be further increased by artificially inflating the dimension of the problem. Therefore, Lemma \[negative\_lemma\_2\] holds for any $\phi:\mathbb{R}^{d_x}\rightarrow \mathbb{R}^{k_x}$ and $\psi:\mathbb{R}^{d_y}\rightarrow \mathbb{R}^{k_y}$, such that $k_x,k_y \geq 0$. Multivariate AGCE {#multivariate AGCE} ----------------- As with the multivariate ACE, we propose a generalized multivariate procedure for AGCE. We begin by extracting the first pair, in the same manner as we did in the univariate case. That is, we find a pair $U_1$ and $V_1$ that satisfies $$\begin{aligned} \label{necessary_conditions_2} &U_1=\Phi^{-1}_N \circ F_{E(V_1|\underline{X})} (E(V_1|\underline{X}))\\\nonumber &V_1=\Phi^{-1}_N \circ F_{E(U_1|\underline{Y})} (E(U_1|\underline{Y}))\end{aligned}$$ by applying the alternating optimization scheme. As we proceed to the second pair, we require that $U_2$ is both orthogonal and jointly normally distributed with $U_1$ (same goes for $V_2$ with respect to $V_1$). This means that the second pair needs not only to be orthogonal, but also statistically independent of the first pair. In other words, assuming $V_2$ is fixed, our basic projection step is $$\label{AGCE problem 2} \begin{aligned} & {\max_{\phi_2}} & & E \left(\phi_2(\underline{X}) V_2\right) \\ & \text{subject to} & & \phi_2(\underline{X}) \sim N(0,1)\\ & && \phi_2(\underline{X}) {\rotatebox[origin=c]{90}{$\models$}}\phi_1(\underline{X}) \\ \end{aligned}$$ Let us denote a subspace $\tilde{\underline{X}} \subset \underline{X}$ that is statistically independent of $U_1=\phi_1(\underline{X})$. Then, the problem of maximizing $E\left(\phi_2(\tilde{\underline{X}}) V_2\right)$ subject to $\phi_2(\tilde{\underline{X}}) \sim N(0,1)$ is again solved by the optimal map, $\phi_2(\tilde{\underline{X}})=\Phi^{-1}_N \circ F_{E(V_2|\tilde{\underline{X}})} (E(V_2|\tilde{\underline{X}}))$. Therefore, the remaining task is to find the “best" subspace $\tilde{\underline{X}} \subset \underline{X}$, so that $E\left(\phi_2(\tilde{\underline{X}}) V_2\right)$ is maximal when plugging in the optimal map. Let $U_1=u_1$ be the value (realization) of $U_1$. Let $\tilde{\underline{X}} = g \left(\underline{X}, u_1\right)$ be a subspace of $\underline{X}$, independent of $U_1$.
If $g \left(\underline{X}, u_1\right)$ is an invertible function with respect to $\underline{X}$ given $u_1$, then $\tilde{\underline{X}}$ is an optimal subspace for maximizing $E\left(\phi_2(\tilde{\underline{X}}) V_2\right)$ subject to $\phi_2(\tilde{\underline{X}}) \sim N(0,1)$. Assume there exists a different subspace $\tilde{\underline{X}}'=g' \left(\underline{X} , u_1\right)$ so that $$\max_{\phi_2'} E\left(\phi_2'(\tilde{\underline{X}}') V_2\right)>\max_{\phi_2} E\left(\phi_2(\tilde{\underline{X}}) V_2\right)$$ subject to the normality constraint. Since $g$ is invertible we have that $\underline{X}=g^{-1}( \tilde{\underline{X}}, u_1)$. Therefore, $\tilde{\underline{X}}'= g'\left(g^{-1}( \tilde{\underline{X}},u_1),u_1\right) \equiv f(\tilde{\underline{X}},u_1)$. Plugging this into the inequality above leads to $$\max_{\phi'_2} E\left(\phi'_2( f(\tilde{\underline{X}},u_1)) V_2\right)>\max_{\phi_2} E\left(\phi_2(\tilde{\underline{X}}) V_2\right)$$ which obviously contradicts the optimality of the maximization over $\phi_2$. Therefore, we are left with finding $\tilde{\underline{X}} = g \left(\underline{X}, u_1\right)$ that is a subspace of $\underline{X}$, independent of $U_1$ and invertible with respect to $\underline{X}$ given $u_1$. For simplicity of the presentation, let us first assume that $X$ is univariate. Then, the function $g \left(X, u_1\right)= F_{X|U_1}(X|U_1=u_1)$ is independent of $U_1$ (as it holds the same (uniform) distribution, regardless of the value of $U_1$), and invertible given $u_1$ (assuming that the conditional CDF’s $F_{X|U_1}(X|U_1=u_1)$ are continuous for every $u_1$). Going back to the multivariate $\underline{X}\in \mathbb{R}^{d_x}$, we may follow the same rationale by choosing a single ${d_x}$-dimensional distribution to which all the conditional CDF’s, $F_{\underline{X} | U_1}$, will be transformed. For simplicity we choose a ${d_x}$-dimensional uniform distribution, denoted by its CDF as $F_{unif}$. Then, $g_* \left(F_{\underline{X}|U_1}, u_1\right) = F_{unif}$, where $g_* (P, x)=Q$ refers to a mapping that pushes forward the distribution $P$ into $Q$, given $x$. Specifically, if $p(w)$ and $q(w)$ are the corresponding density functions of the (absolutely continuous) CDF’s $P$ and $Q$ respectively, then we know from basic probability theory that the push forward transformation $S$ satisfies $$p(w)=q\left(S(w)\right) \left|J_S\left(w\right)\right|$$ where $J_S$ is the Jacobian of the map $S$. To conclude, in order to construct $\tilde{\underline{X}}$ that is independent of $U_1$ and invertible given $u_1$, we need to push forward all the conditional CDF’s $F_{\underline{X}|U_1}(\underline{X}|U_1=u_1)$ into a predefined distribution (say, uniform). Then, the optimal map $\phi_2(\tilde{\underline{X}})$ that maximizes $E\left(\phi_2(\tilde{\underline{X}}) V_2\right)$ subject to $\phi_2(\tilde{\underline{X}}) \sim N(0,1)$ is given by $\phi_2(\tilde{\underline{X}})=\Phi^{-1}_N \circ F_{E(V_2|\tilde{\underline{X}})} (E(V_2|\tilde{\underline{X}}))$. In the same manner, we may find $\tilde{\underline{Y}}$ that is independent of $V_1$ and invertible given $v_1$, and carry on with the alternating projections. This process continues for all the Gaussianized canonical components and converges to a local optimum, from the same considerations described in the univariate case.
It is important to notice that while this procedure may be considered practically infeasible (as it requires estimating the conditional CDF’s), it is equivalently impractical as the multivariate Gaussianization considered in the beginning of this section. Yet, it gives us a local optimum for our problem, assuming that we know the joint probability distribution. Off-shelf lower bound in the multivariate case {#multivariate LB} ---------------------------------------------- In the same manner as with the univariate case, we may apply a simple off-shelf lower bound to (\[basic problem\]) by first maximizing the objective as much as we can (using multivariate ACE) followed by Gaussianizing the outcome vectors, hoping we do not reduce the objective “too much". However, as mentioned in the beginning of Section \[multivariate case\], applying multivariate Gaussianization may be practically infeasible. Therefore, we begin this section by reviewing practical multivariate Gaussianization methodologies. Then, we use these ideas to suggest a practical lower bound, which unlike the univariate case, is not oblivious to our objective. ### Practical multivariate Gaussianization {#Practical multivariate Gaussianization} The Gaussianization procedure strives to find a transformation $\underline{Z}=\mathcal{G}(\underline{X})$ so that $\underline{Z} \sim N(0,I)$. A reasonable a cost function for describing “how Gaussian" $\underline{Z}$ really is, may be the Kullback Leibler Divergence (KLD) between $\underline{Z}$’s PDF, $f_{\underline{Z}}(\underline{z})$, and a standard normal distribution, $$J(\underline{Z})=D_{KL} \left(f_{\underline{Z}}(\underline{z}) || f_N(\underline{Z}) \right)=\int_{\underline{Z}} f_{\underline{Z}}(\underline{z}) \log \left( \frac{f_{\underline{Z}}(\underline{z})}{f_N(\underline{Z})} \right)dz$$ where $f_N(\underline{Z})$ is the PDF of a standard normal distribution. As shown by [@chen2001gaussianization], $J(\underline{Z})$ may be decomposed into $$\label{KLD} J(\underline{Z})=D_{KL} \left(f_{\underline{Z}}(\underline{z}) ||\prod_{i=1}^{d_z} f_{Z_i}(z_i) \right)+\sum_{i=1}^{d_z} D_{KL} \left(f_{Z_i}(z_i) ||f_N(z)\right)$$ where the first KLD term measures how independent are the components of $\underline{Z}$, while the second term indicates how normally distributed is each component. This decomposition led [@chen2001gaussianization] to an iterative algorithm. In each iteration, their suggested approach applies Independent Component Analysis [@hyvarinen2004independent], to minimize the first term, followed by marginal Gaussianization of each component (as we describe for the univariate case), to minimize the second term. Chen and Gopinath show that minimizing one term does not effect the other, which leads to a monotonically decreasing procedure that converges once $\underline{Z}$ is normally distributed. Notice that the Independent Component Analysis (ICA) is a linear operator. Therefore, if $\underline{Z}$ can be linearly decomposed into independent components, then Chen and Gopinath’s Gaussianizion process converges in a single step. Moreover, notice that this Gaussianization process does not require estimating the multivariate distribution. However, it does require estimating the marginals, $f_{Z_i}$ which is, in general, considered a much easier task. A similar but different multivariate Gaussianization approach was suggested by [@laparra2011iterative]. Here, the authors propose to replace the computationally costly ICA with a simple random rotation matrix. 
This way, they abandon the effort of minimizing the first term of (\[KLD\]), and only shuffle the components so that the subsequent marginal Gaussianization would further decrease $J(\underline{Z})$. Although this approach takes more iterations to converge to a normal distribution (as in each iteration, only the second term of (\[KLD\]) is being minimized), it holds several favorable properties. First, the overall run-time is dramatically shorter, since applying random rotations is much faster than linear ICA. Second, it implies a degree of freedom in choosing the rotation matrix, as the suggested random matrix is just one example of a linear shuffling of the components. ### Bi-terminal multivariate Gaussianization {#Bi-terminal multivariate Gaussianization} Going back to our problem, we would like to Gaussianize $\underline{U}_*$ and $\underline{V}_*$, the outcomes of the multivariate ACE procedure described above. Ideally, we would like to do so while refraining (as much as we can) from reducing our objective $$\label{objective} \log \left(\frac{|C_{\underline{U}_*}||C_{\underline{V}_*}|}{\left|C_{[\underline{U}_*,\underline{V}_*] }\right|} \right).$$ Following the Gaussianization procedures described in the previous section, we suggest an iterative process, where in each iteration we apply a rotation matrix to both vectors, followed by marginal Gaussianization to each of the components of the two vectors. It is easy to show that (\[objective\]) is invariant to any full rank linear transformation. However, it may be affected by the (non-linear) marginal Gaussianization of the components of $\underline{U}_*$ or $\underline{V}_*$ (as described in Theorem \[gaussianization\_theorem\]). Therefore, we would like to find rotation matrices that minimize the effect of the subsequent marginal Gaussianization step. This problem is far from trivial. In fact, due to the complicated nature of the marginal Gaussianization procedure, it is quite impossible to minimize the effect of the marginal Gaussianization a priori, without actually applying it and seeing how it behaves. Therefore, we suggest a stochastic search mechanism, which allows us to construct a “reasonable" rotation matrix. Our suggested mechanism works as follows: At each iteration we begin by drawing two random rotation matrices $R_1$ and $R_2$ for the two vectors we are to Gaussianize, just like [@laparra2011iterative]. We apply marginal Gaussianization to all the components and evaluate our objective (\[objective\]). Then, we randomly choose two dimensions and an angle, $\theta$, and construct a corresponding rotation matrix $\tilde{R}$ that rotates the space spanned by the two dimensions by $\theta$ degrees. We apply $\tilde{R} \cdot R_1$ to our vector, followed by marginal Gaussianization, and again evaluate (\[objective\]). If the objective increases we assign $R_1=\tilde{R} \cdot R_1$. We repeat this process a configurable number of times, for the two vectors we are to Gaussianize. Notice that our suggested procedure applies a stochastic hill climbing search in each step: it randomly searches for the best rotation matrix by gradually composing “small" rotation steps (of two dimensions and an angle), as the complete search space is practically infinite. This procedure guarantees convergence to two multivariate normal vectors, as shown by [@laparra2011iterative], under the reasonable assumption that $R_1$ and $R_2$ do not repeatedly converge to identity matrices.
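A minimal sketch of a single rotation-search step (for one of the two vectors, with the other held fixed) is given below, complementing the pseudo-code listing further on. The Givens-rotation construction, the rank-based marginal Gaussianization and the function names are our own illustrative choices; the objective is the Gaussian surrogate of (\[objective\]), and the full bi-terminal procedure would alternate the same search between $\underline{U}_*$ and $\underline{V}_*$.

```python
import numpy as np
from scipy.stats import norm

def gaussianize_marginals(Z):
    """Rank-based marginal Gaussianization, applied column by column."""
    ranks = np.argsort(np.argsort(Z, axis=0), axis=0) + 0.5
    return norm.ppf(ranks / Z.shape[0])

def objective(U, V):
    """Gaussian surrogate log(|C_U||C_V| / |C_[U,V]|) of (objective)."""
    C = np.cov(np.hstack([U, V]).T)
    d = U.shape[1]
    return (np.linalg.slogdet(C[:d, :d])[1]
            + np.linalg.slogdet(C[d:, d:])[1]
            - np.linalg.slogdet(C)[1])

def random_givens(d, rng):
    """Rotation of a randomly chosen 2-D coordinate plane by a random angle."""
    i, j = rng.choice(d, size=2, replace=False)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    R = np.eye(d)
    R[i, i] = R[j, j] = np.cos(theta)
    R[i, j], R[j, i] = -np.sin(theta), np.sin(theta)
    return R

def rotation_search(U, V, rng, max_steps=50):
    """Stochastic hill climbing for a rotation of U that keeps the objective
    high after the subsequent marginal Gaussianization of U (V held fixed)."""
    R = random_givens(U.shape[1], rng)               # initial random rotation
    best = objective(gaussianize_marginals(U @ R.T), V)
    counter = 0
    while counter < max_steps:
        R_try = random_givens(U.shape[1], rng) @ R   # compose a small rotation
        val = objective(gaussianize_marginals(U @ R_try.T), V)
        if val > best:
            R, best = R_try, val
        else:
            counter += 1
    return R, best
```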
As we see in our experiments, the Bi-terminal Gaussianization is superior to naively applying a Gaussianization procedure to each of the vectors separately (as suggested by [@chen2001gaussianization] or [@laparra2011iterative]), in all the cases we examine. Stochastic hill climbing for joint Gaussianization. Input: max\_steps. 0\. Let counter $=0$. 1. Randomly draw a rotation matrix $R$. 2. Randomly draw two dimensions and an angle, and construct a corresponding rotation matrix $R_1$. 3. If $R\cdot R_1$ improves the objective then set $R=R\cdot R_1$; else set counter $=$ counter $+1$. 4. If counter $>$ max\_steps, return $R$; else repeat from step 2. Illustrative examples {#multivariate examples} --------------------- We now examine our suggested multivariate approach in different setups. As in the univariate case, we draw samples from a given model and bound from below the mutual information $I(\underline{X}, \underline{Y})$ according to (\[basic\_inequality\]). First, we apply the multivariate ACE procedure (Section \[multivariate\_ub\]) to achieve an upper bound for our objective. Then, we apply separate Gaussianization to ACE’s outcome, to attain an immediate lower bound for our objective (Section \[Practical multivariate Gaussianization\]). Further, we tighten this lower bound by replacing the separate Gaussianization with bi-terminal Gaussianization of ACE’s outcome (Section \[Bi-terminal multivariate Gaussianization\]). Since our multivariate AGCE procedure (Section \[multivariate AGCE\]) is practically infeasible, we refrain from using it. This would be further justified later in our results, as we see that the gap between the lower and upper bounds is relatively small. In all of our experiments, our benchmark is a direct separate Gaussianization of $\underline{X}$ and $\underline{Y}$, as an immediate alternative. We begin with a simple toy example. Let $\underline{X} \sim N(0,I)$ and $\underline{W} \sim N(0,I)$ be independent random vectors. Define $\underline{Y}=\underline{X}+\underline{W}$, so that $\underline{X}$ and $\underline{Y}$ are jointly normally distributed. Further, we “scramble" $\underline{X}$ and $\underline{Y}$ by applying invertible, yet non-monotonic, transformations to each of them separately. We require that the transformations be invertible to guarantee that the (analytically derived) mutual information is preserved. We further require non-monotonic transformations since marginal Gaussianization is invariant to monotonic functions (see Proposition \[prop1\]), which would make this experiment too easy. In this experiment, we multiply all the observations in the range $[-1,1]$ by $-1$. This operation simply mirrors these observations with respect to the origin. \[prop1\] Let $\tilde{X}=g(X)$ be a monotonic transformation on $X \in \mathbb{R}$. Then Gaussianizing $\tilde{X}$ is equivalent to Gaussianizing $X$. Let $\tilde{V}=\Phi_N^{-1}\left(F_{\tilde{X}}\left(\tilde{X}\right)\right)$ be the Gaussianization of $\tilde{X}$ and $V=\Phi_N^{-1}\left(F_{{X}}\left({X}\right)\right)$ be the Gaussianization of $X$. Assume that $g$ is monotonically increasing. Then, $$F_{\tilde{X}}(a)=P(\tilde{X} \leq a)=P(g(X) \leq a)=P(X \leq g^{-1}(a)).$$ Therefore, $F_{\tilde{X}}\left(\tilde{X}\right)=F_{X}\left(g^{-1}(\tilde{X})\right)=F_{X}(X)$ and $\tilde{V}=V$. An equivalent derivation holds for the monotonically decreasing case. Before we proceed, it is important to briefly comment on the implications of the finite sample size in our multivariate experiments. The ACE procedure estimates conditional expectations at each of its iterations.
This estimation task is known to be quite challenging in a finite sample size regime. [@breiman1985estimating] suggest a *k nearest neighbor* estimator which guarantees favorable consistency properties. Unfortunately, this solution suffers from the curse of dimensionality [@hastie2005elements]. Therefore, as the dimension of our problem increases, we cannot turn to ACE and have to settle for suboptimal solutions. In our experiments, we use the kernel CCA [@lai2000kernel] as an alternative to ACE when the dimension size is greater than $d=5$. The kernel CCA (KCCA) is a non-linear generalization of the classical CCA which embeds the data in a high-dimensional Hilbert space and applies CCA in that space. It is known to significantly improve the flexibility of CCA while avoiding over-fitting of the data. Notice that other non-linear CCA extensions, such as *Deep CCA* [@andrew2013deep] or *nonparametric CCA* [@michaeli2015nonparametric], may also be applied as finite sample size alternatives to ACE. We now demonstrate our suggested approach on the jointly Gaussian model discussed above. The left plot of Figure \[high\_d\_example\] shows the results we achieve for different dimension sizes $d$. The black line on the top is $I(\underline{X}, \underline{Y})$, which can be analytically derived. The red curve with the squares at the bottom is separate Gaussianization of $\underline{X}$ and $\underline{Y}$, which results in a very poor lower bound to the mutual information due to the non-monotonic nature of the transformation that we apply. The green curve with the squares is ACE, while the dashed blue curve is separate Gaussianization of ACE. Finally, the blue line between them is bi-terminal Gaussianization of ACE. As we can see, ACE succeeds in recovering the jointly Gaussian representation of $\underline{X}$ and $\underline{Y}$, which makes further Gaussianization redundant. Unfortunately, for $d >5$ we can no longer apply ACE and turn to KCCA instead. We use a Gaussian kernel with varying parameters to achieve the reported results. Since the KCCA attains a suboptimal representation it is followed by Gaussianization, which further decreases our objective. Here, we notice the improved effect of the bi-terminal Gaussianization, compared with separate Gaussianization. Next, we turn to a more challenging exponential model. In this model, each component of $\underline{X}$ and $\underline{W}$ is exponentially distributed with a unit parameter, while all the components are independent of each other. Again, we define $\underline{Y}=\underline{X}+\underline{W}$ so that $\underline{Y}$ is Gamma distributed. This allows us to analytically derive $I(\underline{X}, \underline{Y})$. As before, we apply an invertible non-monotonic transformation to each of the components of $\underline{X}$ and $\underline{Y}$. Notice that this time we mirror the observations in the range $[0,2]$ with respect to $1$. We then apply a linear rotation, so that the components are no longer independent. The plot in the middle of Figure \[high\_d\_example\] demonstrates the results we achieve. As before, we notice that separate Gaussianization of $\underline{X}$ and $\underline{Y}$ performs very poorly. On the other hand, ACE also does not succeed in achieving this MI. This means that no Gaussianization procedure would allow a jointly normal representation of $\underline{X}$ and $\underline{Y}$ without losing information (Lemma \[negative\_lemma\_2\]).
Still, by applying bi-terminal Gaussianization to ACE’s results we are able to capture more than half of the information in the worst case (for $d=5$, where ACE still applies). As before, we witness a reduction in performance when turning from ACE to KCCA. Finally, we go back to the multivariate extension of the Gaussian mixture model described in Section \[examples\_1\] and apply our suggested procedures. Again, we witness the same behavior described in the previous experiments. In addition, our results indicate that in this model, the Gaussian part of the MI is significantly smaller, compared with the exponential model. This further demonstrates the ability of our method to quantify how well an arbitrary distribution may be represented as jointly normal.

Gaussian lower bound for the Information Bottleneck Curve {#Gaussian lower bound for the Information Bottleneck Curve}
=========================================================

We now extend our derivation to the Information Bottleneck (IB) curve. We show that by maximizing the Gaussian lower bound of the mutual information (\[basic\_inequality\]), we simultaneously maximize a Gaussian lower bound to the entire IB curve. We prove this in two steps. First, we show that the IB curve of $\phi(\underline{X}), \psi(\underline{Y})$ bounds from below the IB curve of $\underline{X}$ and $\underline{Y}$, for any choice of $\phi, \psi$ (specifically, $\phi(\underline{X}) \sim N$ and $\psi(\underline{Y}) \sim N$, in our case). This property is referred to as the *data processing lemma for the IB curve*. Then, we show that the IB curve of jointly normal random variables bounds from below the IB curve of separately normal random variables. Finally, by applying the GIB [@chechik2005information] to the maximally correlated jointly normal random variables that satisfy (\[basic\_inequality\]), we attain the desired Gaussian lower bound for the IB of $\underline{X}$ and $\underline{Y}$. (data processing lemma for the IB Curve): Denote the maximal value of the IB problem (\[IB problem\]) $$\label{DPL for IB} \begin{aligned} & {\max_{T}} & & I(T(\underline{X}); \underline{Y}) \\ & \text{subject to} & & I(T(\underline{X}); \underline{X}) \leq I_Y\\ \end{aligned}$$ as $I_*^\beta \left(\underline{X}; \underline{Y}\right)$. Then, $I_*^\beta \left(\underline{X}; \underline{Y}\right) \geq I_*^\beta \left(\phi(\underline{X}); \psi(\underline{Y})\right)$ for any $\phi, \psi$, with equality iff $I\left(\underline{X}; \underline{Y}\right) = I\left(\phi(\underline{X}); \psi(\underline{Y})\right)$. We prove this lemma by showing that $I_*^\beta \left(\underline{X}; \underline{Y}\right) \geq I_*^\beta \left(\underline{X}; \psi(\underline{Y})\right) \geq I_*^\beta \left(\phi(\underline{X}); \psi(\underline{Y})\right)$. We start with the first inequality. According to the data processing lemma, we have that $I(T(\underline{X}); \underline{Y}) \geq I(T(\underline{X}); \psi(\underline{Y}))$. Notice that, for clarity, we emphasize that $T$ is indeed a mapping of $\underline{X}$ alone. In addition, since our constraint in (\[IB problem\]) is independent of $\underline{Y}$, we have that $I_*^\beta \left(\underline{X}; \underline{Y}\right) \geq I_*^\beta \left(\underline{X}; \psi(\underline{Y})\right)$, as expected.
Second, notice that the IB problem (\[IB problem\]) may be equivalently written as $$\label{DPL for IB 2} \begin{aligned} & {\min_{T}} & & I(T(\underline{X}); \underline{X}) \\ & \text{subject to} & & I(T(\underline{X}); \underline{Y}) \geq \tilde{I}_Y\\ \end{aligned}$$ Denote the minimal value of (\[DPL for IB 2\]) as $\bar{I}_*^{\gamma}(\underline{X};\underline{Y})$. Assume that there exists a $\phi$ such that $$\label{false_assumption} \bar{I}_*^{\gamma}(\underline{X};\underline{Y}) > \bar{I}_*^{\gamma}(\phi(\underline{X});\underline{Y})$$ This means that for $I(T(\underline{X}); \underline{Y}) \geq \tilde{I}_Y$ and $I(T'(\phi(\underline{X})); \underline{Y}) \geq \tilde{I}_Y$ we have that $I(T(\underline{X}); \underline{X}) > I(T'(\phi(\underline{X})); \phi(\underline{X}))$, where $T$ and $T'$ are the optimizers of (\[DPL for IB 2\]) with respect to $(\underline{X},\underline{Y})$ and $(\phi(\underline{X}),\underline{Y})$, for a given $\tilde{I}_Y$, respectively. Let us set $\tilde{T}\equiv T' \circ \phi$ and apply this transformation to $\underline{X}$. Then, we have that the constraint of (\[DPL for IB 2\]) is met, as $I(\tilde{T}(\underline{X});\underline{Y})\equiv I(T'(\phi(\underline{X}));\underline{Y}) \geq \tilde{I}_Y$. In addition, we have that $$I(\tilde{T}(\underline{X});\underline{X}) \equiv I(T'(\phi(\underline{X}));\underline{X})=I(T'(\phi(\underline{X}));\phi(\underline{X}))$$ where the second equality follows from $T'$ being independent of $\underline{X}$, given $\phi(\underline{X})$. Therefore, $\tilde{T}=T' \circ \phi$ is a better optimizer of (\[DPL for IB 2\]) with respect to $\underline{X}$ and $\underline{Y}$ than $T$. This contradicts the optimality of $T$ as a minimizer of (\[DPL for IB 2\]), which means that the assumption in (\[false\_assumption\]) is false, and our proof is concluded. Let $\underline{U}$ and $\underline{V}$ be separately Gaussian random vectors with a joint covariance matrix $C_{[\underline{U}, \underline{V}]}$ (that is, $\underline{U} \sim N$ and $\underline{V} \sim N$ but $[\underline{U}, \underline{V}]^T$ is not normally distributed). Let $\underline{U}_{jg}, \underline{V}_{jg}$ be two jointly normally distributed random vectors with the same covariance matrix, $C_{[\underline{U}_{jg}, \underline{V}_{jg}]}=C_{[\underline{U}, \underline{V}]}$. Then, the IB curve of $\underline{U}_{jg}$ and $\underline{V}_{jg}$ bounds from below the IB curve of $\underline{U}$ and $\underline{V}$. Let $\left(I(\underline{U}_{jg}; \underline{T}), I(\underline{T};\underline{V}_{jg})\right)$ be a point on the IB curve of $\underline{U}_{jg}$ and $\underline{V}_{jg}$. Since $\underline{U}_{jg}$ and $\underline{V}_{jg}$ are jointly normally distributed, $\underline{T}$ is necessarily a linear transformation of $\underline{U}_{jg}$, with additive independent Gaussian noise [@chechik2005information]. Specifically, $\underline{T}=A\underline{U}_{jg}+ \underline{\zeta}$, where $\underline{\zeta} \sim N(0,I)$, independent of $\underline{U}_{jg}$ and $\underline{V}_{jg}$. Further, let $\underline{T}'=A\underline{U}+\underline{\zeta}$ be the same transformation, applied to $\underline{U}$. Since $\underline{U}$ and $\underline{V}$ are not jointly normal, the point $\left(I(\underline{U}; \underline{T}'), I(\underline{T}';\underline{V})\right)$ lies on or below the IB curve of $\underline{U}$ and $\underline{V}$.
First, notice that $$I(\underline{U};\underline{T}') \equiv I(\underline{U};A\underline{U}+\underline{\zeta}) = I(\underline{U}_{jg};A\underline{U}_{jg}+\underline{\zeta}) \equiv I(\underline{U}_{jg};\underline{T})$$ where the second equality follows from $\underline{U}$ and $\underline{U}_{jg}$ having the same distribution. In addition, since $C_{[\underline{U}_{jg}, \underline{V}_{jg}]}=C_{[\underline{U}, \underline{V}]}$ we have that $C_{[A\underline{U}_{jg}+\underline{\zeta}, \underline{V}_{jg}]}=C_{[A\underline{U}+\underline{\zeta}, \underline{V}]}$. Therefore, $I(A\underline{U}+\underline{\zeta}; \underline{V}) \geq I(A\underline{U}_{jg}+\underline{\zeta}; \underline{V}_{jg})$, in the same manner as in (\[basic\_inequality\]). This means that $I(\underline{T}'; \underline{V}) \geq I(\underline{T}; \underline{V}_{jg})$. To conclude, we showed that for the two pairs, $\left(I(\underline{U}_{jg}; \underline{T}), I(\underline{T};\underline{V}_{jg})\right)$ and $\left(I(\underline{U}; \underline{T}'), I(\underline{T}';\underline{V})\right)$, we have that $I(\underline{U};\underline{T}') = I(\underline{U}_{jg};\underline{T})$ while $I(\underline{T}'; \underline{V}) \geq I(\underline{T}; \underline{V}_{jg})$, as desired. The two results above guarantee that the IB curve of $\underline{X}$ and $\underline{Y}$ is bounded from below by the IB curve of $\underline{U}_{jg}$ and $\underline{V}_{jg}$, where $C_{[\underline{U}_{jg}, \underline{V}_{jg}]}=C_{[\underline{U}, \underline{V}]}$, and $\underline{U}=\phi(\underline{X}) \sim N$, $\underline{V}=\psi(\underline{Y}) \sim N$. Therefore, in order to maximize this lower bound, one needs to maximize the correlation between $\underline{U}$ and $\underline{V}$, subject to a normality constraint, as discussed throughout this manuscript. Moreover, once we have found a pair ($\underline{U}_{jg}, \underline{V}_{jg}$) with maximal correlation, we may directly apply the GIB to it, as shown by [@chechik2005information], to achieve the optimal Gaussian lower bound to the IB curve of $\underline{X}$ and $\underline{Y}$.

Examples
--------

We now demonstrate our suggested Gaussian lower bound for the IB curve in two different setups. Here, we would like to compare our bound with the “true" IB curve, and with an additional off-the-shelf benchmark lower bound. As discussed in Section \[intro\], computing the exact IB curve (for a general joint distribution) is not a simple task. This task becomes even more complicated when dealing with continuous random variables. In fact, to the best of our knowledge, all currently known methods provide approximate curves, which do not claim to converge to the exact IB curve. Moreover, these methods fail to provide any guarantees on the extent of their divergence from the true IB curve. Therefore, in our experiments, we apply the commonly used reverse annealing technique [@slonim2002information] in order to approximate the “true" IB curve. The reverse annealing algorithm is initiated by computing the mutual information between $\underline{X}$ and $\underline{Y}$, which corresponds to the extreme point where $I_Y \rightarrow \infty$ on the IB curve. Then, $I_Y$ is gradually decreased, and the solution of the IB problem (\[IB problem\]) with the previous value of $I_Y$ serves as a starting point for the currently solved $I_Y$. This results in a greedy “no-regret" optimization method, which, in general, fails to converge to the exact IB curve.
However, in some special cases (such as the GIB), it can be shown that the optimal solution for a given $I_Y$ is, in fact, the optimal starting point for a smaller value of $I_Y$. In the general case, the previous solution is implicitly assumed to provide a reasonable local optimization domain. Since the reverse annealing was originally designed for discrete random variables, we apply discretization (via Gaussian quadratures) to our probability distributions in all of our experiments. We begin by revisiting the exponential model, described in Section \[multivariate examples\]. In this model, $X$ and $W$ are independent exponentially distributed random variables with a unit parameter. We define $Y=X+W$ so that $Y$ is Gamma distributed. As in Section \[multivariate examples\] we apply an invertible non-monotonic transformation to $X$ and $Y$, to make this problem more challenging. Since approximating the IB curve is already quite involved for continuous random variables, we limit our attention to the simplest univariate case. The plot on the left of Figure \[IB\_curves\] demonstrates the results we achieve. The black curve on top is the approximated IB curve, obtained using the reverse annealing procedure. The red curve on the bottom is a benchmark lower bound, achieved by simply applying the GIB to $X$ and $Y$, as if they were jointly Gaussian. The blue curve in the middle is our suggested Gaussian lower bound (Section \[AGCE\]). As we can see, our suggested bound surpasses the GIB quite remarkably. This is mainly due to the non-monotonic transformation we apply, which makes the joint distribution highly non-Gaussian. We further notice that our bound is quite tight for smaller $I_Y$’s (closer to the origin) but increasingly diverges as $I_Y$ increases. The reason is that more compressed representations are more “degenerate" and are easier to Gaussianize while maintaining reasonably high correlations. Next, we revisit the more challenging Gaussian mixture model, described in Section \[examples\_1\]. The right plot in Figure \[IB\_curves\] demonstrates the results we achieve. As before, we notice that our suggested lower bound surpasses the naive benchmark, while demonstrating favorable performance closer to the origin. Comparing the two models, we notice that the Gaussian mixture is more difficult to bound from below using our suggested method. This result is not surprising, given the gap in our ability to bound from below the mutual information in these two models, as discussed in Section \[multivariate examples\].

Discussion and conclusion
=========================

In this work we address the fundamental problem of normalizing non-Gaussian data, while trying to avoid loss of information. This allows us to solve complex problems by linear means, as we push information into the data’s second moments. We show that our ability to do so is strongly governed by the non-linear canonical correlations of the data. In other words, if the non-linear canonical coefficients of the data fail to maintain its mutual information, then it is impossible to describe its high-order dependencies by second-order statistics alone. This result is of high interest to a broad variety of applications, as solving non-linear problems by linear means is a common alternative in many scientific and engineering fields. Further, we provide a variety of methods to quantify the minimal amount of information that may be lost when normalizing the data.
We show that in many cases, our suggested approach is able to preserve a significant portion of the information, even for highly non-Gaussian joint distributions. Our results improve upon the information-geometric bound of [@cardoso2003dependence], as we show that a tighter bound may be obtained by the AGCE method. It is important to mention that while our suggested approach is theoretically sound, it exhibits several practical limitations in a finite sample-size setup. This is a direct result of our use of the ACE algorithm, which suffers from the curse of dimensionality when applied to high-dimensional data. Therefore, we further examine different non-linear CCA methods, which are less vulnerable to this problem. However, these methods fail to converge to the optimal canonical coefficients. Finally, we show that our results may be generalized to bound from below the entire information bottleneck curve. This provides a practical alternative to the various approximation methods and restrictive solutions of the involved IB problem in the continuous case. Our experiments show that the suggested Gaussian lower bound provides a meaningful benchmark to the IB curve, even in highly non-Gaussian setups.

Acknowledgments
===============

This research was supported by a Fellowship from the Israeli Center of Research Excellence in Algorithms to Amichai Painsky. The authors thank Nori Jacoby for early discussions on the subject.
--- abstract: 'Every halo finding algorithm must make a critical yet relatively arbitrary choice: it must decide which structures are parent halos, and which structures are sub-halos of larger halos. We refer to this choice as [*percolation*]{}. We demonstrate that the choice of percolation impacts the statistical properties of the resulting halo catalog. Specifically, we modify the halo-finding algorithm [[ROCKSTAR]{}]{} to construct four different halo catalogs from the same simulation data, each with identical mass definitions, but different choice of percolation. The resulting halos exhibit significant differences in both halo abundance and clustering properties. Differences in the halo mass function reach 10% for halos of mass $10^{13}\ {h^{-1}\ {\rm M_{\odot}}}$, larger than the few percent precision necessary for current cluster abundance experiments such as the Dark Energy Survey. Comparable differences are observed in the large-scale clustering bias, while differences in the halo–matter correlation function reach 40% on translinear scales. These effects can bias weak-lensing estimates of cluster masses at a level comparable to the statistical precision of current state-of-the-art experiments.' bibliography: - 'database.bib' title: Halo Exclusion Criteria Impacts Halo Statistics --- \[firstpage\] Introduction {#intro} ============ In the halo model the abundance and distribution of galaxies and clusters are linked to the abundance and distribution of dark matter halos [@Cooray-Sheth]. Predicting the properties of halos requires large computer simulations that map the matter distribution of the Universe. The output of simulations is then analyzed using a halo finder to find gravitationally bound dark matter structures. Every halo finding algorithm makes two critical yet relatively arbitrary choices. The first has received plenty of attention, and is the definition of halo mass. Halo mass is typically defined as the mass enclosed within some specific spherical aperture, chosen such that the mean density of the halo within that sphere is equal to some factor of either the critical density or the mass density of the Universe. However, other definitions are also commonly used (e.g. friends-of-friends) [see e.g. @Knebe2013]. For this reason, one can find calibrations of the halo mass function for multiple different halo mass definitions [@Tinker2008; @Bhattacharya2011; @McClintock2018]. The second arbitrary choice has received little attention to date, namely, how a halo finding algorithm decides which structures are parent halos, and which are sub-halos that “belong” to a larger halo. We refer to the criteria for categorizing structures as parent halos vs. sub-halos as [*percolation*]{} or [*exclusion criteria*]{}. There is currently no standard percolation scheme, with different halo finders applying different halo exclusion criteria when constructing halo catalogs. In this paper we show that the choice of percolation impacts the statistical properties of the resulting halo population at a non-negligible level. To do so, we modify [[ROCKSTAR]{}]{} [@Behroozi2013], a state-of-the-art halo finding algorithm, to generate halo catalogs with identical mass definitions, but different halo exclusion criteria. For each such halo catalog we measure the halo mass function, correlation function, and projected density profiles. 
By comparing these properties of the resulting halo catalogs to the properties of the fiducial [[ROCKSTAR]{}]{} catalog we quantify the level of systematic uncertainty in current theoretical predictions associated with the choice of percolation algorithm implemented in the construction of the catalog. Simulation Data {#sec:simdata} =============== We use a cosmological N-body simulation run using the publicly available code `GADGET2` [@Springel2005]. This simulation is similar to the simulations used for the Aemulus project [@deroseetal2018]. Specifically, the simulation has a box size of $1050\ {h^{-1}\ {\rm Mpc}}$ with $1400^3$ particles and was run using periodic boundary conditions with a force softening scale of $20\ h^{-1} {\rm kpc}$. The cosmology of the simulation we use is $h=0.6704$, $\Omega_m=0.318$, $\Omega_{\Lambda}=0.682$, $\Omega_b=0.049$, $\sigma_8=0.835$, $n_s=0.962$. The particle mass is $3.7275 \times 10^{10} \ {h^{-1}\ {\rm M_{\odot}}}$. Since the goal of this study is to highlight the under-appreciated impact of halo exclusion on halo statistics, a single simulation suffices for our purposes. We compare the statistics of halo catalogs generated from the same simulation box, using the same halo mass definition, namely [$M_{200\rm{m}}$]{}, but different halo exclusion criteria. The specific statistics we consider are: - the halo mass function, - the large-scale halo clustering bias, - the halo–mass correlation function, - and the halo–halo correlation function. The different halo catalogs are created by modifying the publicly available code [[ROCKSTAR]{}]{} [@Behroozi2013]. [[ROCKSTAR]{}]{} uses a friends-of-friends algorithm in 6 dimensional phase space to find seed dark matter structures. It then iteratively assigns particles in the simulation to each seed group, merging seeds into a single halo when the separation between halos is sufficiently small [equation 2 in @Behroozi2013]. [[ROCKSTAR]{}]{} then removes all unbound particles from each halo, and computes the spherical mass and radius as defined using a virial overdensity criteria. Specifically, the mass $M_\Delta$ and spherical radius $R_\Delta$ associated with each halo are selected such that they satisfy the constraint equation $$\frac{4}{3} \pi R_{\Delta}^3 \Delta {\bar{\rho}}_m = M_{\Delta}.$$ Here, $M_\Delta$ is the mass contained within the radius $R_\Delta$, and $\Delta$ is the overdensity calculated using the spherical collapse model of @BryanNorman1998. While the virial overdensity criterion is the default for [[ROCKSTAR]{}]{}, given the final halo catalog one can readily recompute strict spherical overdensity masses, i.e. masses using all particles, without any unbinding procedure. In our work, we always define halo mass using strict spherical overdensity masses with a fixed overdensity threshold of $\Delta= 200$ relative to the mean matter density. Finally, [[ROCKSTAR]{}]{} percolates the seed structure catalog to generate a final halo catalog by determining which seed structures are subhalos of the parent halo centered on a larger substructure. The classification of a seed structure as a halo or a subhalo is dependent upon the phase-space distance of the seed structure to all larger seeds, and incorporates information from the previous time-step when available. It is the impact of this percolation step on the statistical properties of the resulting halo catalog which we investigate in this work. 
Percolation Schemes {#percolations}
===================

![image](figs/perc1.png){width="60mm"} ![image](figs/perc2.png){width="60mm"} ![image](figs/perc3.png){width="60mm"}\
$x_{12} < R_{200} (M_1)$ $x_{12} < R_{200} (M_1 + M_2)$ $x_{12} < R_{200} (M_1) + R_{200} (M_2)$

We create alternate halo catalogs starting from the seed structures identified by [[ROCKSTAR]{}]{} by changing the default percolation in the code. First, we trim the list of seed structures to those above a mass threshold of $M_{200m}\geq 10^{12.5}\ {h^{-1}\ {\rm M_{\odot}}}$ ($\sim 300$ particles). Seed structures are then classified as halos or sub-halos using a simple spherical exclusion criterion. Specifically, we rank order all seed structures according to their maximum circular velocity, defined as the maximum of the circular velocity profile $$V(r) = [GM(<r)/r]^{1/2}.$$ Initially, all seed structures are considered candidate halos. Starting from the top-ranked (largest) candidate halo, we apply a spherical exclusion criterion to identify substructures of the halo centered on the top-ranked seed structure. Specifically, given two structures of mass $M_1$ and $M_2$, the two structures are considered to fall within the same parent halo if the separation between the two structures $x_{12}$ satisfies $$x_{12} \equiv |\bm{r}_1 - \bm{r}_2| \leq d(M_1,M_2) ,$$ where $d$ is the halo exclusion function. All seed structures identified as substructures of a larger parent halo are removed from the candidate halo list, and the procedure is iterated with the next highest-ranked candidate halo until no more candidate halos remain. We consider three different choices for halo exclusion:

1. **Soft-sphere halo exclusion:** Two seed structures are considered to be in the same parent halo if their separation is less than the radius of the larger structure, i.e. $d(M_1,M_2) = R_{200}\left( \max(M_1,M_2) \right)$. This is the halo exclusion criterion used in @Tinker2008.

2. **Point-mass exclusion:** Two seed structures are considered to be in the same parent halo if their separation is less than the radius of a structure of mass $M_1+M_2$, i.e. $d(M_1,M_2) = R_{200}(M_1+M_2)$. This halo exclusion criterion self-consistently enforces strict spherical overdensity mass definitions and exclusion when the halos can be approximated as point particles.

3. **Hard-sphere exclusion:** Two seed structures are considered to be in the same parent halo if the spherical volumes associated with each structure overlap at all, i.e. $d(M_1,M_2) = R_{200}(M_1)+R_{200}(M_2)$.

The three percolation schemes are illustrated in Fig. \[fig:percschemes\]. There we apply each of our proposed percolations to the same set of halo seeds in an illustrative example in which a large mass halo is surrounded by several smaller halo seeds. The boundaries we draw are circles of radius $R_{200m} (M)$. The large halo is the top-ranked parent halo. The classification of the rest of the halo seeds as halos or subhalos depends on the percolation scheme: soft-sphere (left), point-mass exclusion (center), or hard-sphere exclusion (right). The crossed seeds are removed from the final halo catalog and are identified as substructures of the bigger halo in each of the percolation schemes. In terms of the number of halos removed from the candidate halo list, the soft-sphere scheme (left) is the most conservative, and the hard-sphere scheme (right) is the most aggressive.
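To make the exclusion step concrete, the following is a minimal Python sketch of the spherical percolation loop described above, written for clarity rather than speed. The function names, the brute-force pair search, and the neglect of the periodic box boundaries are our own simplifications and are not taken from the [[ROCKSTAR]{}]{} implementation; only the ranking by maximum circular velocity and the three exclusion functions $d(M_1,M_2)$ follow the text.

```python
import numpy as np

RHO_MEAN = 0.318 * 2.77537e11      # Omega_m * rho_crit [h^2 Msun Mpc^-3], per the simulation cosmology

def r200m(m):
    """R_200m [Mpc/h] for a mass m [Msun/h], from (4/3) pi R^3 (200 rho_m) = M."""
    return (3.0 * m / (4.0 * np.pi * 200.0 * RHO_MEAN)) ** (1.0 / 3.0)

# The three exclusion functions d(M1, M2) listed above.
soft_sphere = lambda m1, m2: r200m(np.maximum(m1, m2))
point_mass  = lambda m1, m2: r200m(m1 + m2)
hard_sphere = lambda m1, m2: r200m(m1) + r200m(m2)

def percolate(pos, m200, vmax, d_excl):
    """Split seed structures into parent halos and subhalos.

    pos: (N, 3) positions [Mpc/h]; m200: (N,) masses [Msun/h];
    vmax: (N,) maximum circular velocities, used only for the ranking;
    d_excl: one of the exclusion functions above.
    Returns a boolean mask that is True where a seed survives as a parent halo.
    Note: periodic wrapping of the simulation box is omitted for brevity.
    """
    order = np.argsort(vmax)[::-1]            # highest Vmax first
    is_parent = np.ones(len(m200), dtype=bool)
    for rank, i in enumerate(order):
        if not is_parent[i]:
            continue                          # already absorbed by a larger seed
        lower = order[rank + 1:]              # only lower-ranked seeds can be absorbed
        sep = np.linalg.norm(pos[lower] - pos[i], axis=1)
        is_parent[lower[sep <= d_excl(m200[i], m200[lower])]] = False
    return is_parent
```

Applying `percolate` with each of the three exclusion functions to the same trimmed seed catalog then yields the three alternate catalogs whose statistics are compared in the following section.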
Impact on halo statistics {#sec:results} ========================= We characterize the impact of the percolation scheme used to generate the halo catalog on four different halo statistics: the halo mass function, the halo–matter correlation function, the halo–halo correlation function, and the large-scale clustering bias. In addition to the three spherical exclusion criteria defined above, we also considered the default [[ROCKSTAR]{}]{} halo catalog. In all cases, we use strict spherical overdensity masses to define the mass of a halo. Because the mass definition itself is constant, any differences in the statistics of the four halo catalogs we generated must be the direct result of the different percolation schemes. Halo Mass Function {#sec:hmf} ------------------ ![The halo mass function for distinct percolation schemes. The choice of percolation has a significant impact on halo abundance. **Top**: Halo mass functions. **Bottom:** Fractional difference of the halo mass functions with respect to the fiducial. The shaded regions represent 68% confidence intervals as determined by jackknifing.[]{data-label="fig:hmf"}](figs/dndm.pdf){width="90mm"} Figure \[fig:hmf\] compares the halo mass functions of the halo catalogs generated using each of the four different percolation algorithms (fiducial, soft-sphere, point-mass, and hard-sphere). The lower panel show the fractional difference in the halo mass function relative to the fiducial percolation. The impact of percolation is clearly negligible at the high-mass end, but can become significant at low halo masses. This makes sense. Since high mass halos dominate their environment, the impact of percolation on these halos is negligible: these halos are never assigned as sub-halos of more massive systems. By contrast, the more aggressive percolation schemes remove small halos from the immediate vicinity of large halos, thereby suppressing the resulting halo abundance at low masses. The largest difference is that between the fiducial [[ROCKSTAR]{}]{} percolation algorithm and the hard-sphere exclusion, reaching $\approx 10\%$ (5%) differences for mass $M\sim 10^{13}\ {h^{-1}\ {\rm M_{\odot}}}$ ($M\sim 10^{14}\ {h^{-1}\ {\rm M_{\odot}}}$) halos. The differences illustrated in Figure \[fig:hmf\] are larger than the $\approx 1\%$ precision necessary for stage III dark energy experiments such as the Dark Energy Survey, and significantly larger than the precision reached by current halo–mass function emulators [e.g. @McClintock2018]. Evidently, while we can make very precise predictions for the halo mass function given a halo-finding algorithm, it is clear that the choice of percolation introduces a significant amount of systematic uncertainty in our predictions. Moreover, this level of systematic uncertainty is irreducible so long as halos in simulations are percolated in a way that is different from the way clusters are percolated in real data sets. Our results demonstrate that implementing identical percolation algorithms across both simulated halos and real clusters is necessary for stage III and IV dark energy experiments. Halo–Mass Correlation Function ------------------------------ ![image](figs/xihm.pdf){width="180mm"} Figure \[fig:xihm\] shows the halo–mass correlation function measured for each of our halo catalogs (top row), and the relative difference in the halo–mass correlation functions relative to that measured in our fiducial catalog (bottom row). 
The left and right columns correspond to halos with masses in the range $[3\times 10^{12}, 5\times 10^{12}] {h^{-1}\ {\rm M_{\odot}}}$ and $[2\times 10^{14}, 5\times 10^{14}] {h^{-1}\ {\rm M_{\odot}}}$ respectively. We plot data for these same mass bins throughout the rest of this paper. We find that aggressive halo exclusion criteria lead to suppression of the halo–mass correlation function $\xi_{hm}$. There is an obvious large feature on translinear scales ($\sim 1\ {h^{-1}\ {\rm Mpc}}$ to a few ${h^{-1}\ {\rm Mpc}}$), along with a constant change in the clustering amplitude at large scales. The large (up to 40% difference) feature at translinear scales makes sense: the more aggressive exclusion criteria remove low-mass halos in the vicinity of high-mass halos. Consequently, the amount of mass in the immediate neighborhood of the remaining low-mass halos is suppressed, leading to a large decrease in the halo–mass correlation function. The typical length scale associated with this effect is the exclusion radius of the largest dark matter halos. The fact that there is an overall offset in the clustering amplitude of halos at large scales may seem surprising at first sight. However, this too is easily explained. A stronger halo exclusion region removes more small halos from the vicinity of large halos. Since large halos live in high density regions, the surviving low-mass halos must necessarily be less clustered. As for the halo mass function, these trends are more pronounced for low-mass halos than they are for high-mass halos, and for the same reason: high mass halos dominate their environment, and are therefore rarely removed through percolation. We characterize the large-scale clustering amplitude in terms of the halo bias, defined as the ratio of the halo–matter cross correlation function and the matter auto-correlation function $$b(r|M) = \frac{\xi_{hm} (r|M)} {\xi_{mm} (r)}$$ where $M$ is the mass of the halo. On large scales ($r \geq 10\ {h^{-1}\ {\rm Mpc}}$), the halo bias is approximately constant, as expected. We fit a constant bias model to the data in the radial range $10 < r < 80\ {h^{-1}\ {\rm Mpc}}$ to arrive at our final value for the large-scale bias $b(M)$ in each of our four halo catalogs. Interestingly, we find that the ratio $\xi_{hm}/\xi_{\rm lin}$, where $\xi_{\rm lin}$ is the linear correlation function, is [*not*]{} constant over the same scales. That is, the linear-bias approximation is valid down to significantly smaller scales provided the reference clustering function is the matter correlation function rather than the linear correlation function. The left panel in Figure \[fig:biashmdif\] shows the fractional difference of the large-scale bias between our different halo catalogs and the fiducial [[ROCKSTAR]{}]{} catalog. The data points show the relative bias as measured using the halo–halo correlation function (see next section for details), whereas the colored bands represent our measurements using the halo–mass correlation function. The width of the band is set by the error in our measurement. As expected, the bias of the high mass halos is largely insensitive to percolation effects, whereas the bias of low-mass ($M\sim 10^{13}\ {h^{-1}\ {\rm M_{\odot}}}$) halos changes by as much as $\approx 8\%$. The large scale clustering amplitude of cosmological objects is often used as a way to estimate the mass of the halos hosting those objects [e.g. @Robertson2010; @Mountrichas2016].
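As an illustration of how the constant-bias fit and the bias-matched mass estimate discussed here and below might be computed, a minimal sketch follows. The unweighted average over radial bins and the function names are our simplifications; the measurement in the paper could, for example, weight the bins by their jackknife errors.

```python
import numpy as np

def large_scale_bias(r, xi_hm, xi_mm, rmin=10.0, rmax=80.0):
    """Constant-bias fit b = xi_hm / xi_mm over the quoted radial range.

    r [Mpc/h]; xi_hm and xi_mm are binned correlation functions on the same grid.
    Returns the fitted bias and a simple scatter-based uncertainty.
    """
    sel = (r > rmin) & (r < rmax)
    ratio = xi_hm[sel] / xi_mm[sel]
    return ratio.mean(), ratio.std(ddof=1) / np.sqrt(sel.sum())

def mass_from_bias(b_obs, mass_grid, bias_grid):
    """Invert a tabulated, monotonically increasing b(M) relation to obtain the
    halo mass implied by an observed large-scale clustering amplitude."""
    return np.interp(b_obs, bias_grid, mass_grid)
```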
The dependence of the clustering amplitude on the choice of percolation algorithm demonstrates that these types of estimates can be subject to large systematic uncertainties. To illustrate this, we consider a class of cosmological objects hosted in halos of mass $M$ as defined using the [[ROCKSTAR]{}]{} percolation algorithm. We calculate the clustering amplitude of these halos, and then use the $b(M)$ relation for the halos in each of our four halo catalogs to infer the corresponding halo mass. Figure \[fig:biasmatching\] shows the bias in the inferred halo masses for each of our four halo catalogs. We see that the choice of percolation algorithm can bias the inferred halo masses by as much as 40% for halos of mass $M\approx 10^{13}\ {h^{-1}\ {\rm M_{\odot}}}$. The translinear regime of the halo correlation functions has long been difficult to model, requiring ad-hoc parameterizations that are then calibrated in numerical simulations [e.g. @Surhud2013]. Such an approach is likely sufficient within the context of modeling galaxy–galaxy clustering for cosmological purposes, though we caution that verifying the robustness of the modeling against simulations populated using a different halo definition would be worthwhile. At the very least, inferences about how galaxies populate halos will necessarily be impacted by the differences highlighted above. The sensitivity to halo definition will be even more problematic for cosmological studies that rely on the halo statistics directly, e.g. cluster abundance studies. We emphasize again that the work here is not meant to *calibrate* this effect, but rather to demonstrate its existence. Calibrations for observational studies must be specifically tailored to the observational methodologies employed. Finally, our results bear some impact on the location of the splashback radius as found both in simulations and data. Specifically, the sharp steepening in the halo–mass correlation that occurs at the translinear regime has been identified with the splashback radius, the distance to the apocenter of dark matter substructures falling into a dark matter halo after their first pericenter passage [@adhikari2014]. This splashback radius has been proposed as a physical definition for the halo boundary [@diemerkravtsov14; @More2015]. As demonstrated in this work, the steepening feature of the stacked halo profiles for halos of a given mass can be moved by a change in the choice of halo percolation. This is not in itself problematic: in changing the population of halos being stacked, the distribution of mass accretion rates of the resulting halos will likely change, which in turn will move the average splashback radius [@More2015]. It does demonstrate, however, that calibration of the splashback radius via halo stacking is prone to systematics arising from the choice of halo percolation. This is particularly true for measurements of the splashback radius based on observationally selected cluster samples [e.g. @moreetal2016; @buschwhite17; @umetsuetal2017; @baxteretal2017; @shinetal2018; @zuercheretal2018; @changetal2018; @contigianietal2019]. Splashback measurements based on the analysis of particle orbits are, of course, free of such systematics [@diemer2017; @diemeretal2017]. ![Fractional difference between the large scale bias measured by each percolation scheme and the fiducial scheme. Points with error bars represent the bias measured using the halo–halo correlation functions. Colored regions show the bias measured using the halo–mass correlation functions.
It is reassuring that the two bias measurements are in excellent agreement with each other.[]{data-label="fig:biashmdif"}](figs/biashhdif.pdf){width="90mm"} Halo–Halo Correlation Function ------------------------------ We computed the halo auto-correlation functions for the same mass bins for which we calculated the halo–mass correlation function. As before, the auto correlation functions exhibit an overall decrease in clustering amplitude for more aggressive halo exclusion criteria. As in the case for the halo–mass correlation function, we see the appearance of features in the translinear regime, though these features are less apparent than for the case of the halo–mass correlation function: even the translinear feature in the autocorrelation of our lowest mass bins has an amplitude of only $\sim 10\%$. Sample halo–halo correlation function plots are shown in Appendix \[app:plots\]. In a way analogous to the halo–mass correlation function, we can define the large scale halo bias via $$b^2(r|M) = \frac{\xi_{hh} (r|M)} {\xi_{mm} (r)}.$$ We fit a constant halo bias model over the same radial range as employed in our analysis of the halo–mass correlation function ($r\in[10,80]\ {h^{-1}\ {\rm Mpc}}$). The data points in the left panel of Figure \[fig:biashmdif\] show the change in the clustering bias relative to our fiducial measurement for each of our four halo catalogs. We see that the change in the clustering bias amplitude is consistent across the halo–mass and halo–halo correlation function measurements. ![Bias in the inferred halo masses of our four catalogs by using large scale clustering amplitude measurements to infer halo masses. In each case, the clustering amplitude is set by the clustering of the fiducial [[ROCKSTAR]{}]{} halos. The observed amplitude is then mapped to a new halo mass using each of the alternative halo catalogs in turn. It is clear that the choice of percolation algorithm can severely impact the inferred halo mass.[]{data-label="fig:biasmatching"}](figs/bias_matching.pdf){width="90mm"} Summary and Conclusions ======================= We have shown that halo exclusion criteria impact halo statistics. Specifically, we modified the percolation of the [[ROCKSTAR]{}]{} halo finding algorithm to generate four different halo catalogs with four different halo exclusion criteria, but identical mass definitions. We then measured the halo mass functions, halo-matter and halo-halo correlation functions, and large-scale clustering bias of the resulting catalogs. We compared these statistics to the halo statistics of the fiducial halo catalog to quantify the level of uncertainty that the thus-far arbitrary choice of halo percolation scheme introduces in halo clustering statistics. We find: - The choice of halo exclusion criteria introduces a significant amount of systematic uncertainty on the halo mass function. The largest difference observed in this work was $\approx 10\%$ at a halo mass scale $M\sim 10^{13} {h^{-1}\ {\rm M_{\odot}}}$. This value corresponds to the difference between the fiducial and hard-sphere percolation schemes. Consequently, it is of critical importance for future work on cluster abundances to implement identical exclusion criteria in both theory and simulations, particularly as the low mass threshold for cluster detection gets progressively lower. Notably, [*none*]{} of the current choices of halo percolation can be implemented observationally. - The choice of percolation impacts the halo-matter correlations in two ways. 
At intermediate ($\approx 1\ {h^{-1}\ {\rm Mpc}}$) scales, large ($\approx 40\%$) relative differences in the halo–mass correlation function of the different halo catalogs arise. In addition, at large scales we see an offset in the large scale clustering bias. We demonstrate that halo mass estimates based on the clustering amplitude of a set of cosmological objects can be biased by as much as 40% due to the choice of percolation used when calibrating the bias–mass relation for halos. The differences in the predicted halo–mass correlation functions will necessarily propagate into the predicted weak lensing profiles of the resulting halo population, leading to further sources of systematic uncertainty impacting cluster abundance studies, a systematic which we intend to quantify in future work. It is worth noting that while these differences are similar in spirit to differences associated with halo mass definition, a “right” answer would naively appear to be more elusive. Within the context of halo mass definitions, the splashback radius [e.g. @More2015] is now a leading contender for the “right” radius at which to define the halo edge, naturally leading one to define halo mass as the mass contained within the splashback radius of a halo. By contrast, no similar leading candidate exists within the context of percolation schemes. We note, however, that the point–mass exclusion criterion adopted in this work is clearly the one closest in spirit to that of a strict spherical-overdensity mass definition. We will investigate in future work whether one can define objective quantitative criteria that might lead one to select one exclusion criterion over another within the context of specific science goals.

[*Acknowledgements:*]{} ER is supported by DOE grant DE-SC0015975 and the Cottrell Scholar Award program. We would also like to thank Risa Wechsler, Joe de Rose, and Matt Becker for making the simulation used in this study available to us, Tom McClintock for technical help throughout the project, and Surhud More, Matt Becker, and Tom McClintock for comments on an early version of this manuscript.

Sample Halo–Halo Correlation Function Plots {#app:plots}
===========================================

Figure \[fig:xihh\] shows the halo–halo auto and cross correlation functions for the two mass bins used throughout the paper, as labeled. These plots are included here for completeness. ![image](figs/xihh.pdf){width="180mm"} \[lastpage\]
--- abstract: | What does it mean for a clustering to be fair? One popular approach seeks to ensure that each cluster contains groups in (roughly) the same proportion in which they exist in the population. The normative principle at play is balance: any cluster might act as a representative of the data, and thus should reflect its diversity. But clustering also captures a different form of representativeness. A core principle in most clustering problems is that a cluster center should be representative of the cluster it represents, by being “close" to the points associated with it. This is so that we can effectively replace the points by their cluster centers without significant loss in fidelity, and indeed is a common “use case" for clustering. For such a clustering to be fair, the centers should “represent" different groups equally well. We call such a clustering a group-representative clustering. In this paper, we study the structure and computation of group-representative clusterings. We show that this notion naturally parallels the development of fairness notions in classification, with direct analogs of ideas like demographic parity and equal opportunity. We demonstrate how these notions are distinct from and cannot be captured by balance-based notions of fairness. We present approximation algorithms for group representative $k$-median clustering and couple this with an empirical evaluation on various real-world data sets. author: - | Mohsen Abbasi\ School of Computing\ University of Utah\ `[email protected]`\ - | Aditya Bhaskara\ School of Computing\ University of Utah\ `[email protected]`\ - | Suresh Venkatasubramanian\ School of Computing\ University of Utah\ `[email protected]`\ bibliography: - 'refs.bib' title: Fair clustering via equitable group representations ---
--- abstract: 'We report on new imaging observations of the Lyman alpha emission line ([Ly$\alpha$]{}), performed with the Hubble Space Telescope, that comprise the backbone of the *Lyman alpha Reference Sample* (LARS). We present images of 14 starburst galaxies at redshifts $0.028 < z < 0.18$ in continuum-subtracted [Ly$\alpha$]{}, [H$\alpha$]{}, and the far ultraviolet continuum. We show that [Ly$\alpha$]{} is emitted on scales that systematically exceed those of the massive stellar population and recombination nebulae: as measured by the Petrosian 20 percent radius, [$R_{\mathrm{P}20}$]{}, [Ly$\alpha$]{} radii are larger than those of [H$\alpha$]{} by factors ranging from 1 to 3.6, with an average of 2.4. The average ratio of [Ly$\alpha$]{}-to-FUV radii is 2.9. This suggests that much of the [Ly$\alpha$]{} light is pushed to large radii by resonance scattering. Defining the *Relative Petrosian Extension* of [Ly$\alpha$]{}compared to [H$\alpha$]{}, [$\xi_{\mathrm{Ly}\alpha}$]{}= [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{}/[$R_{\mathrm{P}20}^{\mathrm{H}\alpha}$]{}, we find [$\xi_{\mathrm{Ly}\alpha}$]{} to be uncorrelated with total [Ly$\alpha$]{} luminosity. However [$\xi_{\mathrm{Ly}\alpha}$]{} is strongly correlated with quantities that scale with dust content, in the sense that a low dust abundance is a necessary requirement (although not the only one) in order to spread [Ly$\alpha$]{} photons throughout the interstellar medium and drive a large extended [Ly$\alpha$]{} halo.' author: - 'Matthew Hayes, Göran Östlin, Daniel Schaerer, Anne Verhamme, J. Miguel Mas-Hesse, Angela Adamo, Hakim Atek, John M. Cannon, Florent Duval, Lucia Guaita, E. Christian Herenz, Daniel Kunth, Peter Laursen, Jens Melinder, Ivana Orlitov[á]{}, Héctor Otí-Floranes, and Andreas Sandberg' title: 'The Lyman alpha Reference Sample: Extended Lyman alpha Halos Produced at Low Dust Content' --- Introduction ============ The Lyman alpha emission line ([Ly$\alpha$]{}), emitted by the spontaneous de-excitation over the $n=2\rightarrow 1$ electronic transition in neutral hydrogen ([H[i]{}]{}), is now an established observational probe of evolving galaxies in the high-$z$ Universe [@Cowie1998; @Rhoads2000]. Exploitation of [Ly$\alpha$]{}has resulted in significant galaxy surveys [@Ouchi2008; @Nilsson2009survey; @Guaita2010; @Adams2011], the next generations of which will recover vast numbers of galaxies. However the [H[i]{}]{} abundance in most galaxies, combined with the large [Ly$\alpha$]{} absorption cross section of ground-state hydrogen, suggests that most [Ly$\alpha$]{} will be absorbed and re-scattered by the same transition that created it. Thus most [Ly$\alpha$]{} photons are thought to be subject to multiple scattering events as they encounter neutral gas, resulting in a complicated radiative transport [@Neufeld1990; @Verhamme2006; @Laursen2009b]. ![image](f1.eps) Because [H[i]{}]{} is often found at distances that exceed the size of stellar disks and star-forming regions [@Yun1994; @Meurer1996; @Cannon2004], characteristic [Ly$\alpha$]{} scale lengths may be expected to be substantially larger than those of, for example, the FUV continuum or [H$\alpha$]{}. Indeed this has been well observed at high $z$ (e.g. @Fynbo2001 [@Rauch2008; @Steidel2011], although see also @Feldmeier2013) and low [@Mas-Hesse2003; @Ostlin2009], and studied extensively by simulation [@Laursen2009b; @Barnes2010; @Zheng2011; @Verhamme2012]. In this *Letter* we present images from the *Lyman alpha Reference Sample* (LARS). 
The LARS program (Östlin et al., in prep; Hayes et al., in prep) is targeting 14 UV-selected star-forming galaxies in the nearby Universe, all of which have been imaged in [Ly$\alpha$]{}, [H$\alpha$]{}, [H$\beta$]{}, and five UV/optical continuum bands. Many other observations, both in hand and ongoing, are providing gas covering fractions and kinematics, and measuring the [H[i]{}]{} mass and extent directly. HST imaging allows us to probe spatial scales down to 28 pc in individual galaxies, quantify the extent of [Ly$\alpha$]{}, and compare it with other wavelengths and derived properties. This *Letter* discusses the extension of [Ly$\alpha$]{} radiation. In Section \[sect:data\] we briefly summarize the data and show the new images. In Section \[sect:extent\] we quantify the sizes of the galaxies in [Ly$\alpha$]{}, FUV, and [H$\alpha$]{}, and discuss them with reference to high-$z$ measurements in Section \[sect:highz\]. In Section \[sect:corr\] we show how a low dust content seems to be a necessary prerequisite in order to produce this extended emission. We assume a cosmology of $(H_0, \Omega_\mathrm{M}, \Omega_\Lambda) = (70~\mathrm{km~s}^{-1}~\mathrm{Mpc}^{-1}, 0.3, 0.7)$. LARS Images {#sect:data} =========== LARS consists of 14 star-forming galaxies selected by FUV luminosity from the GALEX all-sky surveys, and imaged with Hubble Space Telescope cameras ACS/SBC, ACS/WFC, and WFC3/UVIS. The sample selection, observations, and data processing are described in detail in Östlin et al. (in prep). FUV luminosities range between $\log (L_\mathrm{FUV}/L_\odot) = 9.2$ and 10.7, overlapping much of the luminosity range of Lyman-break Galaxy (LBG) surveys, and are listed in Table \[tab:quants\]. We use the *Lyman alpha eXtraction software* ([LaXs]{}, @Hayes2009) to produce continuum-subtracted [Ly$\alpha$]{} and [H$\alpha$]{} images, corrected for underlying stellar absorption and contamination from [\[N[ii]{}\]]{}. In 1 arcsec square boxes away from the targets we measure r.m.s. background noise of $5.7 \times 10^{-19}$ [erg s$^{-1}$ cm$^{-2}$]{} in [Ly$\alpha$]{}, $2.1 \times 10^{-21}$ [erg s$^{-1}$ cm$^{-2}$ Å$^{-1}$]{} in the FUV, and $6.8 \times 10^{-19}$ [erg s$^{-1}$ cm$^{-2}$]{} in [H$\alpha$]{}. Total [Ly$\alpha$]{} luminosities range from 0 (non-detection) and $2 \times 10^{43}$ [erg s$^{-1}$]{}with a median of $8.1\times 10^{41}$ [erg s$^{-1}$]{}; roughly seven of the objects would be recovered by the deepest [Ly$\alpha$]{} surveys (Hayes et al. in prep). We present our first imaging results in this paper as a series of RGB composite images in Figures \[fig:rgb1\] and \[fig:rgb2\]. In green we encode the far UV continuum, which traces the unobscured massive stars, and roughly incorporates the sites that produce the ionizing photons. In the red we show continuum-subtracted [H$\alpha$]{}, which traces the nebulae where the aforementioned ionizing photons are reprocessed into the recombination line spectrum. The continuum subtracted [Ly$\alpha$]{} observation is encoded in blue. The images have been adaptively smoothed using a variable Gaussian kernel (`FILTER/ADAPTIVE` in `ESO/MIDAS`), in order to enhance positive regions of low surface brightness emission. The intensity scaling of all the images is logarithmic, and the levels are set to show the maximum of structure and the level at which the faintest features fade into the background. ![image](f2.eps) Immediately it can be seen that [Ly$\alpha$]{} morphologies bear limited resemblance to those of the FUV and [H$\alpha$]{}. 
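Purely as an illustration of how such composites can be assembled, the short sketch below builds a logarithmically scaled RGB image from three registered, continuum-subtracted maps. The band-to-channel assignment follows the description above, while the function names, the use of matplotlib, and the per-band display limits are our own choices (the adaptive smoothing step is omitted).

```python
import numpy as np
import matplotlib.pyplot as plt

def log_stretch(img, vmin, vmax):
    """Map an image onto [0, 1] with a logarithmic stretch between vmin and vmax.
    vmin must be positive; pixels outside the range are clipped."""
    clipped = np.clip(img, vmin, vmax)
    return (np.log10(clipped) - np.log10(vmin)) / (np.log10(vmax) - np.log10(vmin))

def rgb_composite(halpha, fuv, lya, limits):
    """Assemble an RGB composite: Halpha in red, the FUV continuum in green,
    and continuum-subtracted Lyman alpha in blue. `limits` maps each band to
    the (faintest, brightest) levels used for the logarithmic scaling."""
    rgb = np.dstack([log_stretch(halpha, *limits["ha"]),
                     log_stretch(fuv, *limits["fuv"]),
                     log_stretch(lya, *limits["lya"])])
    plt.imshow(rgb, origin="lower")
    plt.axis("off")
    return rgb
```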
In some cases [Ly$\alpha$]{} appears to be almost completely absent: LARS04 and 06 in particular show only small hints of [Ly$\alpha$]{} emission that contribute negligibly towards filling in the global absorption, and the composites are dominated by UV and [H$\alpha$]{} light. [Ly$\alpha$]{} is strongly absorbed, particularly in the central regions of these objects. Others show copious [Ly$\alpha$]{} emission and reveal morphological structures that are not seen at other wavelengths. Most obviously, LARS01, 02, 05, 07, 12, and 14 show large-scale halos of [Ly$\alpha$]{} emission that completely encompass the star-forming regions, although the same phenomenon is visible to some extent in all the objects, even the absorbers. We have discussed this extended [Ly$\alpha$]{} emission in depth in the past [@Hayes2005; @Hayes2007; @Atek2008; @Ostlin2009]. However now, with an observational setup that is more sensitive to faint levels of [Ly$\alpha$]{} and a larger, UV-selected sample (Östlin et al., in prep), we are able to robustly quantify and contrast these sizes and the relative extension of [Ly$\alpha$]{}.

Apertures, Sizes and Global Quantities {#sect:extent}
======================================

In order to quantify the sizes of the galaxies at various wavelengths, we adopt the Petrosian radius [@Petrosian1976] with an index of $\eta=0.2$: i.e. the radius, $R$, at which the local surface brightness is 20 percent of the average surface brightness inside $R$. In Hayes et al. (in prep) we will show the [Ly$\alpha$]{} extent of some objects to be so large that ACS/SBC cannot capture the full flux, and hence measurements like the 50 percent light radius are not robust. Indeed Petrosian radii were developed to be depth-independent measures of size. We note from experimentation, however, that very similar conclusions are reached using other definitions. The choice of $\eta=0.2$ gives a size for every [Ly$\alpha$]{}-emitting galaxy in the sample except LARS09, for which even at the full extent of the SBC we do not come close to crossing the $\eta=0.2$ threshold. We reach the edge of the detector at $\eta \sim 1$ ($R>12$ kpc) and can expect the true extent of [Ly$\alpha$]{} to be much larger. For the 11 galaxies in which [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{} is well measured, its determination is robust, and would not change were the observations deeper or the field-of-view larger. [$R_{\mathrm{P}20}$]{} is computed for [Ly$\alpha$]{}, [H$\alpha$]{}, and the FUV continuum, and listed in Table \[tab:quants\]. Based upon aperture-matched [H$\alpha$]{} and [H$\beta$]{} imaging and standard Case B assumptions, we recover up to 60 % of the intrinsic [Ly$\alpha$]{} flux, although the median value is just $\sim 3$ % (Hayes et al. in prep).
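For reference, a minimal sketch of how a Petrosian radius can be measured from an azimuthally averaged surface-brightness profile is given below; the annulus-based estimate of the local surface brightness and the linear interpolation across the crossing are our simplifications of what is, in practice, measured directly from the two-dimensional images.

```python
import numpy as np

def petrosian_radius(r, sb, eta=0.2):
    """Radius where the local surface brightness drops to eta times the mean
    surface brightness interior to that radius.

    r  : outer radii of the annuli, monotonically increasing
    sb : azimuthally averaged surface brightness in each annulus
    Returns np.nan if the eta threshold is never crossed (cf. LARS09 in Lya).
    """
    area = np.pi * r**2
    ann_area = np.diff(np.concatenate([[0.0], area]))
    mean_sb = np.cumsum(sb * ann_area) / area        # mean SB inside each radius
    ratio = sb / mean_sb
    below = np.flatnonzero(ratio <= eta)
    if below.size == 0:
        return np.nan                                # threshold never reached
    i = below[0]
    if i == 0:
        return float(r[0])
    # interpolate linearly between the two annuli that bracket the crossing
    return float(np.interp(eta, [ratio[i], ratio[i - 1]], [r[i], r[i - 1]]))

# The relative Petrosian extension is then simply
# xi = petrosian_radius(r, sb_lya) / petrosian_radius(r, sb_halpha).
```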
| LARS | Name | R.A. (J2000) | Dec. (J2000) | $z$ | $\log (L_{\mathrm{FUV}}/L_\odot)$ | [$R_{\mathrm{P}20}^{\mathrm{FUV}}$]{} [kpc] | [$R_{\mathrm{P}20}^{\mathrm{H}\alpha}$]{} [kpc] | [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{} [kpc] | $R_{\mathrm{max}}$ [kpc] | [$\xi_{\mathrm{Ly}\alpha}$]{} | $\beta$ | [H$\alpha$]{}/[H$\beta$]{} |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 01 | Mrk259 | 13:28:44.0 | +43:55:49.9 | 0.028 | 9.92 | 1.18 | 1.29 | 4.36 | 7.87 | 3.37 | -1.83 | 3.08 |
| 02 | | 09:07:04.9 | +53:26:56.5 | 0.030 | 9.48 | 1.12 | 1.17 | 2.67 | 8.41 | 2.27 | -2.02 | 3.08 |
| 03 | Arp238 | 13:15:35.1 | +62:07:27.2 | 0.031 | 9.52 | 0.84 | 0.97 | 0.75 | 8.68 | 0.77 | -0.57 | 5.18 |
| 04 | | 13:07:28.2 | +54:26:50.7 | 0.033 | 9.93 | 3.79 | 1.57 | | 9.22 | | -1.76 | 3.48 |
| 05 | Mrk1486 | 13:59:51.0 | +57:26:23.0 | 0.034 | 10.0 | 0.93 | 1.24 | 3.24 | 9.49 | 2.61 | -2.09 | 3.06 |
| 06 | KISSR2019 | 15:45:44.5 | +44:15:49.9 | 0.034 | 9.20 | 3.65 | 0.66 | | 9.48 | | -1.85 | 2.96 |
| 07 | IRAS1313+2938 | 13:16:03.9 | +29:22:54.2 | 0.038 | 9.75 | 0.85 | 0.89 | 3.01 | 10.5 | 3.37 | -1.94 | 3.37 |
| 08 | | 12:50:13.7 | +07:34:44.2 | 0.038 | 10.2 | 5.01 | 3.89 | 4.35 | 10.5 | 1.12 | -0.90 | 4.09 |
| 09 | IRAS0820+2816 | 08:23:54.9 | +28:06:22.8 | 0.047 | 10.5 | 5.00 | 4.21 | $>$12.0 | 12.9 | $>$2.85 | -1.52 | 3.48 |
| 10 | Mrk0061 | 13:01:41.5 | +29:22:53.2 | 0.057 | 9.74 | 2.34 | 2.63 | 5.49 | 15.5 | 2.08 | -1.36 | 3.93 |
| 11 | | 14:03:47.1 | +06:28:15.0 | 0.084 | 10.7 | 8.00 | 6.81 | 15.5 | 22.1 | 2.27 | -1.50 | 4.60 |
| 12 | SBS0934+547 | 09:38:13.5 | +54:28:25.3 | 0.102 | 10.5 | 1.78 | 2.03 | 7.06 | 26.3 | 3.48 | -1.92 | 3.21 |
| 13 | IRAS0147+1254 | 01:50:28.4 | +13:08:59.2 | 0.147 | 10.6 | 3.83 | 4.68 | 8.12 | 36.0 | 1.74 | -1.53 | 4.07 |
| 14 | | 09:26:00.3 | +44:27:36.0 | 0.181 | 10.7 | 0.79 | 1.62 | 5.86 | 42.7 | 3.62 | -2.22 | 3.13 |

We compare the light radii graphically in Figure \[fig:sizes\]. The plots show [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{} vs. [$R_{\mathrm{P}20}^{\mathrm{FUV}}$]{}, a comparison that could be made at high-$z$, and [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{} vs. [$R_{\mathrm{P}20}^{\mathrm{H}\alpha}$]{}, a comparison that more directly conveys the difference between the observed and intrinsic [Ly$\alpha$]{} sizes. Clearly, though, there is little difference in the result: [Ly$\alpha$]{} radii are, on average, substantially larger than corresponding FUV or [H$\alpha$]{} radii. In Table \[tab:quants\] we also report the *Relative Petrosian Extension* of [Ly$\alpha$]{} compared to [H$\alpha$]{}, [$\xi_{\mathrm{Ly}\alpha}$]{}, which is simply defined as [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{}/[$R_{\mathrm{P}20}^{\mathrm{H}\alpha}$]{}. Twelve galaxies show net emission of [Ly$\alpha$]{}, and all except one (LARS03) have [$\xi_{\mathrm{Ly}\alpha}$]{}$>1$. The galaxy with the largest extension is LARS14, for which we measure [$\xi_{\mathrm{Ly}\alpha}$]{}=3.6. It is not clear whether globally absorbing galaxies LARS04 and 06 become emitters on larger scales, but if so their [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{} must be larger than the radius of the SBC chip, implying that [$\xi_{\mathrm{Ly}\alpha}$]{} must exceed 5.3 and 13.4, respectively. That would make them the most extended objects in the sample. Excluding these two galaxies, and also LARS09 for which we can only provide a lower limit, the sample mean (median) is computed as 2.43 (2.28). ![image](f3.eps)

Relevance for high-redshift studies {#sect:highz}
====================================

It is important to note that the FUV radii imply that all the galaxies would be effectively unresolved by ground-based observations if they were at $z\gtrsim 2$. The largest is 8 kpc, which corresponds to the 1 arcsec resolution that could be expected from the seeing.
However, one of the objects has a [Ly$\alpha$]{} radius of 15.5 kpc: recovering this total flux at $z\sim 2$ would require an aperture of at least 2 arcsec. Some objects are also highly elongated, and were they pushed to the high-$z$ Universe, much of their [Ly$\alpha$]{} could also be unmeasured if circular apertures are used. [Ly$\alpha$]{} emission more extended than the FUV has been reported in numerous high-$z$ samples. @Fynbo2003 remarked upon a few such objects at the brighter end of the luminosity distribution of the 27 narrowband-selected galaxies, and the extremely deep spectroscopic observations of @Rauch2008 uncovered 28 [Ly$\alpha$]{} galaxies, ten of which were classified as extended. Samples of [Ly$\alpha$]{} blobs [e.g. @Matsuda2012; @Prescott2012] may be many times the size of their counterpart galaxies, if indeed counterparts are identified at all. Here we report that every galaxy in the sample that emits [Ly$\alpha$]{} does so by producing a halo; on average the halo is over twice the linear size of [H$\alpha$]{} and the FUV. By stacking narrowband images of LBGs at $\langle z\rangle=2.65$, @Steidel2011 reported [Ly$\alpha$]{} halos that extend many tens of kpc, probably probing the neutral circumgalactic medium (CGM) out to the virial radius. Subdividing the full sample by [Ly$\alpha$]{} properties, the halos at radii larger than 20–30 physical kpc show very similar scale lengths in all subsamples (although different central surface brightnesses), even when central [Ly$\alpha$]{} absorption is found. At small radii the subsamples exhibit profiles that differ markedly, dropping rapidly to $\sim 0$ for the [Ly$\alpha$]{}-absorbing sample but steepening by varying degrees in all others. Even the steepest central profiles, however, still run much flatter than those of the stellar continuum, and this change likely marks the onset of higher density gaseous disks or similar. From the various $z\approx 2.7$ [Ly$\alpha$]{} profiles of @Steidel2011 we calculate [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{} using the same method as for our sample, and dividing by [$R_{\mathrm{P}20}^{\mathrm{FUV}}$]{} from the continuum profile we obtain [$\xi_{\mathrm{Ly}\alpha}$]{} (now relative to the UV). These raw values range between [$\xi_{\mathrm{Ly}\alpha}$]{}=3.8 for the non-LAEs and 5.9 for the LAE-only sample, and are notably bigger than our largest [$\xi_{\mathrm{Ly}\alpha}$]{}. However, under the assumption that the inner and outer profiles mark physically different regimes that may not be the same in low-$z$ galaxies, we also subtract the exponential halo fits of @Steidel2011 and repeat the exercise; this yields a range of [$\xi_{\mathrm{Ly}\alpha}$]{}=0.84 to 2.0. This is now smaller than many of our values, although close to the average; the dispersion of the high-$z$ sample is, of course, lost in the stacking process. On the other hand, the UV continuum profile of @Steidel2011 is dominated by atmospheric seeing. If we instead use the continuum effective radius of BM/BX galaxies and LBGs from HST imaging [@Mosleh2011] we compute [$R_{\mathrm{P}20}^{\mathrm{FUV}}$]{}$\approx 5$ kpc, which would increase all the [$\xi_{\mathrm{Ly}\alpha}$]{} quoted above by a factor of 2.5. The [$\xi_{\mathrm{Ly}\alpha}$]{} from the raw data would then become much larger than we measure in the local universe (up to 15), while the [$\xi_{\mathrm{Ly}\alpha}$]{} from the halo-subtracted profiles would be roughly consistent with our values (2.1 to 4.8).
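The physical-to-angular conversions that underpin this comparison are easily checked; a short sketch using astropy (with the built-in Planck15 cosmology as an assumption, not necessarily the one adopted in this work):

```python
import astropy.units as u
from astropy.cosmology import Planck15

for z in (2.0, 2.65):
    scale = Planck15.arcsec_per_kpc_proper(z)      # angular scale at redshift z
    for radius in [8.0, 15.5] * u.kpc:             # representative Petrosian radii
        print(f"z = {z}: {radius} subtends {(radius * scale).to(u.arcsec):.2f}")
```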
![image](f4a.eps) ![image](f4b.eps) LARS observations probe scales far below the tens of kpc sampled at high-$z$ on a case-by-case basis. The galaxies likely include the range between, or roughly bracketing, the averaged subsamples of @Steidel2011. Our imaging also suggests this extension to be a very common property of [Ly$\alpha$]{}-emitting galaxies, and its onset begins almost immediately in the inner few kpc: we find seven galaxies with FUV Petrosian radii below 2 kpc, five of which have corresponding [Ly$\alpha$]{} radii three times larger. It is also noteworthy that @Steidel2011 find different median dust attenuations for the [Ly$\alpha$]{}-emitting and non-emitting subsamples, almost precisely as we did in @Hayes2010. LAEs, which show extended central peaks, were determined to have stellar [$E_{B-V}$]{}=0.09 magnitudes (c.f. 0.085 in @Hayes2010) while absorbers show [$E_{B-V}$]{}=0.19 (c.f. 0.23 for our [H$\alpha$]{}-selected sample). Adopting the prescription of @Meurer1999 the stellar [$E_{B-V}$]{} measurements for the Steidel et al. samples correspond to $\beta$ slopes[^1] of $-1.77$ (LAEs) and $-1.27$ ([Ly$\alpha$]{} absorbers). Bluntly accounting for a factor of 2.27 that connects stellar [$E_{B-V}$]{} to its nebular equivalent in local starbursts [@Calzetti2000], the same stellar [$E_{B-V}$]{} would equate to [H$\alpha$]{}/[H$\beta$]{} ratios of 3.5 (LAEs) and 4.4 (absorbers). In the next Section we will show case-by-case that [Ly$\alpha$]{} halos systematically become more extended with decreasing dust contents. Lyman alpha extension and dust contents {#sect:corr} ======================================= In Hayes et al. (in prep) we compute many global properties for the sample, in order to study the processes complicit in [Ly$\alpha$]{} transport. Indeed that paper will include a complete analysis of correlations between [Ly$\alpha$]{} transmission, halo sizes, and many other properties; in this *Letter* we restrict ourselves to observables that scale with the dust content. It is noteworthy for the moment, however, that we find no correlation between [$\xi_{\mathrm{Ly}\alpha}$]{} and the total [Ly$\alpha$]{} luminosity. In Figure \[fig:corr\] we show how [$\xi_{\mathrm{Ly}\alpha}$]{}  compares with both the UV continuum slope $\beta$ and the [H$\alpha$]{}/[H$\beta$]{} ratio. We note that the SDSS fibers are on average smaller than the [Ly$\alpha$]{} radii, but do capture the bulk of the nebular emission, and fluxes can easily be measured without contamination of [\[N[ii]{}\]]{} and stellar absorption. Since @Meurer1999 $\beta$ has been used almost ubiquitously as a proxy of stellar attenuation in high-$z$ galaxies; here we measure $\beta$ from aperture-matched HST imaging using the FUV (SBC/F140LP or F150LP) and the $U-$band (UVIS/F336W or F390W) filters. With colors between $\beta\approx -2.2$ and $-0.6$ our objects have similar UV slopes to the vast majority of those found in $z=2-4$ [Ly$\alpha$]{}-emitting galaxies [@Blanc2011]. Similarly the [H$\alpha$]{}/[H$\beta$]{} ratio is the canonical probe of nebular reddening (i.e. that which is to zeroth order expected for [Ly$\alpha$]{}) used in studies of low-$z$ and Galactic nebulae. $\beta$ and [H$\alpha$]{}/[H$\beta$]{} are listed in Table \[tab:quants\]. Both measures of dust content strongly anti-correlate with [$\xi_{\mathrm{Ly}\alpha}$]{}  although the sample is small ($N=12$ defined sizes in [Ly$\alpha$]{}). 
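The strength of such an anti-correlation on a small sample is conveniently expressed with a rank statistic, as quantified in the next paragraph; a minimal sketch, assuming SciPy and with placeholder arrays standing in for the values in Table \[tab:quants\]:

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder values only: one entry per galaxy with a measured Lya size.
xi = np.array([3.4, 2.3, 0.8, 2.6, 3.4, 1.1, 2.1, 2.3, 3.5, 1.7, 3.6, 3.1])
beta = np.array([-1.8, -2.0, -0.6, -2.1, -1.9, -0.9, -1.5, -1.9, -2.2, -1.5, -2.2, -1.3])

rho, p_chance = spearmanr(xi, beta)
print(f"Spearman rho = {rho:.2f}; null-hypothesis probability = {100 * p_chance:.1f}%")
```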
To assess its significance, we compute the Spearman rank correlation coefficient, $\rho$, which yields $\rho = -0.73$ and $-0.61$ for the anti-correlation of [$\xi_{\mathrm{Ly}\alpha}$]{} with $\beta$ and [H$\alpha$]{}/[H$\beta$]{}, respectively. This corresponds to likelihoods of the null hypothesis – that this correlation arises purely by chance – amounting to 0.7 percent (UV slope) and 3.6 percent ([H$\alpha$]{}/[H$\beta$]{}). The halo–dust phenomenon appears not to be a direct effect of radiative transfer. We have performed new test simulations with the `McLya` code [@Verhamme2006], by tuning the gas-to-dust ratio in the synthetic galaxy of @Verhamme2012. Indeed the surface brightness does scale with dust abundance but the light profile (therefore [$R_{\mathrm{P}20}^{\mathrm{Ly}\alpha}$]{}) does not, and the [$\xi_{\mathrm{Ly}\alpha}$]{}–dust trend must be a secondary correlation. A scenario is needed in which galaxies decrease the relative size of their [H[i]{}]{} envelopes as the absolute dust content increases. A sequence in which neutral gas settles into the galaxy (reducing [$\xi_{\mathrm{Ly}\alpha}$]{}) and subsequently forms stars (creating more dust) would explain the trend, but without yet having obtained spatially resolved [H[i]{}]{} data this is conjecture. Scattering also has the potential to spread [Ly$\alpha$]{} over such an area that its surface brightness decreases greatly. In such a case, scattered radiation measured at large radii may not be sufficient to recover flux from a broad central absorption, making [$\xi_{\mathrm{Ly}\alpha}$]{} observationally undefined when it is actually very large. The trend of [$\xi_{\mathrm{Ly}\alpha}$]{} increasing in bluer galaxies, then, is also able to explain the undefined sizes of LARS 04 and 06, given their measured dust abundances. Similar considerations would also explain the non-detection of [Ly$\alpha$]{} in local gas-rich but metal- and dust-poor dwarf starbursts such as [i]{}Zw18 and SBS0335–052 [@Kunth1994; @Mas-Hesse2003; @Ostlin2009], as discussed in @Atek2009izw18. We have empirically shown before [@Atek2009fesc; @Hayes2010] that the global escape fraction of [Ly$\alpha$]{} photons anti-correlates strongly with attenuation (also @Kornei2010 in LBGs). We now demonstrate that at lower [$E_{B-V}$]{}, the more strongly emitting galaxies are likely to also spread their [Ly$\alpha$]{} over larger surfaces. Thus while they do transmit more of their [Ly$\alpha$]{}, it may be that more of the transferred [Ly$\alpha$]{} is observationally lost outside photometric apertures. This may also explain the lack of correlation between [Ly$\alpha$]{}/[H$\beta$]{} and [$E_{B-V}$]{} observed by @Giavalisco1996, compared to trends seen in other samples: the aperture of the IUE probed just 3 kpc at $z=0.01$, and if more [Ly$\alpha$]{} is lost in bluer galaxies the [Ly$\alpha$]{}/Balmer ratios would be artificially lowered in such systems. This could in part mask an underlying correlation. By the same token, galaxies that can very efficiently scatter [Ly$\alpha$]{} photons may not be recovered at all, despite frequently showing very blue UV colors. Determining precisely how [Ly$\alpha$]{} profiles are modified for a given set of host properties will provide a cornerstone for interpreting future large high-$z$ surveys. M.H. received support from the Agence Nationale de la Recherche under the reference ANR-09-BLAN-0234-01. G.Ö.
is a Swedish Royal Academy of Sciences research fellow supported by a grant from Knut and Alice Wallenberg foundation, and also acknowledges support from the Swedish research council (VR) and the Swedish National Space Board (SNSB). A.V. benefits from the fellowship ‘Boursière d’excellence de l’Université de Genève’. H.A. and D.K. are supported by the Centre National d’Études Spatiales (CNES) and the Programme National de Cosmologie et Galaxies (PNCG). I.O. acknowledges the Sciex fellowship. H.O.F. acknowledges financial support from CONACYT grant 129204, and Spanish FPI grant BES-2006-13489. H.O.F. and J.M.M.H. are partially funded by Spanish MICINN grants CSD2006-00070 (CONSOLIDER GTC), AYA2010-21887-C04- 02 (ESTALLIDOS) and AYA2011-24780/ESP. We thank C. Steidel for making the high-$z$ [Ly$\alpha$]{} profiles available for our comparisons in Section \[sect:highz\]. [*Facilities:*]{} . [39]{} natexlab\#1[\#1]{} , J. J., [Blanc]{}, G. A., [Hill]{}, G. J., [et al.]{} 2011, , 192, 5 , H., [Kunth]{}, D., [Hayes]{}, M., [[Ö]{}stlin]{}, G., & [Mas-Hesse]{}, J. M. 2008, , 488, 491 , H., [Kunth]{}, D., [Schaerer]{}, D., [et al.]{} 2009, , 506, L1 , H., [Schaerer]{}, D., & [Kunth]{}, D. 2009, , 502, 791 , L. A., & [Haehnelt]{}, M. G. 2010, , 403, 870 , G. A., [Adams]{}, J. J., [Gebhardt]{}, K., [et al.]{} 2011, , 736, 31 , D., [Armus]{}, L., [Bohlin]{}, R. C., [et al.]{} 2000, , 533, 682 , J. M., [Skillman]{}, E. D., [Kunth]{}, D., [et al.]{} 2004, , 608, 768 , L. L., & [Hu]{}, E. M. 1998, , 115, 1319 , J., [Hagen]{}, A., [Ciardullo]{}, R., [et al.]{} 2013, ArXiv e-prints , J. P. U., [Ledoux]{}, C., [M[ö]{}ller]{}, P., [Thomsen]{}, B., & [Burud]{}, I. 2003, , 407, 147 , J. U., [M[ö]{}ller]{}, P., & [Thomsen]{}, B. 2001, , 374, 443 , M., [Koratkar]{}, A., & [Calzetti]{}, D. 1996, , 466, 831 , L., [Gawiser]{}, E., [Padilla]{}, N., [et al.]{} 2010, , 714, 255 , M., [[Ö]{}stlin]{}, G., [Atek]{}, H., [et al.]{} 2007, , 382, 1465 , M., [[Ö]{}stlin]{}, G., [Mas-Hesse]{}, J. M., & [Kunth]{}, D. 2009, , 138, 911 , M., [[Ö]{}stlin]{}, G., [Mas-Hesse]{}, J. M., [et al.]{} 2005, , 438, 71 , M., [[Ö]{}stlin]{}, G., [Schaerer]{}, D., [et al.]{} 2010, , 464, 562 , K. A., [Shapley]{}, A. E., [Erb]{}, D. K., [et al.]{} 2010, , 711, 693 , D., [Lequeux]{}, J., [Sargent]{}, W. L. W., & [Viallefond]{}, F. 1994, , 282, 709 , P., [Razoumov]{}, A. O., & [Sommer-Larsen]{}, J. 2009, , 696, 853 , J. M., [Kunth]{}, D., [Tenorio-Tagle]{}, G., [et al.]{} 2003, , 598, 858 , Y., [Yamada]{}, T., [Hayashino]{}, T., [et al.]{} 2012, , 425, 878 , G. R., [Carignan]{}, C., [Beaulieu]{}, S. F., & [Freeman]{}, K. C. 1996, , 111, 1551 , G. R., [Heckman]{}, T. M., & [Calzetti]{}, D. 1999, , 521, 64 , M., [Williams]{}, R. J., [Franx]{}, M., & [Kriek]{}, M. 2011, , 727, 5 , D. A. 1990, , 350, 216 , K. K., [Tapken]{}, C., [M[ø]{}ller]{}, P., [et al.]{} 2009, , 498, 13 , G., [Hayes]{}, M., [Kunth]{}, D., [et al.]{} 2009, , 138, 923 , M., [Shimasaku]{}, K., [Akiyama]{}, M., [et al.]{} 2008, , 176, 301 , V. 1976, , 209, L1 , M. K. M., [Dey]{}, A., & [Jannuzi]{}, B. T. 2012, , 748, 125 , M., [Haehnelt]{}, M., [Bunker]{}, A., [et al.]{} 2008, , 681, 856 , J. E., [Malhotra]{}, S., [Dey]{}, A., [et al.]{} 2000, , 545, L85 , C. C., [Bogosavljevi[ć]{}]{}, M., [Shapley]{}, A. E., [et al.]{} 2011, , 736, 160 , A., [Dubois]{}, Y., [Blaizot]{}, J., [et al.]{} 2012, , 546, A111 , A., [Schaerer]{}, D., & [Maselli]{}, A. 2006, , 460, 397 , M. S., [Ho]{}, P. T. P., & [Lo]{}, K. Y. 
1994, , 372, 530 , Z., [Cen]{}, R., [Weinberg]{}, D., [Trac]{}, H., & [Miralda-Escud[é]{}]{}, J. 2011, , 739, 62 [^1]: UV continuum flux density, parameterized by a power-law of the form $f_\lambda \propto \lambda^\beta$.
--- abstract: 'In this paper we review the general properties of X-ray afterglows. We discuss in particular the powerful diagnostics provided by X-ray afterglows in constraining the environment and fireball of normal GRB, and the implications for the origin of dark GRB and XRF. We also discuss the observed properties of the transition from the prompt to the afterglow phase, and present a case study for a late X-ray outburst interpreted as the onset of the afterglow stage.' author: - 'L. Piro' title: 'Global properties of X-ray afterglows of GRB' --- X-ray features ============== The presence of X-ray features is an issue with important implications for the origin of the progenitor and for the cosmological use of GRB (see [@p04] for a review). So far, different authors have reported evidence, ranging from 2.8 to 4.7 sigma significance, in six objects for features associated with the iron complex (see [@p04] and references therein), and in 3-4 objects for lines associated with lower-Z elements (Mg, Si, S, Ar) [@r+02; @wro+02; @bmr+03; @wrh+03]. As regards iron features, when an independent redshift measurement is available, the emission features in the afterglow phase are consistent with highly ionized iron, while in the prompt phase the absorption feature corresponds to a neutral stage, suggesting a temporal evolution driven by the large increase of ionization produced by the GRB photon field [@pl98]. The strong dependence of the line efficiency on the ionization stage of the material, variable over orders of magnitude from the prompt to the afterglow phase, can explain the transitory presence of lines in the same object or the tighter upper limits derived in some other bursts. In this framework soft X-ray lines and iron lines are mutually exclusive, because their maximum efficiencies are achieved at different ionization parameters [@lrr02]. While the present body of observations can thus be consistent with theoretical expectations, it is still sparse, and more data, of higher statistical quality, are required. Here we briefly discuss the methods to assess the statistical significance of these features. Usually this is derived by using the F-test. According to [@pvc+02] the F-variable does not follow the F distribution when the boundary of the normalization of an additional model component is zero, as is the case for emission or absorption features. In such a case the correct probability distribution of the F-variable must be derived by using Monte Carlo (MC) simulations of the null (i.e. the continuum) model. For each MC realization of the null model, the value of the F-variable is then derived by fitting the simulated spectrum with the continuum model and then with the addition of the line (hereafter we refer to this method as the standard prescription, SP). [@shr05] carried out a systematic analysis of several afterglows, including all those with reported evidence of features. However, they did not apply the aforementioned SP but devised a different method, obtaining a much lower statistical significance than reported in “discovery” papers. They claim that, while the SP is in principle correct, in practice it fails in a blind search for several lines, due to the inadequacy of $\chi^2$ minimization routines to find the absolute minimum. We have applied the SP to a single line at a given energy, i.e. a case not affected by minimization convergence, specifically to the case of GRB970508.
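The standard prescription is simple to emulate outside XSPEC; the toy sketch below builds the Monte Carlo distribution of the F-variable for a null power-law continuum with an optional Gaussian line, standing in for the real response-folded fits (all numbers are illustrative and are not the GRB970508 analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
E = np.linspace(0.5, 5.0, 60)                      # toy energy grid (keV)

def continuum(E, norm, gamma):
    return norm * E**(-gamma)

def with_line(E, norm, gamma, amp, E0=3.5, sig=0.2):
    return continuum(E, norm, gamma) + amp * np.exp(-0.5 * ((E - E0) / sig) ** 2)

def f_variable(counts):
    """Fit continuum-only and continuum+line models, return the F statistic."""
    err = np.sqrt(np.clip(counts, 1, None))
    p1, _ = curve_fit(continuum, E, counts, p0=[100.0, 1.0], sigma=err)
    chi1 = np.sum(((counts - continuum(E, *p1)) / err) ** 2)
    p2, _ = curve_fit(with_line, E, counts, p0=[100.0, 1.0, 1.0], sigma=err)
    chi2 = np.sum(((counts - with_line(E, *p2)) / err) ** 2)
    return (chi1 - chi2) / (chi2 / (len(E) - 3))   # one additional free parameter

# Monte Carlo realisations of the null (continuum-only) model
f_null = np.array([f_variable(rng.poisson(continuum(E, 100.0, 1.0)))
                   for _ in range(500)])
f_obs = 12.0                                       # placeholder for the measured value
print("chance probability:", np.mean(f_null >= f_obs))
```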
The results of the different methods are thus compared [*under the same assumption*]{}, i.e. single-trial probability. Our result with the SP is presented in Fig. 1. It gives a statistical significance of 99.6%. This is actually slightly better than the 99.3% originally derived in [@pcf+99] by applying the F-test. In contrast, [@shr05] derive a significance of 60%. The same data were also reanalyzed in [@pvc+02], which derive a 99.3% significance by applying the SP, consistent with our estimation. ![Monte Carlo simulations of the F-variable distribution for the BeppoSAX observation of GRB970508 for the null model (power law without line). The vertical line identifies the observed value of the F variable after the addition of a line, which corresponds to a chance-fluctuation probability of 0.4% []{data-label="fig_1"}](piro1_f1.ps){width="7cm"} As originally noted in [@pvc+02], the F-test slightly [*underestimates*]{} the significance of an emission feature. We also note that in XSPEC the $\chi^2$ computation with the default setting adopts errors derived from observed counts, while the correct formula (that can be selected in the program, but requires the background file) requires the variance to be computed from the model [@wdj+95]. This leads to an underestimation of the significance of deviations above the continuum, i.e. of emission lines, and to an overestimation of deviations below the continuum (absorption features), which is more severe when the equivalent width is large and the spectrum is source-dominated. We therefore conclude that the statistical significance derived by using the F-test, at least in the case of a single line searched in a narrow range, gives a conservative estimation of the confidence level of an emission feature. An extension of the work to blind searches for multiple lines (soft X-ray lines) is in progress. ![Distribution of the values of the closure relationships for jet (upper panel), spherical expansion in ISM (mid panel) and wind (lower panel). The sample includes afterglows observed with Chandra, XMM and BeppoSAX. The vertical lines are the expected values for $\nu > \nu_c$ (left) and $\nu < \nu_c$ (right). The arrows identify the average value derived for the combined BeppoSAX and XMM sample. They are consistent with ISM or wind expansion with $\nu > \nu_c$ []{data-label="fig_2"}](piro1_f2.ps){height="10cm" width="10cm"} ![Distribution of the difference of decay indices of X-ray and optical afterglows ($\delta_X-\delta_O$) for the BeppoSAX sample. The two dashed lines identify the expected value for an ISM ($\delta_X-\delta_O$=0.25) and a wind ($\delta_X-\delta_O$=-0.25)[]{data-label="fig_3"}](piro1_f3.ps){width="7cm"} Afterglow evolution and constraints on the fireball and environment =================================================================== In previous papers [@p04; @dpp+03] we have shown that the application of the closure relationships derived from the spectral and temporal evolution of X-ray afterglows observed by BeppoSAX sets relevant constraints on the fireball model and the environment. In particular we derived that the fireball at $t\leq$ 1-2 days does not (yet) show evidence of a collimated flow, and is consistent with a spherical expansion with a cooling frequency below the X-ray range. We are extending this analysis by including the XMM and Chandra observations. The results are summarized in Fig. 2. We note that the XMM data are fully consistent with the BSAX sample. Indeed, the typical observing time by XMM is similar to that of BeppoSAX (a few hours – 2 days).
In the case of Chandra, there is a significant difference in the closure relationship, with the Chandra sample consistent with a jet flow [@g+05]. This apparent inconsistency is likely due to the fact that Chandra observations start on average at later times, and that the effects of jet flow on the light curve (i.e. the break time) take place around 2-3 days. Density profiles derived from broad-band afterglow modelling are particularly intriguing, in that the majority of events are consistent with a constant-density environment, and only in a few cases is a wind profile clearly preferred [@cl00; @pk02]. This is at odds with the simple expectation for massive-star progenitors. Recently, [@clf04] proposed a solution to this discrepancy, arguing that a region of constant density would be produced at the boundary of the wind with the molecular cloud surrounding the progenitor. What can we tell about this issue from X-ray afterglows? When the cooling frequency is above the X-ray band, as happens in the majority of cases, the closure relationships for wind and ISM are degenerate (Fig. 2). One notable exception is GRB040106 [@gpp04], where $\nu_c$ is below the X-ray band and the combined spectral and temporal evolution in X-rays indicates a wind profile. The addition of the optical temporal evolution provides a powerful indicator to distinguish an ISM from a wind environment. In a wind profile the decay in X-rays should be shallower than in the optical, while the reverse holds true in a constant-density environment (ISM). The absolute value of the difference of the decay slopes is 0.25. In Fig. 3 we plot the difference in decay slopes $\delta_X-\delta_O$ for the BeppoSAX sample. This shows that the ISM profile is preferred in most of the cases. One intriguing outlier (see next section) is GRB011121 [@pds+05]. In this event we derived $\delta_X=1.29 \pm 0.04$ vs $\delta_O=1.66 \pm 0.06$ observed by [@pbr+02], consistent only with a wind profile, as also suggested in [@pbr+02] by comparing optical and radio data. Another wind-profile candidate is presented in [@gp05]. Early and delayed afterglows and the transition from the prompt phase ===================================================================== Early papers combining the BeppoSAX WFC and NFI [@fac+00] have outlined a clear separation between two phases of the GRB emission. The prompt phase is characterized by a hard spectrum with a strong hard-to-soft spectral evolution. This is followed, on a time scale of tens of seconds, by a second phase with a soft spectrum, well described by a power law with spectral index $\alpha\sim 1$ and no substantial spectral variation. This phase is associated with the onset of the afterglow on the basis of two pieces of evidence. First, the X-ray spectrum is similar to that observed in the late afterglow at 1 day. Second, the X-ray flux falls on the backward extrapolation of the late afterglow light curve. The prompt and early afterglow phases can be separated by temporal gaps or can be mixed [@sdp+04] but, so far, the transition has always been observed within tens of seconds from the onset of the prompt emission. In a few cases, however, the transition appears to take place on a much longer time scale. In a recent paper, [@pds+05] presented evidence of an X-ray burst starting hundreds of seconds after the prompt phase in two bursts, GRB011121 and GRB011211 (see also [@gp05] for XRF011030).
In the November burst the spectrum of the late X-ray burst was markedly softer than that of the preceding emission and was similar to that observed in the late afterglow observations. It is therefore tempting to identify the late X-ray burst as the onset of the afterglow. Contrary to what is observed for transitions on shorter time scales, the decay part of the late X-ray burst cannot be connected to the 1-day afterglow emission with a single power law $(t-t_0)^{-\delta_{\rm X}}$ (Fig. 4). [*However, this is the case (Fig. 4 right panel) when $t_0$ is set equal to the onset of the late X-ray burst*]{}. This empirical result can be explained in the framework of the fireball model, taking into account the thickness of the shell. The onset of external shocks depends on the dynamical conditions of the fireball and, in particular, two regimes can be identified depending on the “thickness” of the fireball [@sp99a]. In the thin-shell regime, the reverse shock crosses the shell before the onset of the self-similar solution (i.e. when an ISM mass $m=M_0/\Gamma_0$ is collected, $M_0$ and $\Gamma_0$ being the rest mass and asymptotic Lorentz factor of the fireball respectively). As a consequence, the onset of the afterglow coincides with the deceleration time. Moreover, the evolution of the afterglow after the peak is well described by a power-law decay, if the time is measured starting from the explosion time, very well approximated by the time at which the first prompt-phase photons are collected. In the case of a thick shell, the reverse shock has not crossed the shell when the critical mass $m=M_0/\Gamma_0$ has been collected, and therefore the external shock keeps being energized for a longer time. The peak of the afterglow emission therefore coincides with the shell-crossing time of the reverse shock, equal to the duration of the prompt phase. The afterglow decay will be well described by a single power law only if the time is measured starting from the time at which the inner engine turns off, roughly coincident with the GRB duration. Dark bursts =========== One of the most discussed issues of the field is the origin of the dark GRB [@dpp+03]. A first estimation of the fraction of these events, around 50% of the whole population of GRB, was based on adopting an upper limit on the optical magnitude at 1 day of around 22-23. The availability of fast and precise positions, as e.g. delivered by HETE2, has led to the detection of a few rapidly fading optical transients that would likely be classified as dark GRB with observations performed at later times [@fps+03]. Fynbo et al. [@fjg+01] showed that about 75% of dark GRB were consistent with no detection if they were similar to dim bursts detected in the optical with deep searches. Thus, the original definition of a dark burst is affected by significant observational bias. A more effective classification makes use of broad-band information. Let us consider a first set of causes that can make a burst optically dim. A fast decay, as would be the case for a highly collimated jet, is one possibility. They can also be intrinsically under-luminous events, or GRB located at distances greater than those of OTGRB, but at a redshift not greater than 5 (see below). [*In all these cases the afterglow flux should scale by the same factor at all wavelengths*]{}. Indeed [@dpp+03] found that dark GRB are on average 6 times fainter in X-rays than OTGRB.
Incidentally, we note that this effect can account, at least in part, for the higher number of OTGRB identified by HETE2. [*On the contrary, the optical flux of a GRB at z${\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}$5 or of a GRB in a dusty star-forming region should be depleted not only in absolute magnitude but also with respect to other wavelengths*]{}. Guided by this consideration, [@dpp+03] have carried out a study of dark GRB vs OTGRB comparing their X-ray vs optical fluxes. In 75% of dark GRB, the upper limits on the optical-to-X-ray flux ratio ($f_{OX}$) are consistent with the ratio observed in OTGRB, which is narrowly distributed around an optical-to-X-ray spectral index $\beta_{OX}=0.8$ (that is, modulo a constant factor, the same as $\log(f_{OX})$). This population of events is therefore consistent with being OTGRB going undetected in the optical because searches were not fast or deep enough. However, for about 25% of dark GRB, $f_{OX}$ is at least a factor 5-10 lower than the average value observed in OTGRB, and also lower than the smallest observed $f_{OX}$. In terms of spectral index, these events have $\beta_{OX} < 0.6$. Furthermore, the optical upper limits are also lower than the faintest optical afterglow. These GRB cannot therefore be explained as dim OTGRB, and are named [*truly dark or optically depleted GRB*]{}. We stress that the upper limit on $f_{OX}$ for optically depleted GRB is model-independent, being derived by a comparison with the optically bright GRB, where the $f_{OX}$ distribution is rather narrow, clustering around the average value within a factor of 2 (the 1 sigma width). A similar value for the upper limit on $f_{OX}$ has been derived for two dark GRB ([@dfk+01; @pfg+02]) by modelling the broad-band data via the standard fireball model. Both of these events have been associated with host galaxies at z${\lower.5ex\hbox{{$\; \buildrel < \over \sim \;$}}}$5, leading to the conclusion that the optical is depleted by dust in star-forming regions. A similar approach has been followed by [@jhf+04]. They derived $\beta_{OX}$ for a large sample of GRB and compared it with the expectations of the fireball model. They find that at least 10% of the objects of their sample have an optical flux (or upper limits) fainter than the minimum allowed by the model, corresponding to $\beta_{OX}< 0.5-0.55$, and similar to the [*observed*]{} limit derived by [@dpp+03]. In conclusion, $\approx 10-20 \%$ of the burst population is characterized by an optical afterglow emission substantially fainter than that expected from the X-ray afterglow flux. As mentioned above, this behaviour cannot be accounted for by [*achromatic*]{} effects, such as jet expansion or luminosity. It requires causes that selectively deplete the optical range with respect to X-rays, such as dust extinction in star-forming regions or absorption by the Ly$_\alpha$ forest for GRB at z$>$5. As noted above, in a few events the likely explanation is dust extinction. The detection of high-z GRB is more challenging, since the light of the host galaxy should be extremely dim. The large number of SWIFT localizations should hopefully lead to the first identifications of high-z GRB. X-ray flashes ============= This new class of GRB was originally discovered by BeppoSAX [@hzkw01], and confirmed and extended by HETE2 [@l05]. Their origin is still to be understood and, on this issue, we would like to show some implications derived from the properties of the X-ray afterglows of XRF when compared to those of normal GRB.
[@dp05] find that the distribution of X-ray fluxes (at 12 hours after the burst) of the two classes are consistent with each other. In particular the ratio of the average flux of the two populations is $1.2\pm0.6$. This result appears at odd with simple expectations by two popular scenarios. Let us first assume that XRF are normal GRB lying at larger distances. In such a case one would expect to observe a fainter afterglow, assuming that no selection effect biases the sample. For example, assuming an average redshift of 5 for XRF vs $z=1$ for GRB, the X-ray afterglow flux should be on average 7 times fainter in XRF, contrary to what is observed. Indeed we already know that some XRF are much closer (see references in [@dp05]). Nonetheless, about 50% of the XRF for which optical observations were carried out, lack an optical counterpart and we cannot exclude that some of these events are at $z{\lower.5ex\hbox{{$\; \buildrel > \over \sim \;$}}}5$. A second scenario is that XRF are normal GRB seen off-axis (e.g. [@l05] for a review). Let us assume that the only difference between the two classes is the viewing angle. This is equivalent to the unification scenario of Seyfert galaxies in the strong form. Again, under this assumption, the flux of the X-ray (and optical) afterglow observed at 12 hours should be substantially fainter than observed. In particular, both for the homogenous and for the universal jet models [@dp05] find that the observed value of the afterglow ratio requires a maximum angle of a few degrees. In conclusion, the average property X-ray afterglows of XRF appears too bright to be consistent with a single origin, either in terms of off-axis jet or high-z scenario. One possibility is that we are missing a significant fraction of faint XRF and it is hoped that this population could be probed with more data by HETE2 and SWIFT. In order to assess alternative scenarios for the prompt emission ([@mdb+04]) predictions on the afterglow properties need to be made. I would like to thank B. Gendre, M. De Pasquale, A. Corsi, A. Galli and V. D’Alessio for inputs on this paper. [0]{} N. R. [Butler]{}, H. L. [Marshall]{}, G. R. [Ricker]{}, et al., 597:1010–1016, 2003. R.Ã. [Chevalier]{} and Z. [Li]{}. , 536:195–212, 2000. R. A. [Chevalier]{}, Z. [Li]{}, and C. [Fransson]{}. , 606:369–380, 2004. V. D’Alessio and L. Piro. these proceedings, 2005. M. [De Pasquale]{}, L. Piro, R. Perna, et al. , 5092:1018–1024, 2003. S. G. Djorgovski, , D. A. Frail, S. R. Kulkarni, et al., 562:654, 2001. D. W. [Fox]{}, P. A. [Price]{}, A. M. [Soderberg]{}, et al., 586:L5–L8, 2003. F. [Frontera]{}, L. [Amati]{}, E. [Costa]{}, et al., 127:59–78, 2000. J. U. [Fynbo]{}, B. L. [Jensen]{}, J. [Gorosabel]{}, et al. , 369:373–379, 2001. A. Galli and L. Piro. these proceedings, 2005. B. Gendre et al. these proceedings, 2005. B. [Gendre]{}, L. [Piro]{}, and M. [De Pasquale]{}. , 424:L27–L30, 2004. J. Heise, J. in ’t Zand, M. Kippen, and P. Woods. In E. Costa, F. Frontera, and J. Hjorth, editors, [*GRBs in the Afterglow Era*]{}, 16–21. ESO-Springer, 2001. P. [Jakobsson]{}, J. [Hjorth]{}, J. P. U. [Fynbo]{}, et al. , 617:L21–L24, 2004. D. Lamb. these proceedings, 2005. D. [Lazzati]{}, E. [Ramirez-Ruiz]{}, and M. J. [Rees]{}. , 572:L57–L60, 2002. R. Mochkovitch, F. Daigne, C. Barraud, and J.-L. Atteia. In M. Feroci, F. Frontera, N. Masetti, and L. Piro, editors, [ *Third Rome workshop on GRBs in the Afterglow Era*]{}, 312, 381–384. ASP, 2004. A. [Panaitescu]{} and P. [Kumar]{}. , 571:779–789, 2002. R. [Perna]{} and A. 
[Loeb]{}. , 501:467–472, 1998. L. Piro. In M. Feroci, F. Frontera, N. Masetti, and L. Piro, editors, [ *Third Rome workshop on GRBs in the Afterglow Era*]{}, 312, 149–156. ASP, 2004. L. [Piro]{}, M. [De Pasquale]{}, P. [Soffitta]{}, et al., 623:314–324, 2005. L. [Piro]{}, E. Costa, M. Feroci et al. , 514:L73–L77, 1999. L. Piro, D. Frail, J. Gorosabel, et al. , 577:680, 2002. P. A. [Price]{}, E. [Berger]{}, D.E. [Reichart]{}, et al. , 572:L51–L55, 2002. R. [Protassov]{}, D. A. [van Dyk]{}, A. [Connors]{}, et al., 571:545–559, 2002. J. N. [Reeves]{}, D. Watson, J.L. Osborne et al. , 416:512, 2002. M. [Sako]{}, F. A. [Harrison]{}, and R. E. [Rutledge]{}. , 623:973–999, 2005. Re’Em [Sari]{} and Tsvi [Piran]{}. , 520:641–649, 1999. P. Soffitta, M. De Pasquale, L. Piro, and E. Costa. In M. Feroci, F. Frontera, N. Masetti, and L. Piro, editors, [ *Third Rome workshop on GRBs in the Afterglow Era*]{}, 312, 23–28. ASP, 2004. D. [Watson]{}, J. N. [Reeves]{}, J. [Hjorth]{}, et al. , 595:L29–L32, 2003. D. Watson, J. N. Reeves, J. Osborne, et al., 393:L1, 2002. W. A. [Wheaton]{}, A. L. [Dunklee]{}, A. S. [Jacobsen]{}, et al., 438:322–340, 1995.
--- abstract: 'Learning Mahalanobis metric spaces is an important problem that has found numerous applications. Several algorithms have been designed for this problem, including Information Theoretic Metric Learning ([$\mathsf{ITML}$]{}) \[Davis et al. 2007\] and Large Margin Nearest Neighbor ([$\mathsf{LMNN}$]{}) classification \[Weinberger and Saul 2009\]. We study the problem of learning a Mahalanobis metric space in the presence of adversarial label noise. To that end, we consider a formulation of Mahalanobis metric learning as an optimization problem, where the objective is to minimize the number of violated similarity/dissimilarity constraints. We show that for any fixed ambient dimension, there exists a fully polynomial-time approximation scheme (FPTAS) with nearly-linear running time. This result is obtained using tools from the theory of linear programming in low dimensions. As a consequence, we obtain a fully-parallelizable algorithm that recovers a nearly-optimal metric space, even when a small fraction of the labels is corrupted adversarially. We also discuss improvements of the algorithm in practice, and present experimental results on real-world, synthetic, and poisoned data sets.' author: - | Diego Ihara Centurion [^1]\ Department of Computer Science\ University of Illinois at Chicago\ Chicago, IL 60607\ `[email protected]`\ Neshat Mohammadi\ Department of Computer Science\ University of Illinois at Chicago\ Chicago, IL 60607\ `[email protected]`\ Francesco Sgherzi\ Department of Computer Science\ University of Illinois at Chicago\ Chicago, IL 60607\ `[email protected]`\ Anastasios Sidiropoulos\ Department of Computer Science\ University of Illinois at Chicago\ Chicago, IL 60607\ `[email protected]`\ bibliography: - 'bibfile.bib' title: Robust Mahalanobis Metric Learning via Geometric Approximation Algorithms --- Introduction ============ Learning metric spaces is a fundamental computational primitive that has found numerous applications and has received significant attention in the literature. We refer the reader to [@kulis2013metric; @li2018survey] for detailed exposition and discussion of previous work. At the high level, the input to a metric learning problem consists of some universe of objects $X$, together with some similarity information on subsets of these objects. Here, we focus on pairwise similarity and dissimilarity constraints. Specifically, we are given ${\cal S}, {\cal D}\subset{X\choose 2}$, which are sets of pairs of objects that are labeled as similar and dissimilar respectively. We are also given some $u,\ell>0$, and we seek to find a mapping $f:X\to Y$, into some target metric space $(Y,\rho)$, such that for all $x,y\in {\cal S}$, $$\rho(f(x),f(y)) \leq u,$$ and for all $x,y\in {\cal D}$, $$\rho(f(x),f(y)) \geq \ell.$$ In the case of Mahalanobis metric learning, we have $X\subset \mathbb{R}^d$, with $|X|=n$, for some $d\in \mathbb{N}$, and the mapping $f:\mathbb{R}^d\to \mathbb{R}^d$ is linear. Specifically, we seek to find a matrix ${\mathbf{G}}\in \mathbb{R}^{d\times d}$, such that for all $\{p, q\}\in {\cal S}$, we have $$\begin{aligned} \|{\mathbf{G}}p - {\mathbf{G}}q\|_2 &\leq u, \label{eq:G_u}\end{aligned}$$ and for all $\{p, q\}\in {\cal D}$, we have $$\begin{aligned} \|{\mathbf{G}}p - {\mathbf{G}}q\|_2 &\geq \ell. \label{eq:G_l}\end{aligned}$$ Our Contribution ---------------- In general, there might not exist any ${\mathbf{G}}$ that satisfies all constraints of type \[eq:G\_u\] and \[eq:G\_l\]. 
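Concretely, for any candidate ${\mathbf{G}}$ one can simply count how many of the given pairs fail their respective inequality; a minimal sketch (plain NumPy, with names of our choosing rather than any of the cited implementations):

```python
import numpy as np

def count_violations(G, similar, dissimilar, u, l):
    """Number of similarity/dissimilarity constraints violated by the map G.

    similar, dissimilar : iterables of point pairs (p, q) as numpy arrays.
    """
    violated = 0
    for p, q in similar:
        if np.linalg.norm(G @ p - G @ q) > u:      # similar pair mapped too far apart
            violated += 1
    for p, q in dissimilar:
        if np.linalg.norm(G @ p - G @ q) < l:      # dissimilar pair mapped too close
            violated += 1
    return violated
```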
We are thus interested in finding a solution that minimizes the fraction of violated constraints, which corresponds to maximizing the accuracy of the mapping. We develop a $(1+{\varepsilon})$-approximation algorithm for optimization problem of computing a Mahalanobis metric space of maximum accuracy, that runs in near-linear time for any fixed ambient dimension $d\in \mathbb{N}$. This algorithm is obtained using tools from geometric approximation algorithms and the theory of linear programming in small dimension. The following summarizes our result. \[thm:main\] For any $d\in \mathbb{N}$, ${\varepsilon}>0$, there exists a randomized algorithm for learning $d$-dimensional Mahalanobis metric spaces, which given an instance that admits a mapping with accuracy $r^*$, computes a mapping with accuracy at least $r^*-{\varepsilon}$, in time $d^{O(1)} n (\log{n}/{\varepsilon})^{O(d)}$, with high probability. The above algorithm can be extended to handle various forms of regularization. We also propose several modifications of our algorithm that lead to significant performance improvements in practice. The final algorithm is evaluated experimentally on both synthetic and real-world data sets, and in a data poisoning scenario, and is compared against the currently best-known algorithms for the problem. Related Work ------------ Several algorithms for learning Mahalanobis metric spaces have been proposed. Notable examples include the SDP based algorithm of Xing et al. [@xing2003distance], the algorithm of Globerson and Roweis for the fully supervised setting [@globerson2006metric], Information Theoretic Metric Learning ([$\mathsf{ITML}$]{}) by Davis et al. [@davis2007information], which casts the problem as a particular optimization minimizing LogDet divergence, as well as Large Margin Nearest Neighbor ([$\mathsf{LMNN}$]{}) by Weinberger et al. [@weinberger2006distance], which attempts to learn a metric geared towards optimizing $k$-NN classification. We refer the reader to the surveys [@kulis2013metric; @li2018survey] for a detailed discussion of previous work. Our algorithm differs from previous approaches in that it seeks to directly minimize the number of violated pairwise distance constraints, which is a highly non-convex objective, without resorting to a convex relaxation of the corresponding optimization problem. Organization ------------ The rest of the paper is organized as follows. Section \[sec:lp-type\] describes the main algorithm and the proof of Theorem \[thm:main\]. Section \[sec:practical\] discusses practical improvements used in the implementation of the algorithm. Section \[sec:experiments\] presents the experimental evaluation. NP-hardness =========== The problem of learning $d$-dimensional Euclidean metric spaces with perfect information is NP-hard, even for $d=2$. Mahalanobis Metric Learning as an LP-Type Problem {#sec:lp-type} ================================================= In this Section we present an approximation scheme for Mahalanobis metric learning in $d$-dimensional Euclidean space, with nearly-linear running time. We begin by recalling some prior results on the class of LP-type problems, which generalizes linear programming. We then show that linear metric learning can be cast as an LP-type problem. LP-type Problems ---------------- Let us recall the definition of an LP-type problem. 
Let ${\cal H}$ be a set of constraints, and let $w:2^{\cal H}\to \mathbb{R}\cup \{-\infty,+\infty\}$, such that for any $G\subset {\cal H}$, $w(G)$ is the value of the optimal solution of the instance defined by $G$. We say that $({\cal H}, w)$ defines an LP-type problem if the following axioms hold: [**(A1) Monotonicity.**]{} For any $F\subseteq G \subseteq {\cal H}$, we have $w(F)\leq w(G)$. [**(A2) Locality.**]{} For any $F\subseteq G \subseteq {\cal H}$, with $-\infty < w(F) = w(G)$, and any $h\in {\cal H}$, if $w(G) < w(G \cup \{h\})$, then $w(F) < w(F \cup \{h\})$. More generally, we say that $({\cal H}, w)$ defines an LP-type problem on some ${\cal H}'\subseteq {\cal H}$, when conditions (A1) and (A2) hold for all $F\subseteq G \subseteq {\cal H}'$. A subset $B\subseteq {\cal H}$ is called a *basis* if $w(B)>-\infty$ and $w(B') < w(B)$ for any proper subset $B'\subsetneq B$. A *basic operation* is defined to be one of the following: [**(B0) Initial basis computation.**]{} Given some $G\subseteq {\cal H}$, compute any basis for $G$. [**(B1) Violation test.**]{} For some $h\in {\cal H}$ and some basis $B\subseteq {\cal H}$, test whether $w(B\cup \{h\}) > w(B)$ (in other words, whether $B$ violates $h$). [**(B2) Basis computation.**]{} For some $h\in {\cal H}$ and some basis $B\subseteq {\cal H}$, compute a basis of $B\cup \{h\}$. An LP-type Formulation ---------------------- We now show that learning Mahalanobis metric spaces can be expressed as an LP-type problem. We first note that we can rewrite \[eq:G\_u\] and \[eq:G\_l\] as $$\begin{aligned} (p - q)^T {\mathbf{A}}(p-q) &\leq u^2, \label{eq:A_u}\end{aligned}$$ and $$\begin{aligned} (p - q)^T {\mathbf{A}}(p-q) &\geq \ell^2, \label{eq:A_l}\end{aligned}$$ where ${\mathbf{A}}={\mathbf{G}}^T {\mathbf{G}}$ is positive semidefinite. We define ${\cal H}=\{0,1\}\times {\mathbb{R}^d\choose 2}$, where for each $(0,\{p, q\})\in {\cal H}$, we have a constraint of type \[eq:A\_u\], and for every $(1,\{p,q\})\in {\cal H}$, we have a constraint of type \[eq:A\_l\]. Therefore, for any set of constraints $F\subseteq {\cal H}$, we may associate the set of feasible solutions for $F$ with the set ${\cal A}_F$ of all positive semidefinite matrices ${\mathbf{A}}\in \mathbb{R}^{d\times d}$, satisfying \[eq:A\_u\] and \[eq:A\_l\] for all constraints in $F$. Let $w:2^{\cal H}\to \mathbb{R}$, such that for all $F\subseteq {\cal H}$, we have $$w(F) = \left\{\begin{array}{ll} \inf_{{\mathbf{A}}\in {\cal A}_F} r^T {\mathbf{A}}r & \text{ if } {\cal A}_F \neq \emptyset\\ \infty & \text{ if } {\cal A}_F = \emptyset \end{array}\right.,$$ where $r\in \mathbb{R}^d$ is a vector chosen at random from the unit sphere, according to some rotationally-invariant probability measure. Such a vector can be chosen, for example, by first choosing some $r'\in \mathbb{R}^d$, where each coordinate is sampled from the normal distribution ${\cal N}(0,1)$, and setting $r=r'/\|r'\|_2$. \[lem:ML\_is\_LP-type\] When $w$ is chosen as above, the pair $({\cal H}, w)$ defines an LP-type problem of combinatorial dimension $O(d^2)$, with probability 1. Moreover, for any $n>0$, if each $r_{i}$ is chosen using $\Omega(\log n)$ bits of precision, then for each $F\subseteq {\cal H}$, with $n=|F|$, the assertion holds with high probability.
Since adding constraints to a feasible instance can only make it infeasible, it follows that $w$ satisfies the monotonicity axiom (A1). We next argue that the locality axion (A2) also holds, with high probability. Let $F\subseteq G \subseteq {\cal H}$, with $-\infty<w(F)=w(G)$, and let $h\in {\cal H}$, with $w(G)<w(G\cup \{h\})$. Let ${\mathbf{A}}_F\in {\cal A}_F$ and ${\mathbf{A}}_G\in {\cal A}_G$ be some (not necessarily unique) infimizers of $w({\mathbf{A}})$, when ${\mathbf{A}}$ ranges in ${\cal A}_F$ and ${\cal A}_G$ respectively. The set ${\cal A}_F$, viewed as a convex subset of $\mathbb{R}^{d^2}$, is the intersection of the SDP cone with $n$ half-spaces, and thus ${\cal A}_F$ has at most $n$ facets. There are at least two distinct infimizers for $w({\mathbf{A}}_G)$, when ${\mathbf{A}}_G\in {\cal A}_G$, only when the randomly chosen vector $r$ is orthogonal to a certain direction, which occurs with probability 0. When each entry of $r$ is chosen with $c\log n$ bits of precision, the probability that $r$ is orthogonal to any single hyperplane is at most $2^{-c \log n}= n^{-c}$; the assertion follows by a union bound over $n$ facets. This establishes that axiom (A2) holds with high probability. It remains to bound the combinatorial dimension, $\kappa$. Let $F\subseteq {\cal H}$ be a set of constraints. For each ${\mathbf{A}}\in {\cal A}_F$, define the ellipsoid $${\cal E}_{\mathbf{A}}= \{v \in \mathbb{R}^d : \|{\mathbf{A}}{\mathbf{v}}\|_2 = 1\}.$$ For any ${\mathbf{A}}, {\mathbf{A}}'\in {\cal A}_F$, with ${\cal E}_{\mathbf{A}}={\cal E}_{{\mathbf{A}}'}$, and ${\mathbf{A}}={\mathbf{G}}^T {\mathbf{G}}$, ${\mathbf{A}}'={\mathbf{G}}'^T {\mathbf{G}}'$, we have that for all $p,q\in \mathbb{R}^d$, $\|{\mathbf{G}}p-{\mathbf{G}}q\|_2=(p-q)^T {\mathbf{A}}(p-q) = (p-q)^T {\mathbf{A}}' (p-q)=\|{\mathbf{G}}' p-{\mathbf{G}}' q\|_2$. Therefore in order to specify a linear transformation ${\mathbf{G}}$, up to an isometry, it suffices to specify the ellipsoid ${\cal E}_{{\mathbf{A}}}$. Each $\{p,q\}\in {\cal S}$ corresponds to the constraint that the point $(p-q)/u$ must lie in ${\cal E}_{\mathbf{A}}$. Similarly each $\{p,q\}\in {\cal D}$ corresponds to the constraint that the point $(p-q)/\ell$ must lie either on the boundary or the exterior of ${\cal E}_{\mathbf{A}}$. Any ellipsoid in $\mathbb{R}^d$ is uniquely determined by specifying at most $(d+3)d/2 = O(d^2)$ distinct points on its boundary (see [@welzl1991smallest; @chazelle2000discrepancy]). Therefore, each optimal solution can be uniquely specified as the intersection of at most $O(d^2)$ constraints, and thus the combinatorial dimension is $O(d^2)$. \[lem:basis\_comp\] Any initial basis computation (B0), any violation test (B1), and any basis computation (B2) can be performed in time $d^{O(1)}$. The violation test (B1) can be performed by solving one SDP to compute $w(B)$, and another to compute $w(B\cup \{h\})$. By Lemma \[lem:ML\_is\_LP-type\] the combinatorial dimension is $O(d^2)$, thus each SDP has $O(d^2)$ constraints, and be solved in time $d^{O(1)}$. The basis computation step (B2) can be performed starting with the set of constraints $B\cup \{h\}$, and iteratively remove every constraint whose removal does not decrease the optimum cost, until we arrive at a minimal set, which is a basis. In total, we need to solve at most $d$ SDPs, each of size $O(d^2)$, which can be done in total time $d^{O(1)}$. Finally, by the choice of $w$, any set containing a single constraint in ${\cal S}$ is a valid initial basis. 
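For concreteness, the quantity $w(F)$ underlying the violation test (B1) can be prototyped with an off-the-shelf convex solver; the sketch below uses CVXPY with the SCS solver, which is an assumption of ours and not the authors' implementation:

```python
import numpy as np
import cvxpy as cp

def w_value(similar, dissimilar, u, l, r):
    """w(F): minimize r^T A r over PSD matrices A obeying the rewritten
    constraints (p-q)^T A (p-q) <= u^2 (similar) and >= l^2 (dissimilar).
    Returns +inf when the feasible set is empty, matching the definition."""
    d = len(r)
    A = cp.Variable((d, d), PSD=True)
    constraints = []
    for p, q in similar:
        v = p - q
        constraints.append(v @ A @ v <= u**2)
    for p, q in dissimilar:
        v = p - q
        constraints.append(v @ A @ v >= l**2)
    problem = cp.Problem(cp.Minimize(r @ A @ r), constraints)
    problem.solve(solver=cp.SCS)
    return problem.value

# Violation test (B1): a basis B violates a constraint h iff
# w(B plus h) > w(B), each side being one small SDP as in the lemma.
rng = np.random.default_rng(0)
r = rng.normal(size=3); r /= np.linalg.norm(r)     # random direction for the objective
```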
Algorithmic implications ------------------------ Using the above formulation of Mahalanobis metric learning as an LP-type problem, we can obtain our approximation scheme. Our algorithm uses as a subroutine an *exact* algorithm for the problem (that is, for the special case where we seek to find a mapping that satisfies all constraints). We first present the exact algorithm and then show how it can be used to derive the approximation scheme. #### An exact algorithm. [@welzl1991smallest] obtained a simple randomized linear-time algorithm for the minimum enclosing ball and minimum enclosing ellipsoid problems. This algorithm naturally extends to general LP-type problems (we refer the reader to [@har2011geometric; @chazelle2000discrepancy] for further details). With the interpretation of Mahalanobis metric learning as an LP-type problem given above, we thus obtain a linear-time algorithm for the exact version of the problem in $\mathbb{R}^d$, for any constant $d\in \mathbb{N}$. The resulting algorithm on a set of constraints $F\subseteq {\cal H}$ is implemented by the procedure ${\ensuremath{\mathsf{Exact\text{-}LPTML}}}(F; \emptyset)$, which is presented in Algorithm \[fig:algo1\]. The procedure ${\ensuremath{\mathsf{Exact\text{-}LPTML}}}(F;B)$ takes as input sets of constraints $F,B\subseteq {\cal H}$. It outputs a solution $\mathbf{A} \in \mathbb{R}^{d\times d}$ to the problem induced by the set of constraints $F\cup B$, such that all constraints in $B$ are tight (that is, they hold with equality); if no such solution exists, then it returns ${\mathsf{nil}}$. The procedure ${\ensuremath{\mathsf{Basic\text{-}LPTML}}}(B)$ computes ${\ensuremath{\mathsf{Exact\text{-}LPTML}}}(\emptyset; B)$. The analysis of [@welzl1991smallest] implies that when ${\ensuremath{\mathsf{Basic\text{-}LPTML}}}(B)$ is called, the cardinality of $B$ is at most the combinatorial dimension, which by Lemma \[lem:ML\_is\_LP-type\] is $O(d^2)$. Thus the procedure [$\mathsf{Basic\text{-}LPTML}$]{} can be implemented using one initial basis computation (B0) and $O(d^2)$ basis computations (B2), which by Lemma \[lem:basis\_comp\] takes total time $d^{O(1)}$.

**Algorithm** ${\ensuremath{\mathsf{Exact\text{-}LPTML}}}(F; B)$

-----------------------------------------------------------------------------------
if $F=\emptyset$ then ${\mathbf{A}}:= {\ensuremath{\mathsf{Basic\text{-}LPTML}}}(B)$
else
    choose $h\in F$ uniformly at random
    ${\mathbf{A}}:= {\ensuremath{\mathsf{Exact\text{-}LPTML}}}(F-\{h\})$
    if ${\mathbf{A}}$ violates $h$ then ${\mathbf{A}}:= {\ensuremath{\mathsf{Exact\text{-}LPTML}}}(F-\{h\}; B\cup \{h\})$
return ${\mathbf{A}}$
-----------------------------------------------------------------------------------

#### A $(1+{\varepsilon})$-approximation algorithm. It is known that the above exact linear-time algorithm leads to a nearly-linear-time approximation scheme for LP-type problems. This is summarized in the following. We refer the reader to [@har2011geometric] for a more detailed treatment. \[lem:sariel\] Let ${\cal A}$ be some LP-type problem of combinatorial dimension $\kappa>0$, defined by some pair $({\cal H},w)$, and let ${\varepsilon}>0$.
There exists a randomized algorithm which given some instance $F\subseteq {\cal H}$, with $|F|=n$, outputs some basis $B\subseteq F$, that violates at most $(1+{\varepsilon}) k$ constraints in $F$, such that $w(B) \leq w(B')$, for any basis $B'$ violating at most $k$ constraints in $F$, in time $O\left(t_0+\left(n + n\min\left\{\frac{\log^{\kappa+1}n}{{\varepsilon}^{2\kappa}}, \frac{\log^{\kappa+2}n}{k {\varepsilon}^{2\kappa+2}}\right\}\right)(t_1+t_2)\right)$, where $t_0$ is the time needed to compute an arbitrary initial basis of ${\cal A}$, and $t_1$, $t_2$, and $t_3$ are upper bounds on the time needed to perform the basic operations (B0), (B1) and (B2) respectively. The algorithm succeeds with high probability. For the special case of Mahalanobis metric learning, the corresponding algorithm is given in Algorithm \[fig:approx\_algo\]. The approximation guarantee for this algorithm is summarized in \[thm:main\]. We can now give the proof of our main result. Follows immediately by Lemmas \[lem:basis\_comp\] and \[lem:sariel\]. **Algorithm** ${\ensuremath{\mathsf{LPTML}}}(F)$ --------------------------------------------------------------------------------------------------------------------- for $i=0$ to $\log_{1+{\varepsilon}} n$ $p = (1+{\varepsilon})^{-i}$ for $j=1$ to $\log^{O(d^2)} n$ subsample $F_j\subseteq F$, where each element is chosen independently with probability $p$ ${\mathbf{A}}_j := {\ensuremath{\mathsf{Exact\text{-}LPTML}}}(F_j)$ return a solution out of ${\mathbf{A}}_1,\ldots,{\mathbf{A}}_j$, violating the minimum number of constraints in $F$ Triple constraints ================== In several works (see e.g. [@XXX]), the Mahalanobis metric learning problem in $d$-dimensional Euclidean space is solved under constraints of the form $$\begin{aligned} \|{\mathbf{G}}p-{\mathbf{G}}q\|_2 \geq \|{\mathbf{G}}q - {\mathbf{G}}z\|_2 + m, \label{eq:triple}\end{aligned}$$ for some $p,q,z\in \mathbb{R}^d$, where $m>0$ is the margin parameter. A constraint of the form \[eq:triple\] can be encoded by the triple $(p,q,z)$. An instance of the *Mahalanobis metric learning problem with triple constraints* is thus some $\phi=(P,F,m)$, where $P\subset \mathbb{R}^d$, with $|P|=n$, $F\subseteq P^d$, and $m>0$. It is straightforward to see that the above can also be interpreted as an LP-type problem, using the same approach as in the case of contrastive constraints. #### Regularization. We now argue that the LP-type algorithm described above can be extended to handle certain types of regularization on the matrix ${\mathbf{A}}$. In methods based on convex optimization, introducing regularizers that are convex functions can often be done easily. In our case, we cannot directly introduce a regularizing term in the objective function that is implicit in Algorithm \[fig:approx\_algo\]. More specifically, let ${\mathsf{cost}}({\mathbf{A}})$ denote the total number of constraints of type and that ${\mathbf{A}}$ violates. Algorithm \[fig:approx\_algo\] approximately minimizes the objective function ${\mathsf{cost}}({\mathbf{A}})$. A natural regularized version of Mahalanobis metric learning is to instead minimize the objective function ${\mathsf{cost}}'({\mathbf{A}}) := {\mathsf{cost}}({\mathbf{A}}) + \eta \cdot {\text{reg}}({\mathbf{A}})$, for some $\eta>0$, and regularizer ${\text{reg}}({\mathbf{A}})$. 
One typical choice is ${\text{reg}}({\mathbf{A}}) = {\text{tr}}({\mathbf{A}}{\mathbf{C}})$, for some matrix ${\mathbf{C}}\in \mathbb{R}^{d\times d}$; the case ${\mathbf{C}}={\mathbf{I}}$ corresponds to the trace norm (see [@kulis2013metric]). We can extend Algorithm \[fig:approx\_algo\] to handle any regularizer that can be expressed as a linear function on the entries of ${\mathbf{A}}$, such as ${\text{tr}}({\mathbf{A}})$. The following summarizes the result. \[thm:reg\] Let ${\text{reg}}({\mathbf{A}})$ be a linear function on the entries of ${\mathbf{A}}$, with polynomially bounded coefficients. For any $d\in \mathbb{N}$, ${\varepsilon}>0$, there exists a randomized algorithm for learning $d$-dimensional Mahalanobis metric spaces, which given an instance that admits a solution ${\mathbf{A}}_0$ with ${\mathsf{cost}}'({\mathbf{A}}_0)=c^*$, computes a solution ${\mathbf{A}}$ with ${\mathsf{cost}}'({\mathbf{A}}) \leq (1+{\varepsilon}) c^*$, in time $d^{O(1)} n (\log{n}/{\varepsilon})^{O(d)}$, with high probability. If $\eta < {\varepsilon}^t$, for a sufficiently large constant $t>0$, since the coefficients in ${\text{reg}}({\mathbf{A}})$ are polynomially bounded, it follows that the largest possible value of $\eta \cdot {\text{reg}}({\mathbf{A}})$ is $O({\varepsilon})$, and the regularization term can thus be omitted without affecting the result. Similarly, if $\eta>(1/{\varepsilon})n^{t'}$, for a sufficiently large constant $t'>0$, since there are at most ${n \choose 2}$ constraints, it follows that the term ${\mathsf{cost}}({\mathbf{A}})$ can be omitted from the objective. Therefore, we may assume w.l.o.g. that ${\text{reg}}(A_0) \in [{\varepsilon}^{O(1)}, (1/{\varepsilon}) n^{O(1)}]$. We can guess some $i=O(\log n + \log(1/{\varepsilon}))$, such that ${\text{reg}}(A_0) \in ((1+{\varepsilon})^{i-1}, (1+{\varepsilon})^{i}]$. We modify the SDP used in the proof of Lemma \[lem:basis\_comp\] by introducing the constraint ${\text{reg}}({\mathbf{A}}) \leq (1+{\varepsilon})^i$. Guessing the correct value of $i$ requires $O(\log n + \log(1/{\varepsilon}))$ executions of Algorithm \[fig:approx\_algo\], which implies the running time bound. Practical Improvements and Parallelization {#sec:practical} ========================================== We now discuss some modifications of the algorithm described in the previous section that significantly improve its performance in practical scenarios, and that have been integrated into our implementation. #### Move-to-front and pivoting heuristics. We use heuristics that have been previously used in algorithms for linear programming [@seidel1990linear; @clarkson1995vegas], minimum enclosing ball in $\mathbb{R}^3$ [@megiddo1983linear], minimum enclosing ball and ellipsoid in $\mathbb{R}^d$, for any fixed $d\in \mathbb{N}$ [@welzl1991smallest], as well as in fast implementations of minimum enclosing ball algorithms [@gartner1999fast]. The *move-to-front* heuristic keeps an ordered list of constraints which gets reorganized as the algorithm runs; when the algorithm finds a violation, it moves the violating constraint to the beginning of the list of the current sub-problem. The *pivoting* heuristic further improves performance by choosing to add to the basis the constraint that is “violated the most”. For instance, for similarity constraints, we pick the one that is mapped to the largest distance greater than $u$; for dissimilarity constraints, we pick the one that is mapped to the smallest distance less than $\ell$. #### Approximate counting.
The main loop of Algorithm \[fig:approx\_algo\] involves counting the number of violated constraints in each iteration. In problems involving a large number of constraints, we use approximate counting by only counting the number of violations within a sample of $O(\log 1/{\varepsilon})$ constraints. #### Early termination. A bottleneck of Algorithm \[fig:approx\_algo\] stems from the fact that the inner loop needs to be executed for $\log^{O(d^2)} n$ iterations. In practice, we have observed that a significantly smaller number of iterations is needed to achieve high accuracy. We denote by ${\ensuremath{\mathsf{LPTML}}}_{t}$ for the version of the algorithm that performs a total of $t$ iterations of the inner loop. #### Parallelization. Algorithm \[fig:approx\_algo\] consists of several executions of the algorithm [$\mathsf{Exact\text{-}LPTML}$]{} on independently sampled sub-problems. Therefore, Algorithm \[fig:approx\_algo\] can trivially be parallelized by distributing a different set of sub-problems to each machine, and returning the best solution found overall. [.31]{} [.31]{} [.31]{} Experimental Evaluation {#sec:experiments} ======================= We have implemented Algorithm \[fig:approx\_algo\], incorporating the practical improvements described in Section \[sec:practical\], and performed experiments on synthetic and real-world data sets. Our ${\ensuremath{\mathsf{LPTML}}}$ implementation and documentation can be found in our repository. We now describe the experimental setting and discuss the main findings. Experimental Setting -------------------- Data set ITML LMNN LPTML$_{t=2000}$ ----------- ------------------- ------------------- ------------------- Iris $ 0.96\pm 0.01 $ $ 0.96 \pm 0.02 $ $ 0.94 \pm 0.04 $ Soybean $ 0.95 \pm 0.04 $ $ 0.96 \pm 0.04 $ $ 0.90\pm 0.05 $ Synthetic $0.97 \pm 0.02$ $1.00 \pm 0.00$ $1.00 \pm 0.00$ : Average accuracy and standard deviation over 50 executions of ITML, LMNN and LPTML.[]{data-label="table:accuracies"} #### Classification task. Each data set used in the experiments consists of a set of labeled points in $\mathbb{R}^d$. The label of each point indicates its class, and there is a constant number of classes. The set of similarity constraints ${\cal S}$ (respt. dissimilarity constraints ${\cal D}$) is formed by uniformly sampling pairs of points in the same class (resp. from different classes). We use various algorithms to learn a Mahalanobis metric for a labeled input point set in $\mathbb{R}^d$, given these constraints. The values $u$ and $\ell$ are chosen as the $90$th and $10$th percentiles of all pairwise distances. We used 2-fold cross-validation: At the training phase we learn a Mahalanobis metric, and in the testing phase we use $k$-NN classification, with $k=4$, to evaluate the performance. #### Data sets. We have tested our algorithm on the following synthetic and real-world data sets: [*1. Real-world:*]{} We have tested the performance of our implementation on the Iris, Wine, Ionosphere and Soybean data sets from the UCI Machine Learning Repository[^2]. [*2. Synthetic:*]{} Next, we consider a synthetic data set that is constructed by first sampling a set of $100$ points from a mixture of two Gaussians in $\mathbb{R}^2$, with identity covariance matrices, and with means $(-3,0)$ and $(3,0)$ respectively; we then apply a linear transformation that stretches the $y$ axis by a factor of $40$. This linear transformation reduces the accuracy of $k$-NN on the underlying Euclidean metric with $k=4$ from 1 to 0.68. [*3. 
Data poisoning:*]{} We modify the above synthetic data set by introducing a small fraction of points in an adversarial manner, before applying the linear transformation. Figure \[fig:noiseA-LPTML\] depicts the noise added as five points labeled as one of the classes, and sampled from a Gaussian with identity covariance matrix and mean $(-100, 0)$ (Figure \[fig:noiseA\]). #### Algorithms. We compare the performance of our algorithm against [$\mathsf{ITML}$]{} and [$\mathsf{LMNN}$]{}. We used the implementations provided by the authors of these works, with minor modifications. Results ------- #### Accuracy. Algorithm \[fig:approx\_algo\] minimizes the number of violated pairwise distance constraints. It is interesting to examine the effect of this objective function on the accuracy of $k$-NN classification. Figure \[fig:wine\_viol\_acc\] depicts this relationship for the Wine data set. We observe that, in general, as the number of iterations of the main loop of [$\mathsf{LPTML}$]{} increases, the number of violated pairwise distance constraints decreases, and the accuracy of $k$-NN increases. This phenomenon remains consistent when we first perform PCA to $d=4,8,12$ dimensions. #### Comparison to [$\mathsf{ITML}$]{} and [$\mathsf{LMNN}$]{}. We compared the accuracy obtained by ${\ensuremath{\mathsf{LPTML}}}_t$, for $t=2000$ iterations, against [$\mathsf{ITML}$]{} and [$\mathsf{LMNN}$]{}. Table \[table:accuracies\] summarizes the findings on the real-world and data sets and the synthetic data set without adversarial noise. We observe that [$\mathsf{LPTML}$]{} achieves accuracy that is comparable to [$\mathsf{ITML}$]{} and [$\mathsf{LMNN}$]{}. We observe that [$\mathsf{LPTML}$]{} outperforms [$\mathsf{ITML}$]{} and [$\mathsf{LMNN}$]{} on the poisoned data set. This is due to the fact that the introduction of adversarial noise causes the relaxations used in [$\mathsf{ITML}$]{} and [$\mathsf{LMNN}$]{} to be biased towards contracting the $x$-axis. In contrast, the noise does not “fool” [$\mathsf{LPTML}$]{} because it only changes the optimal accuracy by a small amount. The results are summarized in Figure \[fig:synthetic-noise-accuracy\]. #### The effect of dimension. The running time of [$\mathsf{LPTML}$]{} grows with the dimension $d$. This is caused mostly by the fact that the combinatorial dimension of the underlying LP-type problem is $O(d^2)$, and thus performing each basic operation requires solving an SDP with $O(d^2)$ constraints. Figure \[fig:wine\_time\] depicts the effect of dimensionality in the running time, for $t=100,\ldots,2000$ iterations of the main loop. The data set used is Wine after performing PCA to $d$ dimensions, for $d=2,\ldots,13$. #### Parallel implementation. We implemented a massively parallel version of [$\mathsf{LPTML}$]{} in the MapReduce model. The program maps different sub-problems of the main loop of [$\mathsf{LPTML}$]{} to different machines. In the reduce step, we keep the result with the minimum number of constraint violations. The implementation uses the mrjob [@mrjob-docs] package. For these experiments, we used Amazon cloud computing instances of type *m4.xlarge*, AMI 5.20.0 and configured with Hadoop. As expected, the training time decreases as the number of available processors increases (Figure \[fig:parallel\_running\_time\]). All technical details about this implementation can be found in the parallel section of the documentation of our code. 
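To make the structure that is being distributed across machines concrete, the following is a minimal Python sketch of the outer loop of Algorithm \[fig:approx\_algo\] (illustrative only, and not the released implementation); `exact_lptml` and `violates` are hypothetical stand-ins for the [$\mathsf{Exact\text{-}LPTML}$]{} solver and the constraint test, and the fixed inner-loop budget plays the role of the early-termination heuristic described above.

```python
import math
import random

def lptml(constraints, exact_lptml, violates, eps=0.1, inner_iters=100):
    """Sketch of the LPTML outer loop: subsample the constraints at
    geometrically decreasing rates, solve each subsample exactly, and keep
    the solution violating the fewest constraints on the full instance."""
    n = len(constraints)
    best_A, best_cost = None, float("inf")
    for i in range(int(math.log(n, 1.0 + eps)) + 1):
        p = (1.0 + eps) ** (-i)            # sampling probability (1+eps)^{-i}
        # The analysis uses log^{O(d^2)} n inner iterations; in practice a much
        # smaller budget suffices (early termination), and each iteration is
        # independent, which is what the MapReduce version exploits.
        for _ in range(inner_iters):
            sample = [c for c in constraints if random.random() < p]
            A = exact_lptml(sample)        # assumed exact LP-type solver
            # Approximate counting would replace `constraints` below by a
            # small random sample of the constraints.
            cost = sum(violates(A, c) for c in constraints)
            if cost < best_cost:
                best_A, best_cost = A, cost
    return best_A, best_cost
```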
Conclusions =========== We have shown that the problem of learning a Mahalanobis metric space can be cast as an LP-type problem. This formulation allows us to obtain an efficient approximation scheme using tools from the theory of linear programming in low dimensions. Specifically, we present a near-linear time $(1+{\varepsilon})$-approximation algorithm that minimizes the number of violated constraints. Experimental evaluation demonstrates that when compared to prior work, our method is significantly more robust against small adversarial modifications of the input labelling. Our approach also leads to a fully parallelizable algorithm. It is an interesting research direction to extend our approximation algorithm to other classes of metric learning problems. One such case is when the input is specified as a set of ordered triples $(x,y,z)$, and the goal is to find a mapping $f$ with $\|f(x)-f(y)\|_2\leq \|f(x)-f(z)\|_2 - m$, for some margin $m>0$ (see [@weinberger2009distance]). Another important direction is to obtain geometric approximation algorithms for non-linear metric learning primitives, such as mappings computed by small depth neural networks. [^1]: Authors sorted in alphabetical order. [^2]: <https://archive.ics.uci.edu/ml/datasets.php>
--- author: - 'J. M. Diego, E. Martínez-González, J.L. Sanz, N. Benitez, J. Silk' title: 'Cosmology with GTC. A combined mm-optical galaxy cluster survey' --- Introduction {#sec:intro} ============ Clusters of galaxies have been widely used as cosmological probes. Their modeling can be easily understood as they are the final stage of the linearly evolved primordial density fluctuations. As a consequence, it is possible to describe, as a function of the cosmological model, the distribution of clusters and their evolution, the [*mass function*]{}, which is usually used as a cosmological test . Therefore, a detailed study of the cluster mass function will provide very useful information about the underlying cosmology.\ Unfortunately, cluster masses can not very well determined for intermediate-high redshift clusters and even for low redshift ones the error bars are still significant. However, instead of the mass function, it is possible to study the cluster population through other functions like the X-ray flux or luminosity functions, the temperature function or the Sunyaev-Zel’dovich effect (SZE hereafter) function. The advantage of these functions compared with the mass function is that, in these cases, the estimation of the X-ray fluxes, luminosities, temperatures or SZE decrements of the clusters is less affected by systematics than the mass estimation. The largest catalogue of clusters in the next years will be provided by the CMB satellite Planck. It is expected that Planck data will contain about 30000 detectable clusters (see Diego et al. 2002).\ The decrement in the CMB temperature due to the SZE is independent of redshift. Therefore the most distant clusters could be detected through the SZE. Hence, the SZE is the perfect way to look at those high redshift clusters. It is in the high redshift interval where the differences among the cosmological models are more evident when one looks at the cluster population.\ Unfortunately, through the SZE it is not possible to measure the redshift of the clusters and an independent observation (in the optical waveband for instance) of the clusters is needed in order to estimate their redshifts.\ Since many clusters will be at high redshift, a telescope with a large diameter (like GTC) will be needed in order to identify those high-z clusters. However, due to the large size of the Planck catalogue, any proposal which attempts to make use of a 10-m class telescope to identify 30000 clusters would be rejected (even in the case the proposal plans to observe only the high-z clusters there would be thousands of them !). In the next sections we will show how an SZE-selected optical survey made with GTC of only a small portion of the clusters in the Planck catalogue could be enough to obtain important cosmological constraints. A combined mm-optical survey {#sec:survey} ============================ At the end of this decade, the Planck satellite will carry out a full-sky survey in order to measure with a high sensitivity the CMB. However, other emissions (not only due to the CMB) will also be present in the data. Most of this non-CMB emissions will come from our own Galaxy (dust, free-free, synchrotron) but there will be also some extra-galactic emissions (point sources and the SZE). The wide frequency range, angular resolution and high sensitivity of Planck will allow to detect $\approx$ 30000 clusters.\ Observing galaxy clusters in the mm sub-mm band through the SZE has a unique advantage. 
The amplitude of the decrement in the center of the cluster is independent of its redshift. Due to this fact, the catalogue of clusters obtained by Planck will have a privileged selection function and the proportion of high-to-low redshift clusters will be maximum if we compare the catalogue with others obtained in optical or X-ray surveys.\ Galaxy clusters are the best tracers of the large scale structure and by studying the evolution of their abundance with redshift it is possible to impose very strong constraints on the cosmological model. ![Cluster number counts for Planck. Two models are plotted for comparison. OCDM model is the solid line ($\Omega = 0.3$, $\Lambda = 0.0$) and $\Lambda$CDM the dotted line ($\Omega = 0.3$, $\Lambda = 0.7$). The data is consistent with both models. Only the number counts as a function of redshift can distinguish both models.[]{data-label="fig:Fig1"}](Fig1.eps){width="\columnwidth"} For instance, using the local abundance of clusters it has been possible to find a strong correlation between the amplitude of the power spectrum ($\sigma _8$) and the matter density ($\Omega$). However, all the models in that correlation having different values of $\sigma _8$ and $\Omega$ predict the same observed local abundance of clusters (within the error bars). Therefore, by using just the local abundance of clusters it is not possible for instance to rule out models with high or low values of $\Omega$. However this can be done if we go back on time and study the evolution of the cluster abundance with redshift. In this case, the further we go in redshift, the better are the constraints in the cosmological parameters.\ ![Evolution of the number counts for the subsample of 300 clusters The data points were obtained from a Montecarlo simulation of the $\Lambda$CDM model. The solid line is the mean expected number counts for the same model ($\Lambda$CDM) and the dotted line is the corresponding expected number counts for the OCDM model. The OCDM model is excluded at 3 $\sigma$ level. []{data-label="fig:Fig2"}](Fig2.eps){width="\columnwidth"} As we have mentioned, the selection function of Planck will be privileged in the sense that the final catalogue will have a large proportion of high redshift clusters. It is interesting to exploit this fact and study the cosmological implications of such a large and redshift-independent catalogue. Unfortunately, since the SZE is independent of redshift, it will not be possible to determine the redshift of the clusters by just looking at their SZE emission. This fact will limit the kind of cosmological studies which can be done with the Planck catalogue since the only observable will be the flux of the cluster. In Fig. \[fig:Fig1\] we show the expected number counts for the Planck catalogue as a function of the observed flux.\ Despite the large number of clusters in the catalogue, this number will not be enough to discriminate between a model with a cosmological constant and a model without cosmological constant.\ However, both models can be distinguished if one uses the evolution of the number counts with redshift. To build this data one needs, obviously, to estimate first the redshift of the clusters and since this can not be done through the SZE one should make an independent optical observation for each one of the clusters. This can be a huge task if one attempts to measure the redshift for each one of the expected 30000 clusters. 
Some of them will be nearby clusters and they can be easily identified in previous surveys like the Sloan. Others at intermediate redshift could be observed with medium-size telescopes. There will be however, a large number of high-z clusters. In these cases, a large telescope like GTC will be needed. But even observing only the high-z clusters, their expected number is still too large to make a project like this one possible. However, instead of trying to identify all the Planck cluster catalogue, one could try to estimate the redshifts for only a small portion of it and build the number counts (as a function of z) from them. The question now is, how small should be the subsample of clusters if we want to extract useful information about the cosmological information form that subsample ?\ In Diego et al. (2001) we computed such number and we found that due to the particular selection function of Planck, by randomly selecting a subsample of only 300 clusters we could discriminate between $\Lambda$CDM and a OCDM models. This is an important conclusion since it tells us that we do not need to identify all the clusters in the Planck catalogue. The selection function of Planck is such that, a random subsample of 300 clusters contain enough high-z clusters to make possible the distinction between the two models.\ As we suggested before, part of those 300 clusters could be identified with existing cluster catalogues. Others could be identified using medium-size telescopes and only a small portion of them ($\approx 20 \%$) would require a telescope like GTC. In the same paper we calculated that, in order to estimate the photometric redshifts for the most distant clusters with GTC, we should observe them with two hours of integration time (8 bands and 900s per band) per cluster. These numbers show that this is a feasible project and the evolution of the number counts of the SZE-selected subsample of 300 clusters could be determined.\ In Fig. \[fig:Fig2\] we show the expected number counts as a function of redshift for a Montecarlo realization of the $\Lambda$CDM model. Also plotted are the mean number counts for the $\Lambda$CDM (solid line) and for the OCDM model (dotted line). As we mentioned earlier, with just 300 clusters the OCDM model could be excluded by the high-z bins.\ ![Cosmological constraints (marginalized probability) obtained after combining the Planck number counts (Fig. 1) with the evolution of the number counts of the optically observed subsample of 300 clusters (Fig. 2). The fiducial model used to simulate the two data sets is indicated by the big black dots. []{data-label="fig:Fig3"}](Fig3.eps){width="\columnwidth"} By combining the two data sets (number counts of Planck (Fig 1) and evolution of the number counts of the subsample (Fig. 2)) it is possible to constraint the cosmological parameters of the $\Lambda$CDM model.\ The first data set is very good to constrain the $\sigma _8 - \Omega$ correlation. The second data can be used to break the previous degeneracy between both parameters. The result of combining the Planck number counts (30000 clusters, Fig. 1) with the GTC followup (300 clusters, Fig. 2) can be seen in Fig. \[fig:Fig3\].\ conclusions =========== We have seen how combining two very different instruments (Planck and GTC) it is possible to constraint the cosmological model. Planck will provide a unique way of selecting distant clusters. However a subsequent redshift estimation will be needed. 
We have seen that a subsample of 300 identified clusters, randomly selected from the Planck catalogue, is large enough to distinguish between a $\Lambda$CDM and an OCDM model. Many of these 300 clusters can be identified using existing galaxy cluster catalogues or medium-size telescopes, but a portion ($\approx 20 \%$) will require a telescope like GTC. A survey made with GTC of the SZE-selected subsample will therefore make it possible to constrain the cosmological model with an independent test. This work has been supported by the Spanish DGESIC Project PB98-0531-C02-01, FEDER Project 1FD97-1769-C04-01, the EU project INTAS-OPEN-97-1192, and the RTN of the EU project HPRN-CT-2000-00124.\ JMD acknowledges support from his Marie Curie Fellowship of the European Community programme [*Improving the Human Research Potential and Socio-Economic knowledge*]{} under contract number HPMF-CT-2000-00967. Diego J.M., Martínez-González E., Sanz J.L., Benitez N., Silk J., MNRAS accepted, astro-ph/0103512. Diego J.M., Vielva P., Martínez-González E., Silk J., Sanz J.L., MNRAS submitted, astro-ph/0110587.
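As an illustration of the statistical reasoning behind Fig. \[fig:Fig2\] (a sketch only, not the calculation performed in Diego et al. 2001), the following compares a Poisson realization of redshift-binned counts drawn from one model against the predictions of another; the two arrays of predicted counts are placeholders, not the actual $\Lambda$CDM and OCDM number counts.

```python
import numpy as np
from scipy.stats import poisson

# Placeholder predicted counts per redshift bin for a 300-cluster subsample;
# in practice these curves come from the mass function of each cosmology.
lcdm = np.array([120.0, 90.0, 50.0, 25.0, 10.0, 5.0])
ocdm = np.array([150.0, 95.0, 40.0, 12.0, 2.5, 0.5])

rng = np.random.default_rng(0)
observed = rng.poisson(lcdm)          # one Monte Carlo realization of LCDM

def loglike(obs, model):
    """Poisson log-likelihood of the binned counts under a model."""
    return poisson.logpmf(obs, model).sum()

delta = loglike(observed, lcdm) - loglike(observed, ocdm)
print(f"Delta lnL (LCDM - OCDM) = {delta:.1f}")
# The difference is dominated by the high-z bins, where the two models
# predict very different counts; this is what drives the exclusion of OCDM.
```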
--- abstract: 'A few studies have recently been presented on the existence of oxygen in diamond; for example, the N3 EPR centre has been assigned, both theoretically and experimentally, to a model consisting of substitutional nitrogen and substitutional oxygen as nearest neighbours. We present ab initio calculations of substitutional oxygen in diamond in terms of stability, electronic structure, geometry and hyperfine interaction, and show that substitutional oxygen with C$_{2v}$ symmetry and $S=1$ is the ground state configuration. We find that oxygen produces either a donor or an acceptor level depending on the position of the Fermi level. Keywords: density functional theory, diamond, oxygen, hyperfine interaction. PACS: 31.15.E-, 81.05.ug, 31.30.Gs' author: 'K.M. Etmimi, P.R. Briddon, A.M. Abutruma, A. Sghayer, S.S. Farhat' date: 'Received July 20, 2015, in final form December 22, 2015' title: A density functional theory study of substitutional oxygen in diamond --- Introduction ============ High electron and hole mobilities and unrivalled thermal conductivity at room temperature mean that diamond could be the material of choice for high-power and high-frequency electronics. Among the many defects in diamond, nitrogen [@briddon-PB-185-179; @jones-DRM-53-35; @atumi-JPCM-25-6-065802] and boron [@chrenko-PRB.7.4560] are the impurities that are now best identified. Nitrogen as a simple substitutional defect forms a deep level at 1.7 eV below the conduction band edge [@farrer-SSC-7-685], and boron gives a shallower level at 0.37 eV above the edge of the valence band [@crowther-PR-154-772]. For this reason, a variety of other chemical defects in diamond are now being investigated. Oxygen is expected to be one of the important impurities in diamond owing to its abundance and to its atomic size being close to that of carbon, and it has been suggested to lead to n-type conductivity in diamond [@prins-PRB-61-7191]. Experimentally, oxygen has been found in mineral inclusions in diamond [@Walker-N-263-275], and combustion analysis of diamond converted to graphite indicated high levels of oxygen in natural diamond [@melton-n-263-309]. Moreover, a small number of optical centres may be related to oxygen [@ruan-APL-60-1379; @mori-apl-60-47]. Electron paramagnetic resonance (EPR) spectroscopy is a powerful probe that has been used to identify a large number of centres in diamond containing nuclei with non-zero spin. Only a small percentage (0.04$\%$) of natural oxygen is $^{17}$O [@iakoubovskii-PRB-66-045406], which is consistent with the scant evidence for the involvement of oxygen in EPR centres in diamond [@Ammerlaan1998]. 
However, enrichment of diamond in the $^{17}$O isotope can occur during diamond growth [@iakoubovskii-PRB-66-045406] or $^{17}$O ion implantation [@prins-PRB-61-7191; @iakoubovskii-PRB-66-045406] where the former undergoes insufficient control of the gas environment during diamond growth [@iakoubovskii-PRB-66-045406], and the latter is induced by lattice damage where the oxygen may be trapped for instant vacancy. EPR centre called KUL12 was detected to be an $S=1/2$ centre interacting with one $I=5/2$ nucleus with $A_\parallel = - 362$ MHz and $A_\perp = - 315$ MHz. Unfortunately, there is no other direct measurement of this centre. The N3 and OK1 centres have been suggested to contain oxygen; the former in a nearest neighbour nitrogen-oxygen pair, and the latter in a second nearest neighbour nitrogen-oxygen pair. Previously, we analysed N3 and OK1 in a broader study [@etmimi-jpcm-22-385502] and we concluded that the most suitable candidate structure for N3 is N$_s$–O$_s$. For OK1, none of the proposed models yield hyperfine tensors in agreement with experiment. Three of the EPR centres found in synthetic diamonds grown in carbonate medium in recent experimental study [@komarovskikh-PSSA-210-2074], using the high pressure apparatus BARS [@palyanov-7-3169; @Komarovskikh-PSSA-31163] showed that OX1, OX2 and OX3 centres are oxygen atoms occupying substitutional, interstitial and next to vacancy sites, respectively. The previous theoretical work using ab initio calculations [@gali-JPCM-13-11607] shows that substitutional oxygen exhibits carbon vacancy character which gives rise to an occupied $a$-level into the middle of the band gap and unoccupied $t$-level just below the conduction band. A theoretical work has predicted that oxygen introduces a mid-gap donor level in the band gap of a diamond, which is above the fundamental level of vacancy being 2  above the top of valence band. So, in a material containing both types of centres, one would expect a charge transfer to occur, which gives rise to EPR active defect. In this work, extensive calculations on different models containing oxygen atoms were carried out, and the total energies and other properties of defects were determined using ab initio calculations. Method ====== The structures were modelled using density-functional calculations with the exchange-correlation in a generalised gradient approximation [@perdew-prl-77-3865] by the AIMPRO code [@briddon-pssb-217-131; @rayson-cpc-178-128]. The Brillouin-zone is sampled using the Monkhorst-Pack scheme [@monkhorst-prb-13-5188] with a uniform mesh of $2\times2\times2$ special $k$-points. For several sample structures, we calculated the total energies using a $4\times4\times4$ mesh, which indicated that the relative total energies are converged to better than 10 . The valence states were represented by a set of atom-centred $s$- and $p$- with the addition of a set of $d$-like Gaussian functions [@goss-tap-104-69] to allow for polarization, and the Kohn-Sham states were expanded with the help of a contracted basis with a total of 22 functions on each carbon and oxygen atom. For the charge density evaluation, the plane waves with a cut-off of 300 Ha were used, yielding structures optimized until the total energy changes by less than $10^{-5}$ Ha. The lattice constant and the bulk modulus were within  1% and 2%, respectively, of experimentally determined values. 
The lattice constant was optimized, keeping the symmetry of the supercell fixed, giving a value of 3.5719 Å, close to the experimental value of 3.5667 Å [@Sze1981]. The calculated direct and indirect band gaps agree with the published plane-wave values [@liberman-PRB-62-6851] (5.68 and 4.18 ). In general 216-atom, simple-cubic supercells of side length $3a_0$ are used. Core-electrons are eliminated by using pseudo-potentials [@hartwigsen-prb-58-3641], the $1s$ electrons of C and O are in the core, and the $3p$ electrons are treated as the valence ones, so that hyperfine interactions are obtained by reconstructing the all-electron wave functions in the core region [@shaw-prl-95-105502; @blochl-prb-50-17953]. The atomic calculations for the reconstruction in the hyperfine calculations were performed using a systematic polynomial basis [@rayson-pre-76-026704]. Electrical levels were calculated using the marker method by comparing acceptor and donor with B and N, respectively. Results and dissections ======================= (a) ![ (Color online) Schematic structures of the substitutional oxygen in both the $S=0$ and $S=1$ configurations. Grey and red spheres represent C and O. (a) $C_{2v}$, $S=1$, (b) $C_{3v}$, $S=0$, (c) $T_d$, $S=0$, (d) $C_{2v}$, $S=0$, (e) $C_{3v}$, $S=1$. Bond lengths in Å.[]{data-label="fig:O"}](Stru1-C2v-S1.eps){width="\textwidth"} (b) ![ (Color online) Schematic structures of the substitutional oxygen in both the $S=0$ and $S=1$ configurations. Grey and red spheres represent C and O. (a) $C_{2v}$, $S=1$, (b) $C_{3v}$, $S=0$, (c) $T_d$, $S=0$, (d) $C_{2v}$, $S=0$, (e) $C_{3v}$, $S=1$. Bond lengths in Å.[]{data-label="fig:O"}](Stru1-C3v.eps){width="\textwidth"} (c) ![ (Color online) Schematic structures of the substitutional oxygen in both the $S=0$ and $S=1$ configurations. Grey and red spheres represent C and O. (a) $C_{2v}$, $S=1$, (b) $C_{3v}$, $S=0$, (c) $T_d$, $S=0$, (d) $C_{2v}$, $S=0$, (e) $C_{3v}$, $S=1$. Bond lengths in Å.[]{data-label="fig:O"}](Stru1-Td.eps){width="\textwidth"} (d) ![ (Color online) Schematic structures of the substitutional oxygen in both the $S=0$ and $S=1$ configurations. Grey and red spheres represent C and O. (a) $C_{2v}$, $S=1$, (b) $C_{3v}$, $S=0$, (c) $T_d$, $S=0$, (d) $C_{2v}$, $S=0$, (e) $C_{3v}$, $S=1$. Bond lengths in Å.[]{data-label="fig:O"}](Stru1-C2v.eps.eps){width="\textwidth"} (e) ![ (Color online) Schematic structures of the substitutional oxygen in both the $S=0$ and $S=1$ configurations. Grey and red spheres represent C and O. (a) $C_{2v}$, $S=1$, (b) $C_{3v}$, $S=0$, (c) $T_d$, $S=0$, (d) $C_{2v}$, $S=0$, (e) $C_{3v}$, $S=1$. Bond lengths in Å.[]{data-label="fig:O"}](Stru1-C3v-S1.eps){width="\textwidth"} Different charged forms , and of substitutional oxygen in diamond are examined. In all cases, the $C_{2v}$ configuration is found to be favoured. In the neutral charge state, we find several metastable structures for substitutional O$^0_s$ in a diamond. Interestingly, the spin orientation was crucial in terms of determining the stability. The lowest in energy exhibits a $C_{2v}$ $S=1$, as schematically shown in figure \[fig:O\] (a). The oxygen atom undergoes a distortion along $\langle$001$\rangle$, the oxygen moves strongly off centre to form two C–O bonds leaving behind two C dangling orbitals. This suggests that it may undergo a symmetry lowering distortion, probably of a chemical re-bonding. 
Three structures \[figures \[fig:O\] (b), (c) and (d)\] were found to be energetically indistinguishable and higher in energy than the ground state configuration by just 0.2 . The oxygen atom in figure \[fig:O\] (c) is in a fourfold coordinated arrangement with C–O bonds of lengths 1.73 [Å]{}. A trigonally symmetric state was also examined and was found to be metastable. The $C_{3v}$ $S=0$ structure in the figure \[fig:O\] (b) exhibits a slight displacement along $\langle$111$\rangle$, where one of the four neighbouring C atoms move from 1.73 [Å]{} to 1.72 [Å]{} compared to the on-site structure in figure \[fig:O\] (c). The energy differences \[between figure \[fig:O\] (b) and (c)\] are just of a few . Symmetrically equivalent to the ground state configuration \[figure \[fig:O\] (a)\], the spin averaged structure with two equivalent carbon neighbours to O is moved closer to the impurity, lowering the symmetry to $C_{2v}$, however, it is 0.21  higher in energy than the most stable one. The structure in the figure \[fig:O\] (e) exhibits $C_{3v}$ $S=1$ where the oxygen is significantly displaced off-site along $\langle$111$\rangle$ producing an elongated C–O bond to 2.08 [Å]{} which significantly increased the energy to 0.57  compared to the most stable configuration. Generally, the substitutional O atom bonds relatively weakly to the carbon dangling bonds in the vacancy, the reason for this probably being the oxygen atom having a relatively small atom compared to carbon and it can be understood as having vacancy-like characteristics, which has proved successful in explaining the electronic structure of the defects in a diamond [@watkins-PBC-117-9]. Symmetry Charge state Spin configuration Relative energy ---------- -------------- -------------------- ----------------- $C_{3v}$ neutral $S=0$ $0.20$  $T_d$ neutral $S=0$ $0.20$  $C_{2v}$ neutral $S=0$ $0.21$  $C_{3v}$ neutral $S=1$ $0.57$  $C_{3v}$ positive $S=1/2$ 0.04  $T_d$ positive $S=1/2$ 0.05  $C_{3v}$ negative $S=1/2$ 0.57  $T_d$ negative $S=1/2$ 1.13  : Relative energies for different structures with different symmetries of O$_s$. The zero of energy for neutral, positive and negative charged state is set to the O$_s^0$ $S=1$ $C_{2v}$, O$_s^{+1}$ $S=1/2$ $C_{2v}$ and O$_s^{-1}$ $S=1/2$ $C_{2v}$, respectively.[]{data-label="table:OHFI"} In the positive charge state, three different symmetry configurations are obtained. The differences are within a few as listed in table \[table:OHFI\]. The lowest structure has $C_{2v}$ symmetry lower than two other structures by just 0.04  for $C_{3v}$ and 0.05  for $T_d$ configuration, respectively, where all structures with spin $S=1/2$. The energy difference is so tiny that one cannot be certain which possesses the lowest energy. Similarly to , the $C_{2v}$ $S = 1/2$ configuration is found to be favoured in the negatively charged state. Moreover, there are two other metastable configurations which are high in energy as listed in table \[table:OHFI\]. (a) ![ (Color online) Band structure for $S=1$ and $S=0$ configurations of substitutional oxygen in diamond. Filled and empty circles show filled and empty bands, respectively. The energy scale is defined by the valence band top at zero energy. (a) $C_{2v}$ $S=1$, (b) $C_{3v}$ $S=0$, (c) $T_d$ $S=0$.[]{data-label="fig:bandest-O"}](O-c2v.eps){width="\textwidth"} (b) ![ (Color online) Band structure for $S=1$ and $S=0$ configurations of substitutional oxygen in diamond. Filled and empty circles show filled and empty bands, respectively. 
The energy scale is defined by the valence band top at zero energy. (a) $C_{2v}$ $S=1$, (b) $C_{3v}$ $S=0$, (c) $T_d$ $S=0$.[]{data-label="fig:bandest-O"}](O-C3v.eps){width="\textwidth"} (c) ![ (Color online) Band structure for $S=1$ and $S=0$ configurations of substitutional oxygen in diamond. Filled and empty circles show filled and empty bands, respectively. The energy scale is defined by the valence band top at zero energy. (a) $C_{2v}$ $S=1$, (b) $C_{3v}$ $S=0$, (c) $T_d$ $S=0$.[]{data-label="fig:bandest-O"}](O1-Td.eps){width="\textwidth"} Electronically, we present the previously calculated energy structure of on-site configuration quoted by Gali et al. [@gali-JPCM-13-11607] that possesses the ground state configuration \[figure \[fig:bandest-O\] (c)\]. Its band gap consists of one fully non-degenerate one-electron states $a_1$ near the middle of the band gap and one triplet degenerate $t_2$ state close to the conduction band, with a total occupation of two electrons, where $a_1$ and $t_2$ levels are introduced in the band gap due to C vacancy [@coulson-PRSLA-241-433]. Apparently, the $a_1$ and $t_2$ levels hybridized with oxygen valence states $s$ and $p$ atomic orbitals, which gives rise to four levels $a_1$, $t_2$, $a^*_1$ and $t^*_2$, two states will be in the band gap, the anti-bonding $a^*_1$ level will be occupied by two electrons and triplet degeneracy $t^*_2$ will be unoccupied. Our calculation shows that $C_{2v}$ $S=1$ in the neutral charge state is the most stable structure, where the $t_2$-level is split into three levels, $b_1a_1b_2$ as shown schematically in figure \[fig:bandest-O\] (a), where $b_2$ lies in the conduction band. The $b_1$ and $a_1$ levels are odd and even combinations of the neighbouring carbon dangling bonds which are farther apart than the other two to oxygen, respectively. Other previous density functional calculations [@gali-JPCM-13-11607] found that O$_s$ has $T_d$ symmetry and is the most stable configuration in neutral, negative and positive charge states. However, it is unclear whether the $S=1$ was considered. We find that the spin states for the neutral and positively charged states follow the Hund rule, so the positive and neutral defects have $S = 1/2$ and $S = 1$, respectively. Compared to the band gap of the well-known substitutional nitrogen (it has $C_{3v}$), O$_s$ with $C_{3v}$ $S=0$ symmetry has more levels. In addition to $a_1$ level associated with the radical on the unique carbon, there is one empty doublet degeneracy $e^0$ and a singlet empty $a^0$ from anti-bonding between O and there are three identical carbon atoms neighbouring the oxygen as shown in figure \[fig:bandest-O\] (b). The oxygen atom is more shared with the three close neighbours. is theoretically electrically active, with donor, double donor and acceptor levels being estimated to be at $E_c-2.8~{{\mbox{{eV}}}}$, $E_v+0.04~{{\mbox{{eV}}}}$ and $E_c-1.9~{{\mbox{{eV}}}}$, respectively. These levels are in disagreement with the previous density functional calculation [@gali-JPCM-13-11607] of values $E_v+1.97~{{\mbox{{eV}}}}$, $E_v+1.39~{{\mbox{{eV}}}}$ and $E_v+2.89~{{\mbox{{eV}}}}$, respectively. Oxygen can exhibit an amphoteric behaviour depending on the location of Fermi level. Since the acceptor level of oxygen lies below a donor level such as and the donor level of oxygen lies above the acceptor level such as $V$ [@dannefaer-DRM-10-2113], one would expect the charge transfer to occur in the material containing both types of centres. 
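The donor and acceptor positions quoted above follow from total-energy differences through the marker method mentioned in the previous section. A minimal sketch of that bookkeeping is given below, with entirely hypothetical input energies rather than the actual supercell totals; the function returns the $(q+1/q)$ level of a defect referenced to the same energy (e.g. the valence band top) as the known level of the marker.

```python
def level_from_marker(marker_level, e_X_q, e_X_qp1, e_M_q, e_M_qp1):
    """Marker-method estimate of the (q+1/q) level of defect X.

    marker_level   : known (q+1/q) level of the marker M (eV, e.g. above E_v)
    e_X_q, e_X_qp1 : supercell total energies of X in charge states q and q+1
    e_M_q, e_M_qp1 : the same quantities for the marker M
    """
    return marker_level + (e_X_q - e_X_qp1) - (e_M_q - e_M_qp1)

# Hypothetical numbers only: removing an electron from the defect costs
# 0.4 eV more than removing one from the marker, so its donor level lies
# 0.4 eV lower in the gap (further from the conduction band) than the marker.
donor_level = level_from_marker(marker_level=3.0,
                                e_X_q=-100.0, e_X_qp1=-96.6,
                                e_M_q=-200.0, e_M_qp1=-197.0)
print(donor_level)   # 2.6 eV above the reference used for the marker
```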
Previously, in [@etmimi-jpcm-22-385502] we showed that Nitrogen-Oxygen complexes render both an acceptor and donor; we find the $(-/0)$ and $(0/+)$ levels at $E_v+ 3.7~{{\mbox{{eV}}}}$ and $E_v + 1.5~{{\mbox{{eV}}}}$. [|c|c|c|c|c|c|c|]{} Species & & &\ \ $^{17}$O &$-198$ &$(90,315)$ &$-181$ &$(00,00)$ &$-165$&$(90,45)$\ C1 &$-1$ &$(147,225)$&$-1$ &$(90,135)$ &$4$ &$(123,45)$\ C2 &$-1$ &$(147,45)$ &$-1$ &$(90,135)$ &$4$ &$(57,45)$\ C3 &$205$ &$(126,315)$ &$86$ &$(90,45)$ &$86$ &$(144,135)$\ C4 &$205$ &$(54,315)$&$86$ &$(90,45)$ &$86$ &$(144,315)$\ \ $^{17}$O &$-568$ &$(00,129)$ &$-552$ &$(90,45)$ &$-538$ &$(90,315)$\ C1 &$41$ &$(125,45)$ &$18$ &$(102,144)$ &$18$ &$(142,250)$\ C2 &$41$ &$(55,45)$ &$18$ &$(102,126)$ &$18$ &$(142,20)$\ C3 &$197$ &$(125,315)$ &$86$ &$(90,45)$ &$86$ &$(145,135)$\ C4 &$197$ &$(55,315)$ &$86$ &$(90,45)$ &$86$ &$(145,315)$\ \ $^{17}$O &$-19$ &$(90,135)$ &$31$ &$(00,00)$ &$40$ &$(90,45)$\ C1 &$-7$ &$(82,225)$ &$-4$ &$(172,224)$ &$-4$ &$(90,315)$\ C2 &$-7$ &$(98,225)$ &$-4$ &$(172,46)$ &$-4$ &$(90,315)$\ C3 &$238$ &$(126,315)$ &$105$ &$(90,45)$ &$105$ &$(144,135)$\ C4 &$238$ &$(54,315)$ &$105$ &$(90,45)$ &$105$ &$(144,315)$\ Hyperfine tensors for the most stable configurations within different charge states of oxygen and four nearest neighbours are listed in table \[table:O2HFI\]. There are no hyperfine values for substitutional oxygen in literature so far. with $C_{3v}$ structurally resembles P1 centre, although the present calculation shows that this configuration is metastable within a few . $C_{3v}$ symmetry means that one of the four neighbouring C atoms is farther away from the oxygen than the other three, the hyperfine tensor on these carbon atoms is small ($A_\parallel=89~{\mbox{MHz}}$, ${A_\perp=42~{\mbox{MHz}}}$) compared to those in P1 centre, which means that the spin density is not mostly localized on the carbon radical site. It is distributed on the anti-bond between O and four neighbouring carbon atoms as shown in figure \[fig:O-positive-C3v\]. In the negatively charged state with $C_{2v}$ symmetry, the spin density is strongly localized in the vicinity of the carbon radical sites, leading to small, anisotropic hyperfine tensors for the oxygen, whereas in neutral charged state, the relatively larger values for the hyperfine O compared to the negatively charged state are due to the relatively big amount of spin density on the O-site, and to some extent on the carbon radical atoms, from odd and even combinations of the neighbouring carbon dangling bonds for the highest and second highest occupied levels, respectively. In the positively charged state, the even combinations of neighbouring carbon dangling bonds make the values of hyperfine tensor on O still larger. ![ (Color online) Unpaired electron Kohn-Sham functions for positive substitutional oxygen with C$_{3v}$ symmetry.[]{data-label="fig:O-positive-C3v"}](O-C3v-positive.eps){width="\textwidth"} Conclusions =========== We have used ab initio computational modelling mainly for the stability and electronic structure on different forms of the substitutional oxygen in diamond. Energetically we find that $S=0$ $C_{2v}$ is the most stable structure where both $S=0$ and $S=1$ are considered. The band gap of substitutional oxygen gives rise to two states, one $a_1$ state located near the middle of the band gap and the other $t_2$ state located close to the conduction band edge. The $t_2$ state is populated when O becomes negatively charged or neutral $S=1$ configuration. 
[99]{} \[1\][`#1`]{} \[2\]\[\][[\#2](#2)]{} Briddon P.R., Jones R., Physica B, 1993, **185**, No. 1–4, 179; . Jones R., Goss J., Pinto H., Palmer D., Diamond Relat[.]{} Mater[.]{}, 2015, **53**, 35; . Atumi M.K., Goss J.P., Briddon P.R., Shrif F.E., Rayson M.J., J. Phys.: Condens. Matter, 2013, **25**, No. 6, 065802; . Chrenko R., Phys. Rev. B, 1973, **7**, 4560; . Farrer R., Solid State Commun[.]{}, 1969, **7**, 685; . Crowther P.A., Dean P.J., Sherman W.F., Phys[.]{} Rev[.]{}, 1967, **154**, No. 3, 772; . Prins J.F., Phys[.]{} Rev[.]{} B, 2000, **61**, No. 11, 7191; . Walker J., Nature, 1976, **263**, No. 275, 275; . Melton C.E., Nature, 1976, **263**, 309; . Ruan J., Choyke W.J., Kobashi K., Appl[.]{} Phys[.]{} Lett[.]{}, 1993, **60**, No. 12, 1379; . Mori Y., Eimori N., Kozuka H., Yokota Y., Moon H., Ma J.S., Ito T., Hiraki A., Appl[.]{} Phys[.]{} Lett[.]{}, 1992, **60**, No. 1, 47; . Iakoubovskii K., Stesmans A., Phys[.]{} Rev[.]{} B, 2002, **66**, No. 4, 045406; . Ammerlaan C.A.J., In: Landolt-Börnstein Numerical Data and Functional Relationships in Science and Technology New Series Vol. III/22b, Schultz M. (Ed.), Springer, Berlin, 1989, 177–206. Etmimi K.M., Goss J.P., Briddon P.R., Gsiea A.M., J[.]{} Phys[.]{}: Condens[.]{} Matter, 2010, **22**, No. 38, 385502;\ . Komarovskikh A., Nadolinny V., Palyanov Y., Kupriyanov I., Phys[.]{} Status Solidi A, 2013, **210**, No. 10, 2074;\ . Palyanov Y.N., Borzdov Y.M., Khokhryakov A.F., Kupriyanov I.N., Sokol A.G., Cryst. Growth Des., 2010, **10**, No. 7, 3169; . Komarovskikh A., Nadolinny V., Palyanov Y., Kupriyanov I., Sokol A., Phys[.]{} Status Solidi A, 2014, **211**, No. 10, 2274; . Gali A., Lowther J.E., De[á]{}k P., J[.]{} Phys[.]{}: Condens[.]{} Matter, 2001, **13**, 11607; . Perdew J.P., Burke K., Ernzerhof M., Phys[.]{} Rev[.]{} Lett[.]{}, 1996, **77**, 3865; . Briddon P.R., Jones R., Phys[.]{} Status Solidi B, 2000, **217**, No. 1, 131;\ . Rayson M.J., Briddon P.R., Comput. Phys[.]{} Commun[.]{}, 2008, **178**, No. 3, 128; . Monkhorst H.J., Pack J.D., Phys[.]{} Rev[.]{} B, 1976, **13**, No. 12, 5188; . Goss J.P., Shaw M.J., Briddon P.R., In: Theory of Defects in Semiconductors, Drabold D.A., Estreicher S.K. (Eds.), Springer, Berlin/Heidelberg, 2007, 69–94; . Sze S.M., Physics of Semiconductor Devices, 2nd Edn., Wiley-Interscience, New York, 1981. Liberman D.A., Phys[.]{} Rev[.]{} B, 2000, **62**, No. 11, 6851; . Hartwigsen C., Goedecker S., Hutter J., Phys[.]{} Rev[.]{} B, 1998, **58**, No. 7, 3641; . Shaw M.J., Briddon P.R., Goss J.P., Rayson M.J., Kerridge A., Harker A.H., Stoneham A.M., Phys[.]{} Rev[.]{} Lett[.]{}, 2005, **95**, 205502; . Blöchl P.E., Phys[.]{} Rev[.]{} B, 1994, **50**, No. 4, 17953; . Rayson M.J., Phys[.]{} Rev[.]{} E, [2007]{}, **[76]{}**, No. 2, [026704]{}; . Watkins G.D., Physica B, 1983, **117–118**, No. 1–3, 9; . Coulson C.A., Kearsley M.J., Proc[.]{} R[.]{} Soc[.]{} London, Ser[.]{} A, 1957, **241**, 433; . Dannefaer S., Pu A., Kerr D., Diamond Relat[.]{} Mater[.]{}, 2001, **10**, 2113; .
--- abstract: 'Chemical models predict that the deuterated fraction (the column density ratio between a molecule containing D and its counterpart containing H) of , (), high in massive pre-protostellar cores, is expected to rapidly drop of an order of magnitude after the protostar birth, while that of HNC, (HNC), remains constant for much longer. We tested these predictions by deriving (HNC) in 22 high-mass star forming cores divided in three different evolutionary stages, from high-mass starless core candidates (HMSCs, 8) to high-mass protostellar objects (HMPOs, 7) to Ultracompact HII regions (UCHIIs, 7). For all of them, () was already determined through IRAM-30m Telescope observations, which confirmed the theoretical rapid decrease of () after protostar birth (Fontani et al. 2011). Therefore our comparative study is not affected by biases introduced by the source selection. We have found average (HNC) of 0.012, 0.009 and 0.008 in HMSCs, HMPOs and UCHIIs, respectively, with no statistically significant differences among the three evolutionary groups. These findings confirm the predictions of the chemical models, and indicate that large values of () are more suitable than large values of (HNC) to identify cores on the verge of forming high-mass stars, likewise what found in the low-mass regime.' author: - | F. Fontani$^{1}$[^1], T. Sakai$^{2}$, K. Furuya$^{3}$, N. Sakai$^{4}$, Y. Aikawa$^{3}$, S. Yamamoto$^{4}$\ \ $^{1}$INAF-Osservatorio Astrofisico di Arcetri, L.go E. Fermi 5, Firenze, I-50125, Italy\ $^{2}$Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Tokyo 182-8585, Japan\ $^{3}$Department of Earth and Planetary Sciences, Kobe University, Kobe 657-8501, Japan\ $^{4}$Department of Physics, Graduate School of Science, The University of Tokyo, Tokyo 113-0033, Japan\ date: 'Accepted date. Received date; in original form date' title: 'DNC/HNC and / ratios in high-mass star forming cores' --- \[firstpage\] Molecular data – Stars: formation – radio lines: ISM – submillimetre: ISM – ISM: molecules Introduction {#intro} ============ The process of deuterium enrichment in molecules from HD, the main reservoir of deuterium in molecular clouds, is initiated by three exothermic ion-molecule reactions (e.g. Millar et al. 1989): $${\rm H_3^+ + HD \rightarrow H_2D^+ + H_2}\;, \label{eqa}$$ $${\rm CH_3^+ + HD \rightarrow CH_2D^+ + H_2}\;, \label{eqb}$$ $${\rm C_2H_2^+ + HD \rightarrow C_2HD^+ + H_2} \;. \label{eqc}$$ Since the backward reactions are endothermic by 232, 390 and 550 K, respectively, in cold environments (e.g. $T_{\rm kin} \leq 20$ K for reaction (1)) they proceed very slowly, favouring the formation of deuterated ions. Moreover, the freeze-out of CO and other neutrals, particularly relevant in high-density gas ($n_{\rm H_2}\geq 10^4$ ), further boosts the deuteration process (e.g. Bacmann et al. [-@bacmann03], Crapsi et al. [-@crapsi05], Gerin et al. [-@gerin06]). Therefore, in dense and cold cores the deuterated fraction, , defined as the abundance ratio between a deuterated molecule and its hydrogenated counterpart, is expected to be much higher than the average \[D/H\] interstellar abundance (of the order of $10^{-5}$, Oliveira et al. [-@oliveira03], Linsky et al.  [-@linsky06]). Because of the changements in physical and chemical properties of a star forming core, its  is expected to change with the evolution too. 
Specifically,  is predicted to increase when a pre–protostellar core evolves towards the onset of gravitational collapse, as the core density profile becomes more and more centrally peaked (due to the temperature decrease at core centre, e.g. Crapsi et al. [-@crapsi07]), and then it drops when the young stellar object formed at the core centre begins to heat its surroundings (see e.g. Caselli et al. [-@caselli02]). While this net drop in  before and after the protostellar birth in () is clearly observed in both low-mass (Crapsi et al. [-@crapsi05], Emprechtinger et al. [-@emprechtinger09]) and high-mass (Fontani et al. [-@fontani11], Chen et al. [-@chen11]) star-forming cores, other species show deviations from this general scenario. For example, DNC is produced in the gas from the same route reaction as , namely reaction (\[eqa\]), so that (HNC) and () are expected to vary similarly with temperature (Turner [-@turner]). However, Sakai et al. ([-@sakai12]) have measured (HNC) in a sample of 18 massive cores including both infrared-dark starless cores and cores harbouring high-mass protostellar objects, and found that (HNC) in the starless cores is only marginally higher than that measured in the protostellar cores. This ’anomaly’ could be explained by the fact that the destruction processes of  are much faster than those of DNC: being an ion,  can recombine quickly (few years) with CO and/or electrons, while the neutral DNC has to be destroyed by ions (such as  and/or H$_3^+$) through much slower ($10^4 - 10^5$ yrs) chemical reactions (Sakai et al. [-@sakai12]). The chemical models of Sakai et al. ([-@sakai12]), are able to partially reproduce the observational results obtained by Fontani et al. ([-@fontani11]) and Sakai et al. ([-@sakai12]). We have performed chemical calculations similar to those in Sakai et al. ([-@sakai12]), and obtained the consistent results (see Section 4.2 for the details of our chemical model): the / ratio approaches $\sim 0.1$ during the cold pre–protostellar phase and drops quickly to $\sim 0.01$ after the protostellar birth because very sensitive to a temperature growth (Fig. 1, panel (a)), while the DNC/HNC ratio remains relatively high (above $\sim 0.01$) even after a rapid temperature rise, and decreases in timescales of several 10$^4$ yrs. On the other hand, the /  drops much more quickly, in less than 100 yrs (Fig. 1, panel (b)). However, the different criteria adopted by Fontani et al. ([-@fontani11]) and Sakai et al. ([-@sakai12]) to select the targets does not allow for a consistent observational comparison between the two deuterated fractions, as well as between models and data. In this paper we report observations performed with the Nobeyama-45m Telescope in the DNC and HN$^{13}$C (1–0) rotational transitions towards 22 high-mass cores harbouring different stages of the high-mass star formation process, in which () was already measured through observations of the IRAM-30 Telescope (Fontani et al. [-@fontani11]). In this way, our study is not affected by observational biases possibly introduced by the source selection. The main aim of the work is to test in the same sample of objects whether the / and DNC/HNC ratios trace differently the thermal history of high-mass cores despite the similar chemical origin, as predicted by Sakai et al. ([-@sakai12]). The sample, selected as explained in Fontani et al. 
([-@fontani11]), is divided in 8 high-mass starless cores (HMSCs), 7 high-mass protostellar objects (HMPOs) and 7 ultracompact HII regions (UC HIIs), so that all main evolutionary groups of the high-mass star formation process are almost equally represented. We stress that all HMSCs, except I22134–B, have been previously classified as ’quiescent’ by Fontani et al. ([-@fontani11]) to distinguish them from ’perturbed’ cores, in which external phenomena (passage of outflows, shocks, nearby infrared objects) can have affected significantly the physical-chemical properties of the gas, as discussed in Fontani et al. ([-@fontani11]). In Sect. \[obs\] we give an overview of the technical details of the observations; Sect. \[res\] presents the main observational results, which are discussed in Sect. \[dis\], including a detailed comparison with chemical models. A summary of the main findings of the paper are given in Sect. \[con\]. Observations {#obs} ============ The (1–0) and DNC(1–0) transitions were observed with the NRO 45 m Telescope in may 2012 towards 22 out of the 27 cores already observed by Fontani et al. ([-@fontani11]) in (2–1) and (3–2). The source coordinates, as well as some basic properties of the star forming regions where they are embedded (LSR velocity of the parental core, distance to the Sun, bolometric luminosity, () as measured by Fontani et al. [-@fontani11]) are listed in Table \[tab\_sou\]. The two transitions were observed simultaneously by using the sideband-separating superconductor-insulator-superconductor receiver, T100 (Nakajima et al. [-@nakajima08]). Some important spectroscopic parameters of the lines observed and the main technical parameters are listed in Table \[tab\_lin\]. The half-power beam width is about 21[$^{\prime\prime}$]{} and 18[$^{\prime\prime}$]{} at 76 (DNC (1–0)) and 87 GHz ((1–0)), respectively, similar to the beam width of the IRAM-30m Telescope at the frequency of the (2–1) line ($\sim 15$ [$^{\prime\prime}$]{}, Fontani et al. [-@fontani11]). The main beam efficiency ($\eta_{\rm MB}$) is 0.53 and 0.43 at 76 and 87 GHz, respectively. We derived the main beam temperature () from the antenna temperature () by using the relation  = /$\eta_{\rm MB}$, where $\eta_{\rm MB}$ is the main beam efficiency (see Table \[tab\_lin\]). For all the observations, we used digital backends SAM45 (bandwidth = 500 MHz, frequency resolution = 122.07 kHz). The telescope pointing was checked by observing nearby SiO maser source every one to two hours, and was maintained to be better than 5[$^{\prime\prime}$]{}. The line intensities were calibrated by the chopper wheel method. All the observations were carried out with the position switching mode. [lccccccc]{} source& RA(J2000) & Dec(J2000) &  & $d$ & $L_{\rm bol}$ & Ref. 
& ()\ & h m s & $o$ $\prime$ $\prime\prime$ &   & kpc &  & &\ \ I00117-MM2 $^{a}$ & 00:14:26.3 & +64:28:28 & $-36.3$ & 1.8 & $10^{3.1}$ & (1) & 0.32\ G034-G2(MM2) $^{a}$ & 18:56:50.0 & +01:23:08 & $+43.6$ & 2.9 & $10^{1.6}$ $^{r}$ & (4) & 0.7\ G034-F2(MM7) $^{a}$ & 18:53:19.1 & +01:26:53 & $+57.7$ & 3.7 & $10^{1.9}$ $^{r}$ & (4) & 0.43\ G034-F1(MM8) $^{a}$ & 18:53:16.5 & +01:26:10 & $+57.7$ & 3.7 & – & (4) & 0.4\ G028-C1(MM9) $^{a}$ & 18:42:46.9 & $-$04:04:08 & $+78.3$ & 5.0 & – & (4) & 0.38\ I20293-WC $^{a}$ & 20:31:10.7 & +40:03:28 & $+6.3$ & 2.0 & $10^{3.6}$ & (5,6) & 0.19\ I22134-G $^{b}$ $^{w}$ & 22:15:10.5 & +58:48:59 & $-18.3$ & 2.6 & $10^{4.1}$ & (7) & 0.023\ I22134-B $^{b}$ & 22:15:05.8 & +58:48:59 & $-18.3$ & 2.6 & $10^{4.1}$ & (7) & 0.09\ \ I00117-MM1 $^{a}$ & 00:14:26.1 & +64:28:44 & $-36.3$ & 1.8 & $10^{3.1}$ & (1) & $\leq 0.04$\ 18089–1732 $^{b}$ & 18:11:51.4 & $-$17:31:28 & $+32.7$ & 3.6 & $10^{4.5}$ & (9) & 0.031\ 18517+0437 $^{b}$ & 18:54:14.2 & +04:41:41 & $+43.7$ & 2.9 & $10^{4.1}$ & (10) & 0.026\ G75-core $^{a}$ & 20:21:44.0 & +37:26:38 & $+0.2$ & 3.8 & $10^{4.8}$ & (11,12) & $\leq 0.02$\ I20293-MM1 $^{a}$ & 20:31:12.8 & +40:03:23 & $+6.3$ & 2.0 & $10^{3.6}$ & (5) & 0.07\ I21307 $^{a}$ & 21:32:30.6 & +51:02:16 & $-46.7$ & 3.2 & $10^{3.6}$ & (13) & $\leq 0.03$\ I23385 $^{a}$ & 23:40:54.5 & +61:10:28 & $-50.5$ & 4.9 & $10^{4.2}$ & (14) & 0.028\ \ G5.89–0.39 $^{b}$ & 18:00:30.5 & $-$24:04:01 & $+9.0$ & 1.28 & $10^{5.1}$ & (15,16) & 0.018\ I19035-VLA1 $^{b}$ & 19:06:01.5 & +06:46:35 & $+32.4$ & 2.2 & $10^{3.9}$ & (11) & 0.04\ 19410+2336 $^{a}$ & 19:43:11.4 & +23:44:06 & $+22.4$ & 2.1 & $10^{4.0}$ & (17) & 0.047\ ON1 $^{a}$ & 20:10:09.1 & +31:31:36 & $+12.0$ & 2.5 & $10^{4.3}$ & (18,19) & 0.017\ I22134-VLA1 $^{a}$ & 22:15:09.2 & +58:49:08 & $-18.3$ & 2.6 & $10^{4.1}$ & (11) & 0.08\ 23033+5951 $^{a}$ & 23:05:24.6 & +60:08:09 & $-53.0$ & 3.5 & $10^{4.0}$ & (17) & 0.08\ NGC7538-IRS9 $^{a}$ & 23:14:01.8 & +61:27:20 & $-57.0$ & 2.8 & $10^{4.6}$ & (8) & 0.030\ $^{a}$ Observed in  (3–2) and  (2–1);\ $^{b}$ Observed in  (1–0),  (3–2), and  (2–1);\ $^{c}$ Observed in  (1–0) and  (2–1);\ $^{w}$ “warm” HMSC;\ $^{r}$ Luminosity of the core and not of the whole associated star-forming region (Rathborne et al. [-@rathborne]);\ References: (1) Palau et al. ([-@palau10]); (2) Busquet et al. ([-@busquet11]); (3) Beuther et al. ([-@beuther07]); (4) Butler & Tan ([-@bet]); (5) Palau et al. ([-@palau07]); (6) Busquet et al. ([-@busquet]); (7) Busquet ([-@busquetphd]); (8) Sánchez-Monge et al. ([-@sanchez]); (9) Beuther et al. ([-@beuther04]); (10) Schnee & Carpenter ([-@schnee]); (11) Sánchez-Monge ([-@sanchez11]); (12) Ando et al. ([-@ando]); (13) Fontani et al. ([-@fonta04a]); (14) Fontani et al. ([-@fonta04b]); (15) Hunter et al. ([-@hunter]); (16) Motogi et al. ([-@motogi]); (17) Beuther et al. ([-@beuther02]); (18) Su et al. ([-@su]); (19) Nagayama et al. ([-@nagayama]). 
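As a small worked example of the intensity scale used in this section (a sketch only, with arbitrary numbers), the conversion from antenna to main-beam temperature quoted above and the integrated area of a Gaussian line fit can be written as:

```python
import numpy as np

def t_mb(t_a_star, eta_mb):
    """Main-beam temperature from antenna temperature, T_MB = T_A*/eta_MB."""
    return t_a_star / eta_mb

def gaussian_area(t_pk, delta_v):
    """Integrated area (K km/s) of a Gaussian line with peak T_pk (K) and
    FWHM delta_v (km/s): A = sqrt(pi/(4 ln 2)) T_pk delta_v ~ 1.064 T_pk delta_v."""
    return np.sqrt(np.pi / (4.0 * np.log(2.0))) * t_pk * delta_v

# Arbitrary example using the efficiencies quoted above
# (eta_MB = 0.53 at 76 GHz for DNC, 0.43 at 87 GHz for HN13C):
print(t_mb(0.15, 0.53), gaussian_area(0.35, 1.4))
```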
------------------ ---------------- --------------- --------- ------- -------------- ---------- ------ --------------- Transition Rest Frequency $E_{\rm u}/k$ $\mu_0$ BW $\Delta_\nu$ HPBW   $T_{\rm sys}$ (GHz) (K) (D) (MHz) (kHz) (arcsec) (K) DNC (1–0) 76.305727 3.66 3.05 40 37 21 0.53 250 HN$^{13}$C (1–0) 87.090850 4.18 3.05 40 37 18 0.43 150 ------------------ ---------------- --------------- --------- ------- -------------- ---------- ------ --------------- Results {#res} ======= Detection rates and line profiles {#line_profiles} --------------------------------- We have detected DNC (1–0) emission in: 6 out of 8 HMSCs, 3 out of 7 HMPOs, and 5 out of 7 UCHIIs.  (1–0) has been detected towards all cores detected in DCN, except I00117-MM2, undetected in  but detected in DNC (although a faint line at $\sim 2.5 \sigma$ rms level is possibly present in the spectrum). Therefore, the detection rate follows a trend with core evolution similar to the one observed in  by Fontani et al. ([-@fontani11]): its maximum is in the HMSC phase, then it decreases in the HMPO phase but does not decrease further in the later stage of UC HII region. In the Appendix–A, available online, we show all spectra (Figs. A.1 to A.6). They have been analysed with the CLASS program, which is part of the GILDAS software[^2] developed at the IRAM and the Observatoire de Grenoble. Both transitions have a hyperfine structure due to electric quadrupole interactions of nitrogen and deuterium nuclei. We tried to fit the observed spectral line profiles by considering these components through the command ’method hfs’ into CLASS. However, given that the maximum separation between the components is 1.28  and 0.73  for DNC and , respectively (van der Tak et al. [-@vandertak]), i.e. smaller than the typical velocity widths ($\sim 1 -2$ ), the method failed. Due to this, and because the observed spectra typically show single-peaked profiles, the lines have been fitted with Gaussian functions. In Tables \[tab\_fit\_dnc\] and \[tab\_fit\_hnc\] we give the main parameters of the lines derived through Gaussian fits: integrated area ($A$), full width at half maximum ($\Delta v$), peak temperature ($T_{\rm pk}$) and 1$\sigma$ rms of the spectrum. Asymmetric profiles and hints of non-Gaussian high-velocity wings are detected in the DNC(1–0) spectrum of G034–F1 and G034–F2 (Fig. A.1), and in the (1–0) spectrum of G034–F2, G028–C1 (Figs. A.1), 18089–1732, 18517+0437 (Fig. A.3), G5.89–0.39 and I19035–VLA1 (Fig. A.5), 23033+5951 and NGC7538–IRS9 (Fig. A.6). The non-Gaussian emission in the wings can be naturally attributed to outflows in the HMPOs and UCHIIs, although neither DNC nor  are typical outflow tracers. On the other hand, this explanation is not plausible in the HMSCs G034–F1, G034–F2 and G028–C1, which do not show any clear star formation activity. Other possibilities could be the presence of multiple velocity components, or high optical depth effects. High optical depths seem unlikely for the DNC and  lines because these species are not very abundant, and also because the major effect should be at line centre, while the asymmetries are seen at the edges of the lines (see e.g. G034–F1 in Fig. A.1). The presence of multiple velocity components is the most realistic scenario, especially in the HMSCs. However, only with higher angular resolution observations one can shed light on the origin of these asymmetric line shapes. Line widths {#line_widths} ----------- In Fig. 
Line widths {#line_widths}
-----------

In Fig. \[line\_widths\] we compare the line widths of DNC(1–0) to both those of the (1–0) and (2–1) transitions detected by Fontani et al. ([-@fontani11]). Globally, DNC and (1–0) have comparable line widths (left panel in Fig. \[line\_widths\]) regardless of the evolutionary stage of the cores. This global trend confirms that the two transitions arise from gas with similar turbulence, as already found in the massive star-forming cores studied by Sakai et al. ([-@sakai12]). Incidentally, we note that four HMSCs have (1–0) line widths between $\sim 2$ and 3 , i.e. almost twice the corresponding DNC(1–0) line widths. However, the low signal to noise ratio in the spectra, and the fact that the kinematics in the targets may be complex and due to the superimposition of several velocity components (as suggested by the deviations from the Gaussian line shape, see Sect. \[line\_profiles\]), could explain the different line widths of DNC and  in these sources. A similar comparison between the DNC(1–0) line widths and those of the (2–1) transition is shown in the right panel of Fig. \[line\_widths\], for which a strong correlation is found (correlation coefficient $\sim 0.71$, Kendall’s $\tau$ = 0.52). This time the DNC lines show a systematically smaller broadening, not only in the HMSC group but also in the groups containing the more evolved objects. In this case, however, the difference could also be attributed to the fact that the two transitions require different excitation conditions: the (1–0) line traces colder, and hence more quiescent, gas than that associated with the emission of the (2–1) line.

Deuterated fraction of HNC {#dfrac}
--------------------------

[lcccc]{} core & $A$ & $\Delta v$ & $T_{\rm pk}^{(1)}$ & rms$^{(2)}$\
& K &   & K & K\
\
I00117–MM2 & 0.20(0.04) & 1.0(0.2) & 0.20(0.04) & 0.044\
G034–F2(MM7) & 0.38(0.06) & 1.3(0.2) & 0.27(0.05) & 0.055\
G034–F1(MM8) & 0.23(0.04) & 0.95(0.2) & 0.23(0.04) & 0.017\
G034–G2(MM2) & 0.53(0.06) & 1.4(0.2) & 0.35(0.05) & 0.018\
G028–C1(MM9) & 0.58(0.06) & 1.6(0.2) & 0.35(0.05) & 0.032\
I20293–WC & 0.47(0.07) & 1.3(0.2) & 0.34(0.05) & 0.021\
I22134–B & $\leq 0.14$ & & $\leq 0.04$ &\
I22134–G & $\leq 0.14$ & & $\leq 0.035$ &\
\
I00117–MM1 & $\leq 0.19$ & & $\leq 0.04$ &\
18089–1732 & 0.43(0.07) & 1.6(0.3) & 0.26(0.04) & 0.055\
18517+0437 & 0.43(0.05) & 1.4(0.2) & 0.29(0.05) & 0.012\
G75–core & $\leq 0.19$ & & $\leq 0.05$ &\
I20293–MM1 & 0.21(0.04) & 0.8(0.2) & 0.25(0.04) & 0.05\
I21307 & $\leq 0.17$ & & $\leq 0.035$ &\
I23385 & $\leq 0.19$ & & $\leq 0.04$ &\
\
G5.89–0.39 & 0.72(0.07) & 1.5(0.2) & 0.46(0.06) & 0.041\
I19035–VLA1 & 0.29(0.05) & 1.4(0.2) & 0.19(0.04) & 0.018\
19410+2336 & 0.36(0.04) & 1.1(0.1) & 0.32(0.04) & 0.024\
ON1 & 0.40(0.06) & 2.0(0.4) & 0.18(0.04) & 0.026\
I22134-VLA1 & $\leq 0.16$ & & $\leq 0.04$ &\
23033+5951 & 0.39(0.04) & 0.9(0.1) & 0.40(0.02) & 0.030\
NGC7538–IRS9 & $\leq 0.16$ & & $\leq 0.04$ &\
$(1)$ in $T_{\rm MB}$ units; $(2)$ $1\sigma$ rms noise in the spectrum.
[lcccccccc]{} core & $A$ & $\Delta v$ & $T_{\rm pk}^{(1)}$ & rms$^{(2)}$\
& K &   & K & K\
\
I00117–MM2 & $\leq 0.14$ & & $\leq 0.03$ &\
G034–F2(MM7) & 1.05(0.06) & 2.2(0.2) & 0.44(0.04) & 0.05\
G034–F1(MM8) & 0.52(0.04) & 2.0(0.2) & 0.25(0.03) & 0.04\
G034–G2(MM2) & 0.87(0.08) & 2.6(0.3) & 0.32(0.06) & 0.022\
G028–C1(MM9) & 1.69(0.08) & 3.0(0.1) & 0.52(0.05) & 0.05\
I20293-WC & 0.42(0.04) & 1.3(0.1) & 0.30(0.04) & 0.04\
I22134–B & 0.09(0.02) & 0.7(0.2) & 0.36 & 0.03\
I22134–G & 0.35(0.02) & 0.93(0.07) & 0.36(0.03) & 0.03\
\
I00117–MM1 & $\leq 0.14$ & & $\leq 0.03$ &\
18089–1732 & 1.96(0.05) & 1.81(0.05) & 1.02(0.04) & 0.04\
18517+0437 & 0.79(0.04) & 1.6(0.1) & 0.46(0.03) & 0.03\
G75-core & 0.43(0.04) & 1.6(0.2) & 0.25(0.03) & 0.03\
I20293–MM1 & 0.52(0.03) & 1.22(0.09) & 0.40(0.03) & 0.03\
I21307 & $\leq 0.13$ & & $\leq 0.025$ &\
I23385 & 0.15(0.04) & 0.9(0.3) & 0.08 & 0.03\
\
G5.89–0.39 & 4.64(0.05) & 1.74(0.02) & 2.50 (0.04) & 0.03\
I19035–VLA1 & 0.54(0.04) & 1.75(0.2) & 0.29(0.03) & 0.03\
19410+2336 & 0.90(0.03) & 1.11(0.04) & 0.76(0.03) & 0.03\
ON1 & 1.37(0.04) & 1.81(0.07) & 0.71(0.03) & 0.03\
I22134-VLA1 & 0.18(0.03) & 0.9(0.2) & 0.19(0.03) & 0.02\
23033+5951 & 0.54(0.03) & 1.2(0.1) & 0.41(0.03) & 0.02\
NGC7538–IRS9 & 0.57(0.04) & 1.4(0.1) & 0.39(0.03) & 0.03\
$(1)$ in $T_{\rm MB}$ units; $(2)$ $1\sigma$ rms noise in the spectrum.

To derive (HNC) from the line parameters, we have adopted the same approach as in Sakai et al. ([-@sakai12]). First, we assume the lines are optically thin. This is justified by both the shape of the lines (which do not have the flat-topped shape typical of high optical depth transitions) and by the findings of Sakai et al. ([-@sakai12]) in similar objects. Second, given the similar critical density of the two transitions ($\sim 0.5\times 10^6$ ), we assume their [excitation temperatures]{} and emitting regions, and thus the filling factors, are the same. Under these hypotheses, the column density ratio is given by (see Sakai et al. [-@sakai12]): $$\frac{N({\rm DNC})}{N({\rm HN^{13}C})} \simeq 1.30{\rm exp}\left(-\frac{0.52}{T_{\rm ex}}\right)\frac{A_{\rm DNC}}{A_{\rm HN^{13}C}} \label{eq_dfrac}$$ where  is the excitation temperature, and $A$ is the integrated area of the line (in $T_{\rm MB}$ units). The method adopted to fit the lines does not allow us to derive  directly, for which a good fit to the hyperfine structure would be needed. Therefore, as  in Eq. (\[eq\_dfrac\]) we have taken the kinetic temperatures given in Fontani et al. ([-@fontani11], see their Table 3), assuming that the lines are thermalised. These have been derived either directly from the  (3–2) transition or from  measurements, and then extrapolated to  following Tafalla et al. ([-@tafalla]). In principle, the excitation temperature derived from  or  can be different from that of the DNC and (1–0) lines, but we stress that Eq. (\[eq\_dfrac\]) shows that the column density ratio is only weakly sensitive even to changes of an order of magnitude in  (see also Sakai et al. [-@sakai12]). We have then converted $N({\rm DNC})/N({\rm HN^{13}C})$ into $N({\rm DNC})/N({\rm HN^{12}C})$=(HNC) by calculating the $^{13}$C/$^{12}$C abundance ratio from the relation: $^{13}$C/$^{12}$C$ = 1/(7.5\times D_{\rm gc} + 7.6)$ (Wilson & Rood [-@wer]), where $D_{\rm gc}$ is the source Galactocentric distance in kpc, and multiplying Eq. (\[eq\_dfrac\]) by this correction factor.
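As a concrete illustration of this procedure, the following minimal Python sketch evaluates Eq. (\[eq\_dfrac\]) together with the $^{13}$C/$^{12}$C correction; the function name and argument order are ours, and the worked example simply re-uses the values tabulated for G034–G2(MM2).

```python
import numpy as np

def dfrac_hnc(a_dnc, a_hn13c, t_ex, d_gc):
    """Column density ratio N(DNC)/N(HNC), following Eq. (eq_dfrac).

    a_dnc, a_hn13c : integrated line areas (T_MB units)
    t_ex           : excitation temperature in K (taken equal to T_kin)
    d_gc           : Galactocentric distance in kpc
    """
    n_dnc_over_hn13c = 1.30 * np.exp(-0.52 / t_ex) * a_dnc / a_hn13c
    # Wilson & Rood (1994) correction; the table below lists 7.5*D_gc + 7.6 itself
    c13_over_c12 = 1.0 / (7.5 * d_gc + 7.6)
    return n_dnc_over_hn13c * c13_over_c12

# Example with the values tabulated for G034-G2(MM2):
# A_DNC = 0.53, A_HN13C = 0.87, T_ex = 17 K, D_gc = 6.4 kpc  ->  ~0.014
print(round(dfrac_hnc(0.53, 0.87, 17.0, 6.4), 3))
```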
In Table \[tab\_dfrac\] we list the column density ratio (HNC) for the cores observed and the physical parameters used to derive it as explained above: Galactic coordinates (longitude $l$, latitude $b$), Galactocentric distance ($D_{\rm gc}$), isotopic abundance ratio $^{13}$C/$^{12}$C, $A_{\rm DNC}$, $A_{\rm HN^{13}C}$, and $T_{\rm ex}$. [lcccccccc]{} core & $ l$ & $ b $ & $D_{\rm gc}$ & $^{13}$C/$^{12}$C & $A_{\rm DNC}$ & $A_{\rm HN^{13}C}$ & $T_{\rm ex}^{(1)}$ & (HNC)\ & $^o$ & $^o$ & kpc & & K  & K & K &\ \ I00117–MM2 & 2.082 & 0.03706 & 9.5 & 79 & 0.20 & $\leq 0.14$ & 14 & $\geq 0.008$\ G034–F2(MM7) & 0.6071 & –0.005093 & 5.9 & 52 & 0.38 & 1.05 & 17 & 0.009\ G034–F1(MM8) & 0.6068 & –0.005021 & 5.9 & 52 & 0.23 & 0.52 & 17 & 0.011\ G034–G2(MM2) & 0.6132 & –0.0192 & 6.4 & 55 & 0.53 & 0.87 & 17 & 0.014\ G028–C1(MM9) & 0.5004 & –0.008629 & 4.8 & 43 & 0.58 & 1.69 & 17 & 0.010\ I20293–WC & 1.384 & 0.003144 & 8.4 & 70 & 0.47 & 0.42 & 17 & 0.02\ I22134–B & 1.819 & 0.03383 & 9.5 & 79 & $\leq 0.14$ & 0.09 & 17 & $\leq 0.025$\ I22134–G & 1.819 & 0.03373 & 9.5 & 79 & $\leq 0.14$ & 0.35 & 25 & $\leq 0.006$\ \ 18089–1732 & 0.231 & –0.001963 & 5.1 & 46 & 0.43 & 1.96 & 38 & 0.006\ 18517+0437 & 0.6592 & 0.01745 & 6.5 & 56 & 0.43 & 0.79 & 43 & 0.0125\ G75-core & 1.329 & 0.002301 & 8.4 & 71 & $\leq 0.19$ & 0.43 & 96 & $\leq 0.008$\ I20293-MM1 & 1.385 & 0.003035 & 8.4 & 70 & 0.21 & 0.52 & 43 & 0.008\ I23385 & 2.005 & –0.006124 & 11.5 & 94 & $\leq 0.19$ & 0.15 & 43 & $\leq 0.017$\ \ G5.89–0.39 & 0.1088 & –0.01743 & 7.2 & 62 & 0.72 & 4.64 & 26 & 0.003\ I19035–VLA1 & 0.7151 & –0.01113 & 7.0 & 60 & 0.29 & 0.54 & 39 & 0.011\ 19410+2336 & 1.05 & –0.005147 & 7.7 & 65 & 0.36 & 0.90 & 21 & 0.008\ ON1 & 1.22 & –0.02177 & 8.0 & 68 & 0.40 & 1.37 & 26 & 0.006\ I22134–VLA1 & 1.819 & 0.0338 & 9.5 & 79 & $\leq 0.16$ & 0.18 & 47 & $\leq 0.014$\ 23033+5951 & 1.928 & 0.001401 & 10.3 & 85 & 0.39 & 0.54 & 25 & 0.011\ NGC7538-IRS9 & 1.953 & 0.01593 & 9.9 & 82 & $\leq 0.16$ & 0.57 & 26 & $\leq 0.004$\ $^{(1)}$ assumed equal to the gas kinetic temperature listed in Table A.3 of Fontani et al. ([-@fontani11]). Discussion {#dis} ========== () versus (HNC) {#dfrac_comp} --------------- Table \[tab\_dfrac\] shows that the HMSC group has the highest average (HNC) (mean value $\sim 0.012$, $\sim 0.019$ if one includes the lower limit on I00117–MM2). The HMPOs and UC HII groups have very similar average (HNC) ($\sim 0.009$ and $\sim 0.008$, respectively), but given the dispersion and the poor statistics, there are no significant statistical differences between the three groups. By comparing the observational data of this work and those of Fontani et al. ([-@fontani11]), we clearly note a different behavior of (HNC) and () in high-mass star-forming cores: (HNC) does not change significantly going from the pre-protostellar phase to subsequent phases of active star formation, while () is smaller of an order of magnitude in the evolved phases (HMPOs and UCHIIs) than in the pre-protostellar phase (HMSCs), and the latter evolutionary group is undoubtedly statistically separated from the other two (Fontani et al. [-@fontani11]). We stress once more that these results, obtained towards the same clumps and with comparable telescope beam sizes, are not affected by possible biases introduced by the source selection. In Fig. \[fig\_comp\] we compare () and (HNC) measured in the cores observed in both  (Fontani et al. [-@fontani11]) and DNC. 
As one can see, the sources with the largest () tend to also have the largest (HNC), despite the different absolute values, especially in the HMSC group. In fact, the two parameters are slightly correlated, with a Kendall’s $\tau$ rank correlation coefficient of $\sim 0.36$ (excluding lower and upper limits, which tend to reinforce the possible correlation though).

() and (HNC) versus chemical models {#model}
-----------------------------------

The results presented in Section \[dfrac\_comp\] are consistent overall with the scenario proposed by the chemical models of Sakai et al. ([-@sakai12]): the / abundance ratio sharply decreases after the protostellar birth, while the DNC/HNC abundance ratio decreases more gradually and maintains the high deuteration of the earliest evolutionary stages of the core for longer. In this subsection, we compare the observational results with the model predictions in detail. We have solved the chemical rate equations with the state-of-the-art gas-grain reaction network of Aikawa et al. ([-@aikawa12]). The model includes gas-phase reactions, interaction between gas and grains, and grain surface reactions. The parameters for chemical processes are essentially the same as in Sakai et al. ([-@sakai12]), except for the binding energy of HCN and HNC; we adopt a binding energy of 4170 K for HCN (Yamamoto et al. [-@yamamoto]), while a smaller value of 2050 K was used for those species in Sakai et al. ([-@sakai12]). We assume the binding energy of HNC to be the same as that of HCN. Species are initially assumed to be atoms (either neutrals or ions), except for hydrogen and deuterium, which are in molecular form. The elemental abundance of deuterium is set to be $1.5\times10^{-5}$ (Linsky 2003). As a physical model, we assume a static homogeneous cloud core with an H$_2$ volume density of 10$^4$ cm$^{-3}$ or 10$^5$ cm$^{-3}$. To mimic the protostar formation, we suddenly raise the temperature from 15 K to 40 K at a given time of 10$^5$ yr. The two temperatures are consistent with the average kinetic temperatures measured in the targets of this work ($T=18$ K and $T=38$ K for HMSCs and HMPOs, respectively, see Table \[tab\_dfrac\]). Our choice of 10$^5$ yr is comparable to the timescale of the high-mass starless phase ($3.7\times10^5$ yr) estimated by the statistical study of Chambers et al. ([-@chambers09]), while Parsons et al. ([-@parsons09]) estimated it to be a few 10$^3$–10$^4$ yr. A comparison between model predictions and observational results is illustrated in Fig. \[models\_data\]. In our model, in the pre-protostellar stage ($T=15$ K) the deuteration of both molecules increases similarly with time: this is due to the fact that the deuterium fractionation is initiated in both species by the same route reaction (i.e. H$_3^+$ isotopologues), as mentioned in Sect. \[intro\]. On the other hand, if we assume that the deuterated fractions of the HMSCs must be compared with the model predictions before the temperature rise, i.e. before $t\simeq10^5$ yr, the measured (HNC) and () cannot be reproduced simultaneously in a single model: (HNC) can be well reproduced with $n_{\rm H_2} = 10^4$ cm$^{-3}$, while () is reproduced with $n_{\rm H_2} = 10^5$ cm$^{-3}$. This discrepancy could indicate that the observed lines of DNC (and HN$^{13}$C) arise from regions that, on average, are less dense than those responsible for the emission of  (and ).
In fact, the observational parameters are averaged values measured over slightly different angular regions ($\sim 21$[$^{\prime\prime}$]{} for DNC, and $\sim 16$[$^{\prime\prime}$]{}for ), so that the emission seen in DNC could be more affected than that seen in  by the contribution from the low-density envelope surrounding the dense cores, where the deuterium fractionation is expected to be less important. This can explain why the model with lower average gas density can reproduce (HNC), but not (), for which a higher average gas density is needed. Another possibility is that we are missing something in the current chemical model. For example, we do not consider the ortho state of hydrogen molecules in this work. The presence of ortho-H$_2$ suppresses the deuteration process of molecules, since the internal energy of ortho-H$_2$ helps to overcome the endothermicity of reaction (1) in the backward direction (Flower et al. [-@flower06]). If we consider ortho-H$_2$, however, both the DNC/HNC and / abundance ratios would be lowered. Inspection of Fig. \[models\_data\] also shows that an average ()$\geq 0.2$ in a starless core with $n_{\rm H_2} = 10^5$ cm$^{-3}$ is reached at a time close to $\sim 10^5$ yrs. Assuming this as the time necessary for the starless core to collapse, as suggested by Chambers et al. ([-@chambers09]), this means that only cores relatively close to the onset of gravitational collapse, i.e. the so–called pre–stellar cores, can give rise to the observed high values of (). This behaviour is in agreement with the predictions of chemical models including also the spin states of the H$_2$ and H$_3^+$ isotopologues (Kong et al. [-@kong]), in which levels of () larger than 0.1 are possible only in cores older than $\sim 10^5$ yrs. Models with higher average density ($\geq 10^6$) can reproduce such high deuterated fractions in shorter times, but these average densities are not realistic to represent regions with angular sizes of $\sim 16$[$^{\prime\prime}$]{}, like those that we have observed. In the protostellar stage ($T=40$ K), the / abundance ratio sharply decreases in timescale of $\sim$10$^2$ yrs, while the DNC/HNC abundance ratio decreases in $\sim$10$^4$ yrs both in the model with $n_{\rm H_2} = 10^4$ cm$^{-3}$ and in that with $10^5$ cm$^{-3}$. Again, the measured (HNC) is better reproduced by the model with $n_{\rm H_2} = 10^4$ cm$^{-3}$ after 10$^4$–10$^5$ yr from a temperature rise, while our model slightly underestimates (), regardless of the average gas density, unless the protostellar cores are extremely young (i.e. age shorter than $10^2$ yrs). It should be noted that once the gas temperature rises, not only the / ratio, but also the  abundance decreases significantly in the models. Also, according to models of spherical star-forming cores (Aikawa et al. [-@aikawa12], Lee et al. [-@lee]), the central warm gas is still surrounded by a spherical shell of cold and dense gas during the early stages of collapse, in which both  and  are still abundant. This is not taken into account for simplicity in our one-box model. However, such residual emission from the cold envelope could explain the higher / ratio still apparent after the sudden temperature rise. Conclusions {#con} =========== We have observed the DNC and (1–0) rotational transitions towards 22 massive star-forming cores in different evolutionary stages, towards which () was already measured by Fontani et al. ([-@fontani11]). 
The aim of the work was to compare (HNC) to () in the same sample of sources and with similar telescope beams, so that the comparison should not suffer from possible inconsistencies due to different sample selection criteria. The main observational result of this work confirms the predictions of the models of Sakai et al. ([-@sakai12]), namely that (HNC) is less sensitive than () to a sudden temperature rise, and hence it should retain a better memory than () of the thermal history of the cores, even though the chemical processes leading to the deuteration of the two species are similar. Therefore, our work clearly indicates that () is more suitable than (HNC) for identifying high-mass starless cores. Based on the predictions of our chemical models, the starless cores studied in this work with () around 0.2–0.3 are very good candidates for massive ’pre–stellar’ cores, because only relatively ’evolved’ starless cores can be associated with such high values of (). Several results call for follow-up high-angular resolution observations to map the emitting regions of DNC and , as well as DNC and , and to test whether these are slightly different, as the present low-angular resolution data seem to suggest. Observations of higher excitation  and DNC lines (3–2 or 4–3) may also be important to better constrain the excitation conditions.

Acknowledgments {#acknowledgments .unnumbered}
===============

We thank the NRO staff for their help in the observations with the 45m Telescope presented in this paper. The 45m Telescope is operated by the Nobeyama Radio Observatory, a branch of the National Astronomical Observatory of Japan. This study is supported by KAKENHI (21224002, 23540266, 25400225 and 25108005). K.F. is supported by the Research Fellowship from the Japan Society for the Promotion of Science (JSPS) for Young Scientists.

References {#references .unnumbered}
==========

Aikawa, Y., Wakelam, V., Hersant, F., Garrod, R.T., Herbst, E. 2012, ApJ, 760, 40
Aikawa, Y., Herbst, E., Roberts, H., Caselli, P. 2005, ApJ, 620, 330
Ando, K., Nagayama, T., Omodaka, T. et al. 2011, PASJ, 63, 45
Bacmann, A., Lefloch, B., Ceccarelli, C., et al. 2003, ApJ, 585, L55
Bergin, E.A. & Tafalla, M. 2007, ARA&A, 45, 339
Beuther, H., Leurini, S., Schilke, P. et al. 2007, A&A, 466, 1065
Beuther, H., Hunter, T.R., Zhang, Q. et al. 2004, ApJ, 616, L23
Beuther, H., Walsh, A., Schilke, P. et al. 2002, A&A, 390, 289
Busquet, G., Estalella, R., Zhang et al. 2011, A&A, 525, A141
Busquet, G. 2010, PhD Thesis, University of Barcelona
Busquet, G., Palau, A., Estalella, R. et al. 2010, A&A, 517, L6
Butler, M.J. & Tan, J.C. 2009, ApJ, 696, 484
Caselli, P., Walmsley, C.M., Zucconi, A., et al. 2002a, ApJ, 565, 344
Chambers, E. T., Jackson, J. M., Rathborne, J. M., & Simon, R. 2009, ApJS, 181, 360
Chen, H.-R., Liu, S.-Y., Su, Y.-N., Wang, M.-Y. 2011, ApJ, 743, 196
Crapsi, A., Caselli, P., Walmsley, C. M., et al. 2005, ApJ, 619, 379
Crapsi, A., Caselli, P., Walmsley, M.C., Tafalla, M. 2007, A&A, 470, 221
Dalgarno, A. 2006, PNAS, 103, 12269
Emprechtinger, M., Caselli, P., Volgenau, N.H., Stutzki, J., Wiedner, M.C. 2009, A&A, 493, 89
Flower, D. R., Pineau des Forêts, G., & Walmsley, C. M. 2006, A&A, 449, 621
Fontani, F., Palau, A., Caselli, P. et al. 2011, A&A, 529, L7
Fontani, F., Cesaroni, R., Testi, L. et al. 2004a, A&A, 424, 179
Fontani, F., Cesaroni, R., Testi, L. et al. 2004b, A&A, 414, 299
Gerin, M., Lis, D.C., Philipp, S., Gusten, R., Roueff, E., Reveret, V. 2006, A&A, 454, L63
Goodman, A.A., Barranco, J. A., Wilner, D.J., & Heyer, M.H. 1998, ApJ, 504, 223
Hunter, T.R., Brogan, C.L., Indebetouw, R., Cyganowski, C.
2008, ApJ, 680, 1271 Kong, S., Caselli, P., Tan J.C., Wakelam, V. 2013, arXiv:1312.0971 Lee, J.-E., Bergin, E.A., Evans, N.J.II 2004, ApJ, 617, 360 Linsky, J.L., Draine, B.T., Moos, H.W., Jenkins, E.B., Wood, B.E. et al. 2006, ApJ, 647, 1106 Linsky, J.L. 2003, SSRv, 106, 49 Millar, T.J., Bennett, A., Herbst, E. 1989, ApJ, 340, 906 Motogi, K., Sorai, K., Habe, A. et al. 2011, PASJ, 63, 31 Nagayama, T., Omodaka, T., Nakagawa, A. et al. 2011, PASJ, 63, 23 Nakajima, T., Sakai, T., & Asayama, S. et al. 2008, PASJ, 60, 435 Oliveira, C.M., Hébrard, G., Howk, J.C., Kruk, J.W., Chayer, P., Moos, H.W. 2003, ApJ, 587, 235 Palau, A., Sánchez-Monge, Á., Busquet, G. et al. 2010, A&A, 510, 5 Palau, A., Estalella, R., Girart, J.M. et al. 2007, A&A, 465, 219 Parsons, H., Thompson, M. A., & Chrysostomou, A. 2009, MNRAS, 399, 1506 Pineda, J.E., Goodman, A.A., Arce, H.G., Caselli, P., Foster, J.B., Myers, P.C., Rosolowsky, E.W. 2010, ApJ, 712, L116 Rathborne, J.M., Jackson, J.M., Chambers, E.T. et al. 2010, ApJ, 715, 310 Sánchez-Monge, Á. 2011, PhD Thesis, University of Barcelona Sánchez-Monge, Á., Palau, A., Estalella, R., Beltrán, M.T., Girart, J.M. 2008, A&A, 485, 497 Sakai, T., Sakai, N., Furuya, K., Aikawa, Y., Hirota, T., Yamamoto, S., 2012, ApJ, 747, 140 Schnee, S. & Carpenter, J. 2009, ApJ, 698, 1456 Su, Y.-N., Liu, S.-Y., Lim, J. 2009, ApJ, 698, 1981 Tafalla, M., Myers, P.C., Caselli, P., Walmsley, C.M. 2004, A&A, 416, 191 Turner, B.E. 2001, ApJS, 136, 579 van der Tak, F.F.S., Müller, H. S. P.; Harding, M. E.; Gauss, J. 2009, A&A, 507, 347 Wilson, T. L. & Rood, R. 1994, ARA&A, 32, 191 Yamamoto, T., Nakagawa, N., Fukui, Y. 1983, A&A, 122, 171 Appendix A: Spectra {#appb .unnumbered} =================== In this appendix we show all spectra of the DNC(1–0) and (1–0) transitions. ![image](spectra_tmb_hmsc1_2.eps){width="16cm"} ![image](spectra_tmb_hmsc2_2.eps){width="16cm"} ![image](spectra_tmb_hmpo1_2.eps){width="16cm"} ![image](spectra_tmb_hmpo2_2.eps){width="16cm"} ![image](spectra_tmb_uchii1_2.eps){width="16cm"} ![image](spectra_tmb_uchii2_2.eps){width="16cm"} [^1]: E-mail: [email protected] [^2]: The GILDAS software is available at http://www.iram.fr/IRAMFR/GILDAS
--- abstract: | We address semigroup well-posedness of the fluid-structure interaction of a linearized compressible, viscous fluid and an elastic plate (in the absence of rotational inertia). Unlike existing work in the literature, we linearize the compressible Navier-Stokes equations about an arbitrary state (assuming the fluid is barotropic), and so the fluid PDE component of the interaction will generally include a nontrivial ambient flow profile $\mathbf{U}$. The appearance of this term introduces new challenges at the level of the stationary problem. In addition, the boundary of the fluid domain is unavoidably Lipschitz, and so the well-posedness argument takes into account the technical issues associated with obtaining necessary boundary trace and elliptic regularity estimates. Much of the previous work on flow-plate models was done via Galerkin-type constructions after obtaining good a priori estimates on solutions (specifically [Chu2013-comp]{}—the work most pertinent to ours here); in contrast, we adopt here a Lumer-Phillips approach, with a view of associating solutions of the fluid-structure dynamics with a $C_{0}$-semigroup $\left\{ e^{\mathcal{A}t}\right\} _{t\geq 0}$ on the natural finite energy space of initial data. So, given this approach, the major challenge in our work becomes establishing of the maximality of the operator $\mathcal{A}$ which models the fluid-structure dynamics. In sum: our main result is semigroup well-posedness for the fully coupled fluid-structure dynamics, under the assumption that the ambient flow field $\mathbf{U}\in \mathbf{H}^{3}(\mathcal{O})$ has zero normal component trace on the boundary (a standard assumption with respect to the literature). In the final sections we address well-posedness of the system in the presence of the von Karman plate nonlinearity, as well as the stationary problem associated with the dynamics. .3cm *Keywords*: fluid-structure interaction, compressible fluid, well-posedness, semigroup .3cm *AMS Mathematics Subject Classification 2010*: 34A12, 74F10, 35Q35, 76N10 author: - 'George Avalos,[^1]' - 'Pelin G. Geredeli,[^2]' - 'Justin T. Webster, [^3]' title: 'Semigroup Well-posedness of A Linearized, Compressible Fluid with An Elastic Boundary [^4]' --- *[In memory of Igor D. Chueshov]{}* Introduction ============ In this work, we consider a linearized fluid-structure model with respect to some reference state, including an arbitrary spatial vector field. The coupled system here describes the interaction between a plate and a flow of *compressible* barotropic, viscous fluid. Such interactive dynamics are crucially considered in the design of many engineering systems (e.g., aircraft, engines, and bridges). The study of compressible flows (gas dynamics) itself has implications to high-speed aircraft, jet engines, rocket motors, hyperloops, high-speed entry into a planetary atmosphere, gas pipelines, commercial applications (such as abrasive blasting), and many other fields (see [@BA62; @bolotin; @dowell1], for instance). In these applications, the density of a gas may change significantly along a streamline. *Compressibility*—i.e., the fractional change in volume of the fluid element per unit change in pressure—becomes important, for instance, in flows for which $$\text{Mach Number}=M\equiv \frac{\text{velocity}}{\text{local speed of sound}}>0.3.$$The cases $M<0.3$ and $0.3<M<0.8$ are subsonic/incompressible and subsonic/compressible regimes, respectively. 
Compressible flows can be either transonic $(0.8<M<1.2)$ or supersonic $(1.2<M<3.0)$. In supersonic flows, pressure effects are only transported downstream; the upstream flow is not affected by conditions downstream. In the study of incompressible flows, the associated analysis typically involves only two unknowns: pressure and velocity. These are usually found by solving two equations that describe conservation of mass and linear momentum, with the fluid density presumed to be constant. By contrast, in compressible flow, the gas density and temperature are variables. Consequently, the solution of compressible flow problems will require two more equations: namely, an equation of state for the gas, and a conservation of energy equation.[^5] Moreover, the imposition of external forces on the governing equations may not immediately result in a uniform flow throughout the system. In particular, the fluid may compress in the vicinity of the applied force; that is to say, the density may increase locally in response to the given force. The effects due to compressibility and viscosity on an (uncoupled) fluid dynamics will have to be taken into account when subsequently considering the mathematical properties of PDEs describing interactions of said fluid dynamics with some given structure. In aeroelasticity, the compressible gas is often assumed to be inviscid—i.e., viscosity-free—and the flow irrotational (potential flow). These assumptions are often invoked in practice, as they reduce the flow dynamics to a wave equation [@dcds; @dowell1] (and see Section \[techreview\] below). However, there are situations where viscous effects cannot be neglected, e.g., in the transonic region [@dowell1]. The *mathematical* literature—especially in the last 20 years—on fluid-structure interactions across each of these fluid regimes is quite vast. We will certainly not attempt here a general overview of this literature, but in Section \[techreview\] we will provide an in-depth discussion of those key modern references that pertain to the present work. At this point, we mention the primary motivating reference, [@Chu2013-comp]: in this work, the author considers the dynamics of a nonlinear plate, located on a flat portion of the boundary of a three dimensional cavity, as it interacts with a compressible, barotropic (linearized) fluid that fills the cavity. In the present work, we will analyze a comparable setup, but with additional physical terms in the equations; the focus here will be on establishing and describing the essential semigroup dynamics which drives the coupled PDE model. The accommodation of physically relevant nonlinearities—i.e., those seen in [@Chu2013-comp; @cr-full-karman; @supersonic]—can be readily made subsequent to the present analysis, which develops a good linear theory. Nonlinearities that are amenable to such treatment include those of Berger, Kirchhoff, or von Karman type, inasmuch as they are *locally Lipschitz* [@springer; @pazy] on the plate’s natural energy space. As a key illustrative example, we include a discussion of the well-posedness of this fluid-structure model in the presence of the von Karman plate nonlinearity in Section \[nonlinear\]. In many cases, it is the addition of structural nonlinearity which ultimately leads to global-in-time boundedness of corresponding trajectories [@conequil2; @webster]. Accordingly, long-time behavior of nonlinear dynamics will be considered in a forthcoming work.
We also consider a Lipschitz geometry, as opposed to the common assumption [@Chu2013-comp; @dV] that the domain is smooth. Given the transition between the elastic and inelastic components of the boundary, a Lipschitz boundary is surely more natural and physically relevant—see Figure 1 below—and also more amenable to pertinent generalizations, e.g., tubular domains (finite or infinite) [@tube; @ChuRyz2012-pois]. Distinguishing our work from [@Chu2013-comp], we take the linearization of the compressible Navier-Stokes equations about a rest state which has a *nonzero ambient flow component*. Since this linearization process produces some additional terms that depend on the ambient flow, previous techniques to obtain the well-posedness result cannot be directly applied[^6]. We note that the resultant terms, due to the presence of the ambient flow, **do not represent bounded perturbations of principal spatial operators.** To obtain the primary result we utilize a semigroup approach, invoking the well known Lumer-Phillips theorem [@pazy p.13]. We believe that the present treatment fits nicely within the context of the recent work of I. Chueshov, where the interactive dynamics between fluid and a plate (or shell) are considered from various points of view [Chu2013-comp,Chu2013-inviscid,tube,cr-full-karman,ChuRyz2012-pois,ChuRyz2011]{}. Moreover, one can draw comparisons and contrasts between the well-posedness analysis here and that in [@clark] and [@george1], which deal with *incompressible* fluid-structure interactions: in [clark]{} and [@george1], there is also a two dimensional elastic structure existing on the boundary of a three dimensional domain, in which a fluid evolves. However, the earlier well-posedness work [@clark] requires an appropriate *mixed variational*, *Babuska-Brezzi* formulation, which is nonstandard and instrinsic to the particular dynamics under consideration (see e.g., Theorem 3.1.5 of [@kesavan]); whereas the present effort combines the Lax-Milgram Theorem with a critical well-posedness result for (static versions of) the pressure PDE component of the fluid-structure system (the first equation in (\[system1\]) and Theorem \[dV\] of the Appendix.) For fluid-structure well-posedness studies that involve a three dimensional solid immersed in a three dimensional fluid, and which utilize semigroup techniques, see [@T1],[@T2],[@dvorak].) Eventually, we are interested in learning if and how the presence of the dissipating fluid dynamics affects the stability of the structure (as in, e.g., [@delay; @Chu2013-comp; @ChuRyz2011; @ChuRyz2012-pois],[@george2]). In particular, for the *linear* compressible fluid-structure interaction PDE model, we are interested in strong/uniform stability properties of the associated $C_{0}$-semigroup. On the other hand, if one inserts nonlinearity into the structural component of the interaction, the existence and nature of global attractors become the primary objects of interest for the associated PDE dynamical system. Qualitative properties of fluid-structure models (such as well-posedness and stability of solutions, and the existence of compact global attractors) have been intensely investigated by many authors over the past 30 years. For the PDE model under consideration, (\[system1\])–(\[IC\_2\]) below, issues of long-time behavior of solutions are addressed in the forthcoming work [@preprint]. The paper is organized as follows: In Section \[model\] we describe the PDE model and discuss our standing hypotheses. 
In Section \[results\] we provide a discussion of the principal dynamics operator (on the natural space of finite energy), as well its domain; we then formally state the semigroup generation result which immediately yields well-posedness of fluid-structure model, in the sense of Hadamard. We also include in this section the notion of *energy balance* for semigroup solutions. Section \[techreview\] provides an in-depth discussion of the key pertinent references, and their relationship to the result presented here. Section [proof]{} gives the proof of the main result via the Lumer-Phillips theorem: namely, we establish dissipativity and maximality of a certain bounded perturbation of the modeling fluid-structure operator. Section \[static\] gives a description of stationary solutions to the dynamics at hand, and proves that the PDE system can be recovered (in some sense) from the stationary variational problem. Section [nonlinear]{} discusses the von Karman plate nonlinearities and the associated nonlinear dynamic well-posedness and stationary results, treating the nonlinearity as a locally-Lipschitz perturbation of the linear dynamics. Lastly, the Appendix provides a proof of a key technical lemma on the well-posedness of solutions to the stationary version of the decoupled pressure PDE component in (\[system1\])–([IC\_2]{}) below. Notation -------- For the remainder of the text we write $\mathbf{x}$ for $(x_{1},x_{2},x_{3})\in \mathbb{R}_{+}^{3}$ or $(x_{1},x_{2})\in \Omega \subset \mathbb{R}_{\{(x_{1},x_{2})\}}^{2}$, as dictated by context. For a given domain $D$, its associated $L^{2}(D)$ will be denoted as $||\cdot ||_D$ (or simply $||\cdot||$ when the context is clear). The symbols $\mathbf{n}$ and $\boldsymbol{\tau }$ will be used to denote, respectively, the unit external normal and tangent vectors to $\mathcal{O}$. Inner products in $L^{2}(\mathcal{O})$ or $\mathbf{L}^{2}(\mathcal{O})$ are written $(\cdot ,\cdot)_{\mathcal{O}}$ (or simply $(\cdot ,\cdot)$ when the context is clear), while inner products $L^{2}(\partial \mathcal{O})$ are written $\langle \cdot ,\cdot \rangle$. We will also denote pertinent duality pairings as $\left\langle \cdot ,\cdot \right\rangle _{X\times X^{\prime }}$, for a given Hilbert space $X$. The space $H^{s}(D)$ will denote the Sobolev space of order $s$, defined on a domain $D$, and $H_{0}^{s}(D)$ denotes the closure of $C_{0}^{\infty }(D)$ in the $H^{s}(D)$-norm $\Vert \cdot \Vert _{H^{s}(D)} $ or $\Vert \cdot \Vert _{s,D}$. We make use of the standard notation for the boundary trace of functions defined on $\mathcal{O}$, which are sufficently smooth: i.e., for a scalar function $\phi \in H^{s}(\mathcal{O})$, $\frac{1}{2}<s<\frac{3}{2}$, $\gamma (\phi )=\phi \big|_{\partial \mathcal{O}},$ a well-defined and surjective mapping on this range of $s$, owing to the Sobolev Trace Theorem on Lipschitz domains (see e.g., [necas]{}, or Theorem 3.38 of [@Mc]). PDE Model {#model} ========= Let $\mathcal{O}\subset \mathbb{R}^{3}$ be a *bounded* and *convex* fluid domain (and so has Lipschitz boundary $\partial \mathcal{O}$; see e.g., Corollary 1.2.2.3 of [@grisvard]). The boundary decomposes into two pieces $\overline{S}$ and $\overline{\Omega }$ where $\partial \mathcal{O}=\overline{S}\cup \overline{\Omega }$, with $S\cap \Omega =\emptyset $. We consider $S$ to be the solid boundary, with no interactive dynamics, and $\Omega $ to be the equilibrium position of the elastic domain, upon which the interactive dynamics takes place. 
We also assume that: (i) the active component $\Omega \subset \mathbb{R}^{2}$ is flat, with Lipschitz boundary, and embedded in the $x_{1}-x_{2}$ plane; (ii) the inactive component $S$ lies below the $x_{1}-x_{2}$ plane. This is to say, $$\begin{aligned} \Omega \subset & ~\{\mathbf{x}=(x_{1},x_{2},0)\} \\ S\subset & ~\{\mathbf{x}=(x_{1},x_{2},x_{3})~:~x_{3}\leq 0\}.\end{aligned}$$Letting $\mathbf{n}(\mathbf{x})$ denote the unit outward normal vector to $\partial \mathcal{O}$, we have $\left. \mathbf{n}\right\vert _{\Omega }=(0,0,1).$ (See Figure 1.) ![The Fluid-Structure Geometry](egg.png) We consider the compressible Navier-Stokes system [@chorin-marsden], assuming the fluid is barotropic, and linearize the system with respect to some reference rest state of the form $\left\{ p_{\ast },\mathbf{U},\varrho _{\ast }\right\} $. The pressure and density components ${p_{\ast },\varrho _{\ast }}$ are assumed to be scalar constants, and the arbitrary ambient flow field $\mathbf{U}:\mathcal{O}\rightarrow \mathbb{R}^{3}$ is given by: $$\mathbf{U}(x_{1},x_{2},x_{3})=[U_{1}(x_{1},x_{2},x_{3}),U_{2}(x_{1},x_{2},x_{3}),U_{3}(x_{1},x_{2},x_{3})]. \label{flowfield}$$Deleting non-critical lower order terms (see Remark \[delete\] below), and setting the pressure and density reference constants equal to unity, we obtain the following *perturbation equations*: $$\begin{aligned} & \left\{ \begin{array}{l} p_{t}+\mathbf{U}\cdot \nabla p+div~\mathbf{u}=0~\text{ in }~\mathcal{O}\times (0,\infty ) \\ \mathbf{u}_{t}+\mathbf{U}\cdot \nabla \mathbf{u}-div~\sigma (\mathbf{u})+\eta \mathbf{u}+\nabla p=0~\text{ in }~\mathcal{O}\times (0,\infty ) \\ (\sigma (\mathbf{u})\mathbf{n}-p\mathbf{n})\cdot \boldsymbol{\tau }=0~\text{ on }~\partial \mathcal{O}\times (0,\infty ) \\ \mathbf{u}\cdot \mathbf{n}=0~\text{ on }~S\times (0,\infty ) \\ \mathbf{u}\cdot \mathbf{n}=w_{t}~\text{ on }~\Omega \times (0,\infty )\end{array}\right. \label{system1} \\ & \notag \\ & \left\{ \begin{array}{l} w_{tt}+\Delta ^{2}w+\left[ 2\nu \partial _{x_{3}}(\mathbf{u})_{3}+\lambda \text{div}(\mathbf{u})-p\right] _{\Omega }=0~\text{ on }~\Omega \times (0,\infty ) \\ w=\frac{\partial w}{\partial \nu }=0~\text{ on }~\partial \Omega \times (0,\infty )\end{array}\right. \label{IM2} \\ & \notag \\ & \begin{array}{c} \left[ p(0),\mathbf{u}(0),w(0),w_{t}(0)\right] =\left[ p_{0},\mathbf{u}_{0},w_{0},w_{1}\right] .\end{array} \label{IC_2}\end{aligned}$$Here, $p(t):\mathbb{R}^{3}\rightarrow \mathbb{R}$ and $\mathbf{u}(t):\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}$ (pointwise in time) are given as the pressure and the fluid velocity field, respectively. The quantity $\eta >0$ represents a drag force of the domain on the viscous fluid. In addition, the quantity $\mathbf{\tau }$ in (\[system1\]) is in the space $TH^{1/2}(\partial \mathcal{O)}$ of tangential vector fields of Sobolev index 1/2; that is,$$\mathbf{\tau }\in TH^{1/2}(\partial \mathcal{O)=}\{\mathbf{v}\in \mathbf{H}^{\frac{1}{2}}(\partial \mathcal{O})~:~\mathbf{v}\cdot \mathbf{n}=0~\text{ on }~\partial \mathcal{O}\}.\footnote{See e.g., p.846 of \cite{buffa2}.} \label{TH}$$ With respect to the ambient flow field $\mathbf{U}$ appearing in (\[system1\]), we define the space $$\mathbf{V}_{0}=\{\mathbf{v}\in \mathbf{H}^{1}(\mathcal{O})~:~\left. 
\mathbf{v}\right\vert _{\partial \mathcal{O}}\cdot \mathbf{n}=0~\text{ on }~\partial \mathcal{O}\}; \label{V_0}$$and subsequently impose the standard assumption that $$\mathbf{U}\in \mathbf{V}_{0}\cap \mathbf{H}^{3}(\mathcal{O}) \label{min}$$(see the analogous—and actually slightly stronger—specifications made on ambient fields on p.529 of [@dV] and pp.102–103 of [@valli]). As mentioned above, the presence of $\mathbf{U}$ in the modeling introduces the term $\mathbf{U}\cdot \nabla p$ into the pressure equation, which [**does not**]{} represent a bounded perturbation of the dynamics. Given the *Lamé Coefficients* $\lambda \geq 0$ and $\nu >0$, the *stress tensor* $\sigma $ of the fluid is defined as $$\sigma (\mathbf{\mu })=2\nu \epsilon (\mathbf{\mu })+\lambda \lbrack I_{3}\cdot \epsilon (\mathbf{\mu })]I_{3},$$where the *strain tensor* $\epsilon $ is given by $$\epsilon _{ij}(\mu )=\dfrac{1}{2}\left( \frac{\partial \mu _{j}}{\partial x_{i}}+\frac{\partial \mu _{i}}{\partial x_{j}}\right) \text{, \ }1\leq i,j\leq 3$$(see [@kesavan p.129]). With this notation it is easy to see that $$\text{div}~\sigma (\mathbf{\mu })=\nu \Delta \mathbf{\mu }+(\nu +\lambda )\nabla \text{div}(\mathbf{\mu }),$$where $\lambda $ and $\nu $ are the non-negative viscosity coefficients. The boundary conditions that are invoked in (\[system1\]) for the fluid PDE component are the so-called *impermeability*-slip conditions [bolotin,chorin-marsden]{}. Their physical interpretation is that no fluid passes through the boundary (the normal component of the fluid field $\mathbf{u}$ on the active boundary portion $\Omega $ matches the plate velocity $w_{t}$), and that there is no stress in the tangential direction $\tau $. Other possible physically relevant boundary conditions have appeared in the literature. We mention the *Kutta-Joukowski* type condition [@dcds], as well as the *adherence* condition . $$\begin{aligned} \sigma (\mathbf{u})\mathbf{n}-p\mathbf{n}=\mathbf{0}~\text{ on }~S;& ~~~\mathbf{u}\cdot \mathbf{n}=w_{t}~\text{ on }~\Omega ; \label{KJ} \\ \mathbf{u}=0~\text{ on }~S& ~~~\mathbf{u}\cdot \mathbf{n}=w_{t}~\text{ on }~\Omega. \label{ad}\end{aligned}$$ Though the focus of this treatment is on the [*linear dynamics*]{} of the fluid-plate interaction, we do provide a brief discussion of nonlinearity in the model in Section \[nonlinear\]. We now mention the principal nonlinear plate model of interest: the scalar von Karman plate. Writing the plate equation in as $$\label{IM2**} w_{tt}+\Delta^2w+\left[ 2\nu \partial _{x_{3}}(\mathbf{u})_{3}+\lambda \text{div}(\mathbf{u})-p\right] _{\Omega }=f(w)~\text{ on }~\Omega \times (0,\infty )$$ where we have $$f(w)=[w, v(w)+F_0],$$ where $F_0$ is a given function from $H^4(\Omega)$ and the von Karman bracket $[u,v]$ is given by $$\label{bracket} [u,w] = \partial_{xx} u\cdot \partial_{yy} w + \partial_{yy} u\cdot \partial_{xx} w - 2\cdot \partial_{xy} u\cdot \partial_{xy} w,$$ and the Airy stress function $v(u,w) $ solves the following elliptic problem $$\label{airy-1} \Delta^2 v(u,w)+[u,w] =0 ~~{\text in}~~ \Omega,\hskip.5cm \partial_{\nu} v(u,w) = v(u,w) =0 ~~{\text on}~~ \partial\Omega.$$ Von Karman equations are well known in nonlinear elasticity and constitute a basic model describing nonlinear oscillations of a plate accounting for large deflections, see [@springer] and references therein. In this paper we provide a discussion of the most physically relevant [*large deflection*]{} plate model. 
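For readers who wish to experiment with this nonlinearity numerically, the following is a minimal Python sketch of evaluating the von Karman bracket (\[bracket\]) by finite differences. The grid, spacing, and quadratic test fields are our own illustrative choices, and the sketch deliberately omits the elliptic solve for the Airy stress function $v(u,w)$ in (\[airy-1\]).

```python
import numpy as np

def von_karman_bracket(u, w, h):
    """Finite-difference evaluation of the von Karman bracket
    [u, w] = u_xx*w_yy + u_yy*w_xx - 2*u_xy*w_xy
    for fields sampled on a uniform grid with spacing h (illustrative only)."""
    u_x, u_y = np.gradient(u, h, h)
    w_x, w_y = np.gradient(w, h, h)
    u_xx, u_xy = np.gradient(u_x, h, h)
    _, u_yy = np.gradient(u_y, h, h)
    w_xx, w_xy = np.gradient(w_x, h, h)
    _, w_yy = np.gradient(w_y, h, h)
    return u_xx * w_yy + u_yy * w_xx - 2.0 * u_xy * w_xy

# Sanity check on the quadratic field u = w = x^2 + y^2, for which
# [u, w] = 2*2 + 2*2 - 0 = 8 in the interior of the grid.
x, y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101), indexing="ij")
h = x[1, 0] - x[0, 0]
b = von_karman_bracket(x**2 + y**2, x**2 + y**2, h)
print(b[50, 50])  # approximately 8
```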
We do not fully discuss the breadth of nonlinear plate dynamics, as is done in [@Chu2013-comp]. However, the discussion we provide here is easily adapted to the other common plate nonlinearities of Berger or Kirchhoff type (see, for instance, [@supersonic] and [@Chueshov; @gw]). \[delete\] The above fluid equations in (\[system1\]) might be referred to as the Oseen equations for viscous compressible barotropic fluids. In the linearization procedure, without making additional assumptions on $\mathbf{U}$, we obtain: $$\begin{aligned} \label{bar-model1-U} & (\partial _{t}+\mathbf{U}\cdot \nabla )p+\mathrm{div\,}\,\mathbf{u}+\mathrm{div}(\mathbf{U})p=F_{1}(\mathbf{x})\quad \mathrm{in}~~\mathcal{O}\times \mathbb{R}_{+}, \\[2mm] & (\partial _{t}+\mathbf{U}\cdot \nabla )\mathbf{u}-\nu \Delta \mathbf{u}-(\nu +{\lambda }){\nabla }\mathrm{div\,}\,\mathbf{u}+{\nabla }p+\nabla \mathbf{U}\cdot \mathbf{u}+(\mathbf{U}\cdot \nabla \mathbf{U})p=\mathbf{F}_{2}(\mathbf{x})\quad \mathrm{in}~\mathcal{O}\times \mathbb{R}_{+}, \label{flu-eq1U}\end{aligned}$$ for a prescribed scalar function $F_{1}$ and vector field $\mathbf{F}_{2}$. In our analysis we retain only the principal mathematical terms in (\[bar-model1-U\])–(\[flu-eq1U\]), as the others may be viewed as zeroth order perturbations, and handled in a standard fashion.

Main Results {#results}
============

We are primarily interested in Hadamard well-posedness of the linearized coupled system given in (\[system1\])–(\[IC\_2\]). Specifically, we will ascertain well-posedness of the PDE model (\[system1\])–(\[IC\_2\]) for arbitrary initial data in the natural space of finite energy. To accomplish this, we will adopt a semigroup approach; namely, we will pose and validate an explicit semigroup generator representation for the fluid-structure dynamics (\[system1\])–(\[IC\_2\]). With respect to the coupled PDE system (\[system1\])–(\[IC\_2\]), the associated space of well-posedness will be $$\mathcal{H}\equiv L^{2}(\mathcal{O})\times \mathbf{L}^{2}(\mathcal{O})\times H_{0}^{2}(\Omega )\times L^{2}(\Omega ). \label{H}$$$\mathcal{H}$ is a Hilbert space, topologized by the following inner-product: $$(\mathbf{y}_{1},\mathbf{y}_{2})_{\mathcal{H}}=(p_{1},p_{2})_{L^{2}(\mathcal{O})}+(\mathbf{u}_{1},\mathbf{u}_{2})_{\mathbf{L}^{2}(\mathcal{O})}+(\Delta w_{1},\Delta w_{2})_{L^{2}(\Omega )}+(v_{1},v_{2})_{L^{2}(\Omega )} \label{innerp}$$for any $\mathbf{y}_{i}=(p_{i},\mathbf{u}_{i},w_{i},v_{i})\in \mathcal{H},~i=1,2.$ In what follows, we consider the linear operator $\mathcal{A}:D(\mathcal{A})\subset \mathcal{H}\rightarrow \mathcal{H}$, which expresses the compressible fluid-structure PDE system (\[system1\])–(\[IC\_2\]) as the abstract ODE: $$\begin{aligned} \dfrac{d}{dt}\begin{bmatrix} p \\ \mathbf{u} \\ w \\ w_{t}\end{bmatrix} &=&\mathcal{A}\begin{bmatrix} p \\ \mathbf{u} \\ w \\ w_{t}\end{bmatrix}; \notag \\ \lbrack p(0),\mathbf{u}(0),w(0),w_{t}(0)] &=&[p_{0},\mathbf{u}_{0},w_{0},w_{1}]. \label{ODE}\end{aligned}$$ To wit, $$\mathcal{A}=\left[ \begin{array}{cccc} -\mathbf{U}\cdot \nabla (\cdot ) & -\func{div}(\cdot ) & 0 & 0 \\ -\nabla (\cdot ) & \func{div}\sigma (\cdot )-\eta I-\mathbf{U}\cdot \nabla (\cdot ) & 0 & 0 \\ 0 & 0 & 0 & I \\ \left. \left[ \cdot \right] \right\vert _{\Omega } & -\left[ 2\nu \partial _{x_{3}}(\cdot )_{3}+\lambda \func{div}(\cdot )\right] _{\Omega } & -\Delta ^{2} & 0\end{array}\right] .
\label{AAA}$$ Here, the domain $D(\mathcal{A})$ is given as $$D(\mathcal{A})=\{(p_{0},\mathbf{u}_{0},w_{1},w_{2})\in L^{2}(\mathcal{O})\times \mathbf{H}^{1}(\mathcal{O})\times H_{0}^{2}(\Omega )\times H_{0}^{2}(\Omega )~:~(i)\text{--}(v)~~\text{hold below}\},$$where 1. $\mathbf{U}\cdot \nabla p_{0}\in L^{2}(\mathcal{O})$ 2. $\text{div}~\sigma (\mathbf{u}_{0})-\nabla p_{0}\in L^{2}(\mathcal{O})$ 3. $-\Delta ^{2}w_{0}-\left[ 2\nu \partial _{x_{3}}(\mathbf{u}_{0})_{3}+\lambda \text{div}(\mathbf{u}_{0})\right] _{\Omega }+\left. p_{0}\right\vert _{\Omega }\in L^{2}(\Omega )$ 4. $\left( \sigma (\mathbf{u}_{0})\mathbf{n}-p_{0}\mathbf{n}\right) \bot ~TH^{1/2}(\partial \mathcal{O})$. That is, $$\left\langle \sigma (\mathbf{u}_{0})\mathbf{n}-p_{0}\mathbf{n},\mathbf{\tau }\right\rangle _{\mathbf{H}^{-\frac{1}{2}}(\partial \mathcal{O})\times \mathbf{H}^{\frac{1}{2}}(\partial \mathcal{O})}=0\text{ \ for all }\mathbf{\tau }\in TH^{1/2}(\partial \mathcal{O}).$$ 5. $\mathbf{u}_{0}=\mathbf{\mu }_{0}+\widetilde{\mathbf{\mu }}_{0}$, where $\mathbf{\mu }_{0}\in \mathbf{V}_{0}$ and $\widetilde{\mathbf{\mu }}_{0}\in \mathbf{H}^{1}(\mathcal{O})$ satisfies[^7]$$\left. \widetilde{\mathbf{\mu }}_{0}\right\vert _{\partial \mathcal{O}}=\begin{cases} 0 & ~\text{ on }~S \\ w_{2}\mathbf{n} & ~\text{ on}~\Omega\end{cases}$$(and so $\left. \mathbf{\mu }_{0}\right\vert _{\partial \mathcal{O}}\in TH^{1/2}(\partial \mathcal{O})$). In the following theorem, we provide semigroup well-posedness for $\mathcal{A}:D(\mathcal{A})\in \mathcal{H}\rightarrow \mathcal{H}$, the proof of which is based on the well known Lumer-Phillips Theorem. \[wellp\] The map $\left\{ p_{0},\mathbf{u}_{0};w_{0},w_{1}\right\} \rightarrow \left\{ p(t),\mathbf{u}(t);w(t),w_{t}(t)\right\} $ defines a strongly continuous semigroup $\{e^{\mathcal{A}t}\}$ on the space $\mathcal{H}$, and hence the system (\[system1\])–(\[IC\_2\]) is well-posed (in the sense of mild solutions—see Remark \[notion\] below). In addition, the semigroup enjoys the following estimate: $$\big|\big|e^{\mathcal{A}t}\big|\big|_{\mathcal{L}(\mathcal{H})}\leq \exp \Big(\dfrac{t}{2}||\text{div}(\mathbf{U})||_{\infty }\Big),~~\forall t>0.$$ \[weaker\]Given the existence of a semigroup $\{e^{\mathcal{A}t}\}$ for the fluid-structure generator $\mathcal{A}:D(\mathcal{A})\subset \mathcal{H}\rightarrow \mathcal{H}$: if initial data $[p_{0},\mathbf{u}_{0};w_{0},w_{1}]\in D(\mathcal{A})$, the corresponding solution $[p(t),\mathbf{u}(t);w(t),w_{t}(t)]\in C([0,\infty ),D(\mathcal{A}))$. In particular, the solution satisfies the condition (A.iv) in the definition of the generator. This means that one has that the tangential boundary condition $$\lbrack \sigma (\mathbf{u}_{0})\mathbf{n}-p_{0}\mathbf{n}]\cdot \mathbf{\tau }=0\text{\ \ for all }\mathbf{\tau }\in TH^{1/2}(\partial \mathcal{O}),$$satisfied in the sense of distributions. That is to say, $\forall ~ \boldsymbol{\tau }\in TH^{1/2}(\partial \mathcal{O})$ and $\forall \phi \in \mathcal{D}(\partial \mathcal{O})$,$$\langle \sigma (\mathbf{u}_{0})\mathbf{n}-p_{0}\mathbf{n},\phi \boldsymbol{\tau }\rangle _{\partial \mathcal{O}}=0. \label{weak2}$$ \[notion\] We note that semigroup solutions, as arrived at in Theorem \[wellp\] for initial data $\mathbf{y}\in \mathcal{H}$, correspond to so called *mild* solutions (satisfying an integral form of –) in the sense of [@pazy Section 4.2]. Moreover, for initial data $\mathbf{y}\in D(\mathcal{A})$, we obtain so called *strong solutions*, which satisfy the PDE in a pointwise sense. 
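For orientation, we recall (following [@pazy Section 4.2]) what these solution notions amount to for the abstract ODE (\[ODE\]): writing $\mathbf{y}_{0}=[p_{0},\mathbf{u}_{0},w_{0},w_{1}]^{T}$, the mild solution is simply $$\mathbf{y}(t)=e^{\mathcal{A}t}\,\mathbf{y}_{0},\qquad \text{and, in the semilinear case,}\qquad \mathbf{y}(t)=e^{\mathcal{A}t}\,\mathbf{y}_{0}+\int_{0}^{t}e^{\mathcal{A}(t-s)}\,\mathcal{F}\big(\mathbf{y}(s)\big)\,ds,$$ where the second (variation-of-parameters) form is the one relevant when the locally Lipschitz plate nonlinearity of Section \[nonlinear\] is included; here $\mathcal{F}$ is our shorthand for the associated substitution operator on $\mathcal{H}$.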
In [@Chu2013-comp], semigroup techniques are not used in demonstrating well-posedness. As such, the author takes care to define an appropriate notion of *weak solution* corresponding to a Galerkin construction (see [@Chu2013-comp pp.653–654]). Such a notion of weak solution is relevant here, and can be obtained by making minor modifications that take into account the vector field $\mathbf{U}$. Here we assert that mild solutions (obtained via our semigroup) are in fact weak solutions as in [@Chu2013-comp]. In this way, we recover the well-posedness result of [@Chu2013-comp] (in the linear and nonlinear cases, with $\mathcal{O}$ bounded) by simply letting $\mathbf{U}\equiv \mathbf{0}$. (Note that in Section \[static\] we discuss the relation between the weak and strong forms of the *stationary problem* associated with (\[system1\])–(\[IC\_2\]), and in Section \[nonlinear\] we discuss the presence of plate nonlinearity, in line with [@Chu2013-comp].) Finally, we describe the energy balance equation for semigroup solutions to (\[system1\])–(\[IC\_2\]). We introduce the natural notion of *energy* into the analysis. Semigroup solutions obtained on the finite energy space $\mathcal{H}$ are measured in the finite energy norm, which provides us with the energy functional: for $\mathbf{y}_{0}=(p_{0},\mathbf{u}_{0},w_{0},w_{1})\in \mathcal{H}$, we have $$\mathcal{E}(\mathbf{y}_{0})=\frac{1}{2}||\mathbf{y}_{0}||_{\mathcal{H}}^{2}=\frac{1}{2}\Big\{||p_{0}||_{\mathcal{O}}^{2}+||\mathbf{u}_{0}||_{\mathcal{O}}^{2}+||\Delta w_{0}||_{\Omega }^{2}+||w_{1}||_{\Omega }^{2}\Big\}.$$Let us also introduce the convenient notation: $$a_{\mathcal{O}}(\mathbf{u},{\boldsymbol{\psi }})=(\sigma (\mathbf{u}),\epsilon ({\boldsymbol{\psi }}))_{\mathcal{O}}+\eta (\mathbf{u},\boldsymbol{\psi })_{\mathcal{O}}. \label{bi}$$ With strong solutions in hand (corresponding to smooth data in $D(\mathcal A)$), we may test (\[system1\])–(\[IM2\]) with $p,\mathbf{u},$ and $w_{t}$ (respectively) to obtain the energy balance. The energy balance is then obtained for semigroup (mild) solutions through the standard limiting process. Equivalently, it is admissible to test with semigroup solutions (for $\mathbf{y}_{0}\in \mathcal{H}$, $p\in L^{2}\big(0,t;L^{2}(\mathcal{O})\big)$, $\mathbf{u}\in L^{2}\big(0,t;\mathbf{L}^{2}(\mathcal{O})\big)$, and $w_{t}\in L^{2}\big(0,t;L^{2}(\Omega )\big)$) in the weak form of the problem in [@Chu2013-comp]. This also yields the energy balance below. \[energybalance\] Consider $\mathbf{y}_0=(p_0,\mathbf{u}_0,w_0,w_1) \in \mathcal{H}$ and $\mathbf{U }\in \mathbf{V}_0$. Any mild solution $y(t)=e^{\mathcal{A }t}\mathbf{y}_0=(p(t),\mathbf{u}(t),w(t),w_t(t))$ to (\[system1\])–(\[IC\_2\]) satisfies for $t>0$: $$\begin{aligned} \label{balancelaw} \mathcal{E}\big(p(t),\mathbf{u}(t),w(t),w_t(t)\big) +\int_0^t a_{\mathcal{O}}(\mathbf{u}(\tau),\mathbf{u}(\tau)) d\tau =&~ \mathcal{E}\big(p_0,\mathbf{u}_0,w_0,w_1\big) \\ &+\frac{1}{2}\int_0^t\int_{\mathcal{O}} \text{div}(\mathbf{U})[|p(\tau)|^2+|\mathbf{u}(\tau)|^2]dx d\tau. \notag\end{aligned}$$ We note two features of the energy identity: first, when the field $\mathbf{U }\in \mathbf{V}_0$ is also divergence free, the energy identity remains the same as in the case where $\mathbf{U }\equiv 0$ (like [@Chu2013-comp]).
Secondly, the dissipation integral $\int_0^{\infty} a_{\mathcal{O}}(\mathbf{u},\mathbf{u})d\tau$ depends on the quantity $\text{\emph{div}}~\mathbf{U}$ as well: again, with $\text{\emph{div}}~\mathbf{U }\equiv 0$, we see that $$\int_0^{\infty} a_{\mathcal{O}}(\mathbf{u}(\tau),\mathbf{u}(\tau))d\tau < +\infty,$$ with a bound that depends only on the initial data. We conclude this section by noting that we provide a discussion of solutions in the presence of nonlinear (von Karman) plate dynamics, including well-posedness, energy-balance, and stationary solutions, but we relegate this discussion to Section \[nonlinear\]. Discussion of Main Results in Relation to the Literature {#techreview} ======================================================== The model under consideration describes the case of a (possibly viscous) *compressible* gas/fluid flow and was recently studied in [Chu2013-comp]{} in the case with *zero* speed ($\mathbf{U}=0$) of the unperturbed flow. Beginning with compressible Navier-Stokes, one can obtain several fluid-plate cases which are important from an applied point of view: - **Incompressible Fluid**, i.e., $\text{div}~\mathbf{u}=0$ and density constant: In the *viscous* case, the standard linearized Navier-Stokes equations arise; fluid-plate interactions in this case were studied in [@ChuRyz2011; @cr-full-karman; @ChuRyz2012-pois; @berlin11]. Results on well-posedness and attractors for different elastic descriptions and domains were obtained. In this case, we also mention the work [clark,george1,george2]{} which addresses semigroup well-posedness of a related linear fluid-plate model, and decay rates via *frequency domain* techniques. The *inviscid* case was studied in [@Chu2013-inviscid] in the same context. - **Compressible Fluid**: In the inviscid case we can obtain wave-type dynamics for the (perturbed) velocity potential $(\mathbf{u}=\nabla \phi $, potential flow) of the form (see also [BA62,bolotin,dowell1]{}): $$\begin{cases} (\partial _{t}+\mathbf{U}\cdot \nabla )^{2}\phi =\Delta \phi & \text{ in }\mathcal{O}\times (0,T), \\ {\partial _{z}}\phi =L(w_{t},\nabla w) & \text{ on }\Omega \times (0,T) \\ {\partial _{z}}\phi =0 & \text{ on }\partial \mathcal{O}\setminus \Omega \times (0,T).\end{cases} \label{flow}$$In these variables, the pressure/density of the fluid has the form $p=(\partial _{t}+\mathbf{U}\cdot \nabla )\phi $. Due to the impermeability assumption, in the case of the perfect fluid, we have only one Neumann-type boundary condition given above via the operator $L$. The (semigroup) well-posedness [@supersonic; @webster] and stability properties [springer,delay,conequil2]{} of this model have been intensively studied. The *viscous* case was studied in [@Chu2013-comp], and is the motivation of the current work. In all the papers cited above, the interactive dynamics between fluid and a plate (or shell) are considered. These analyses are distinguished from those for other fluid-structure interactive PDE models in that the elastic structure is two dimensional, and evolves on the boundary of the three dimensional fluid domain. One of the key issues for the present configuration—and indeed, one of the main points in the bulk of the literature above—is the determination of how, and to what extent, the fluid (de)stabilizes the structure. 
In [Chu2013-comp,ChuRyz2011,ChuRyz2012-pois]{}, after obtaining well-posedness of the models (with structural nonlinearity), the existence of compact global attractors for the dynamics is shown; in some cases the existence of this invariant set is due strictly to the presence of the fluid, rather than some underlying structural phenomenon.[^8] In addition, it is sometimes possible (perhaps under additional assumptions) to show strong stabilization to equilibrium for the fluid-structure dynamics (e.g., [@conequil2; @springer]). In all cases where the ambient flow field $\mathbf{U}\neq \mathbf{0}$, the stability properties of the model depend greatly on the structure and magnitude of the flow field $\mathbf{U}$ [@spectral]. This will certainly be the case for the dynamics considered here, as one can see from . The survey papers [@berlin11; @dcds] provides a nice overview of the modeling, well-posedness, and long-time behavior results for the family of dynamics described above. We emphasize that in any study involving *compressible fluids*, the enforced compressibility produces additional density/pressure variables, and, as a result, well-posedness cannot be obtained in a straightforward way. In fact, the primary difficulty lies in showing the maximality (range) condition of the generator, since one has to address this density/pressure component. This variable cannot be readily eliminated, and therefore accounts for an elliptic equation which must be solved. To overcome this, we develop a methodology based on the application of a static well-posedness result given in the Appendix of [@dV] (see also [@LaxPhil]). That paper, as well as [@valli], deals with the stationary compressible Navier-Stokes equations. Their principal result (obtained independently, through different methodologies) is a small data well-posedness for the fully nonlinear fluid problem. However, both approaches first necessarily provide a framework for the linearized problem; in particular, [@dV] provides a strategy for our analysis of the stationary compressible fluid-structure PDE which is associated with maximality of the generator. The pioneering work [@Chu2013-comp], which we cite as the primary motivating reference, considers the model presented here with $\mathbf{U}\equiv 0$. In this paper, solvability and dynamical properties of the model are considered in the case of a general (possibly unbounded) smooth domain and in the presence of plate nonlinearity. Along with the well-posedness result, the existence of a finite dimensional compact global attractor is proved when the domain is bounded. The techniques used are consistent with those in [@ChuRyz2011; @cr-full-karman; @Chu2013-inviscid], namely, Galerkin-type procedures are implemented, along with good a priori estimates, in order to produce solutions. As with many fluid-structure interactions, the critical issue in [@Chu2013-comp] is the appearance of ill-defined traces at the interface. In the incompressible case, one can recover negative Sobolev trace regularity of the pressure $p$ at the interface via properties of the Stokes’ operator. However, in the viscous compressible case this is no longer true. Our semigroup approach does not require the use of approximate solutions. Indeed, we overcome the key difficulty of trace regularity issues by exploiting cancellations at the level of solutions with data in the generator. 
In this way we do not have to work component-wise on the dynamic equations, though we must work carefully (and component-wise) on the static problem associated with maximality of the generator. We remark that, despite these trace regularity issues, when $\mathbf{U}\equiv 0$, uniform decay of finite energy solutions is obtained in [@Chu2013-comp] through a clever Lyapunov approach that makes use of a Neumann lifting map with associated estimates; this construction is fundamentally obstructed by the addition of the $\mathbf{U}\cdot \nabla p$ term in the pressure equation here. See the forthcoming work [@preprint] on the decay properties of the model considered here. The Proof of Theorem \[wellp\] {#proof} ============================== Our proof of well-posedness hinges on showing that the matrix $\mathcal{A}:D(\mathcal{A})\subset \mathcal{H}\rightarrow \mathcal{H}$ generates a $C_{0}$-semigroup. At this point, we should note that due to the existence of the generally nonzero ambient vector field $\mathbf{U}$ in the model, we have a lack of dissipativity of the operator $\mathcal{A}$. Accordingly, we introduce the following bounded perturbation $\widehat{\mathcal{A}}$ of our generator $\mathcal{A}$: $$\widehat{\mathcal{A}}=\mathcal{A}-\dfrac{\text{div}(\mathbf{U})}{2}\begin{bmatrix} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{bmatrix}\text{, \ \ }D(\widehat{\mathcal{A}})=D(\mathcal{A}). \label{pertA}$$Therewith, the proof of Theorem \[wellp\] is geared towards establishing the maximal dissipativity of the linear operator $\widehat{\mathcal{A}}$; subsequently, an application of the Lumer-Phillips Theorem will yield that $\widehat{\mathcal{A}}$ generates a $C_{0}$ semigroup of contractions on $\mathcal{H}$. In turn, applying the standard perturbation result [@kato] (given, for instance, in [@pazy Theorem 1.1, p.76]) yields semigroup generation for the original modeling fluid-structure operator $\mathcal{A}$ of (\[AAA\]), via (\[pertA\]). Dissipativity {#diss} ------------- Considering the inner-product for the state space $\mathcal{H}$ given in (\[innerp\]), for any $\mathbf{y}=[p_{0},\mathbf{u}_{0},w_{1},w_{2}]^{T}\in D(\mathcal{A})$ we have $$\begin{aligned} \left( \widehat{\mathcal{A}}\begin{bmatrix} p_{0} \\ \mathbf{u}_{0} \\ w_{1} \\ w_{2}\end{bmatrix},\begin{bmatrix} p_{0} \\ \mathbf{u}_{0} \\ w_{1} \\ w_{2}\end{bmatrix}\right) _{\mathcal{H}}=& ~-(\mathbf{U}\cdot \nabla p_{0},p_{0})_{\mathcal{O}}-\frac{1}{2}(\text{div}(\mathbf{U})p_{0},p_{0})_{\mathcal{O}}-(\text{div}(\mathbf{u}_{0}),p_{0})_{\mathcal{O}}-\eta ||\mathbf{u}_{0}||_{\mathbf{L}^{2}(\mathcal{O})}^{2} \notag \\ & +(\text{div}~\sigma (\mathbf{u}_{0})-\nabla p_{0},\mathbf{u}_{0})_{\mathcal{O}}-(\mathbf{U}\cdot \nabla \mathbf{u}_{0},\mathbf{u}_{0})_{\mathcal{O}}-\frac{1}{2}(\text{div}(\mathbf{U})\mathbf{u}_{0},\mathbf{u}_{0})_{\mathcal{O}} \notag \\ & \notag \\ & +(\Delta w_{2},\Delta w_{1})_{\Omega }-(\Delta ^{2}w_{1},w_{2})_{\Omega }-( \left[ 2\nu \partial _{x_{3}}(\mathbf{u}_{0})_{3}+\lambda \text{ div}(\mathbf{u}_{0})\right] _{\Omega }-\left. p_{0}\right\vert _{\Omega },w_{2})_{\Omega }. 
\label{diss_1}\end{aligned}$$ Applying Green’s Theorem to the right-hand side, we subsequently have $$\begin{aligned} \left( \widehat{\mathcal{A}}\begin{bmatrix} p_{0} \\ \mathbf{u}_{0} \\ w_{1} \\ w_{2}\end{bmatrix},\begin{bmatrix} p_{0} \\ \mathbf{u}_{0} \\ w_{1} \\ w_{2}\end{bmatrix}\right) _{\mathcal{H}}=& ~-(\mathbf{U}\cdot \nabla p_{0},p_{0})_{\mathcal{O}}-\frac{1}{2}(\text{div}(\mathbf{U})p_{0},p_{0})_{\mathcal{O}}-(\text{div}(\mathbf{u}_{0}),p_{0})_{\mathcal{O}} \notag \\ & -(\sigma (\mathbf{u}_{0}),\epsilon (\mathbf{u}_{0}))_{\mathcal{O}}-\eta ||\mathbf{u}_{0}||_{\mathbf{L}^{2}(\mathcal{O})}^{2}+(p_{0},\text{div}(\mathbf{u}_{0}))_{\mathcal{O}}+\left\langle \sigma (\mathbf{u}_{0})\mathbf{n}-p_{0}\mathbf{n},\mathbf{u}_{0}\right\rangle _{\partial \mathcal{O}} \notag \\ & \notag \\ & -(\mathbf{U}\cdot \nabla \mathbf{u}_{0},\mathbf{u}_{0})_{\mathcal{O}}-\frac{1}{2}(\text{div}(\mathbf{U})\mathbf{u}_{0},\mathbf{u}_{0})_{\mathcal{O}} \notag \\ & \notag \\ & +(\Delta w_{2},\Delta w_{1})_{\Omega }-(\Delta w_{1},\Delta w_{2})_{\Omega }-(\left[ 2\nu \partial _{x_{3}}(\mathbf{u}_{0})_{3}+\lambda \text{div}(\mathbf{u}_{0})\right] _{\Omega }-\left. p_{0}\right\vert _{\Omega },w_{2})_{\Omega }. \label{dissi_1.2}\end{aligned}$$Invoking now the boundary conditions (A.iv) and (A.v) in the definition of the domain $D(\mathcal{A})$, there is then a cancellation of boundary terms so as to have$$\begin{aligned} \left( \widehat{\mathcal{A}}\begin{bmatrix} p_{0} \\ \mathbf{u}_{0} \\ w_{1} \\ w_{2}\end{bmatrix},\begin{bmatrix} p_{0} \\ \mathbf{u}_{0} \\ w_{1} \\ w_{2}\end{bmatrix}\right) _{\mathcal{H}}=& ~-(\mathbf{U}\cdot \nabla p_{0},p_{0})_{\mathcal{O}}-\frac{1}{2}(\text{div}(\mathbf{U})p_{0},p_{0})_{\mathcal{O}}-2i\func{Im}(\text{div}(\mathbf{u}_{0}),p_{0})_{\mathcal{O}} \notag \\ & -(\sigma (\mathbf{u}_{0}),\epsilon (\mathbf{u}_{0}))_{\mathcal{O}}-\eta ||\mathbf{u}_{0}||_{\mathbf{L}^{2}(\mathcal{O})}^{2}-(\mathbf{U}\cdot \nabla \mathbf{u}_{0},\mathbf{u}_{0})_{\mathcal{O}}-\frac{1}{2}(\text{div}(\mathbf{U})\mathbf{u}_{0},\mathbf{u}_{0})_{\mathcal{O}} \notag \\ & \notag \\ & -2i\func{Im}(\Delta w_{1},\Delta w_{2})_{\Omega }. \label{diss_1.4}\end{aligned}$$ Moreover, via Green’s Theorem, as well as the assumption that $\mathbf{U}\in \mathbf{V}_{0}$ (as defined in (\[V\_0\])), we obtain $$2\func{Re}(\mathbf{U}\cdot \nabla p_{0},p_{0})_{\mathcal{O}}=-\int_{\mathcal{O}}\text{div}(\mathbf{U})\left\vert p_{0}\right\vert ^{2}d\mathcal{O}; \label{diss_2}$$$$2\func{Re}(\mathbf{U}\cdot \nabla \mathbf{u}_{0},\mathbf{u}_{0})_{\mathcal{O}}=-\int_{\mathcal{O}}\text{div}(\mathbf{U})\left\vert \mathbf{u}_{0}\right\vert ^{2}d\mathcal{O}. \label{diss_3}$$ Applying these relations to the right-hand side of (\[diss\_1.4\]), we then have$$\func{Re}\left( \widehat{\mathcal{A}}\begin{bmatrix} p_{0} \\ \mathbf{u}_{0} \\ w_{1} \\ w_{2}\end{bmatrix},\begin{bmatrix} p_{0} \\ \mathbf{u}_{0} \\ w_{1} \\ w_{2}\end{bmatrix}\right) _{\mathcal{H}}=-(\sigma (\mathbf{u}_{0}),\epsilon (\mathbf{u}_{0}))_{\mathcal{O}}-\eta ||\mathbf{u}_{0}||_{\mathbf{L}^{2}(\mathcal{O})}^{2}\leq 0,$$which establishes the dissipativity of $\widehat{\mathcal{A}}:D(\mathcal{A})\subset \mathcal{H}\rightarrow \mathcal{H}.$

Maximality {#max}
----------

In this section we show the maximality property of the operator $\widehat{\mathcal{A}}$ on the space $\mathcal{H}$. To this end, we will need to establish the *range condition*, at least for a parameter $\xi >0$ sufficiently large.
Namely, we must show $$Range(\xi I-\widehat{\mathcal{A}})=\mathcal{H},~~\text{for some}~~\xi >0. \label{range_0}$$This amounts to finding $[p,{\mathbf{v}},w_{1},w_{2}]\in D(\mathcal{A})$ which satisfies, for given $[p^{\ast },{\mathbf{v}}^{\ast },w_{1}^{\ast },w_{2}^{\ast }]\in \mathcal{H}$, the abstract equation $$(\xi I-\widehat{\mathcal{A}})\begin{bmatrix} p \\ {\mathbf{v}} \\ w_{1} \\ w_{2}\end{bmatrix}=\begin{bmatrix} p^{\ast } \\ {\mathbf{v}}^{\ast } \\ w_{1}^{\ast } \\ w_{2}^{\ast }\end{bmatrix}. \label{range}$$Given the definition of $\mathcal{A}$ in (\[AAA\]), in PDE terms solving the abstract equation (\[range\_0\]) is equivalent to proving that the following system of equations, with given data $[p^{\ast },{\mathbf{v}}^{\ast },w_{1}^{\ast },w_{2}^{\ast }]\in \mathcal{H}$, has a (unique) solution $[p,{\mathbf{v}},w_{1},w_{2}]\in D(\mathcal{A})$:$$\begin{aligned} & \left\{ \begin{array}{l} \xi p+\mathbf{U}\cdot \nabla p+\frac{1}{2}\text{div}(\mathbf{U})p+\text{div}({\mathbf{v}})=\text{ }p^{\ast }~~\text{ in }~\mathcal{O} \\ \xi {\mathbf{v}}+\mathbf{U}\cdot \nabla {\mathbf{v}}+\frac{1}{2}\text{div}(\mathbf{U}){\mathbf{v}}-\text{div}~\sigma ({\mathbf{v}})+\eta {\mathbf{v}}+\nabla p=~{\mathbf{v}}^{\ast }~~\text{ in }~\mathcal{O} \\ (\sigma (\mathbf{v})\mathbf{n}-p\mathbf{n})\cdot \boldsymbol{\tau }=0~\text{ on }~\partial \mathcal{O} \\ \mathbf{v}\cdot \mathbf{n}=0~\text{ on }~S \\ \mathbf{v}\cdot \mathbf{n}=w_{2}~\text{ on }~\Omega\end{array}\right. \label{statice1} \\ & \notag \\ & \left\{ \begin{array}{l} \xi w_{1}-w_{2}=~w_{1}^{\ast }~~\text{ on }~\Omega \\ \xi w_{2}+\Delta ^{2}w_{1}+\left[ 2\nu \partial _{x_{3}}(\mathbf{v})_{3}+\lambda \text{div}(\mathbf{v})-p\right] _{\Omega }=w_{2}^{\ast }~\text{ on }~\Omega \\ w_{1}=\frac{\partial w_{1}}{\partial \nu }=0~\text{ on }~\partial \Omega .\end{array}\right. \label{staticsys3.5}\end{aligned}$$We will give our proof of maximality in two steps. In the first step, we will show existence and uniqueness of solutions to an uncoupled version of the compressible fluid-structure PDE system (\[statice1\]), satisfied by the variables $\left\{ p,\mathbf{v}\right\} $. To this end, the key ingredient will be the well-posedness result Theorem \[dV\], which is applicable to (uncoupled) equations of the type satisfied by the pressure variable. (See also [@dV] and [@LaxPhil].) Subsequently, we proceed to establish the range condition (\[range\]), by sequentially proving the existence of the pressure-fluid-structure components $\left\{ p,\mathbf{v},w_{1},w_{2}\right\} $ which solve the coupled system (\[statice1\])–(\[staticsys3.5\]). This work for pressure-fluid-structure static well-posedness involves appropriate uses of the Lax-Milgram Theorem.

**STEP 1**: Consider the following $\xi $-parameterized PDE system on the fluid domain $\mathcal{O}$, with given forcing terms $\left\{ p^{\ast },{\mathbf{v}}^{\ast }\right\} \in {L}^{2}(\mathcal{O})\times \lbrack \mathbf{V}_{0}]^{\prime }$ and boundary data $g\in H_{0}^{1/2+\epsilon }(\Omega )$, where $\epsilon >0$.
$$\begin{aligned} \xi p+\mathbf{U}\cdot \nabla p+\frac{1}{2}\text{div}(\mathbf{U})p+\text{div}({\mathbf{v}})=& ~p^{\ast }~~\text{ in }~\mathcal{O} \label{a1} \\ \xi {\mathbf{v}}+\mathbf{U}\cdot \nabla {\mathbf{v}}+\frac{1}{2}\text{div}(\mathbf{U}){\mathbf{v}}-\text{div}~\sigma ({\mathbf{v}})+\eta {\mathbf{v}}+\nabla p=& ~{\mathbf{v}}^{\ast }~~\text{ in }~\mathcal{O} \label{a2} \\ \left( \sigma ({\mathbf{v}})\mathbf{n}-p\mathbf{n}\right) \cdot \boldsymbol{\tau }=& ~0~~\text{ on }~\partial \mathcal{O} \label{a3} \\ {\mathbf{v}}\cdot \mathbf{n}=& ~0~~\text{ on }~S \label{a4} \\ {\mathbf{v}}\cdot \mathbf{n}=& ~g~~\text{ on}~\Omega \label{a5}\end{aligned}$$ (and, where again, the ambient vector field $\mathbf{U}\in \mathbf{V}_{0}\cap \mathbf{H}^{3}(\mathcal{O})$). STEP 1 consists of proving the following (driving) lemma for the existence and uniqueness of the solution $\left\{p,\mathbf{v}\right\} $ of (\[a1\])–(\[a5\]).

\[staticwellp\] (i) With reference to problem (\[a1\])–(\[a5\]): with given data $$\lbrack p^{\ast },{\mathbf{v}}^{\ast },g]\in {L}^{2}(\mathcal{O})\times \lbrack \mathbf{V}_{0}]^{\prime }\times H_{0}^{\frac{1}{2}+\epsilon }(\Omega )\text{,}$$ and with $\xi >0$ sufficiently large, there exists a unique solution $\left\{p,\mathbf{v}\right\} \in {L}^{2}(\mathcal{O})\times \mathbf{H}^{1}(\mathcal{O})$ of (\[a1\])–(\[a5\]).

(ii) The fluid solution component ${\mathbf{v}}$ is of the form $$\mathbf{v}=\mathbf{u}+\widetilde{{\mathbf{v}}_{0}}\text{,} \label{crucial}$$where $\mathbf{u}\in \mathbf{V}_{0}$, and $\widetilde{{\mathbf{v}}_{0}}\in \mathbf{H}^{1}(\mathcal{O})$ satisfies$$\widetilde{{\mathbf{v}}_{0}}\Big|_{\partial \mathcal{O}}=\begin{cases} 0 & ~\text{ on }~S \\ g\mathbf{n} & ~\text{ on }~\Omega .\end{cases} \label{crucial2}$$(iii) The trace term $\left[ \sigma ({\mathbf{v}})\mathbf{n}-p\mathbf{n}\right] _{\partial \mathcal{O}}\in \mathbf{H}^{-\frac{1}{2}}(\partial \mathcal{O})$, and moreover satisfies $$\left\langle \sigma ({\mathbf{v}})\mathbf{n}-p\mathbf{n},\boldsymbol{\tau }\right\rangle _{\mathbf{H}^{-\frac{1}{2}}(\partial \mathcal{O})\times \mathbf{H}^{\frac{1}{2}}(\partial \mathcal{O})}=0\text{ \ for all }\boldsymbol{\tau }\in TH^{1/2}(\partial \mathcal{O}), \label{crucial3}$$and so the boundary condition (\[a3\]) is satisfied in the sense of distributions; see (\[weak2\]) of Remark \[weaker\].

(iv) The pressure and fluid solution components $(p,\mathbf{v})$ satisfy the following estimates, for $\xi =\xi (\mathbf{U})$ large enough: $$\begin{aligned} \left\Vert p\right\Vert _{L^2(\mathcal{O})} &\leq &\frac{C}{\xi }\left\Vert [p^{\ast },{\mathbf{v}}^{\ast },g]\right\Vert _{\mathbf{L}^{2}(\mathcal{O})\times \lbrack \mathbf{V}_{0}]^{\prime }\times H_{0}^{\frac{1}{2}+\epsilon }(\Omega )}; \label{cdd_0} \\ \left\Vert {\mathbf{v}}\right\Vert _{\mathbf{H}^{1}(\mathcal{O})} &\leq &C\left\Vert [p^{\ast },{\mathbf{v}}^{\ast },g]\right\Vert _{\mathbf{L}^{2}(\mathcal{O})\times \lbrack \mathbf{V}_{0}]^{\prime }\times H_{0}^{\frac{1}{2}+\epsilon }(\Omega )}. \label{cdd}\end{aligned}$$

We give the proof in two parts. Our beginning point is to resolve the pressure term; this will be accomplished by applying Theorem \[dV\] of the Appendix.
To this end: If we initially consider the equation $$\xi p+\mathbf{U}\cdot \nabla p+\frac{1}{2}\text{div}(\mathbf{U})p=\sigma ~~\text{ in }~\mathcal{O}, \label{above}$$where $\sigma \in L^{2}(\mathcal{O})$ and $\mathbf{U}\in \mathbf{V}_{0}\cap \mathbf{H}^{3}(\mathcal{O})$, we have by Theorem \[dV\] the existence of a unique $L^{2}(\mathcal{O})$-function $p$ which is a weak solution of (\[above\]); namely, it satisfies the variational relation$$\xi \int_{\mathcal{O}}p\phi \,d\mathbf{x}-\int_{\mathcal{O}}p\,\text{div}(\phi \mathbf{U})\,d\mathbf{x}+\frac{1}{2}\int_{\mathcal{O}}\text{div}(\mathbf{U})p\phi \,d\mathbf{x}=\int_{\mathcal{O}}\sigma \phi \,d\mathbf{x}\text{, \ for all }\phi \in H^{1}(\mathcal{O}) \label{weak}$$(and in particular for $\phi \in \mathcal{D}(\mathcal{O})$; so we infer that for a given $L^{2}$-function $\sigma $, the corresponding $L^{2}$-solution $p$ satisfies the PDE (\[above\]) pointwise a.e.) Moreover, for $\xi =\xi (\mathbf{U})$ sufficiently large we have the estimate—see (\[est\]) of Theorem \[dV\]—$$||p||_{L^{2}(\mathcal{O})}\leq \dfrac{1}{\xi }||\sigma ||_{L^{2}(\mathcal{O})}. \label{large}$$ With the well-posedness above, in order to find the existence and uniqueness of the fluid component $\mathbf{v}$, we now turn our attention to (\[a1\])–(\[a5\]). In view of the well-posedness of (\[above\]), we decompose the fluid term $\mathbf{v}$ and pressure term $p$ as follows: $$\mathbf{v}=\mathbf{u}+\widetilde{{\mathbf{v}}_{0}}; \label{v_com}$$$$p=p[\mathbf{u}]+p[\widetilde{{\mathbf{v}}_{0}}]+p[p^{\ast }], \label{p_com}$$where $\mathbf{u}\in \mathbf{V}_{0}$ is the new fluid solution variable, and $\widetilde{{\mathbf{v}}_{0}}\in \mathbf{H}^{1}(\mathcal{O})$ is a vector field which is chosen to satisfy $$\widetilde{{\mathbf{v}}_{0}}\Big|_{\partial \mathcal{O}}=\begin{cases} 0 & ~\text{ on }~S \\ g\mathbf{n} & ~\text{ on }~\Omega .\end{cases} \label{bc}$$To wit: since the boundary data $g\in H^{\frac{1}{2} +\epsilon}_0(\Omega)$ – with necessarily $\epsilon>0$ – we can extend by zero the function $g\mathbf n\Big|_{\Omega}=g[0,0,1],$ so as to have an $\mathbf H^{\frac{1}{2} +\epsilon}$-function on all of $\partial \mathcal{O}.$ (See e.g., Theorem 3.33, p.95 of [@Mc].) In turn, since the Sobolev Dirichlet trace map from $H^{s}(\mathcal{O})$ into $H^{s-\frac{1}{2}}(\partial \mathcal{O})$ is surjective for $\frac{1}{2}<s<\frac{3}{2}$, the existence of such a $\widetilde{{\mathbf{v}}_{0}}\in \mathbf H^{1+\epsilon}(\mathcal{O})$ is assured. (See e.g., Theorem 3.38 of [@Mc], valid for Lipschitz domains.)
Moreover, the functions $p[\mathbf{u}],~p[\widetilde{{\mathbf{v}}_{0}}],$ and $p[p^{\ast }]$ solve the respective versions of (\[above\]): $$\begin{aligned} \xi p[\mathbf{u}]+\mathbf{U}\cdot \nabla p[\mathbf{u}]+\frac{\text{div}(\mathbf{U})}{2}p[\mathbf{u}]=& ~-\text{div}(\mathbf{u}) \label{duh1} \\ \xi p[\widetilde{{\mathbf{v}}_{0}}]+\mathbf{U}\cdot \nabla p[\widetilde{{\mathbf{v}}_{0}}]+\frac{\text{div}(\mathbf{U})}{2}p[\widetilde{{\mathbf{v}}_{0}}]=& ~-\text{div}(\widetilde{{\mathbf{v}}_{0}}) \\ \xi p[p^{\ast }]+\mathbf{U}\cdot \nabla p[p^{\ast }]+\frac{\text{div}(\mathbf{U})}{2}p[p^{\ast }]=& ~p^{\ast },\end{aligned}$$ with estimates—for $\xi =\xi (\mathbf{U})$ large enough, see (\[large\]) and (\[bc\]) — $$\begin{aligned} ||p[\mathbf{u}]||_{L^2(\mathcal{O})}\leq & ~\dfrac{C}{\xi }||\mathbf{u}||_{\mathbf{H}^{1}(\mathcal{O})} \label{first} \\ ||p[\widetilde{{\mathbf{v}}_{0}}]||_{L^2(\mathcal{O})}\leq & ~\dfrac{C}{\xi }||\widetilde{{\mathbf{v}}_{0}}||_{\mathbf{H}^{1}(\mathcal{O})} \notag \\ \leq & ~\dfrac{C}{\xi }||g||_{H_{0}^{1/2+\epsilon }(\Omega )} \label{second} \\ ||p[p^{\ast }]||_{L^2(\mathcal{O})}\leq & ~\dfrac{C}{\xi }||p^{\ast }||_{\mathcal{O}}, \label{third}\end{aligned}$$where the fluid variable $\mathbf{u}$ remains to be determined. The rest of the proof relies on an application of the Lax-Milgram Theorem, by way of solving for $\mathbf{u}$ in (\[v\_com\]). To this end, we first define the operator $A\in \mathcal{L}(\mathbf{V}_{0},[\mathbf{V}_{0}]^{\prime })$ to be, for all $\mathbf{\psi }\in \mathbf{V}_{0}$, $$\begin{aligned} \left\langle A\mathbf{u},\mathbf{\psi }\right\rangle _{\mathbf{V}_{0}\times \lbrack \mathbf{V}_{0}]^{\prime }}=& ~\big(\xi \mathbf{u}+\mathbf{U}\cdot \nabla \mathbf{u}+\frac{1}{2}\text{div}(\mathbf{U})\mathbf{u},\mathbf{\psi }\big)_{\mathcal{O}} \notag \\ & +\big(\sigma (\mathbf{u}),\epsilon (\mathbf{\psi })\big)_{\mathcal{O}}+\eta \big(\mathbf{u},\mathbf{\psi }\big)_{\mathcal{O}}-\big(p[\mathbf{u}],\text{div}(\mathbf{\psi })\big)_{\mathcal{O}}, \label{A}\end{aligned}$$where again, $p[\mathbf{u}]$ solves (\[duh1\]).
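We note in passing that the asserted membership $A\in \mathcal{L}(\mathbf{V}_{0},[\mathbf{V}_{0}]^{\prime })$ can be checked directly; a minimal sketch of the requisite bound (using only that $\mathbf{U}\in \mathbf{H}^{3}(\mathcal{O})$ gives $\mathbf{U},\nabla \mathbf{U}\in L^{\infty }(\mathcal{O})$, and invoking (\[first\]) for the pressure contribution) reads, for $\mathbf{u},\mathbf{\psi }\in \mathbf{V}_{0}$, $$\left\vert \left\langle A\mathbf{u},\mathbf{\psi }\right\rangle _{\mathbf{V}_{0}\times \lbrack \mathbf{V}_{0}]^{\prime }}\right\vert \leq C(\xi ,\mathbf{U})\Vert \mathbf{u}\Vert _{\mathbf{H}^{1}(\mathcal{O})}\Vert \mathbf{\psi }\Vert _{\mathbf{L}^{2}(\mathcal{O})}+C\Vert \mathbf{u}\Vert _{\mathbf{H}^{1}(\mathcal{O})}\Vert \mathbf{\psi }\Vert _{\mathbf{H}^{1}(\mathcal{O})}+\frac{C}{\xi }\Vert \mathbf{u}\Vert _{\mathbf{H}^{1}(\mathcal{O})}\Vert \mathbf{\psi }\Vert _{\mathbf{H}^{1}(\mathcal{O})},$$where the three groups of terms estimate, respectively, the zeroth-order/transport terms, the elastic and damping terms, and the pressure term $\big(p[\mathbf{u}],\text{div}(\mathbf{\psi })\big)_{\mathcal{O}}$ in (\[A\]).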
So, with a view toward finding the solution pair $(p,\mathbf{v})$, via the expressions (\[v\_com\]) and (\[p\_com\]), we are led to consider the following variational problem: Find $\mathbf{u}\in \mathbf{V}_{0}$ which solves, for every $\mathbf{\psi }\in \mathbf{V}_{0}$, $$\left\langle A{\mathbf{u}},\mathbf{\psi }\right\rangle _{\mathbf{V}_{0}\times \lbrack \mathbf{V}_{0}]^{\prime }}=\left\langle F,\mathbf{\psi }\right\rangle _{\mathbf{V}_{0}\times \lbrack \mathbf{V}_{0}]^{\prime }}; \label{vareq1}$$ where the forcing term $F\in \lbrack \mathbf{V}_{0}]^{\prime }$ is given by $$\begin{aligned} \left\langle F,\mathbf{\psi }\right\rangle _{\mathbf{V}_{0}\times \lbrack \mathbf{V}_{0}]^{\prime }}& =\left( {\mathbf{v}}^{\ast },\mathbf{\psi }\right) _{\mathcal{O}}-\big(\sigma (\widetilde{{\mathbf{v}}_{0}}),\epsilon (\mathbf{\psi })\big)_{\mathcal{O}}-\eta (\widetilde{{\mathbf{v}}_{0}},\mathbf{\psi })_{\mathcal{O}} \notag \\ & -\big(\xi \widetilde{{\mathbf{v}}_{0}}+\mathbf{U}\cdot \nabla \widetilde{{\mathbf{v}}_{0}}+\frac{1}{2}\text{div}(\mathbf{U})\widetilde{{\mathbf{v}}_{0}},\mathbf{\psi }\big)_{\mathcal{O}} \notag \\ & +\big(p[\widetilde{{\mathbf{v}}_{0}}]+p[p^{\ast }],\text{div}(\mathbf{\psi })\big)_{\mathcal{O}}, \label{force}\end{aligned}$$for given $\mathbf{\psi }\in \mathbf{V}_{0}.$ After considering the definition of $\mathbf{V}_{0}$ and using the divergence theorem we note that $$\big(\mathbf{U}\cdot \nabla \mathbf{\psi }+\frac{1}{2}\text{div}(\mathbf{U})\mathbf{\psi },\mathbf{\psi }\big)_{\mathcal{O}}=0,~~\forall ~\mathbf{\psi }\in \mathbf{V}_{0}. \label{there}$$Moreover, by estimate (\[first\]) we have for all $\mathbf{\psi }\in \mathbf{V}_{0}$, $$\Big\vert\left( p[\mathbf{\psi }],\func{div}(\mathbf{\psi })\right) _{\mathcal{O}}\Big\vert\leq \frac{C}{\xi }||\mathbf{\psi }||_{\mathbf{H}^{1}(\mathcal{O})}^{2}. \label{with}$$ Combining (\[there\]) and (\[with\]) with Korn’s inequality—see e.g., Theorem 2.6.5, p.93 of [@kesavan]—we then have, for $\xi >0$ sufficiently large, $$\begin{array}{l} \left\langle A\mathbf{\psi },\mathbf{\psi }\right\rangle _{\mathbf{V}_{0}\times \lbrack \mathbf{V}_{0}]^{\prime }}= (\xi +\eta )||\mathbf{\psi }||_{\mathcal{O}}^{2}+\big(\sigma (\mathbf{\psi }),\epsilon (\mathbf{\psi })\big)_{\mathcal{O}}-\big(p[\mathbf{\psi }],\text{div}(\mathbf{\psi })\big)_{\mathcal{O}}\geq ~c||\mathbf{\psi }||_{\mathbf{H}^{1}(\mathcal{O})}^{2},~~\forall ~\mathbf{\psi }\in \mathbf{V}_{0},\end{array} \label{V_e}$$where the coercivity constant $c>0$ is independent of (sufficiently large) $\xi >0$. Therefore, $A\in \mathcal{L}(\mathbf{V}_{0},[\mathbf{V}_{0}]^{\prime })$ is $\mathbf{V}_{0}$-elliptic for $\xi >0$ large enough. Consequently, by the Lax-Milgram Theorem, the variational equation (\[vareq1\]) has a unique solution ${\mathbf{u}}\in \mathbf{V}_{0}$, which will in turn yield the solution pair $(\mathbf{v},p)$ of (\[a1\])–(\[a5\]) through the relations (\[v\_com\]) and (\[p\_com\]). In particular, from (\[v\_com\]) and (\[bc\]), $\mathbf{v}\in \mathbf{H}^{1}(\mathcal{O})$ admits of the decomposition (\[crucial\]); and since $$\left. \mathbf{v}\right\vert _{\partial \mathcal{O}}\cdot \mathbf{n}=\begin{cases} 0 & ~\text{ on }~S \\ g & ~\text{ on }~\Omega ,\end{cases}$$the obtained solution component $\mathbf{v}$ satisfies the boundary conditions (\[a4\])–(\[a5\]).
In addition, from (\[vareq1\]), (\[v\_com\]) and (\[p\_com\]), $(p,\mathbf{v})$ satisfies the variational relation $$\begin{aligned} \big(\xi {\mathbf{v}}+\mathbf{U}\cdot \nabla {\mathbf{v}}+\frac{1}{2}\text{div}(\mathbf{U}){\mathbf{v}},\mathbf{\psi }\big)_{\mathcal{O}}+(\sigma ({\mathbf{v}}),\epsilon (\mathbf{\psi }))_{\mathcal{O}}+\eta ({\mathbf{v}},\mathbf{\psi })& -(p,\text{div}(\mathbf{\psi }))_{\mathcal{O}} \notag \\ =& ({\mathbf{v}}^{\ast },\mathbf{\psi })_{\mathcal{O}},~~\forall ~\mathbf{\psi }\in \mathbf{V}_{0}. \label{L0}\end{aligned}$$ In particular, if $\mathbf{\psi }\in \lbrack \mathcal{D}(\mathcal{O})]^{3}$, we then have $$\begin{aligned} -(\text{div}~\sigma ({\mathbf{v}}),\mathbf{\psi })_{\mathcal{O}}+\eta ({\mathbf{v}},\mathbf{\psi })_{\mathcal{O}}& +(\nabla p,\mathbf{\psi })_{\mathcal{O}} \notag \\ =& -(\xi {\mathbf{v}}+\mathbf{U}\cdot \nabla {\mathbf{v}}+\frac{1}{2}\func{div}(\mathbf{U}){\mathbf{v}},\mathbf{\psi })_{\mathcal{O}}+({\mathbf{v}}^{\ast },\mathbf{\psi })_{\mathcal{O}}. \label{L1}\end{aligned}$$ Since ${\mathbf{v}}\in \mathbf{H}^{1}(\mathcal{O})$, this relation and the density of $[\mathcal{D}(\mathcal{O})]^{3}$ in $\mathbf{L}^{2}(\mathcal{O})$ yield that $$-\func{div}\sigma ({\mathbf{v}})+\eta {\mathbf{v}}+\nabla p=-\big(\xi {\mathbf{v}}+\mathbf{U}\cdot \nabla {\mathbf{v}}+\frac{1}{2}\func{div}(\mathbf{U}){\mathbf{v}}\big)+{\mathbf{v}}^{\ast } \label{L2}$$in the $L^{2}$-sense. And so $(p,{\mathbf{v}})$ satisfies the coupled system (\[a2\]) and (\[a1\]) pointwise. (See the remark below (\[weak\]).) Finally, since $[-\text{div}~\sigma ({\mathbf{v}})+\nabla p]\in \mathbf{L}^{2}(\mathcal{O})$, and $(p,{\mathbf{v}})\in L^{2}(\mathcal{O})\times \mathbf{H}^{1}(\mathcal{O})$, a classic integration by parts argument will yield the following trace regularity: $$\lbrack \sigma ({\mathbf{v}})\mathbf{n}-p\mathbf{n}]\in \mathbf{H}^{-1/2}(\partial \mathcal{O}) \label{trace}$$(see e.g., Theorem 13.2.3, p.326, of [@aubin]). Integrating by parts in (\[L0\]), we consequently have, for all $\mathbf{\psi }\in \mathbf{V}_{0}$, $$\begin{aligned} ({\mathbf{v}}^{\ast },\mathbf{\psi })_{\mathcal{O}}=& ~(\xi {\mathbf{v}}+\mathbf{U}\cdot \nabla {\mathbf{v}}+\frac{1}{2}\text{div}(\mathbf{U}){\mathbf{v}},\mathbf{\psi })_{\mathcal{O}}-(\func{div}\sigma ({\mathbf{v}})-\nabla p,\mathbf{\psi })_{\mathcal{O}} \\ & +\eta ({\mathbf{v}},\mathbf{\psi })_{\mathcal{O}}+\langle \sigma ({\mathbf{v}})\mathbf{n-}p\mathbf{n},\mathbf{\psi }\rangle _{\partial \mathcal{O}},\end{aligned}$$or, after invoking (\[L2\]): $$\langle \sigma ({\mathbf{v}})\mathbf{n}-p\mathbf{n},\mathbf{\psi }\rangle _{\partial \mathcal{O}}=0\text{ for every }\mathbf{\psi }\in \mathbf{V}_{0}.$$This orthogonality and the surjectivity of the trace mapping from $\mathbf{H}^{1}(\mathcal{O})\rightarrow \mathbf{H}^{1/2}(\partial \mathcal{O})$ allow us to deduce that the obtained solution pair $(p,{\mathbf{v}})$ satisfies the boundary condition $$\lbrack \sigma ({\mathbf{v}})\mathbf{n}-p\mathbf{n}]\cdot \boldsymbol{\tau }=0\text{ for all }\boldsymbol{\tau }\in TH^{1/2}(\partial \mathcal{O}),$$in the *weak sense*; i.e., as in (\[weak2\]) of Remark \[weaker\]. Lastly, the necessary continuous dependence estimates (\[cdd\_0\])–(\[cdd\]) come from collecting (\[p\_com\]) and (\[first\])–(\[third\]) – for $p$ in (\[cdd\_0\]); and (\[v\_com\]), (\[bc\]), (\[V\_e\]), and (\[force\]) – for $\mathbf v$ in (\[cdd\]). This concludes the proof of Lemma \[staticwellp\], and so STEP 1 of the maximality argument.
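For the reader’s convenience, we indicate—in compressed form, and only as a sketch—how this bookkeeping produces (\[cdd\_0\])–(\[cdd\]). From the coercivity (\[V\_e\]) and the variational equation (\[vareq1\]), $$c\Vert \mathbf{u}\Vert _{\mathbf{H}^{1}(\mathcal{O})}^{2}\leq \left\langle A\mathbf{u},\mathbf{u}\right\rangle _{\mathbf{V}_{0}\times \lbrack \mathbf{V}_{0}]^{\prime }}=\left\langle F,\mathbf{u}\right\rangle _{\mathbf{V}_{0}\times \lbrack \mathbf{V}_{0}]^{\prime }}\leq \Vert F\Vert _{[\mathbf{V}_{0}]^{\prime }}\Vert \mathbf{u}\Vert _{\mathbf{H}^{1}(\mathcal{O})},$$while the expression (\[force\]), together with (\[bc\]) and (\[second\])–(\[third\]), gives (for the fixed, sufficiently large $\xi$, with constants depending on $\mathbf{U}$) $$\Vert F\Vert _{[\mathbf{V}_{0}]^{\prime }}\leq C\left\Vert [p^{\ast },{\mathbf{v}}^{\ast },g]\right\Vert _{{L}^{2}(\mathcal{O})\times \lbrack \mathbf{V}_{0}]^{\prime }\times H_{0}^{\frac{1}{2}+\epsilon }(\Omega )}.$$Hence $\Vert \mathbf{u}\Vert _{\mathbf{H}^{1}(\mathcal{O})}$ is controlled by the data; (\[cdd\]) then follows from the decomposition (\[v\_com\]) and (\[bc\]), and (\[cdd\_0\]) follows upon combining this with (\[p\_com\]) and (\[first\])–(\[third\]).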
**STEP 2**: With Lemma \[staticwellp\] in hand, we now address the coupled fluid-structure PDE system (\[statice1\])–(\[staticsys3.5\]). Our solution here will be predicated on finding the structural variable $w_{1}$ which solves the $\Omega $-problem $$\begin{aligned} \xi ^{2}w_{1}+\Delta ^{2}w_{1}+[2\nu \partial _{x_{3}}({\mathbf{v}})_{3}]_{\Omega }+\lambda \text{div}({\mathbf{v}})|_{\Omega }-p|_{\Omega }=& ~w_{2}^{\ast }+\xi w_{1}^{\ast }~~\text{ on}~\Omega \label{staticsys5} \\ w_{1}\big |_{\partial \Omega }=& ~\partial _{\nu }w_{1}\big|_{\partial \Omega }=0. \label{staticsys6}\end{aligned}$$ Let $(p^{\ast },{\mathbf{v}}^{\ast })\in L^{2}(\mathcal{O})\times \mathbf{L}^{2}(\mathcal{O})$ be the pressure and fluid data from (\[statice1\]). Let $z\in H_{0}^{2}(\Omega )$ be given. Then from Lemma \[staticwellp\], we know that the following problem has a unique solution $\left\{ p(z;p^{\ast };\mathbf{v}^{\ast }), \mathbf{v}(z;p^{\ast };\mathbf{v}^{\ast })\right\} $: $$\begin{array}{l} \xi p(z;p^{\ast };\mathbf{v}^{\ast })+\mathbf{U}\cdot \nabla p(z;p^{\ast };\mathbf{v}^{\ast })+\frac{1}{2}\text{div}(\mathbf{U})p(z;p^{\ast };\mathbf{v}^{\ast })+\text{div}({\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast }))=~p^{\ast }~~\text{ in }~\mathcal{O} \\ \xi {\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast })+\mathbf{U}\cdot \nabla {\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast })+\frac{1}{2}\text{div}(\mathbf{U}){\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast }) \\ \text{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }-\text{div}~\sigma ({\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast }))+\eta {\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast })+\nabla p(z;p^{\ast };\mathbf{v}^{\ast })=~{\mathbf{v}}^{\ast }~~\text{ in }~\mathcal{O} \\ \left( \sigma ({\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast }))\mathbf{n}-p(z;p^{\ast };\mathbf{v}^{\ast })\mathbf{n}\right) \cdot \boldsymbol{\tau }=~0~~\text{ on }~\partial \mathcal{O} \\ {\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast })\cdot \mathbf{n}=~0~~\text{ on }~S \\ {\mathbf{v}}(z;p^{\ast };\mathbf{v}^{\ast })\cdot \mathbf{n}=~z~~\text{ on}~\Omega .\end{array} \label{staticsys1}$$ Akin to what was done in STEP 1, we decompose the solution of the BVP (\[staticsys1\]) into two parts:$$\begin{aligned} \mathbf{v}(z;p^{\ast };\mathbf{v}^{\ast }) &=&\mathbf{v}(z)+\mathbf{v}(p^{\ast };\mathbf{v}^{\ast }); \label{d1} \\ p(z;p^{\ast };\mathbf{v}^{\ast }) &=&p(z)+p(p^{\ast };\mathbf{v}^{\ast }), \label{d2}\end{aligned}$$ where $(p(z),{\mathbf{v}}(z))\in L^{2}(\mathcal{O})\times \mathbf{H}^{1}(\mathcal{O})$ is the solution of the problem $$\begin{aligned} \xi p(z)+\mathbf{U}\cdot \nabla p(z)+\frac{1}{2}\text{div}(\mathbf{U})p(z)+\text{div}({\mathbf{v}}(z))=& 0~~\text{ in }~\mathcal{O} \label{huh} \\ \xi {\mathbf{v}(z)}+\mathbf{U}\cdot \nabla {\mathbf{v}}(z)+\frac{1}{2}\text{div}(\mathbf{U}){\mathbf{v}}(z)-\text{div}~\sigma ({\mathbf{v}}(z))+\eta {\mathbf{v}}(z)+\nabla p(z)=& ~0~~\text{ in }~\mathcal{O} \\ \left( \sigma ({\mathbf{v}}(z))\mathbf{n}-p(z)\mathbf{n}\right) \cdot \boldsymbol{\tau }=& ~0~~\text{ on }~\partial \mathcal{O} \\ {\mathbf{v}}(z)\cdot \mathbf{n}=& ~0~~\text{ on }~S \\ {\mathbf{v}}(z)\cdot \mathbf{n}=& ~z~~\text{ on}~\Omega , \label{givensys2}\end{aligned}$$and $\big(p(p^{\ast };{\mathbf{v}}^{\ast }),{\mathbf{v}}(p^{\ast };{\mathbf{v}}^{\ast })\big)\equiv (\overline{p},\overline{{\mathbf{v}}})\in L^{2}(\mathcal{O})\times \mathbf{V}_{0}$ is the solution of the problem $$\begin{aligned} \xi \overline{p}+\mathbf{U}\cdot \nabla \overline{p}+\frac{1}{2}\text{div}(\mathbf{U})\overline{p}+\text{div}\overline{{\mathbf{v}}}=& ~p^{\ast }~\text{ in }~~\mathcal{O} \label{insert} \\ \xi \overline{{\mathbf{v}}}+\mathbf{U}\cdot \nabla \overline{{\mathbf{v}}}+\frac{1}{2}\text{div}(\mathbf{U})\overline{{\mathbf{v}}}-\text{div}~\sigma (\overline{{\mathbf{v}}})+\eta \overline{{\mathbf{v}}}+\nabla \overline{p}=& ~{\mathbf{v}}^{\ast }~~\text{ in }~~\mathcal{O} \\ \left( \sigma ({\overline{{\mathbf{v}}}})\mathbf{n}-\overline{p}\mathbf{n}\right) \cdot \boldsymbol{\tau }=& ~0~~\text{ on }~\partial \mathcal{O} \\ \overline{{\mathbf{v}}}\cdot \mathbf{n}=& ~0~~\text{ on }~~S \\ \overline{{\mathbf{v}}}\cdot \mathbf{n}=& ~0~~\text{ on }~~\Omega . \label{insert1}\end{aligned}$$Therewith: if we multiply the structural PDE component (\[staticsys5\])—in solution variables $(p,\mathbf{v},w_{1},w_{2})$—by a given $z\in H_{0}^{2}(\Omega ),$ with associated fluid-pressure solution $(p(z),{\mathbf{v}}(z))$ of (\[huh\])–(\[givensys2\]), integrate by parts, and utilize the boundary conditions in the BVP (\[huh\])–(\[givensys2\]), we then have: $$\begin{aligned} \langle w_{2}^{\ast }+\xi w_{1}^{\ast },z\rangle _{\Omega }=& ~\xi ^{2}\langle w_{1},z\rangle _{\Omega }+\langle \Delta w_{1},\Delta z\rangle _{\Omega } \notag \\ & +\langle \sigma ({\mathbf{v}})\mathbf{n}-p\mathbf{n},{\mathbf{v}}(z)\rangle _{\partial \mathcal{O}} \notag \\ & \text{(after using (\ref{crucial})--(\ref{crucial3}) of Lemma \ref{staticwellp})} \notag \\ =& ~\xi ^{2}\langle w_{1},z\rangle _{\Omega }+\langle \Delta w_{1},\Delta z\rangle _{\Omega }+(\text{div}(\sigma ({\mathbf{v}}))-\nabla p,{\mathbf{v}}(z))_{\mathcal{O}} \notag \\ & +(\sigma ({\mathbf{v}}),\epsilon ({\mathbf{v}}(z)))_{\mathcal{O}}-(p,\text{div}({\mathbf{v}}(z)))_{\mathcal{O}} \notag \\ =& ~\xi ^{2}\langle w_{1},z\rangle _{\Omega }+\langle \Delta w_{1},\Delta z\rangle _{\Omega }+\xi ({\mathbf{v}},{\mathbf{v}}(z))_{\mathcal{O}}+(\mathbf{U}\cdot \nabla {\mathbf{v}},\mathbf{v}(z))_{\mathcal{O}} \notag \\ & +(\sigma ({\mathbf{v}}),\epsilon ({\mathbf{v}}(z)))_{\mathcal{O}}+\frac{1}{2}(\text{div}(\mathbf{U}){\mathbf{v}},{\mathbf{v}}(z))_{\mathcal{O}} \notag \\ & +\eta ({\mathbf{v}},{\mathbf{v}}(z))_{\mathcal{O}}-(p,\text{div}({\mathbf{v}}(z)))_{\mathcal{O}}-({\mathbf{v}}^{\ast },{\mathbf{v}}(z))_{\mathcal{O}}. \label{almost}\end{aligned}$$Now, using the first resolvent relation in (\[staticsys3.5\]) and invoking the respective solution maps for (\[huh\])–(\[givensys2\]) and (\[insert\])–(\[insert1\]), we may express the (prospective) solution component $(p,{\mathbf{v}})$ of (\[statice1\]) as $$\begin{aligned} {\mathbf{v}}& ={\mathbf{v}}(\xi w_{1}-w_{1}^{\ast };p^{\ast };\mathbf{v}^{\ast })={\mathbf{v}}(\xi w_{1}-w_{1}^{\ast })~+\overline{{\mathbf{v}}} \label{r1} \\ p& =~p(\xi w_{1}-w_{1}^{\ast };p^{\ast };\mathbf{v}^{\ast })=p(\xi w_{1}-w_{1}^{\ast })+\overline{p} \label{r2}\end{aligned}$$(cf. (\[d1\])–(\[d2\])).
With (\[almost\]) and (\[r1\])–(\[r2\]) in mind: if we define an operator $B\in \mathcal{L}(H_{0}^{2}(\Omega ),H^{-2}(\Omega ))$ as $$\begin{aligned} \big(B(w),z\big)\equiv & ~\xi ^{2}\langle w,z\rangle _{\Omega }+\langle \Delta w,\Delta z\rangle _{\Omega } \notag \\ & +\xi ^{2}({\mathbf{v}}(w),{\mathbf{v}}(z))_{\mathcal{O}}+\xi (\mathbf{U}\cdot \nabla {\mathbf{v}}(w),{\mathbf{v}}(z))_{\mathcal{O}}+\frac{\xi }{2}\big(\text{div}(\mathbf{U}){\mathbf{v}}(w),{\mathbf{v}}(z)\big)_{\mathcal{O}} \notag \\ & +\big(\xi \sigma ({\mathbf{v}}(w)),\epsilon ({\mathbf{v}}(z))\big)_{\mathcal{O}}+\eta \xi \big({\mathbf{v}}(w),{\mathbf{v}}(z)\big)_{\mathcal{O}}-\xi (p(w),\text{div}({\mathbf{v}}(z)))_{\mathcal{O}}, \label{AA}\end{aligned}$$– where $(p(w),{\mathbf{v}}(w))$ solves (\[huh\])–(\[givensys2\]) with $H_{0}^{2}(\Omega )$ boundary data $w$ – then finding a solution $w_{1}\in H_{0}^{2}(\Omega )$ of the structural PDE component (\[staticsys5\])–(\[staticsys6\]) is tantamount to finding a solution $w_{1}\in H_{0}^{2}(\Omega )$ of the variational equation $$\left\langle B(w_{1}),z\right\rangle _{(H^{-2}\times H_{0}^{2})(\Omega )}=\mathcal{F}(z)\text{, for all }z\in H_{0}^{2}(\Omega )\text{;} \label{vareq}$$where the functional $\mathcal{F}\in H^{-2}(\Omega )$ is given by $$\begin{aligned} \mathcal{F}(z)\equiv & ~\big({\mathbf{v}}^{\ast },{\mathbf{v}}(z)\big)_{\mathcal{O}}+\langle w_{2}^{\ast }+\xi w_{1}^{\ast },z\rangle _{\Omega } \\ & +\xi ({\mathbf{v}}(w_{1}^{\ast }),{\mathbf{v}}(z))_{\mathcal{O}}-\xi (\overline{{\mathbf{v}}},{\mathbf{v}}(z))_{\mathcal{O}} \\ & +(\mathbf{U}\cdot \nabla {\mathbf{v}}(w_{1}^{\ast }),{\mathbf{v}}(z))_{\mathcal{O}}-(\mathbf{U}\cdot \nabla \overline{{\mathbf{v}}},{\mathbf{v}}(z))_{\mathcal{O}} \\ & +\frac{1}{2}(\text{div}(\mathbf{U}){\mathbf{v}}(w_{1}^{\ast }),{\mathbf{v}}(z))_{\mathcal{O}}-\frac{1}{2}(\text{div}(\mathbf{U})\overline{{\mathbf{v}}},{\mathbf{v}}(z))_{\mathcal{O}} \\ & +\eta ({\mathbf{v}}(w_{1}^{\ast }),{\mathbf{v}}(z))_{\mathcal{O}}-\eta (\overline{{\mathbf{v}}},{\mathbf{v}}(z))_{\mathcal{O}} \\ & +(\sigma ({\mathbf{v}}(w_{1}^{\ast })),\epsilon ({\mathbf{v}}(z)))_{\mathcal{O}}-(\sigma (\overline{{\mathbf{v}}}),\epsilon ({\mathbf{v}}(z)))_{\mathcal{O}} \\ & -(p(w_{1}^{\ast }),\text{div}({\mathbf{v}}(z)))_{\mathcal{O}}+(\overline{p},\text{div}({\mathbf{v}}(z)))_{\mathcal{O}}.\end{aligned}$$ Recall that by Lemma \[staticwellp\](iv), one has the following estimate for the pressure term in (\[AA\]), for $\xi =\xi (\mathbf{U})$ large enough: $$\left\Vert p(w)\right\Vert_{L^2(\mathcal{O})} \leq \frac{C}{\xi }\left\Vert w\right\Vert _{H_{0}^{2}(\Omega )}\text{, \ \ for all }w\in H_{0}^{2}(\Omega ).$$By means of this estimate and Korn’s inequality, we will have, in a manner analogous to that in the proof of Lemma \[staticwellp\], that the operator $B$ is $H_{0}^{2}(\Omega )$-elliptic, for $\xi =\xi (\mathbf{U})$ large enough. Thus we can use the Lax-Milgram Theorem to solve the variational equation (\[vareq\]), or what is the same, recover the solution component $w_{1}$ of the resolvent equations (\[staticsys5\])–(\[staticsys6\]). In turn, we will have $$\begin{array}{c} w_{2}=~\xi w_{1}-w_{1}^{\ast }, \\ {\mathbf{v}}=~{\mathbf{v}}(w_{2})+\overline{{\mathbf{v}}}, \\ p=~p(w_{2})+\overline{p},\end{array}$$where again $[p(w_{2}),{\mathbf{v}}(w_{2})]$ is the solution to (\[huh\])–(\[givensys2\]), and $[\overline p,\overline{{\mathbf{v}}}]$ solves the system (\[insert\])–(\[insert1\]).
This finally establishes the range condition in (\[range\_0\]) for $\xi >0$ sufficiently large. A subsequent application of the Lumer-Phillips Theorem yields that $\widehat{\mathcal{A}}:D(\mathcal{A})\subset \mathcal{H}\rightarrow \mathcal{H}$ generates a contraction semigroup. As a consequence, an application of Theorem 1.1, p.76, in Chapter 3.1 of [@pazy] gives the desired result for the (unperturbed) compressible flow-structure generator $\mathcal{A}$.

Stationary Problem {#static}
==================

Since the stationary problem associated with a dissipative dynamical system is of interest when studying long-time behavior of solutions [@springer], we discuss the linear stationary problem associated with –. We briefly discuss the inclusion of nonlinearity in the plate for the stationary problem in Section \[nonlinear\]. (Such a discussion is in line with [@Chu2013-comp p.658].) Formally, we introduce the following problem: $$\begin{aligned} & \left\{ \begin{array}{l} \mathbf{U}\cdot \nabla p+\text{div}~\mathbf{u}=0~\text{ in }~\mathcal{O}\times (0,\infty ) \\ \mathbf{U}\cdot \nabla \mathbf{u}-\text{div}~\sigma (\mathbf{u})+\eta \mathbf{u}+\nabla p=0~\text{ in }~\mathcal{O}\times (0,\infty ) \\ (\sigma (\mathbf{u})\mathbf{n}-p\mathbf{n})\cdot \boldsymbol{\tau }=0~\text{ on }~\partial \mathcal{O}\times (0,\infty ) \\ \mathbf{u}\cdot \mathbf{n}=0~\text{ on }~\partial \mathcal{O}\times (0,\infty )\end{array}\right. \label{system1*} \\ & \notag \\ & \left\{ \begin{array}{l} \Delta ^{2}w+\left[ 2\nu \partial _{x_{3}}(\mathbf{u})_{3}+\lambda \text{div}(\mathbf{u})-p\right] _{\Omega }=0~\text{ on }~\Omega \times (0,\infty ) \\ w=\frac{\partial w}{\partial \nu }=0~\text{ on }~\partial \Omega \times (0,\infty )\end{array}\right. \label{IM2*}\end{aligned}$$(We note here, as in Remark \[weaker\], that the boundary condition $(\sigma (\mathbf{u})\mathbf{n}-p\mathbf{n})\cdot \boldsymbol{\tau }=0$—for $\boldsymbol{\tau }\in TH^{1/2}(\partial \mathcal{O})$—is to be interpreted in the sense of distributions.) Note that in terms of the fluid-structure generator $\mathcal{A}:D(\mathcal{A})\subset \mathcal{H}\rightarrow \mathcal{H}$, solving the PDE system (\[system1\*\])–(\[IM2\*\]) is equivalent to identifying an element in $Null(\mathcal{A})$.

Alternatively, as in [@Chu2013-comp], the problem (\[system1\*\])–(\[IM2\*\]) can be interpreted variationally. This is to say, we define a *weak solution* to – to be a triple $(p,\mathbf{u},w)\in L^{2}(\mathcal{O})\times \mathbf{V}_{0}\times H_{0}^{2}(\Omega )$, which must satisfy, for all $q\in W_{\mathbf{U}}$ and all $\boldsymbol{\psi }\in \mathbf{V}_{\Omega }$, $$\begin{aligned} (\text{div}~\mathbf{u},q)_{\mathcal{O}}& =~\int_{\mathcal{O}}(\text{div}~\mathbf{U})(pq)dx+\int_{\mathcal{O}}(\mathbf{U}\cdot \nabla q)pdx \label{variationally1} \\ a_{\mathcal{O}}(\mathbf{u},\boldsymbol{\psi })-(p,\text{div}~\boldsymbol{\psi })_{\mathcal{O}}=& ~-\int_{\mathcal{O}}(\mathbf{U}\cdot \nabla \mathbf{u})\boldsymbol{\psi }dx-(\Delta w,\Delta \beta )_{\Omega }, \label{variationally2}\end{aligned}$$where the bilinear form $a_{\mathcal{O}}(\cdot ,\cdot )$ is as given in (\[bi\]). We take:$$W_{\mathbf{U}}=\left\{ q\in L^{2}(\mathcal{O}):\mathbf{U}\cdot \nabla q\in L^{2}(\mathcal{O})\right\} . \label{W_U}$$$$\mathbf{V}_{\Omega }\equiv \left\{ \mathbf{v}=\mathbf{v}(\beta )\in \mathbf{H}^{1}(\mathcal{O})~:~\left[ \mathbf{v}\cdot \mathbf{n}\right] _{\partial \mathcal{O}}=\begin{cases} 0~\text{ on }~S \\ \beta \in H_{0}^{2}(\Omega )\text{ \ on \ }\Omega \end{cases}
\right\} . \label{V_omega}$$Note that by Theorem 5 of [@buffa], and extension by zero of $\beta \in H_{0}^{2}(\Omega )$, the space $\mathbf{V}_{\Omega }$ is well-defined on the Lipschitz geometry of $\mathcal{O}.$

We note that (\[variationally1\])–(\[variationally2\]) is a *natural* definition of a weak (variational) solution, in line with *weak* solutions [@Chu2013-comp] to the dynamic equations –. Indeed, we demonstrate that such weak solutions—**should they exist**—are classical solutions of the PDE system (\[system1\*\])–(\[IM2\*\]).

\[mark\] Suppose that $(p,\mathbf{u},w)\in L^{2}(\mathcal{O})\times \mathbf{V}_{0}\times H_{0}^{2}(\Omega )$ satisfies –. Then $(p,\mathbf{u},w)$ satisfies – almost everywhere, and in fact $\left[ p,\mathbf{u},w,0\right] \in Null(\mathcal{A})$.

Here, we essentially mimic the final part of the proof of Lemma 5.1. Firstly, in (\[variationally1\]) we consider $q\in \mathcal{D}(\mathcal{O})$. An invocation of Green’s Theorem then yields $$\begin{aligned} (\func{div}(\mathbf{u}),q)_{\mathcal{O}} &=&~\int_{\mathcal{O}}\func{div}(\mathbf{U})(pq)dx+0-\left( \int_{\mathcal{O}}\func{div}(\mathbf{U})(qp)dx+\int_{\mathcal{O}}(\mathbf{U}\cdot \nabla p)qdx\right) \\ &=&-\int_{\mathcal{O}}(\mathbf{U}\cdot \nabla p)qdx~~\text{ \ for every }q\in \mathcal{D}(\mathcal{O}).\end{aligned}$$The density of $\mathcal{D}(\mathcal{O})$ in $L^{2}(\mathcal{O})$ then yields that $$\mathbf{U}\cdot \nabla p+\func{div}(\mathbf{u})=0\text{ \ in the } L^{2}(\mathcal{O})\text{-sense} \label{r_1}$$(and so, with $\mathbf u \in \mathbf V_0$, in particular, $p\in W_{\mathbf{U}}$ of (\[W\_U\])). Secondly, if in (\[variationally2\]) we take a test function $\mathbf{\psi }\in \left[ \mathcal{D}(\mathcal{O})\right] ^{3}$, then we infer, upon integration by parts, that $$-\func{div}\sigma (\mathbf{u})+\eta {\mathbf u}+\nabla p+\mathbf{U}\cdot \nabla \mathbf{u}=0\text{ \ in the }\mathbf{L}^{2}(\mathcal{O})\text{-sense.} \label{r_2}$$Subsequently, from (\[r\_2\]), the fact that $\left\{ p,\mathbf{u}\right\} \in L^{2}(\mathcal{O})\times \mathbf{H}^{1}(\mathcal{O})$, and by an integration by parts, we can (as before) assign a meaning to the boundary trace term $\left[ \sigma (\mathbf{u})\mathbf{n}-p\mathbf{n}\right] _{\partial \mathcal{O}}$, viz.,$$\sigma (\mathbf{u})\mathbf{n}-p\mathbf{n}\in \mathbf{H}^{-\frac{1}{2}}(\partial \mathcal{O}). \label{r_3}$$ Applying this boundary trace to the relation (\[variationally2\]) with a test function $\mathbf{\psi }\in \mathbf{V}_{0}\subset \mathbf{V}_{\Omega }$, upon an integration by parts (and considering (\[r\_2\])) we make the inference $$\left\langle \sigma (\mathbf{u})\mathbf{n}-p\mathbf{n},\mathbf{\psi }\right\rangle _{\partial \mathcal{O}}=0\text{ \ for every }\mathbf{\psi }\in \mathbf{V}_{0}. \label{r_3.5}$$Thus, the following tangential boundary condition is satisfied:$$\left[ \sigma (\mathbf{u})\mathbf{n}-p\mathbf{n}\right] \cdot \boldsymbol{\tau }=0\text{ \ for all }\boldsymbol{\tau }\in TH^{1/2}(\partial \mathcal{O}). \label{r_4}$$(See again Remark \[weaker\].) Thirdly, with respect to (\[variationally2\]), we have, upon integration by parts in the variational relation and an invocation of (\[r\_2\]),$$\left\langle \sigma (\mathbf{u})\mathbf{n}-p\mathbf{n},\mathbf{\psi }\right\rangle _{\partial \mathcal{O}}=-(\Delta w,\Delta \beta )_{\Omega }\text{ \ for every }\mathbf{\psi }\in \mathbf{V}_{\Omega }.
\label{r_5}$$In particular, if, for given $\beta \in H_{0}^{2}(\Omega )$, we set $$\mathbf{\psi }=\left\{ \begin{array}{c} 0\text{ \ on }S\text{,} \\ \beta \mathbf{n}\text{ \ on }\Omega \end{array}\right., \label{r_6}$$then this $\mathbf{\psi }\in \mathbf{V}_{\Omega }$ (see e.g., Theorem 3.33 of [@Mc]). Applying this test function in (\[r\_6\]) to (\[r\_5\])—and using $\left. \mathbf{n}\right\vert _{\Omega }=\left[ 0,0,1\right]$—we have $$\left( \left[ 2\nu \partial _{x_{3}}(\mathbf{u})_{3}+\lambda \text{div}(\mathbf{u})-p\right] _{\Omega },\beta \right) _{\Omega }=-(\Delta w,\Delta \beta )_{\Omega }\text{ \ for every }\beta \in H_{0}^{2}(\Omega ).$$In particular, this holds for ${\beta }\in \mathcal{D}(\Omega )$, whence we obtain$$\Delta ^{2}w+\left[ 2\nu \partial _{x_{3}}(\mathbf{u})_{3}+\lambda \text{div}(\mathbf{u})-p\right] _{\Omega }=0\text{ in the }L^{2}(\Omega )\text{-sense.} \label{r_7}$$ Upon collecting (\[r\_1\]), (\[r\_2\]), (\[r\_4\]) and (\[r\_7\]), we then have that the variational solution $\left[ p,\mathbf{u},w\right] $ of (\[variationally1\])–(\[variationally2\]) solves the coupled PDE system (\[system1\*\])–(\[IM2\*\]) a.e. (Note that since the solution component $\mathbf{u}\in \mathbf{V}_{0}$, its normal component on $\partial \mathcal{O}$ is zero.)

We note that for any $\boldsymbol{\psi }$ and $\beta $ as above, we can utilize the weak identities to see that a stationary solution of (\[variationally1\])–(\[variationally2\]) must satisfy, for all $q\in W_{\mathbf{U}}$ and $\mathbf{\psi }\in \mathbf{V}_{\Omega }$, $$\begin{aligned} a_{\mathcal{O}}(\mathbf{u},\boldsymbol{\psi })-(p,\text{div}~\boldsymbol{\psi })_{\mathcal{O}}+& \big(\Delta w,\Delta \beta \big)_{\Omega }+(\text{div}~\mathbf{u},q) \label{tester} \\ =& -\int_{\mathcal{O}}(\mathbf{U}\cdot \nabla \mathbf{u})\boldsymbol{\psi }dx+~\int_{\mathcal{O}}(\text{div}~\mathbf{U})(pq)dx+\int_{\mathcal{O}}(\mathbf{U}\cdot \nabla q)pdx. \notag\end{aligned}$$Since $p\in W_{\mathbf{U}}$ (as we saw in the proof of Lemma \[mark\]), we may choose $q=p$ and $\boldsymbol{\psi }=\mathbf{u}\in \mathbf{V}_{0}$ in (\[tester\]) and invoke Green’s theorem and the divergence theorem as above, to see that $$a_{\mathcal{O}}(\mathbf{u},\mathbf{u})=\frac{1}{2}\int_{\mathcal{O}}(\text{div}~\mathbf{U})[|p|^{2}+|\mathbf{u}|^{2}]dx. \label{zero}$$ As discussed above, it is clear that the long-time behavior (stability) properties of solutions to the dynamics – depend on the structure of the flow field $\mathbf{U}$. Thus, from (\[zero\]) it is not an unwelcome assumption (see [@preprint]) to consider a divergence-free flow field $\mathbf{U}$. This yields the following theorem.

\[weakchar\] If the ambient vector field $\mathbf{U}$ is divergence free, a weak solution to (\[variationally1\])-(\[variationally2\]) (or equivalently, a solution to –) will be a triple of the form $(c,\mathbf{0},w_{c})$, with $c$ a constant, and where $w_{c}\in H_{0}^{2}(\Omega )$ solves the boundary value problem $$\Delta ^{2}w=c~~\text{ in }~~\Omega \text{, \ }w=\nabla w=0\text{ \ on }\partial \Omega \text{.} \label{bvp_2}$$ If $\text{div}(\mathbf{U})=0$, then from (\[zero\]) and Korn’s Inequality we have that $\mathbf{u}=0$. Subsequently, the fluid equation in (\[system1\*\]) gives $\nabla p=0$, and so $p=c$. In turn, the structural equation in (\[IM2\*\]) with such $\left\{ \mathbf{u},p\right\} $ becomes (\[bvp\_2\]).
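We remark, purely as an illustration (this plays no role in the analysis), that (\[bvp\_2\]) is the classical clamped plate problem under a constant load. For instance, in the special case where $\Omega$ is a disk of radius $R$ centered at the origin, a direct computation shows that $$w_{c}(\mathbf{x})=\frac{c}{64}\left( R^{2}-|\mathbf{x}|^{2}\right) ^{2},\qquad \mathbf{x}\in \Omega ,$$satisfies $\Delta ^{2}w_{c}=c$ in $\Omega$ together with $w_{c}=\partial _{\nu }w_{c}=0$ on $\partial \Omega$. For general $\Omega$, $w_{c}$ is simply $c$ times the solution of (\[bvp\_2\]) with right-hand side equal to $1$, by linearity.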
Plate Nonlinearity {#nonlinear}
==================

The treatment of semilinear, cubic-type nonlinearities in fluid-plate problems has become popular (see the surveys [@berlin11; @dcds] and references therein). In this section we demonstrate well-posedness of mild solutions to the dynamic problem, as well as discuss the stationary problem, *in the presence of the scalar von Karman nonlinearity* [@springer].

We begin with some basic facts about the von Karman nonlinearity, introduced in –. The first of these is the *local Lipschitz* property of $f$ from $H_{0}^{2}(\Omega )$ to $L^{2}(\Omega )$. In order to avail ourselves of this Lipschitz continuity for von Karman plates, we further assume that the bounded domain $\Omega \subset \mathbb{R}^{2}$ is sufficiently smooth. This property relies on the so-called sharp regularity of the Airy stress function: Corollary 1.4.4 in [@springer]. To begin, one has the estimate$$\Vert \left( \Delta _{D}^{2}\right) ^{-1}[u,w]\Vert _{W^{2,\infty }(\Omega )}\leq C\Vert u\Vert _{2,\Omega }\Vert w\Vert _{2,\Omega }, \label{bi_D}$$where $\Delta _{D}^{2}$ denotes the biharmonic operator with clamped boundary conditions. With $v(w)=v(w,w)$ in (\[airy-1\]), the estimate (\[bi\_D\]) above yields $$\Vert v(w)\Vert _{W^{2,\infty }(\Omega )}\leq C\Vert w\Vert _{2,\Omega }^{2},$$which, in turn, yields the estimate $$\Vert \lbrack u_{1},v(u_{1})]-[u_{2},v(u_{2})]\Vert _{L^{2}(\Omega )}\leq C\Big(\Vert u_{1}\Vert _{H_{0}^{2}(\Omega )}^{2}+\Vert u_{2}\Vert _{H_{0}^{2}(\Omega )}^{2}\Big)\Vert u_{1}-u_{2}\Vert _{H_{0}^{2}(\Omega )} \label{airy-lip}$$(see Corollary 1.4.5 in [@springer]). Thus, the nonlinearity $f(w)=[w,v(w)+F_{0}]$ is locally Lipschitz from $H_{0}^{2}({\Omega })$ into $L^{2}(\Omega )$.

The second critical property of the nonlinearity involves the existence of a potential energy functional $\Pi$ associated with $f$. In the case of the von Karman nonlinearity, it has the form $$\Pi(w)=\frac14\int_{\Omega}\Big(|\Delta v(w)|^2 -2w[w,F_0] \Big) dx,$$ and, moreover, $\Pi$ is a $C^1$-functional on $H^2_0(\Omega)$ such that $-f$ is the Fréchet derivative of $\Pi$: $-f(w)=\Pi^{\prime }(w)$. From this it follows that for a smooth function $w$: $$\dfrac{d}{dt}\Pi(w)=(\Pi^{\prime }(w),w_t)=-(f(w),w_t)_{\Omega}.$$ Moreover, $\Pi(\cdot)$ is locally bounded on $H^2_0(\Omega)$, and there exist $\eta<1/2$ and $C\ge 0$ such that $$\label{8.1.1c1} \eta \|\Delta w\|_{\Omega}^2 +\Pi(w)+C \ge 0\;,\quad \forall\, w\in H^2_0(\Omega)\;.$$ The latter fact follows from the bound [@springer Chapter 1.4] $$||w||^2_{\theta} \le \epsilon\big[ ||\Delta w||^2+||\Delta v(w)||^2\big]+C_{\epsilon},~~\theta \in [0,2).$$ We note that the Berger and Kirchhoff nonlinearities, discussed, for instance, in [@supersonic; @Chu2013-comp], also satisfy these properties: (i) they are locally Lipschitz from $H_0^2(\Omega)$ to $L^2(\Omega)$, and (ii) they admit a $C^1$ antiderivative $\Pi$ with the properties above.

Nonlinear Dynamic Problem
-------------------------

We now address the system –\[IC\_2\], taken with active plate nonlinearity: $$w_{tt}+\Delta ^{2}w+\left[ 2\nu \partial _{x_{3}}(\mathbf{u})_{3}+\lambda \text{div}(\mathbf{u})-p\right] _{\Omega }=[w,v(w)+F_{0}]~\text{ on }~\Omega \times (0,\infty ).$$We will show the well-posedness of *mild solutions* (in the sense of [@pazy]) in the presence of the von Karman nonlinearity.
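For the reader’s convenience, we recall the standard coordinate expression of the von Karman bracket appearing above (see, e.g., [@springer]); writing $(x_{1},x_{2})$ for the in-plane coordinates on $\Omega$, $$[u,v]=\partial _{x_{1}}^{2}u\,\partial _{x_{2}}^{2}v+\partial _{x_{2}}^{2}u\,\partial _{x_{1}}^{2}v-2\,\partial _{x_{1}}\partial _{x_{2}}u\,\partial _{x_{1}}\partial _{x_{2}}v,$$with the Airy stress function $v(w)$ determined from $w$ through the clamped biharmonic problem referenced in (\[airy-1\]).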
To this end, we define a nonlinear operator $\mathcal{F}:\mathcal{H}\rightarrow \mathcal{H}$, given by $$\mathcal{F}\big(\lbrack p,\mathbf{u},w_{1},w_{2}]\big)=[0,\mathbf{0},0,f(w_{1})].$$This mapping is locally Lipschitz (by the properties of $f$ above), and thus will be considered as a perturbation to the linear fluid-structure Cauchy problem which is modeled by the generator $\mathcal{A}:\mathcal{H}\rightarrow \mathcal{H}$. In particular, we have the abstract problem in the variable $\mathbf{y}\in \mathcal{H}$:$$\begin{aligned} \mathbf{y}^{\prime }=& ~\mathcal{A}\mathbf{y}+\mathcal{F}(\mathbf{y}) \label{Cauchy2} \\ \mathbf{y}(0)=& ~\mathbf{y}_{0}\in \mathcal{H}. \label{Cauchy3}\end{aligned}$$

\[th:nonlin\] The nonlinear Cauchy problem in (\[Cauchy2\])–(\[Cauchy3\]) is well-posed in the sense of mild solutions. This is to say: there is a unique local-in-time mild solution $\mathbf{y}(t)$ on $t\in \lbrack 0,t_{\text{max}})$ (which is also a weak solution). Moreover, for $\mathbf{y}_{0}\in {D}(\mathcal{A})$, the corresponding solution is strong. In either case, when $t_{\text{max}}(\mathbf{y}_{0})<\infty $, we have that $||\mathbf{y}(t)||_{\mathcal{H}}\rightarrow \infty $ as $t\nearrow t_{\text{max}}(\mathbf{y}_{0})$.

With the fluid-structure semigroup $e^{\mathcal{A}t}$ in hand from Theorem \[wellp\], this is a direct application of Theorem 1.4 [@pazy p.185] and a localized version of Theorem 1.6 [@pazy p.189], pertaining to locally Lipschitz perturbations of semigroup dynamics.

In order to guarantee global solutions, i.e., valid solutions of (\[Cauchy2\])–(\[Cauchy3\]) on $[0,T]$ for any $T>0$, we must utilize the “good” structure of $\Pi $. The energy identity, in the presence of nonlinearity (i.e., when the plate equation includes the term $f(w)$), is obtained in a standard way using the properties of $\Pi $ (see [@supersonic; @Chu2013-comp]). Consider $\mathbf{y}_{0}=(p_{0},\mathbf{u}_{0},w_{0},w_{1})\in \mathcal{H}$ and $\mathbf{U}\in \mathbf{V}_{0}$. Any mild solution corresponding to (\[Cauchy2\])–(\[Cauchy3\]) satisfies: $$\begin{aligned} \mathcal{E}\big(p(t),\mathbf{u}(t),w(t),w_{t}(t)\big)+\int_{0}^{t}a_{\mathcal{O}}(\mathbf{u}(\tau ),\mathbf{u}(\tau ))d\tau +\Pi (w(t))=& ~\mathcal{E}\big(p_{0},\mathbf{u}_{0},w_{0},w_{1}\big)+\Pi (w(0)) \label{balancelaw*} \\ & +\frac{1}{2}\int_{0}^{t}\int_{\mathcal{O}}\text{div}(\mathbf{U})[|p(\tau )|^{2}+|\mathbf{u}(\tau )|^{2}]dxd\tau . \notag\end{aligned}$$From this a priori relation, Gronwall’s inequality, and the bound on $\Pi$ in (\[8.1.1c1\]), we have the final (nonlinear) well-posedness theorem.

\[final\] For any $T>0$, the Cauchy problem (\[Cauchy2\])-(\[Cauchy3\]) is well-posed on $\mathcal{H}$ over $[0,T]$. This is to say that the PDE problem in –, taking into account the nonlinear plate equation , is well-posed in the sense of mild solutions. Moreover, in the case of $\text{div}~\mathbf{U}\equiv 0$, we have the *global-in-time* estimate for solutions: $$\label{globalbound} \sup_{t \in [0,\infty)} \mathcal{E}(p(t),\mathbf{u}(t),w(t),w_t(t)) \le \mathbf{C}(p_0,\mathbf{u}_0,w_0,w_1, F_0).$$ The proof follows a standard tack, and is along the lines of [@Chu2013-comp] (in the case of these dynamics, taken with $\mathbf U=0$), or [@supersonic; @webster] (for the case of compressible, inviscid gas dynamics). See also [@springer Chapter 2.3] for an abstract discussion of nonlinear second order evolutions with locally Lipschitz perturbations.

Nonlinear Stationary Problem
----------------------------

We now briefly mention the nonlinear stationary problem in the case when $\text{div}~\mathbf{U}\equiv 0$.
As noted above (and, as is evident from and ), this is the primary case of interest for *long-time behavior* analysis of the dynamics –. We note that the analysis above in Section \[static\] obtains identically in the presence of plate nonlinearity. Thus, with $\mathbf{U}$ divergence free as above, we have the equivalence of weak solutions to the system $$\begin{aligned} & \left\{ \begin{array}{l} \mathbf{U}\cdot \nabla p+\text{div}~\mathbf{u}=0~\text{ in }~\mathcal{O}\times (0,\infty ) \\ \mathbf{U}\cdot \nabla \mathbf{u}-\text{div}~\sigma (\mathbf{u})+\eta \mathbf{u}+\nabla p=0~\text{ in }~\mathcal{O}\times (0,\infty ) \\ (\sigma (\mathbf{u})\mathbf{n}-p\mathbf{n})\cdot \boldsymbol{\tau }=0~\text{ on }~\partial \mathcal{O}\times (0,\infty ) \\ \mathbf{u}\cdot \mathbf{n}=0~\text{ on }~S\times (0,\infty ) \\ \mathbf{u}\cdot \mathbf{n}=0~\text{ on }~\Omega \times (0,\infty )\end{array}\right. \label{system1***} \\ & \notag \\ & \left\{ \begin{array}{l} \Delta ^{2}w+\left[ 2\nu \partial _{x_{3}}(\mathbf{u})_{3}+\lambda \text{div}(\mathbf{u})-p\right] _{\Omega }=[w,v(w)+F_{0}]~\text{ on }~\Omega \times (0,\infty ) \\ w=\frac{\partial w}{\partial \nu }=0~\text{ on }~\partial \Omega \times (0,\infty )\end{array}\right. \label{IM2***}\end{aligned}$$and the following biharmonic problem: Find $w\in H_{0}^{2}(\Omega )$ such that $$(\Delta w,\Delta v)_{\Omega }-(f(w),v)_{\Omega }=(c,v)_{\Omega },~~\forall ~~v\in H_{0}^{2}(\Omega ). \label{nonstat}$$(Note that this reduction is equivalent to that in [@Chu2013-comp].) With property (\[8.1.1c1\]) of $\Pi $, it is well known that (for a given $c$) the solutions to (\[nonstat\]), denoted by $\mathcal{N}_{c}$, form a nonempty, compact set in $H_{0}^{2}(\Omega )$ [@Chu2013-comp; @ChuRyz2011; @springer][^9]. This leaves us with the final theorem:

Assume $\text{div}~\mathbf{U}\equiv 0$. Weak solutions to – are fully characterized by points of the form: $$(c,\mathbf{0},w_c),~~w_c \in \mathcal{N}_c \subset \subset H_0^2(\Omega).$$

Acknowledgments
===============

The authors would like to thank both anonymous referees for their careful reading of the paper and for thoughtful feedback which improved its quality. The first and third authors would like to thank the National Science Foundation, and acknowledge their partial funding from NSF Grants DMS-1211232 and DMS-1616425 (G. Avalos) and NSF Grant DMS-1504697 (J.T. Webster).

Appendix
========

For the reader’s convenience, we provide an explicit proof for the well-posedness of the (uncoupled) pressure equation (\[above\]).

\[dV\] (See [@dV] and [@LaxPhil]) Consider the linear equation $$k\lambda +\mathbf{v}\cdot \nabla \lambda +\frac{1}{2}\lambda \func{div}(\mathbf{v})=G\text{ \ in }\mathcal{O}, \label{linear}$$(as before, $\mathcal{O}\subset \mathbb{R}^{3}$ is a bounded, convex domain). Here we assume $k>0$ and $G\in L^{2}(\mathcal{O})$. Moreover, the fixed vector field $\mathbf{v}$ in (\[linear\]) is in $\mathbf{H}^{3}(\mathcal{O})$ and further satisfies:$$\begin{array}{l} \text{(i) }\mathbf{v}\cdot \mathbf{n}=0\text{ \ on }\partial \mathcal{O}\text{;} \\ \text{(ii) }2\left\Vert \nabla \mathbf{v}\right\Vert _{L^{\infty }(\mathcal{O})}+\frac{C_{S}}{2}\text{meas}(\mathcal{O})^{\frac{1}{6}}\left\Vert \Delta \func{div}(\mathbf{v})\right\Vert _{\mathcal{O}}\leq k,\end{array} \label{v_a}$$where $C_{S}>0$ is a constant which gives rise to the Sobolev embedding inequality,$$\left\Vert f\right\Vert _{L^{6}(\mathcal{O})}\leq \sqrt{C_{S}}\left\Vert \nabla f\right\Vert _{\mathcal{O}}\ \text{ for all }f\in H_{0}^{1}(\mathcal{O}).
\label{sob}$$Then for given $G\in L^{2}(\mathcal{O})$, there exists a unique $\lambda \in L^{2}(\mathcal{O})$ which is a weak solution of (\[linear\]). Moreover, the solution satisfies the bound $$\Vert \lambda \Vert _{\mathcal{O}}\leq \dfrac{1}{k}\Vert G\Vert _{\mathcal{O}}. \label{est}$$By the $L^{2}(\mathcal{O})$-function $\lambda $ being a weak solution of (\[linear\]), we mean that it satisfies the following variational relation:$$k\int_{\mathcal{O}}\lambda \varphi d\mathcal{O}-\int_{\mathcal{O}}\lambda (\mathbf{v}\cdot \nabla \varphi )d\mathcal{O}-\frac{1}{2}\int_{\mathcal{O}}\lambda \func{div}(\mathbf{v})\varphi d\mathcal{O}=\int_{\mathcal{O}}G\varphi d\mathcal{O}\text{ \ for every }\varphi \in H^{1}(\mathcal{O}). \label{var}$$

In large part, the present proof is taken from that of Appendix I of [@dV] (on p.541)—see also [@LaxPhil]—which however was undertaken on the assumption that the geometry $\mathcal{O}$ is smooth. Accordingly, adjustments are made here for a general convex domain $\mathcal{O}$, as well as for the perturbation $\frac{\lambda }{2}\func{div}(\mathbf{v})$. Given $\epsilon >0$, we let $\lambda _{\epsilon }$ denote the solution of the following regularized boundary value problem:$$\left\{ \begin{array}{l} -\epsilon \Delta \lambda _{\epsilon }+k\lambda _{\epsilon }+\mathbf{v}\cdot \nabla \lambda _{\epsilon }+\frac{1}{2}\lambda _{\epsilon }\func{div}(\mathbf{v})=G\text{ \ in }\mathcal{O}\text{,} \\ \left. \lambda _{\epsilon }\right\vert _{\partial \mathcal{O}}=0.\end{array}\right. \label{bvp}$$Assume initially that the data $G\in H_{0}^{1}(\mathcal{O})$. We note that the Lax-Milgram Theorem ensures the existence of the $H_{0}^{1}(\mathcal{O})$-function $\lambda _{\epsilon }$ which solves (\[bvp\]): Indeed, multiplying the left-hand side of the equation in (\[bvp\]) by the solution variable $\lambda _{\epsilon }$, integrating, and then integrating by parts, we have $$\begin{aligned} &&\epsilon \left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}+k\left\Vert \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}-\frac{1}{2}\int_{\mathcal{O}}\func{div}(\mathbf{v})\left\vert \lambda _{\epsilon }\right\vert ^{2}d\mathcal{O}+\frac{1}{2}\int_{\mathcal{O}}\func{div}(\mathbf{v})\left\vert \lambda _{\epsilon }\right\vert ^{2}d\mathcal{O} \notag \\ &=&\epsilon \left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}+k\left\Vert \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}. \label{bvp2}\end{aligned}$$ Thus we infer $H_{0}^{1}(\mathcal{O})$-ellipticity for the bilinear form associated with the PDE (\[bvp\]). Subsequently, since the bounded domain $\mathcal{O}$ is convex, we can invoke Theorem 3.2.1.2, p.147, of [@grisvard] to conclude that the solution $\lambda _{\epsilon }\in H^{2}(\mathcal{O})\cap H_{0}^{1}(\mathcal{O})$. This extra regularity allows for a multiplication of both sides of (\[bvp\]) by the term $\Delta \lambda _{\epsilon }$, and a subsequent integration and integration by parts, so as to yield the relation$$-\epsilon \left\Vert \Delta \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}-k\left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}+\int_{\mathcal{O}}\Delta \lambda _{\epsilon }\left( \mathbf{v}\cdot \nabla \lambda _{\epsilon }\right) d\mathcal{O}+\frac{1}{2}\int_{\mathcal{O}}\func{div}(\mathbf{v})\lambda _{\epsilon }\Delta \lambda _{\epsilon }\,d\mathcal{O}=\int_{\mathcal{O}}G\Delta \lambda _{\epsilon }d\mathcal{O}.
\label{bvp3}$$To handle the third term on left hand side: Using classic vector field identities for the wave equation—see e.g., [@morawetz], [@chen], or [@trigg] p.459—we have $$\begin{aligned} \int_{\mathcal{O}}\Delta \lambda _{\epsilon }\left( \mathbf{v}\cdot \nabla \lambda _{\epsilon }\right) d\mathcal{O} &\mathcal{=}&\int_{\partial \mathcal{O}}\frac{\partial \lambda _{\epsilon }}{\partial \mathbf{n}}\mathbf{v}\cdot \nabla \lambda _{\epsilon }d\partial \mathcal{O-}\int_{\mathcal{O}}\left( \nabla \mathbf{v}\nabla \lambda _{\epsilon }\right) \cdot \nabla \lambda _{\epsilon }d\mathcal{O} \notag \\ &&+\frac{1}{2}\int_{\mathcal{O}}\left\vert \nabla \lambda _{\epsilon }\right\vert ^{2}\func{div}(\mathbf{v})d\mathcal{O}. \label{bvp4}\end{aligned}$$(In stating this relation, we are using assumption (i) of (\[v\_a\]).) With respect to the first term on right hand side of (\[bvp4\]): using Proposition 4, p.702 of [@buffa], and the fact that $\left. \lambda _{\epsilon }\right\vert _{\partial \mathcal{O}}=0$, we have on $\partial \mathcal{O}$ $$\begin{aligned} \left. \nabla \lambda _{\epsilon }\right\vert _{\partial \mathcal{O}} &=&\nabla _{\partial \mathcal{O}}(\left. \lambda _{\epsilon }\right\vert _{\partial \mathcal{O}})+\mathbf{n}\frac{\partial \lambda _{\epsilon }}{\partial \mathbf{n}} \\ &=&\mathbf{n}\frac{\partial \lambda _{\epsilon }}{\partial \mathbf{n}}.\end{aligned}$$(Above, $\nabla _{\partial \mathcal{O}}(\left. \lambda _{\epsilon }\right\vert _{\partial \mathcal{O}})\in \mathbf{L}^{2}(\partial \mathcal{O)} $ denotes the tangential gradient of $\left. \lambda _{\substack{ \epsilon \\ }}\right\vert _{\partial \mathcal{O}}$; see [@necas] or p.701 of [@buffa].) Applying this relation to (\[bvp4\]), and considering $\left. \mathbf{v}\cdot \mathbf{n}\right\vert _{\partial \mathcal{O}}=0$, we then have $$\int_{\mathcal{O}}\Delta \lambda _{\epsilon }\left( \mathbf{v}\cdot \nabla \lambda _{\epsilon }\right) d\mathcal{O=}-\int_{\mathcal{O}}\left( \nabla \mathbf{v}\nabla \lambda _{\epsilon }\right) \cdot \nabla \lambda _{\epsilon }d\mathcal{O}+\frac{1}{2}\int_{\mathcal{O}}\left\vert \nabla \lambda _{\epsilon }\right\vert ^{2}\func{div}(\mathbf{v})d\mathcal{O}. \label{bvp4.5}$$ For the fourth term on the left hand side of (\[bvp3\]): Using Green’s Theorem, we have $$\begin{aligned} \left( \lambda _{\epsilon }\func{div}(\mathbf{v}),\Delta \lambda _{\epsilon }\right) _{\mathcal{O}} &=&-\left( \nabla \lbrack \lambda _{\epsilon }\func{div}(\mathbf{v})],\nabla \lambda _{\epsilon }\right) _{\mathcal{O}} \notag \\ &=&-\int_{\mathcal{O}}\left\vert \nabla \lambda _{\epsilon }\right\vert ^{2}\func{div}(\mathbf{v})d\mathcal{O-}\left( \lambda _{\epsilon }\nabla \lbrack \func{div}(\mathbf{v})],\nabla \lambda _{\epsilon }\right) _{\mathcal{O}}. \label{bvp5.51}\end{aligned}$$A further integration by parts then yields$$\left( \lambda _{\epsilon }\func{div}(\mathbf{v}),\Delta \lambda _{\epsilon }\right) _{\mathcal{O}}=-\int_{\mathcal{O}}\left\vert \nabla \lambda _{\epsilon }\right\vert ^{2}\func{div}(\mathbf{v})d\mathcal{O-}\frac{1}{2}\int_{\mathcal{O}}\lambda _{\epsilon }^{2}\Delta \func{div}(\mathbf{v})d\mathcal{O}. 
\label{bvp5.53}$$ Applying the relations (\[bvp4.5\]) and (\[bvp5.53\]) to (\[bvp3\]), we then have $$\epsilon \left\Vert \Delta \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}+\kappa \left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}\mathcal{+}\int_{\mathcal{O}}\left( \nabla \mathbf{v}\nabla \lambda _{\epsilon }\right) \cdot \nabla \lambda _{\epsilon }d\mathcal{O}+\frac{1}{4}\int_{\mathcal{O}}\lambda _{\epsilon }^{2}\Delta \func{div}(\mathbf{v})d\mathcal{O=-}\int_{\mathcal{O}}G\Delta \lambda _{\epsilon }d\mathcal{O}\text{.} \label{bvp3.5}$$ Now, concerning the fourth term on left hand side:$$\left\vert \int_{\mathcal{O}}\lambda _{\epsilon }^{2}\Delta \func{div}(\mathbf{v})d\mathcal{O}\right\vert \leq \left\Vert \lambda _{\epsilon }^{2}\right\Vert _{\mathcal{O}}\left\Vert \Delta \func{div}(\mathbf{v})\right\Vert _{\mathcal{O}};$$and subsequently applying Hölder’s inequality with conjugates $p=3/2$ and $p^{\ast }=3$, we have then$$\begin{aligned} \left\vert \int_{\mathcal{O}}\lambda _{\epsilon }^{2}\Delta \func{div}(\mathbf{v})d\mathcal{O}\right\vert &\leq &meas(\mathcal{O})^{\frac{1}{6}}\left\Vert \lambda _{\epsilon }\right\Vert _{L^{6}(\mathcal{O)}}^{2}\left\Vert \Delta \func{div}(\mathbf{v})\right\Vert _{\mathcal{O}} \notag \\ &\leq &C_{S}meas(\mathcal{O})^{\frac{1}{6}}\left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}\left\Vert \Delta \func{div}(\mathbf{v})\right\Vert _{\mathcal{O}}, \label{bvp3.6}\end{aligned}$$where positive constant $C_{S}$ is that in (\[sob\]). Estimating the relation (\[bvp3.5\]), by means of (\[bvp3.6\]), we then obtain $$\epsilon \left\Vert \Delta \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}+\left( \kappa -\left[ \frac{C_{S}}{4}meas(\mathcal{O})^{\frac{1}{6}}\left\Vert \Delta \func{div}(\mathbf{v})\right\Vert _{\mathcal{O}}+\left\Vert \nabla \mathbf{v}\right\Vert _{L^{\infty }(\mathcal{O})}\right] \right) \left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}\leq \left\Vert \nabla G\right\Vert _{\mathcal{O}}\left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}};$$and so after using assumption (\[v\_a\])(ii), we arrive at$$\epsilon \left\Vert \Delta \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}+\frac{k}{2}\left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}^{2}\leq \left\Vert \nabla G\right\Vert _{\mathcal{O}}\left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}. \label{bvp5}$$From this estimate, we obtain $$\left\Vert \nabla \lambda _{\epsilon }\right\Vert _{\mathcal{O}}\leq \frac{2}{k}\left\Vert \nabla G\right\Vert _{\mathcal{O}}. \label{bvp6}$$In turn, applying this uniform bound to (\[bvp5\]), we have also$$\epsilon \left\Vert \Delta \lambda _{\epsilon }\right\Vert _{\mathcal{O}}\leq \sqrt{\frac{2\epsilon }{k}}\left\Vert \nabla G\right\Vert _{\mathcal{O}}.$$ Consequently, there exists a subsequence $\left\{ \lambda _{\epsilon }\right\} $ and function $\lambda \in H_{0}^{1}(\mathcal{O})$ such that $$\begin{array}{l} \text{(i) }\displaystyle \lim_{\epsilon \rightarrow 0}\lambda _{\epsilon }=\lambda \text{ \emph{weakly} in }H_{0}^{1}(\mathcal{O})\text{ and \emph{strongly} in }L^{2}(\mathcal{O}); \\ \text{(ii) }\displaystyle \lim_{\epsilon \rightarrow 0}\epsilon \Delta \lambda _{\epsilon }=0\end{array} \label{converge}$$ (see e.g., Theorem 3.27(ii), p.87, of [@Mc]). With the convergences above in hand, we multiply the (\[bvp\]) by test function $\varphi \in H^{1}(\mathcal{O})$, and integrate. (Recall that each $\lambda _{\epsilon }$ is a strong solution of (\[bvp\]).) 
This gives the relation $$\epsilon \int_{\mathcal{O}}\Delta \lambda _{\epsilon }\varphi d\mathcal{O}+k\int_{\mathcal{O}}\lambda _{\epsilon }\varphi d\mathcal{O-}\frac{1}{2}\int_{\mathcal{O}}\lambda _{\epsilon }\func{div}(\mathbf{v})\varphi d\mathcal{O-}\int_{\mathcal{O}}\lambda _{\epsilon }\mathbf{v}\cdot \nabla \varphi d\mathcal{O}=\int_{\mathcal{O}}G\varphi d\mathcal{O}\text{ \ for every }\varphi \in H^{1}(\mathcal{O}). \label{var2}$$Passing to the limit on left hand side, and invoking (\[converge\]) we have that strong $L^{2}$-limit $\lambda $ satisfies (\[var\]). Moreover, from the relations (\[bvp\]) and (\[bvp2\]) for the regularized problem, and the strong limit posted in (\[converge\]), we have the estimate$$k\left\Vert \lambda \right\Vert _{\mathcal{O}}\leq \left\Vert G\right\Vert _{\mathcal{O}}. \label{est_f}$$ In sum: we have justified the existence of a operator $\mathcal{L}_{0}$, say, which satisfies, for given $G\in H_{0}^{1}(\mathcal{O})$, $\mathcal{L}_{0}(G)=\lambda $, where $\lambda \in L^{2}(\mathcal{O})$ solves (\[var\]), and which yields the estimate $$\left\Vert \mathcal{L}_{0}G\right\Vert _{\mathcal{O}}\leq \frac{1}{k}\left\Vert G\right\Vert _{\mathcal{O}}, \label{inher}$$after using (\[est\_f\]). An extension by continuity now yields the solvability of equation (\[linear\]) for given $L^{2}$-data $G$, and this solvability is unique because of the inherent dissipativity in this equation. Lastly, the estimate (\[est\]) is just (\[inher\]). [99]{} Aoyama, R. and Kagei, Y., 2016. Spectral properties of the semigroup for the linearized compressible Navier-Stokes equation around a parallel flow in a cylindrical domain. *Advances in Differential Equations*, 21(3/4), pp.265–300. Aubin, J.P., 2011. *Applied functional analysis* (Vol. 47). John Wiley & Sons. Avalos, G. and Bucci, F., 2014. Exponential decay properties of a mathematical model for a certain fluid-structure interaction. In *New Prospects in Direct, Inverse and Control Problems for Evolution Equations* (pp.49–78). Springer International Publishing. Avalos, G. and Bucci, F., 2015. Rational rates of uniform decay for strong solutions to a fluid-structure PDE system. *Journal of Differential Equations*, 258(12), pp.4398–4423. Avalos, G. and Clark, T., 2014. A Mixed Variational Formulation for the Wellposedness and Numerical Approximation of a PDE Model Arising in a 3-D Fluid-Structure Interaction, *Evolution Equations and Control Theory*, 3(4), pp.557–578. Avalos, G. and Dvorak, M., 2008. A New Maximality Argument for a Coupled Fluid-Structure Interaction, with Implications for a Divergence Free Finite Element Method, *Applicationes Mathematicae*, 35(3), pp.259–280. Avalos, G. and Geredeli, P.G., 2017. Spectral analysis and uniform decay rates for a compressible flow-structure PDE model, *preprint*. Avalos G. and Triggiani R., 2007. The Coupled PDE System Arising in Fluid-Structure Interaction, Part I: Explicit Semigroup Generator and its Spectral Properties, *Contemporary Mathematics*, [440]{}, pp.15–54. Avalos G. and Triggiani R., 2009. Semigroup Wellposedness in The Energy Space of a Parabolic-Hyperbolic Coupled Stokes-Lamé PDE of Fluid-Structure Interactions, *Discrete and Continuous Dynamical Systems*, 2(3), pp.417–447. Bisplinghoff, R.L. and Ashley, H., 2013. *Principles of aeroelasticity*. Courier Corporation. Bolotin, V.V., 1963. *Nonconservative problems of the theory of elastic stability*. Macmillan. Buffa, A. and Geymonat, G., 2001. On traces of functions in $W^{2,p}(\Omega )$ for Lipschitz domains in R3. 
*Comptes Rendus de l’Académie des Sciences-Series I-Mathematics*, 332(8), pp.699–704. Buffa, A., Costabel, M. and Sheen, D., 2002. On traces for $\mathbf{H}(\func{curl},\Omega )$ in Lipschitz domains. *Journal of Mathematical Analysis and Applications*, 276(2), pp.845–867. Chen, G., 1979. Energy decay estimates and exact boundary-value controllability for the wave-equation in a bounded domain. *Journal de Mathématiques Pures et Appliquées*, 58(3), pp.249–273. Chorin, A.J. and Marsden, J.E., 1990. *A mathematical introduction to fluid mechanics* (Vol. 3). New York: Springer. Chueshov, I., 1999, Introduction to the Theory of Infinite-Dimensional Dissipative Systems. Acta, Kharkov, (in Russian); English translation: 2002, Acta, Kharkov. Chueshov, I., 2013, Personal communication. Chueshov, I., 2014. Dynamics of a nonlinear elastic plate interacting with a linearized compressible viscous fluid. *Nonlinear Analysis: Theory, Methods & Applications*, 95, pp.650–665. Chueshov, I., 2014. Interaction of an elastic plate with a linearized inviscid incompressible fluid. *Communications on Pure & Applied Analysis*, 13(5), pp.1459–1778. Chueshov, I., 2015. *Dynamics of Quasi-Stable Dissipative Systems*. New York: Springer. Chueshov, I. and Fastovska, T., 2016. On interaction of circular cylindrical shells with a Poiseuille type flow. *Evolution Equations & Control Theory*, 5(4), pp.605–629. Chueshov I. and Lasiecka I., 2010. *Von Karman Evolution Equations*. [Springer-Verlag]{}. Chueshov, I., Lasiecka, I. and Webster, J.T., 2013. Evolution semigroups in supersonic flow-plate interactions. *Journal of Differential Equations*, 254(4), pp.1741–1773. Chueshov, I., Lasiecka, I. and Webster, J.T., 2014. Attractors for Delayed, Nonrotational von Karman Plates with Applications to Flow-Structure Interactions Without any Damping. *Communications in Partial Differential Equations*, 39(11), pp.1965–1997. Chueshov, I., Lasiecka, I. and Webster, J.T., 2014. Flow-plate interactions: Well-posedness and long-time behavior. *Discrete & Continuous Dynamical Systems-Series S*, 7(5), pp.925–965. Chueshov, I. and Ryzhkova, I., 2013. Unsteady interaction of a viscous fluid with an elastic shell modeled by full von Karman equations. *Journal of Differential Equations*, 254(4), pp.1833–1862. Chueshov, I. and Ryzhkova, I., 2013. On the interaction of an elastic wall with a poiseuille-type flow. *Ukrainian Mathematical Journal*, 65(1), pp.158–177. Chueshov, I. and Ryzhkova, I., 2013. A global attractor for a fluid-plate interaction model. *Communications on Pure & Applied Analysis*, 12(4), pp.1635–1656. Chueshov, I. and Ryzhkova, I., 2011, September. Well-posedness and long time behavior for a class of fluid-plate interaction models. In *IFIP Conference on System Modeling and Optimization* (pp. 328–337). Springer Berlin Heidelberg. da Veiga, H.B., 1985. *Stationary Motions and Incompressible Limit for Compressible Viscous Fluids*, *Houston Journal of Mathematics*, 13(4), pp.527–544. E. Dowell, 2004. *A Modern Course in Aeroelasticity*. [Kluwer Academic Publishers]{}. Geredeli, P.G. and Webster, J.T., 2016. Qualitative results on the dynamics of a Berger plate with nonlinear boundary damping. *Nonlinear Analysis: Real World Applications*, 31, pp.227–256. Grisvard, P., 2011. *Elliptic problems in nonsmooth domains*. Society for Industrial and Applied Mathematics. Kato, T., 2013. *Perturbation theory for linear operators* (Vol. 132). Springer Science & Business Media. Kesavan, S., 1989. 
*Topics in functional analysis and applications*. Lasiecka, I. and Webster, J.T., 2016. Feedback stabilization of a fluttering panel in an inviscid subsonic potential flow. *SIAM Journal on Mathematical Analysis*, 48(3), pp.1848–1891. Lax, P.D. and Phillips, R.S., 1960. Local boundary conditions for dissipative symmetric linear differential operators. *Communications on Pure and Applied Mathematics*, 13(3), pp.427–455. McLean, W.C.H., 2000. *Strongly elliptic systems and boundary integral equations*. Cambridge university press. Morawetz, C.S., 1966. Energy identities for the wave equation, NYU Courant Institute, *Math. Sci. Res. Rep. No.* IMM 346. Nečas, 2012. Direct Methods in the Theory of Elliptic Equations (translated by Gerard Tronel and Alois Kufner), Springer, New York. Pazy, A., 2012. *Semigroups of linear operators and applications to partial differential equations* (Vol. 44). Springer Science & Business Media. Webster, J.T., 2011. Weak and strong solutions of a nonlinear subsonic flow-structure interaction: Semigroup approach. *Nonlinear Analysis: Theory, Methods & Applications*, 74(10), pp.3123–3136. Triggiani, R., 1989. Wave equation on a bounded domain with boundary dissipation: an operator approach. *Journal of Mathematical Analysis and applications*, 137(2), pp.438–461. Valli, A., 1987. On the existence of stationary solutions to compressible Navier-Stokes equations. In *Annales de l’IHP Analyse non linéaire* (Vol. 4, No. 1, pp.99–113). [^1]: University of Nebraska-Lincoln, [email protected] [^2]: Hacettepe University, Ankara, Turkey, and University of Nebraska-Lincoln, [email protected] [^3]: University of Maryland, Baltimore County, Maryland, [email protected] [^4]: The research of G. Avalos was partially supported by the NSF Grants DMS-1211232 and DMS-1616425. The research of J.T. Webster was partially supported by the NSF Grant DMS-1504697. [^5]: Throughout, we will assume the fluid is barotropic—the pressure depends only on the density. [^6]: In fact, the late author of [@Chu2013-comp]—to whom this work is dedicated—suggested in personal correspondence the precise model ([system1]{})–(\[IC\_2\]); [@Igor-note]. In this communication, he remarked that the approaches utilized in [@Chu2013-comp] with $\mathbf{U}\equiv 0$ were not amenable to the problem studied with $\mathbf{U}\neq \mathbf{0}$. He noted that a semigroup approach might be fruitful, and this comment provided an impetus for the present work. [^7]: The existence of an $\mathbf{H}^{1}(\mathcal{O})$-function $\widetilde{\mathbf{\mu }}_{0}$ with such a boundary trace on Lipschitz domain $\mathcal{O}$ is assured; see e.g., Theorem 3.33 of [@Mc], or see also the proof of Lemma \[staticwellp\] below. [^8]: We mention that one of the prominent tools utilized in these fluid-structure interactions—with nonlinearity present in the structure—is the recently developed *quasi-stability* theory for dissipative dynamical systems (see [@newigor; @springer]). [^9]: The structure of $\mathcal{N}_{c}$ is dependent upon the in-plane forcing $F_{0}$
--- abstract: 'The parameters governing the standard $\Lambda$ Cold Dark Matter cosmological model have been constrained with unprecedented accuracy by precise measurements of the cosmic microwave background by the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck satellites. Each new data release has refined further our knowledge of quantities – such as the matter density parameter $\Omega_{\rm M}$ – that are imprinted on the dark matter halo mass function (HMF), a powerful probe of dark matter and dark energy models. In this Letter we trace how changes in the cosmological parameters over the last decade have influenced uncertainty in our knowledge of the HMF. We show that this uncertainty has reduced significantly since the $3^{\rm rd}$ WMAP data release, but the rate of this reduction is slowing. This is limited by uncertainty in the normalisation $\sigma_8$, whose influence is most pronounced at the high mass end of the mass function. Interestingly, we find that the accuracy with which we can constrain the HMF in terms of the cosmological parameters has now reached the point at which it is comparable to the scatter in HMF fitting functions. This suggests that the power of the HMF as a precision probe of dark matter and dark energy hinges on more accurate determination of the theoretical HMF. Finally, we assess prospects of using the HMF to differentiate between Cold and Warm Dark Matter models based on ongoing improvements in measurements of $\Omega_{\rm M}$, and we comment briefly on optimal survey strategies for constraining dark matter and dark energy models using the HMF.' title: 'How Well Do We Know The Halo Mass Function?' --- Introduction {#sec:intro} ============ The halo mass function (hereafter HMF), which encodes the comoving number density of dark matter haloes in the Universe at a given epoch as a function of their mass, is a powerful probe of cosmology, dark matter and dark energy [@Press1974; @Jenkins2001; @Tinker2008; @Vikhlinin2009]. For example, the amplitude of the HMF on the scale of galaxy clusters at the present epoch may be used to deduce limits on the combination of the power spectrum normalisation $\sigma_8$ and the matter density parameter $\Omega_{\rm M}$ [@Vikhlinin2009; @Allen2011]. Similarly, the evolution of this amplitude over cosmic time may be used to characterise the dark energy equation of state $w_0$ [@Vikhlinin2009; @Allen2011]. Over the last decade, precise measurements of the cosmic microwave background (CMB) by the Wilkinson Microwave Anisotropy Probe [hereafter WMAP; cf. @Spergel2003] and Planck [@PlanckCollaboration2013] satellites have led to increasingly accurate estimates of key cosmological parameters such as $\Omega_{\rm M}$, by factors of $\sim$4 in most cases. It is therefore interesting to ask how our knowledge of the HMF has evolved over the same period. In this Letter, we estimate the uncertainty in the HMF assuming “best-bet” WMAP and Planck cosmological parameters, and we determine the independent parameters that are the primary sources of this uncertainty. This is of crucial importance because quantifying the significance of any deviation between an observationally derived HMF and one predicted within the standard cosmological framework is difficult without understanding the framework’s intrinsic uncertainties. Specifically, we determine uncertainties in the predicted HMF for a suite of flat $\Lambda$ Cold Dark Matter (hereafter $\Lambda$CDM) cosmologies, adopting a range of HMF fitting functions drawn from the literature.
We present the 68% error on the amplitude and slope of the HMF, given the reported errors on a number of input parameters; a comparison of errors due to uncertainty in cosmology with errors in the chosen fitting function; and an analysis of the parameters that provide the primary sources of uncertainty. Finally, we consider the sensitivity of the HMF to the assumed dark matter model by exploring the range of Warm Dark Matter (WDM) particle masses for which the HMF can be used to differentiate between $\Lambda$CDM and its $\Lambda$WDM alternatives. Methodology {#sec:method} =========== We calculate the HMF using the formalism of [@Press1974] and [@Bond1991]. This defines the HMF as the differential number density of haloes in logarithmic mass bins, $$\label{eq:hmf} \frac{dn}{d\log M} = \frac{\rho_0}{M} f(\sigma) \left|\frac{d\ln\sigma}{d\ln M}\right|,$$ where $\rho_0$ is the mean matter density of the universe; $\sigma$ is the mass variance at mass scale $M$; and the function $f(\sigma)$ differentiates between fitting functions. Many forms for $f(\sigma)$ have been proposed in the literature. The original form proposed by [@Press1974] is the only one derived completely analytically, but makes the simplifying assumption of collisionless spherical collapse. [@Sheth2001] proposed a form motivated by the more general assumption of ellipsoidal collapse in which three parameters were set by fitting to simulation data. To date, no other analytical form for $f(\sigma)$ has been derived; instead, its form is empirically derived by fitting to the abundance of haloes measured by halo finders in cosmological simulations. In this Letter we adopt many of these empirical fitting functions from the literature in addition to the forms of [@Press1974] and [@Sheth2001]. For clarity, we focus on the form of Sheth-Tormen unless otherwise stated; we note that results are qualitatively similar for all of the fitting functions we have considered. We calculate HMFs using the `hmf` code (cf. [github.com/steven-murray/hmf]{}), which is the backend of the `HMFcalc` web-application (cf. [hmf.icrar.org]{}); further details can be found in Murray, Power & Robotham (In Prep.). Our code uses `CAMB` [cf. [camb.info]{}; details in @Lewis2000] to produce a transfer function for a given input cosmology, and interpolates and integrates this function to compute the mass variance. Importantly, our code is optimized to quickly and easily update parameters of the calculation, which allows us to generate the large number of realisations necessary for this study. We use data spanning the last 7 years of cosmological parameters derived from the CMB – from WMAP3 [@Spergel2007], WMAP5 [@Komatsu2009], WMAP9 [@Hinshaw2012a] and Planck [@PlanckCollaboration2013]. We choose parameter sets that derive from isolated fits to the CMB, not using any extra data (such as BAOs or lensing) to ensure consistency across samples. In all cases we assume a flat $\Lambda$CDM geometry, consistent with the theoretical base model of the Planck results, and consider five parameters – the baryon and CDM densities combined with the hubble constant $\Omega_{\rm b}h^2$ and $\Omega_{\rm c}h^2$, the spectral index $n_s$, the Hubble parameter $H_0$ and the normalization $\sigma_8$. We constrain $H_0$ by adopting Eq. 11 of @PlanckCollaboration2013; this is possible because we can calculate directly the angular size of the sound horizon at last scattering, to better than 0.3% accuracy. 
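As a concrete point of reference for Eq. \[eq:hmf\], the short sketch below evaluates the mass function for the Sheth-Tormen form of $f(\sigma)$. It is an illustration only (it is not the `hmf` code itself); it assumes that $\sigma(M)$ and its logarithmic derivative have already been tabulated from a `CAMB` transfer function, and the quoted Sheth-Tormen parameter values are the commonly used ones.

```python
import numpy as np

# Commonly used Sheth-Tormen parameter values (assumed here for illustration)
A_ST, a_ST, p_ST, DELTA_C = 0.3222, 0.707, 0.3, 1.686

def f_sheth_tormen(sigma):
    """Sheth-Tormen multiplicity function f(sigma)."""
    nu = DELTA_C / sigma
    return (A_ST * np.sqrt(2.0 * a_ST / np.pi) * nu
            * (1.0 + (a_ST * nu**2) ** (-p_ST))
            * np.exp(-0.5 * a_ST * nu**2))

def dn_dlogm(m, sigma, dlnsigma_dlnm, rho0, f=f_sheth_tormen):
    """Evaluate Eq. (1): dn/dlogM = (rho0 / M) f(sigma) |dln(sigma)/dln(M)|.

    m             -- halo masses [Msun/h]
    sigma         -- mass variance sigma(M) from the linear power spectrum
    dlnsigma_dlnm -- logarithmic derivative of sigma with respect to M
    rho0          -- mean comoving matter density in matching units
    """
    return rho0 / m * f(sigma) * np.abs(dlnsigma_dlnm)
```

Passing a different function for `f` is then all that is needed to switch between the fitting functions compared below.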
In order to sample from the parameter distributions from each base cosmology, we use available Monte Carlo Markov chains (MCMC), randomly choosing 5000 realizations from each chain. This allows for robust sampling of the covariance between each parameter. We calculate the HMF for each realization, finding the resulting 68% uncertainty about the median for both the amplitude and slope. Note that we have checked our results for consistency with more basic approaches based on (i) variance in the parameters, in which parameter uncertainties are assumed to be uncoupled, and (ii) covariance between the parameters, in which parameter uncertainties are coupled but assumed to be Gaussian-distributed. Averaging over masses, we find that the maximum difference between the variance and MCMC approaches is 6% (for WMAP3), while the maximum difference between the covariance and MCMC approaches is 2% (for Planck). If we do not average over masses, we find that the variance approach underestimates the uncertainty in the HMF by $\sim 25\% (10\%)$ for all WMAP (Planck) datasets at high masses ($\sim 10^{15} h^{-1} {\rm M}_{\odot}$), whereas it is $\sim 2\%$ at most if we adopt the covariance approach. The amplitude uncertainty range is simply the 16$^{th}$ – $84^{th}$ quantile of the value of the HMF. Calculation of this quantity eliminates any information associated with the gradient of the mass function, which may be an important aspect in constraining cosmology. Therefore we also calculate the slope, as the arc-tangent of the discrete gradient (via the method of central differences) at each mass bin. For this analysis we use a mass bin width of 0.05 dex. Note that we perform our analysis assuming a “universal" form for the mass function, in which changes in cosmology are captured by changes in the mass variance $\sigma$. Recent work suggests that this assumption does not hold in detail and can lead to $\sim$10% uncertainties [cf. @Tinker2008; @Bhattacharya2011]. We do not explicitly account for this source of uncertainty in our analysis and we suggest that the reader allow for an additional 10% uncertainty in our quoted results to account for this. Results {#sec:results} ======= #### Effect of Choice of Fitting Function. {#effect-of-choice-of-fitting-function. .unnumbered} In Figure \[fig:hmf\] we show the variety of HMF fitting functions drawn from the literature – [@Press1974; @Sheth2001; @Jenkins2001; @Reed2003; @Warren2006; @Reed2007; @Tinker2008; @Crocce2010; @Courtin2010; @Bhattacharya2011; @Angulo2012] and [@Watson2012] – assuming a Planck cosmology. The scatter between fitting functions is noticeable and has been remarked upon already in the comprehensive comparison of halo-finders presented in [@Knebe2011]. Given this scatter, we wish to investigate two effects; (i) whether or not the fitting functions behave similarly under changes of cosmology, and (ii) whether or not the scatter in the fitting functions is the dominant source of uncertainty in the HMF for contemporary cosmological parameter sets. That is, whether the variation between fitting functions for a cosmology with no variance is greater than the variation between HMF’s of a single fitting function for a cosmology with non-zero variance. 
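The uncertainty bookkeeping described in the Methodology can be sketched in a few lines; the array names below are illustrative, and, since the text does not state it explicitly, the sketch assumes the gradient is taken in log-log space.

```python
import numpy as np

def hmf_band_and_slope(log10_m, dndlogm_samples):
    """68% amplitude band and slope statistic from sampled HMF realisations.

    log10_m         -- mass bins in log10(M), spaced by 0.05 dex as in the text
    dndlogm_samples -- array of shape (n_realisations, n_mass), one HMF per
                       parameter draw from the MCMC chain
    """
    # Amplitude: 16th/50th/84th percentiles of the HMF at each mass bin
    amp_lo, amp_med, amp_hi = np.percentile(dndlogm_samples, [16, 50, 84], axis=0)

    # Slope: arc-tangent of the discrete gradient (central differences)
    slope = np.arctan(np.gradient(np.log10(dndlogm_samples), log10_m, axis=1))
    slope_lo, slope_hi = np.percentile(slope, [16, 84], axis=0)

    return (amp_lo, amp_med, amp_hi), (slope_lo, slope_hi)
```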
In Figure \[fig:error\] we show the fractional error intrinsic to each parameter set; the Sheth-Tormen fitting function is shown by the solid lines, while the fractional error attributable to the choice of fitting function, estimated at the median value of the HMF for the given cosmology, is shown by the dashed lines. It is noticeable that the variance in earlier parameter sets (e.g. WMAP3) is dominated by uncertainty in the cosmological parameters; in contrast, the variance in the most recent parameter sets is comparable to the variation between fitting functions across the mass range considered. This implies that future constraints on cosmology using the HMF will be limited by uncertainties in our knowledge of the HMF fitting function itself, unless more robust non-parametric means are used. More encouraging is the similarity of the several dashed curves in Figure \[fig:error\], which imply that a change of cosmology (within reasonable ranges) does not greatly affect the behaviour of the general class of fitting functions[^1]; that is, each fitting function displays a similar trend with the change of cosmology. This derives from the fitting functions depending on cosmology only through its influence on the mass variance (cf. Eq. \[eq:hmf\]), and justifies our use of a single fitting function on which to base a general analysis of the cosmological parameter errors. Figure \[fig:comparison\] confirms this; it shows the normalized HMF for several cosmologies for three fitting functions. While there are noticeable differences between cosmologies, there is consistency of behaviour between the fitting functions. #### Amplitude of Errors in Standard Cosmologies. {#amplitude-of-errors-in-standard-cosmologies. .unnumbered} Figure \[fig:error\] shows that at low masses, the variation asymptotes to a constant value, while at high masses the error rises seemingly exponentially. Indeed, for early parameter sets, cluster-mass haloes had associated uncertainties of almost 70%. This increase in uncertainty at high masses reflects the sensitivity of the abundance of clusters to the value of $\sigma_8$, which has been noted previously [@Vikhlinin2009; @Allen2011] and will be discussed further. From the latest Planck measurements, we deduce a ‘rule-of-thumb’ of $\sim$6% uncertainty in both the amplitude and slope of the HMF on scales less than $10^{13}{\rm M}_{\sun}h^{-1}$, rising to $\sim$20% for the amplitude and $\sim$15% for the slope at scales of $10^{15}{\rm M}_{\sun}h^{-1}$. Furthermore, although we observe a decrease in uncertainty over the consecutive WMAP and Planck measurements, the rate of decrease is becoming smaller, especially at high masses. This occurs because of the dominance of $\sigma_8$, which is the primary contributor to the uncertainty at high masses (cf. Figure \[fig:contribution\]). Therefore, a significantly more constrained HMF will depend strongly on the $\sigma_8$ constraint. Maximum likelihood techniques applied to observed clusters may be the best way to constrain $\sigma_8$, given the strong sensitivity in this regime. Such a constraint could be obtained from clusters drawn from the XMM XXL X-ray survey, which is designed to identify galaxy clusters out to $z \sim 2$ with masses of $M_{\rm vir} \gtrsim 10^{14} {\rm M}_{\odot}$ in 50 square degrees of sky [cf. @Pierre2011]. #### Consistency of Parameter Sets. {#consistency-of-parameter-sets. 
.unnumbered} Figure \[fig:comparison\] shows the HMF, with error ranges (given by the 16$^{th}$ and 84$^{th}$ quantiles) for several parameter sets, normalized by the HMF of the mean parameters from WMAP1. This gives a visual clue as to the relative amplitude of the HMFs and their error, which reveals whether they are statistically consistent. We see that while Planck results are marginally consistent with WMAP9, they do not overlap with WMAP5 over the entire range of masses considered. This is driven by the inconsistency of $\Omega_{\rm c}h^2$ between these parameter sets. #### Effects of Single Parameters. {#effects-of-single-parameters. .unnumbered} To better understand the parameters that are most important in the variance of the HMF, we marginalise over all but one of the parameters at a time, calculating the uncertainty, and plot the fractional uncertainty of each parameter with respect to the total uncertainty (summed in quadrature) in Figure \[fig:contribution\]. These results are for the Planck parameter set. The dark matter density parameter $\Omega_{\rm c}h^2$ is the dominant source of uncertainty in both the amplitude and slope of the HMF except at high masses. $\Omega_{\rm c}h^2$ has the intuitive effect of increasing (decreasing) the amplitude of the HMF as it increases (decreases). At high masses, the effect of $\sigma_8$ is even more important than $\Omega_{\rm c}h^2$ because it regulates the amplitude of density perturbations. Most fitting functions contain an explicit inverse exponential dependence on the mass variance, which is linear in $\sigma_8$, so for large masses (where the mass variance is very small), the HMF is exponentially sensitive to this parameter. ![Contribution of single parameters to the overall variance. Each curve is the error of a single parameter as a fraction of the error summed in quadrature of all four parameters. Errors are taken from the Planck cosmology. $\Omega_{\rm c}h^2$ is the dominant term for most of the mass range, with $\sigma_8$ becoming extremely important at high mass.[]{data-label="fig:contribution"}](one_parameter_contribution.eps){width="\linewidth"} #### Application to Non-Standard Dark Matter Models. {#application-to-non-standard-dark-matter-models. .unnumbered} We expect the HMF to be sensitive to the nature of dark matter. Warm Dark Matter (WDM) models, in which the abundance of low-mass haloes is suppressed, provide a popular alternative to the CDM model and there has been much interest in them recently [e.g. @Smith2011; @Schneider2013]. The signature of WDM should be evident in the HMF on galaxy-mass scales and below, whereas most work on the HMF has focussed on group- to cluster-mass scales. This raises the question – what is the maximum mass scale for a given WDM model which can theoretically differentiate it from CDM given the current best cosmological parameters? Or conversely, how well constrained must the parameters be to differentiate reasonable WDM models at a certain mass? These questions are important for future efforts to detect a WDM signal in the HMF, such as, for example, the GAMA survey [@Driver2011; @Robotham2011] that aims to measure the HMF down to scales of $10^{12}{\rm M}_{\sun}h^{-1}$. To answer them, one needs to know the intrinsic uncertainty in the CDM mass function, and deduce where the particular WDM model becomes significantly different. 
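A minimal sketch of this comparison is given below; it assumes that the CDM 16th/84th percentile band and a WDM HMF have already been evaluated on a common mass grid (all names are illustrative).

```python
import numpy as np

def max_distinguishing_mass(m, cdm_lo, cdm_hi, wdm):
    """Largest mass at which a WDM HMF lies outside the CDM 68% band.

    m              -- mass grid [Msun/h] as a numpy array
    cdm_lo, cdm_hi -- 16th/84th percentile CDM HMF at each mass
    wdm            -- WDM HMF evaluated on the same grid
    Returns None if the WDM curve never leaves the band.
    """
    outside = (wdm < cdm_lo) | (wdm > cdm_hi)
    return m[outside].max() if outside.any() else None
```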
Figure \[fig:wdm\] shows the results of calculating the CDM HMF for the Planck parameters, with error region shaded, compared to equivalent WDM HMFs for candidate particle masses between 1 and 10 keV. Here we assume the transfer function of [@Bode2001] is applied to the CDM linear matter power spectrum to produce the WDM linear matter power spectrum, required to compute the mass variance $\sigma$ in Eq. \[eq:hmf\]. For a plausible WDM particle mass of 1 keV – in the sense that it is consistent with observational limits on the WDM particle mass – we find that the maximum mass scale at which the CDM and WDM models are inconsistent is $\sim$10$^{11} {\rm M}_{\sun}h^{-1}$. Increasing the particle mass should mimic a “colder” dark matter model and reduce the discrepancy with the CDM HMF, which is what the results show. ![Theoretical HMFs for WDM models with specified particle masses in keV. Overplotted is the error region for the Planck cosmology. This gives an indication of the maximum mass which can be used to differentiate a particular WDM model.[]{data-label="fig:wdm"}](WDM_functions_z0.eps){width="\linewidth"} By observing that the dominant source of uncertainty at intermediate mass scales is $\Omega_{\rm M}$, we can calculate error-ranges on the HMF for a given error in this parameter (marginalising over other insignificant parameters). By doing so, we may calculate the necessary uncertainty required to theoretically distinguish a WDM model with given particle mass from the CDM model at a given mass scale. This allows us to make a rough estimate of the time it will take to reduce errors in $\Omega_{\rm M}$ sufficiently to differentiate reasonable WDM models. We can already, in principle, distinguish a 1keV WDM model from the CDM model assuming Planck parameters at 10$^{11}{\rm M}_{\sun}h^{-1}$, which is expected from figure \[fig:wdm\], but it will be 20+ years before we can distinguish between WDM and CDM on group mass scales ($\sim 10^{13}{\rm M}_{\sun}h^{-1}$). #### Redshift Dependence {#redshift-dependence .unnumbered} We have repeated our analysis at $z$=1, the redshift at which a number of current and future surveys are operating. Results are qualitatively similar although we note that uncertainties for all base cosmologies are increased by a factor of $\sim$2 across the mass range considered. The uncertainty stemming from choice of fitting function also increases, and we find that Planck results have smaller uncertainties than the fitting functions. This highlights the need for a consistent approach to fitting the HMF, and, as argued by @Bhattacharya2011, raises the question of the suitability of simple analytic fits to the mass function for future percent-level analyses. Summary {#sec:conclusions} ======= We have examined how our knowledge of the HMF has improved over the last decade with the availability of increasingly accurate estimates of the cosmological parameters by the WMAP and Planck satellites. Our analysis reveals the limiting uncertainty in the HMF now comes from the variance between fitting functions, with errors intrinsic to the parameters being constrained to $\sim$6% at low masses, and $\sim$20% at high masses. We find that the primary sources of uncertainty in the HMF for a standard flat $\Lambda$CDM cosmology are the matter density parameter $\Omega_{\rm M}$ and the normalization $\sigma_8$. Of these, $\sigma_8$ is the key parameter which will influence further reductions in the uncertainty in the HMF. 
In addition, we have examined prospects for differentiating between the standard CDM model and its WDM variants using the HMF and found that reasonable models only become detectable at scales less than $\sim$10$^{11}{\rm M}_{\sun}h^{-1}$ with current parameter uncertainties. We place an upper-limit of 0.8% on the uncertainty in $\Omega_{\rm M}$ before theoretical detection of a WDM model of 1keV is possible at $10^{12}{\rm M}_{\sun}h^{-1}$ – a $\sim$300% tighter constraint than the Planck results. We conclude by noting that, while the strongest constraint on the HMF can be made in theory using data drawn from the entire halo mass range, this is prohibitively difficult to do in practice. Assuming a volume limited sample of haloes, the strongest constraint comes from the smallest haloes because of Poisson statistics. However, for an HMF sample constructed with a volume limit scaled optimally to match halo mass (i.e. detection of larger haloes require larger volumes), there will be a volume disconnect between bins, so the halo mass bin to bin variance will dominate HMF shape. This means that we cannot use the entire HMF to constrain, say, both dark matter and dark energy models, because the volume overlap between observable low mass halos and large clusters is virtually nil. Therefore future surveys that seek to use the HMF should target either the most massive haloes – and tests of dark energy – or the hosts of the lowest mass galaxies – and tests of dark matter; they should not do both. Acknowledgements {#acknowledgements .unnumbered} ================ ASGR acknowledges support of a UWA postdoctoral research fellowship. Part of this research was undertaken as part of the Survey Simulation Pipeline (SSimPL; [ssimpl-universe.tk]{}). The Centre for All-Sky Astrophysics (CAASTRO) is an Australian Research Council Centre of Excellence, funded by grant CE11E0090. We thank the referee for constructive and insightful comments. Ade, P. A. R., Aghanim, N., et al. 2013, arXiv:1303.5062 Allen S. W., Evrard A. E. & Mantz A. B., 2011, ARA&A, 49, 409 Angulo R. E., Springel V., White S. D. M., Jenkins A., Baugh C. M. & Frenk C. S., 2012, MNRAS, 426, 2046 Bode P., Ostriker J. P. & Turok N., 2001, ApJ, 556, 93 Bond J. R., Cole S., Efstathiou G. P. & Kaiser N., 1991, ApJ, 379, 440 Courtin J., Rasera Y., Alimi J.-M., Corasaniti P. S., Boucher V. & Füzfa A., 2011, MNRAS, 410, 1911 Crocce M., Fosalba P., Castander F. J. & Gaztañaga E., 2010, MNRAS, 403, 1353 Driver S. P. et al., 2011, MNRAS, 413, 971 Hinshaw, G., Larson, D., Komatsu, E., et al. 2012, arXiv:1212.5226 Jenkins A. R., Frenk C. S., White S. D. M., Colberg J. M., Cole S., Evrard a. E., Couchman H. M. P. & Yoshida N., 2001, MNRAS, 321, 372 Knebe, A., Pearce, F. R., Lux, H., et al. 2013, arXiv:1304.0585 Komatsu E. et al., 2009, ApJS, 180, 330 Lewis A., Challinor A. & Lasenby A., 2000, ApJ, 538, 473 Pierre, M. 2011, “The X-ray Universe 2011” Symposium, Berlin 27-30 June 2011, Edited by J.-U. Ness & M. Ehle Press W. H. & Schechter P., 1974, ApJ, 187, 425 Reed D., Gardner J., Quinn T., Stadel J., Fardal M., Lake G. & Governato F., 2003, MNRAS, 346, 565 Reed D. S., Bower R., Frenk C. S., Jenkins A. & Theuns T., 2007, MNRAS, 374, 2 Robotham A. S. G. et al., 2011, MNRAS, 416, 2640 Schneider A., Smith R.E. & Reed D., 2013, arXiv:1303.0839 Sheth R. K., Mo H. J. & Tormen G., 2001, MNRAS, 323, 1 Smith R. E. & Markovič K., 2011, PRD, 4, 1 Spergel D. N. et al., 2003, ApJS, 148, 175 Spergel D. N. et al., 2007, ApJS, 170, 377 Tinker J. & Kravtsov A. 
V., 2008, ApJ, 688, 709 Vikhlinin A. et al., 2009, ApJ, 692, 1060 Warren M. S., Abazajian K., Holz D. E. & Teodoro L., 2006, ApJ, 646, 881 Watson, W. A., Iliev, I. T., D’Aloisio, A., et al. 2012, arXiv:1212.0095 Bhattacharya, S., Heitmann, K., White, M., et al. 2011, ApJ, 732, 122 [^1]: Even allowing for the 10% uncertainty arising from non-universality – see note in §\[sec:method\].
--- bibliography: - 'bibliography.bib' title: Abstracting Abstract Control --- Introduction ============ Computation through the lens of static analysis {#sec:analysis} =============================================== The AAM methodology {#sec:aam} =================== A refinement for exact stacks {#sec:pushdown} ============================= Stack inspection and recursive metafunctions {#sec:inspection} ============================================ Relaxing contexts for delimited continuations {#sec:delim} ============================================= Short-circuiting via “summarization” {#sec:memo} ==================================== Related Work ============ Conclusion ========== As the programming world continues to embrace behavioral values like functions and continuations, it becomes more important to import the powerful techniques pioneered by the first-order analysis and model-checking communities. It is our view that systematic approaches to analysis construction are pivotal to scaling them to production programming languages. We showed how to systematically construct executable concrete semantics that point-wise abstract to pushdown analyses of higher-order languages. We bypass the automata theoretic approach so that we are not chained to a pushdown automaton to model features such as first-class composable control operators. The techniques employed for pushdown analysis generalize gracefully to apply to non-pushdown models and give better precision than regular methods. We thank the anonymous reviewers of DLS 2014 for their detailed reviews, which helped to improve the presentation and technical content of the paper. This material is based on research sponsored by DARPA under the Automated Program Analysis for Cybersecurity (FA8750-12-2-0106) project. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.
--- abstract: 'We argue that a special step in the chain of dualities used in [@Tan] implicitly suggests viewing Langlands duality as being fundamentally rooted in an eight-dimensional theory on the F-theory 7-brane. We give further arguments why such an eight-dimensional perspective might be of interest.' author: - | Karl-Georg Schlesinger\ e-mail: [email protected] date: '11.11.2011' title: 'Eight dimensional physics and the Langlands program – A short note' --- As was shown in the fundamental work of [@KW], geometric Langlands duality for algebraic curves would be a consequence of $S$-duality of $N=4$ SYM-theory in $d=4$ space-time dimensions if the latter could be established. An even deeper perspective on geometric Langlands duality is offered by the six-dimensional view as advocated in [@Wit]. String theory conjectures the existence of a $d=6$ conformal field theory of type $(2,0)$ supersymmetry which can be seen e.g. as an effective limit of the worldvolume theory of the $NS5$-brane in type $IIA$ string theory. Suppose this conformal field theory to live on a space-time $X_6$ with $$X_6=X_4 \times T^2$$ In the limit $$vol(T^2) \ll vol(X_4)$$ we get $N=4$ SYM-theory on $X_4$ by dimensional reduction and the $S$-duality of this theory is a *consequence* of the natural action of $SL(2,\mathbb{Z})$ on the torus $T^2$. So, while in $d=4$ we would have to prove $S$-duality for the $N=4$ SYM-theory, in order to derive geometric Langlands duality for algebraic curves, in $d=6$ it would suffice to prove the very existence of the $(2,0)$ superconformal field theory. This means that the $d=6$ theory appears to be the geometric structure which turns $S$-duality (and hence geometric Langlands duality) into a *manifest* symmetry, much the same way in which differential geometry turns general covariance into a manifest symmetry. We tacitly assume, here and in the sequel, that the gauge group $G$ of the $N=4$ SYM-theory is from the $ADE$-series since this is the most general type of $G$ which can be associated to the $d=6$ superconformal theory.\ But the $d=6$ perspective achieves even more: Consider the space-time $$X_6=\Sigma \times Y_4$$ with $\Sigma$ a Riemann surface and $Y_4$ a compact Hyperkähler manifold of (real) dimension 4. In the limit $$vol(Y_4) \ll vol(\Sigma)$$ the $d=6$ superconformal field theory reduces to a sigma model on $\Sigma$ with the target space given by the instanton moduli space $Inst_G(Y_4)$, i.e. the space of (anti-) self-dual $G$-connections on $Y_4$.\ As was shown in [@Tan] and [@Wit], one can use this dimensional reduction of the $d=6$ theory to derive the results of [@BF] and [@Nak] on geometric Langlands duality for the algebraic surface $Y_4$, for $G$ from the $A$-series, and extend them to the $D$-series case. As shown in [@Tan], one uses a chain of dualities of string- and $M$-theory, leading to a duality of two different $M$-theory backgrounds involving $M5$-branes (backgrounds (3.1) and (3.8) in [@Tan]), in order to establish the geometric Langlands duality for the algebraic surface $Y_4$.\ But the argument of [@Tan] offers a far wider potential: From a string- and $M$-theory perspective, each single step in the whole chain of backgrounds ((3.1) to (3.8) in [@Tan]) is a duality transformation, not only the whole chain from the first to the last background.
So, this chain of dualities embeds geometric Langlands duality for algebraic surfaces into a complete chain of dualities which should offer very different reformulations of the structures of the geometric Langlands program. E.g. background (3.7) is a background in type $IIB$ string theory. Usually – as we have sketched above – the geometric Langlands program is associated to the $(2,0)$ superconformal field theory which is the effective limit of the little string theory (LST) on the $NS5$-brane in type $IIA$ (or the $M5$-brane in $d=11$). But $T$-duality links this theory to the LST on the $NS5$-brane in type $IIB$, which has a gauge theory with $(1,1)$ supersymmetry as its effective limit. The chain of dualities of [@Tan] means that the geometric Langlands data should also have a formulation in this setting. In particular, one step in the chain of dualities is the application of $S$-duality of type $IIB$ string theory, which leads from LST on the $NS5$-brane in type $IIB$ to LST on the $D5$-brane, i.e. a formulation of the $(1,1)$ theory on the $D5$. As was discussed in [@Dij], in this setting we can reduce the $(1,1)$ theory to a sigma model and get, once again, the instanton moduli space on the four-dimensional compactification space as target space. This means that the dual formulation with the $(1,1)$ theory should involve a duality on $Inst_G(Y_4)$ for the data of the geometric Langlands program.\ Now, let us focus on the special step in the chain of dualities where one applies $S$-duality of type $IIB$ to pass from an $NS5$-brane to a $D5$-brane. This is an especially interesting step for several reasons. First, as for geometric Langlands duality of algebraic curves, one could also ask for the case of geometric Langlands duality of algebraic surfaces how to make the duality a manifest symmetry. For the step of applying $S$-duality of type $IIB$ in the chain of dualities, there is at least an idea how one should achieve this. Very much as in the transition from $N=4$ SYM-theory in $d=4$ to $d=6$ $(2,0)$ superconformal field theory, one would like to make the $S$-duality group $SL(2,\mathbb{Z})$ manifest by multiplying the $d=10$ space-time of type $IIB$ by a torus $T^2$ (or, more generally, consider a $d=12$ space-time, fibered by an elliptic curve), passing from type $IIB$ string theory to a $d=12$ theory. The LST on the $D5$ or $NS5$ in type $IIB$ should in this way arise from a $d=8$ theory on the worldvolume of a 7-brane in this $d=12$ theory. This is the original idea of $F$-theory, as proposed in [@Vaf]. It is meanwhile clear that $F$-theory does not work in this simple form since it is an inherently non-perturbative theory (see e.g. [@Wei] for a review). Nevertheless, it is still the expectation that a fundamental formulation of $F$-theory would be a manifestly $S$-duality invariant formulation of type $IIB$ string theory.\ A second reason why this $d=8$ theory on the $F$-theory 7-brane should be interesting is that it is known that from an $M$-theory perspective $S$-duality of type $IIB$ string theory is a remnant of covariance in $d=11$. How to make the $M5$ contribution of $M$-theory (i.e. the 90% part of the theory, by counting central charges of the $d=11$ superalgebra) covariant is one of the central open questions of $M$-theory (as making the graviton contribution covariant leads one to the full Einstein equations). An $S$-duality invariant $d=8$ formulation of the $(1,1)$ LST of type $IIB$ string theory might possibly shed some light on this question.
It is especially interesting that in this way the central question of covariance of $M$-theory ties in with symmetries related to Langlands duality.\ A third point arises from the search for a manifest formulation of Langlands duality of algebraic curves, as discussed above. Though the $(2,0)$ superconformal field theory in $d=6$ arises as an effective limit, it is non-renormalizable. From a field theory perspective the belief is (see e.g. [@Aha] for a general review on the subject of LSTs and their limits) that, though the $d=6$ theory does not have a UV fixed point as a quantum field theory, the $d=6$ LST on the $NS5$-brane in type $IIA$ string theory should provide the UV fixed point (in the sense of a UV completion of the theory). But this means that the effective superconformal field theory can probably not be the basis of a mathematically rigorous formulation of a manifest notion of Langlands duality of algebraic curves (since it is not expected to be a consistent theory), but a mathematically rigorous formulation should ultimately require the full LST (We will in this note ignore the point that ultimately the LST – if we take the $M5$-brane perspective on it – might require embedding into full $M$-theory since the partition function – because of the self-duality condition on the 2-form on the $M5$-brane – seems to require coupling to the $d=11$ $C$-field, in order to make it well defined, see [@Wit1996]). Since $S$-duality of type $IIB$ string theory is a strong-weak coupling duality, a manifest $d=8$ formulation might, again, be appropriate to shed some light on the UV-completion question. So, even for a mathematically rigorous *and* manifest formulation of Langlands duality of algebraic curves, a $d=8$ theory on the 7-brane in $F$-theory should be an interesting starting point.\ Let us remark at this point that it is at least known that the special topological twist of $N=4$ SYM-theory in $d=4$, needed for geometric Langlands duality of algebraic curves, can be gained from a $d=8$ theory, with an interesting relation to nonabelian Seiberg-Witten theory arising in this way (see [@BKS]).\ Let us close this note by taking a very brief look at some additional topics which might offer further support for such a $d=8$ $F$-theory 7-brane perspective: - The gravitational instantons of Taub-NUT geometry, which are so decisive in the chain of dualities of [@Tan] (and gauge instantons on these gravitational instantons, as they appear with the moduli spaces $Inst_G(Y_4)$), also arise in a completely different context in the setting of $F$-theory 7-branes (see [@GS]). The work of [@GS] hints at a relation of these structures to transdimensional tunneling and possibly an inflationary scenario within $F$-theory. Also, the $d=8$ worldvolume theory on the $F$-theory 7-brane strongly relates to string theory phenomenology, embedding the standard model into string theory. Again, in this work structures well known from the geometric Langlands side, like Higgs bundles and Hitchin’s equations, play a deep role (see [@DW], [@PW]). - There is not only a deep mathematical relation between the landscape of string theory vacua, as it emerges from $F$-theory, and the (string theoretic) microscopic description of black holes, but this relation is rooted in the large entropy of black holes (see [@Den], [@DM]). On the other hand, black hole attractors show the appearance of arithmetic varieties over number fields (see [@Moo]).
Hitchin moduli space takes center stage not only in the gauge theory approach to the geometric Langlands program, initiated in [@KW], but also in the classical (arithmetic) Langlands program (in the recent proof of the fundamental lemma, see [@Ngo2006a], [@Ngo2006b]). Since the classical Langlands program uses varieties over number fields, such similarities have often remained just that, but the black hole attractor mechanism might be a hint that there could be a common structure, rooted in physics, behind the two versions of the Langlands program (as was speculated in [@Ati]). - Finally, recent work (see [@Tan2011]) has shown that for a detailed mathematical picture of geometric Langlands duality it is necessary to include supersymmetry breaking effects. On the other hand, breaking of supersymmetry is an important topic in the study of the $F$-theory landscape. Of course, our arguments are far from conclusive in demonstrating that a $d=8$ approach to the Langlands program would be fruitful. But an $F$-theory 7-brane perspective seems to be at the center stage of a whole bunch of similar structures, appearing in very different areas (from classical and geometric Langlands to inflation and the standard model). In a talk on a – very different – matter of coincidences (and the question of whether one should search for a deeper explanation of them) Dennis Sciama asked the audience: Would you raise your eyebrows? Would you? [xxx]{} O. Aharony, *A brief review of little string theories*, hep-th/9911147. M. Atiyah, *Edinburgh lectures on geometry, analysis and physics*, arXiv:1009.4827. A. Braverman, M. Finkelberg, *Pursuing the double affine Grassmannian I: Transversal slices via instantons on* $A_k$ *singularities*, arXiv:0711.2083. L. Baulieu, H. Kanno, I. M. Singer, *Special quantum field theories in eight and other dimensions*, hep-th/9704167v2. F. Denef, *Les Houches lectures on constructing string vacua*, arXiv:0803.1194. R. Dijkgraaf, *Instanton strings and Hyperkähler geometry*, hep-th/9810210. F. Denef, G. W. Moore, *Split states, entropy enigmas, holes and halos*, hep-th/0702146. R. Donagi, M. Wijnholt, *Higgs bundles and UV completion in F-theory*, arXiv:0904.1218. T. W. Grimm, R. Savelli, *Gravitational instantons and fluxes from M/F-theory on Calabi-Yau fourfolds*, arXiv:1109.3191. A. Kapustin, E. Witten, *Electric-magnetic duality and the geometric Langlands program*, hep-th/0604151. G. W. Moore, *Arithmetic and attractors*, hep-th/9807087. H. Nakajima, *Quiver varieties and branching*, arXiv:0809.2605. B.-C. Ng$\widehat{o}$, *Fibration de Hitchin et endoscopie*, Invent. Math. **164** (2006), 399-453. B.-C. Ng$\widehat{o}$, *Le lemme fondamental pour les algebres de Lie*, preprint, available at http://www.math.u-psud.fr/ngo/LFLS.pdf T. Pantev, M. Wijnholt, *Hitchin’s equations and M-theory phenomenology*, arXiv:0905.1968. M.-C. Tan, *Five-branes in M-theory and a two-dimensional geometric Langlands duality*, arXiv:0807.1107. M.-C. Tan, *Quasi-topological gauged sigma models, the geometric Langlands program, and knots*, arXiv:1111.0691. C. Vafa, *Evidence for F-theory*, hep-th/9602022. T. Weigand, *Lectures on F-theory compactifications and model building*, arXiv:1009.3497. E. Witten, *Five-brane effective action in M-theory*, hep-th/9610234. E. Witten, *Geometric Langlands from six dimensions*, arXiv:0905.2720.
Ordered states with helical arrangement of the magnetic moments are described by a chiral order parameter $\vec C=\vec S_1 \times \vec S_2$, which yields the left- or right-handed rotation of neighboring spins along the pitch of the helix. Examples for compounds of that sort are rare-earth metals like Ho [@plakhty_01]. Spins on a frustrated lattice form another class of systems where simultaneous ordering of chiral and spin parameters can be found. For example, in the triangular lattice with antiferromagnetic nearest neighbor interaction, the classical ground-state is given by a non-collinear arrangement with the spin vectors forming a 120$^\circ$ structure. In this case, the ground state is highly degenerate as a continuous rotation of the spins in the hexagonal plane leaves the energy of the system unchanged. In addition, it is possible to obtain two equivalent ground states which differ only by the sense of rotation (left or right) of the magnetic moments from sub-lattice to sub-lattice, hence yielding an example of chiral degeneracy. As a consequence of the chiral symmetry of the order parameter, a new universality class results that is characterized by novel critical exponents, as calculated by Monte-Carlo simulations [@kawamura_88] and measured by neutron scattering [@mason_89] in the $XY$-antiferromagnet CsMnBr$_3$. An interesting but still unresolved problem is the characterization of chiral spin fluctuations that have been suggested to play an important role e.g. in the doped high-$T_c$ superconductors [@sulewski]. The measurement of chiral fluctuations is, however, a difficult task and can usually only be performed by projecting the magnetic fluctuations onto a field-induced magnetization [@maleyev95; @plakhty99]. In this Letter, we show that chiral fluctuations can be directly observed in non-centrosymmetric crystals without disturbing the sample by a magnetic field. We present results of polarized inelastic neutron scattering experiments performed in the paramagnetic phase of the itinerant ferromagnet MnSi that confirm the chiral character of the spin fluctuations due to spin-orbit coupling, and we discuss the experimental results in the framework of the self-consistent renormalisation theory of spin fluctuations in itinerant magnets [@moriya85]. MnSi being a prototype of a weak itinerant ferromagnet, its magnetic fluctuations have been investigated in detail in the past by means of unpolarized and polarized neutron scattering. The results demonstrate the itinerant nature of the spin fluctuations [@ishikawa77; @ishikawa82; @ishikawa85] as well as the occurrence of spiral correlations [@shirane83] and strong longitudinal fluctuations [@tixier97]. MnSi has the cubic space group P2$_1$3 with a lattice constant $a = 4.558$ Å that lacks a center of symmetry, leading to a ferromagnetic spiral along the \[1 1 1\] direction with a period of approximately 180 Å [@bloch75]. The Curie temperature is $T_C = 29.5$ K. The spontaneous magnetic moment of Mn, $\mu_s \simeq 0.4 \mu_B$, is strongly reduced from its free ion value $\mu_f = 2.5\mu_B$. As shown in the inset of Fig. \[Fig1\], the four Mn and Si atoms are placed at the positions $(x,x,x)$, $({1\over2}+x,{1\over2}-x,-x)$, $({1\over2}-x,-x,{1\over2}+x)$, and $(-x,{1\over2}+x,{1\over2}-x)$ with $x_{Mn} = 0.138$ and $x_{Si}=0.845$, respectively.
We investigated the paramagnetic fluctuations in a large single crystal of MnSi (mosaic $\eta = 1.5^\circ$) of about 10 cm$^3$ on the triple-axis spectrometer TASP at the neutron spallation source SINQ using a polarized neutron beam. The single crystal was mounted in a $^4$He refrigerator of ILL-type and aligned with the \[0 0 1\] and \[1 1 0\] crystallographic directions in the scattering plane. Most constant-energy scans were performed around the (0 1 1) Bragg peak and in the paramagnetic phase in order to relax the problem of depolarization of the neutron beam in the ordered phase. The spectrometer was operated in the constant final energy mode with a neutron wave vector $\vec k_f = 1.97$ $\AA^{-1}$. In order to suppress contamination by higher order neutrons, a pyrolytic graphite filter was installed in the scattered beam. The incident neutrons were polarized by means of a remanent [@remanent] FeCoV/TiN-type bender that was inserted after the monochromator [@semadeni01]. The polarization of the neutron beam at the sample position was maintained by a guide field $B_g = 10$ G, which also defines the polarization of the neutrons $\vec P_i$ with respect to the scattering vector $\vec Q = \vec k_i - \vec k_f$ at the sample position. In contrast to previous experiments, where the polarization $\vec P_f$ of the scattered neutrons was also measured in order to distinguish between longitudinal and transverse fluctuations [@tixier97], we did not analyze $\vec P_f$, as our goal was to detect the polarization dependent scattering $\sigma_p \propto (\hat{\vec Q} \cdot \vec P_i)$ as discussed below. A typical constant-energy scan with $\hbar \omega = 0.5$ meV measured in the paramagnetic phase at $T = 31$ K is shown in Fig. \[Fig1\] for the polarization of the incident neutrons $\vec P_i$ parallel and anti-parallel to the scattering vector $\vec Q$. It is clearly seen that the peak positions depend on $\vec P_i$ and appear at the incommensurate positions $\vec Q = \vec \tau \pm \vec\delta$ with respect to the reciprocal lattice vector $\vec\tau_{011}$ of the nuclear unit cell. Obviously, this shift of the peaks with respect to (0 1 1) would hardly be visible with unpolarized neutrons and could not be observed in previous inelastic neutron studies. In order to discuss our results we start with the general expression for the cross-section of magnetic scattering with polarized neutrons [@izyumov] $$\begin{aligned} {d^2\sigma\over{d\Omega d\omega}} &\sim& \sum_{\alpha, \beta}(\delta_{\alpha, \beta}- \hat Q_\alpha \hat Q_\beta) A^{\alpha \beta} (\vec Q, \omega) \nonumber \\ &+& \sum_{\alpha, \beta} (\hat {\vec Q} \cdot \vec P_i)\sum_{\gamma}\epsilon_{\alpha, \beta, \gamma} \hat Q^\gamma B^{\alpha \beta}(\vec Q, \omega) \label{ncs}\end{aligned}$$ where $(\vec Q, \omega)$ are the momentum and energy transfers from the neutron to the sample, $\hat {\vec Q} = \vec Q/|\vec Q|$, and $\alpha, \beta, \gamma$ indicate Cartesian coordinates. The first term in Eq. \[ncs\] is independent of the polarization of the incident neutrons, while the second is polarization dependent through the factor $(\hat{\vec Q} \cdot \vec P_i)$. $\vec P_i$ denotes the direction of the neutron polarization and its magnitude is equal to 1 when the beam is fully polarized.
$A^{\alpha \beta}$ and $B^{\alpha \beta}$ are the symmetric and antisymmetric parts of the scattering function $S^{\alpha \beta}$, that is $A^{\alpha \beta}={1\over 2} (S^{\alpha \beta} + S^{ \beta \alpha})$ and $B^{\alpha \beta}={1\over 2} (S^{\alpha \beta} - S^{\beta \alpha})$. $S^{\alpha \beta}$ are the Fourier transforms of the spin correlation function $<s^\alpha_l s^\beta_{l'}>$, $S^{\alpha \beta}(\vec Q, \omega)={1\over{2\pi N}}\int_{-\infty}^\infty{dt e^{-i\omega t} \sum_{ll'}{e^{i\vec Q (\vec X_l-\vec X_{l'}) }}<s^\alpha_l s^\beta_{l'}(t)> }$. The vectors $\vec X_l$ designate the positions of the scattering centers in the lattice. The correlation function is related to the dynamical susceptibility through the fluctuation-dissipation theorem $S(\vec Q,\omega)=2\hbar/(1-\exp(-\hbar\omega/kT))\Im \chi(\vec Q,\omega)$. Following Ref. [@lovesey] we now define an axial vector $\vec B$ by $\sum_{\alpha \beta}\epsilon_{\alpha \beta \gamma}B^{\alpha \beta} = B^\gamma (\vec Q, \omega)$, which represents the antisymmetric part of the susceptibility and, hence, enters the cross-section through the polarization-dependent combination $$(\hat {\vec Q }\cdot \vec P_i)(\hat {\vec Q} \cdot \vec B) \label{axial}$$ which vanishes for centro-symmetric systems or when there is no long-range order. In the absence of symmetry breaking fields like external magnetic fields, pressure, etc., similar scans with polarized neutrons would yield a peak of diffuse scattering at the zone center and no scattering that depends on the polarization of the neutrons. However, an intrinsic anisotropy of the spin Hamiltonian in a system that lacks lattice inversion symmetry may provide an axial interaction leading to a polarization dependent cross section. The polarization dependent scattering obtained in the present experiments is therefore an indication of fluctuations in the chiral order parameter and points towards the existence of an axial vector $\vec B$ that is not necessarily commensurate with the lattice. Hence, according to Eq. \[axial\] the neutron scattering function in MnSi contains a non-vanishing antisymmetric part. Because the crystal structure of MnSi is non-centrosymmetric and the magnetic ground-state forms a helix with spins perpendicular to the \[1 1 1\] crystallographic direction, it is reasonable to interpret the polarization-dependent transverse part of the dynamical susceptibility in terms of the Dzyaloshinskii-Moriya (DM) interaction [@dzyal58; @moriya60], similarly to what was done in other non-centrosymmetric systems that show incommensurate ordering [@zheludev; @roessli]. Usually the DM-interaction is written as the cross product of interacting spins, $H_{DM}=\sum_{l,m}\vec D_{l,m}\cdot (\vec s_l \times \vec s_m)$, where the direction of the DM-vector $\vec D$ is determined by the bond symmetry and its magnitude by the strength of the spin-orbit coupling [@moriya60]. Although the DM-interaction was originally introduced on microscopic grounds for ionic crystals, it was shown that antisymmetric spin interactions are also present in metals with non-centrosymmetric crystal symmetry [@kataoka_84]. In a similar way as for insulators with localized spin densities, the antisymmetric interaction originates from the spin-orbit coupling in the absence of an inversion center, and a finite contribution to the antisymmetric part of the wave-vector dependent dynamical susceptibility is obtained.
For the case of a uniform DM-interaction, the neutron cross-section depends on the polarization of the neutron beam [@aristov_00] as follows $$\begin{aligned} \biggl({{d^2\sigma}\over{d\Omega d\omega}}\biggr)_{np} & \sim & \Im{{(\chi^\perp(\vec q-\vec\delta,\omega)+\chi^\perp (\vec q+\vec\delta,\omega))} }, \nonumber \\ \biggl({{d^2\sigma}\over{d\Omega d\omega}}\biggr)_{p} & \sim & (\hat{\vec D} \cdot \hat{\vec Q})(\hat{\vec Q}\cdot \vec P_i) \nonumber \\ & \times & \Im{{(\chi^\perp(\vec q-\vec\delta,\omega)-\chi^\perp (\vec q+\vec\delta,\omega))} }. \label{pa}\end{aligned}$$ Here, $\vec q$ designates the reduced momentum transfer with respect to the nearest magnetic Bragg peak at $\vec \tau \pm \vec \delta$. The first line of Eq. \[pa\] describes inelastic scattering with a non-polarized neutron beam. The second part describes inelastic scattering that depends on $\vec P_i$ as well as on $\vec D$. Eq. \[pa\] shows that the cross section for $\vec P_i \perp \vec Q$ is indeed independent of $P_i$, as observed in Fig. \[Fig2\]. By subtracting the inelastic spectra taken with $\vec P_i$ parallel and anti-parallel to $\vec Q$, the polarization dependent part of the cross-section can be isolated, as demonstrated in Fig. \[Fig3\] for two temperatures $T = 31$ K and $T = 40$ K. Close to $T_C$, the intensity is rather high and the crossing at $Q =$ (0 1 1) is sharp. At 40 K the intensity becomes small and the transition at (0 1 1) is rather smooth, which mirrors the decrease of the correlation length with increasing temperature. We have measured $({{d^2\sigma}/({d\Omega d\omega}}))_{p}$ in the vicinity of the (0 1 1) Bragg peak at $T = 35$ K. The result, shown as a contour plot in Fig. \[Fig4\], indicates that the DM-interaction vector in MnSi has a component along the \[0 1 1\] crystallographic direction which induces paramagnetic fluctuations centered at positions incommensurate with the chemical lattice. In order to proceed further with the analysis we assume for the transverse susceptibilities in Eq. \[pa\] the expression for itinerant magnets as given by self-consistent renormalization theory (SCR) [@moriya85] $$\chi^\perp (\vec q \pm \vec \delta, \omega) = \chi^\perp (\vec q \pm \vec \delta)/(1-i\omega/\Gamma_{\vec q \pm \vec \delta}). \label{src}$$ $\vec \delta$ is the ordering wave-vector, $\chi^\perp (\vec q \pm \vec \delta)=\chi^\perp(\mp\vec\delta)/(1+q^2/\kappa^2_\delta)$ the static susceptibility, and $\kappa_\delta$ the inverse correlation length. For itinerant ferromagnets the damping of the spin fluctuations is given by $\Gamma_{\vec q \pm \vec \delta} = uq (q^2+\kappa^2_\delta)$, with $u = u(\vec \delta)$ setting the strength of the damping. Experimentally, it has been found from previous inelastic neutron scattering measurements that the damping of the low-energy fluctuations in MnSi is adequately described using the results of the SCR-theory rather than the $q^z$ ($z = 2.5$) wave-vector dependence expected for a Heisenberg magnet [@ishikawa82]. The solid lines of Figs. \[Fig1\] to \[Fig3\] show fits of $({{d^2\sigma}/({d\Omega d\omega}}))_{p}$ to the polarized beam data. It is seen that the cross section for itinerant magnets reproduces the data well if the incommensurability is properly taken into account. Using Eqs. \[pa\] and \[src\] and taking into account the resolution function of the spectrometer, we extract the values $\kappa_0 = 0.12$ Å$^{-1}$ and $u = 27$ meVÅ$^3$, in reasonable agreement with the analysis given in Ref. [@ishikawa85].
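For illustration, the following is a minimal sketch (not the authors' analysis code) of this last fitting step in Python. The instrumental resolution convolution and the thermal population factor are deliberately omitted, and the numerical values, the function names and the one-dimensional treatment of $\vec q$ along the scan direction are assumptions made only for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

omega = 0.5      # meV, energy transfer of the constant-energy scan
delta = 0.035    # 1/Angstrom, assumed projection of the ordering wave vector on the scan direction

def im_chi_perp(q, kappa, u, chi0):
    """Im chi_perp(q, omega) for the SCR form chi/(1 - i*omega/Gamma) of Eq. (src)."""
    gamma = u * np.abs(q) * (q**2 + kappa**2)     # damping Gamma_q = u*q*(q^2 + kappa^2)
    chi_stat = chi0 / (1.0 + q**2 / kappa**2)     # static transverse susceptibility
    return chi_stat * omega * gamma / (gamma**2 + omega**2)

def pol_cross_section(q, kappa, u, chi0):
    """Polarization-dependent part of Eq. (pa): difference of the two incommensurate branches."""
    return im_chi_perp(q - delta, kappa, u, chi0) - im_chi_perp(q + delta, kappa, u, chi0)

# q_data, i_data stand in for the difference of the scans with P_i parallel / antiparallel to Q
q_data = np.linspace(-0.15, 0.15, 31)
i_data = pol_cross_section(q_data, 0.12, 27.0, 1.0) \
         + 0.01 * np.random.default_rng(3).normal(size=q_data.size)

popt, _ = curve_fit(pol_cross_section, q_data, i_data, p0=(0.10, 20.0, 1.0))
print("kappa_0 = %.3f 1/A,  u = %.0f meV A^3" % (popt[0], popt[1]))
```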
The smaller value for $u$ when compared with $u = 50$ meVÅ$^3$ from Ref. [@ishikawa82] indicates that the incommensurability $\vec \delta = (0.02,0.02,0.02)$ was neglected in the analysis of the non-polarized neutron data. At $T = 40$ K, the chiral fluctuations are broad (Fig. \[Fig3\]) due to the increase of $\kappa_\delta$ with increasing $T$, i.e. $\kappa_\delta(T) = \kappa_0 (1-T_C/T)^\nu$. We note that the mean-field-like value $\nu = 0.5$ obtained here is close to the expected exponent $\nu = 0.53$ for chiral symmetry [@kawamura_88]. This suggests that a chiral-ordering transition also occurs in MnSi in a similar way to the rare-earth compound Ho, pointing toward the existence of a universality class in the magnetic ordering of helimagnets [@plakhty_01]. In conclusion, we have shown that chiral fluctuations can be measured by means of polarized inelastic neutron scattering in zero field when the antisymmetric part of the dynamical susceptibility has a finite value. We have shown that this is the case in metallic MnSi, which has a non-centrosymmetric crystal structure. For this compound the axial interaction leading to the polarized part of the neutron cross-section has been identified as originating from the DM-interaction. Similar investigations can be performed in a large class of other physical systems. They will yield direct evidence for the role of antisymmetric interactions in forming the magnetic ground-state in magnetic insulators with DM-interactions, high-T$_c$ superconductors (e.g. La$_2$CuO$_4$ [@berger]), nickelates [@koshibae], quasi-one dimensional antiferromagnets [@tsukada] or metallic compounds like FeGe [@lebech]. V.P. Plakhty et al., Phys. Rev. B [**64**]{}, 100402(R) (2001). H. Kawamura, Phys. Rev. B [**38**]{}, 4916 (1988). T.E. Mason et al., Phys. Rev. B [**39**]{}, 586 (1989). P.E. Sulewski et al., Phys. Rev. Lett. [**67**]{}, 3864 (1991). S. V. Maleyev, Phys. Rev. Lett. [**75**]{}, 4682 (1995). V. P. Plakhty et al., Europhys. Lett. [**48**]{}, 215 (1999). T. Moriya, *Spin Fluctuations in Itinerant Electron Magnetism* **56**, Springer-Verlag, Berlin Heidelberg New York Tokyo, 1985. Y. Ishikawa et al., Phys. Rev. B **16**, 4956 (1977). Y. Ishikawa et al., Phys. Rev. B **25**, 254 (1982). Y. Ishikawa et al., Phys. Rev. B **31**, 5884 (1985). G. Shirane et al., Phys. Rev. B **28**, 6251 (1983). e.g. Yu. A. Izyumov, Sov. Phys. Usp. **27**, 845 (1984). S. Tixier et al., Physica B [**241-243**]{}, 613 (1998). Y. Ishikawa et al., Solid State Commun. [**19**]{}, 525 (1976). No spin flipping devices are necessary due to the remanent magnetization of the supermirror coatings of the benders. For details see: P. Böni et al., Physica B **267-268**, 320 (1999). F. Semadeni, B. Roessli, and P. Böni, Physica B **297**, 152 (2001). S.W. Lovesey and E. Balcar, Physica B [**267-268**]{}, 221 (1999). A. Zheludev et al., Phys. Rev. Lett. **78**, 4857 (1997). B. Roessli et al., Phys. Rev. Lett. **86**, 1885 (2001). L. Dzyaloshinskii, J. Phys. Chem. Solids [**4**]{}, 241 (1958). T. Moriya, Phys. Rev. [**120**]{}, 91 (1960). M. Kataoka et al., J. Phys. Soc. Japan [**53**]{}, 3624 (1984). D.N. Aristov and S.V. Maleyev, Phys. Rev. B **62**, R751 (2000). J. Berger and A. Aharony, Phys. Rev. B **46**, 6477 (1992). W. Koshibae, Y. Ohta and S. Maekawa, Phys. Rev. B **50**, 3767 (1994). I. Tsukada et al., Phys. Rev. Lett. **87**, 127203 (2001). B. Lebech, J. Bernhard, and T. Freltoft, J. Phys.: Condens. Matter **1**, 6105 (1989).
---
address: |
    Institut für Theoretische Physik,\
    J.W. Goethe-Universität,\
    D-60054 Frankfurt/Main, Germany\
author:
- 'Mei Huang, Igor A. Shovkovy'
title: 'The gapless 2SC phase [^1]'
---

Introduction
============

Because the interaction between two quarks in the color anti-triplet channel is attractive, sufficiently cold dense quark matter is a color superconductor [@CS-general]. It is very likely that a color superconducting phase may exist in cores of compact stars, where bulk matter should satisfy the charge neutrality condition as well as $\beta$-equilibrium. For a three-flavor quark system, when the strange quark mass is small, the color-flavor-locked (CFL) phase [@CFL] is favorable [@N-CFL]. For the two-flavor quark system, the charge neutrality condition plays a nontrivial role. In the ideal two-flavor color superconducting (2SC) phase, the paired $u$ and $d$ quarks have the same Fermi momenta. Because the $u$ quark carries electrical charge $2/3$ and the $d$ quark carries electrical charge $-1/3$, it is easy to check that quark matter in the ideal 2SC phase is positively charged. To satisfy the electrical charge neutrality condition, roughly speaking, twice as many $d$ quarks as $u$ quarks are needed. This induces a large difference between the Fermi surfaces of the two pairing quarks, i.e., $\mu_d - \mu_u = \mu_e \approx \mu/4$, where $\mu,\mu_e$ are the chemical potentials for quarks and electrons, respectively. Naively, one would expect that the requirement of the charge neutrality condition will destroy the $ud$ Cooper pairing in the 2SC phase. However, it was found in Ref.  that a charge neutral two-flavor color superconducting (N2SC) phase does exist. Compared with the ideal 2SC phase, the N2SC phase found in Ref.  has a largely reduced diquark gap parameter, and the pairing quarks have different number densities. The latter contradicts the pairing ansatz [@enforced]. It is natural to think that the N2SC phase found in Ref.  is an unstable Sarma state [@Sarma]. In Ref. , it was shown that the N2SC phase is a thermally stable state when the local charge neutrality condition is enforced. As a by-product, which comes out as a very important feature, it was found that the quasi-particle spectrum has zero-energy excitations. Thus, this phase was named the gapless 2SC (g2SC) phase. In the following, we first show the thermal stability of the g2SC phase, then we discuss its properties at zero as well as at nonzero temperatures, and finally we present our most recent results about the chromomagnetic instability in the g2SC phase.

The gapless 2SC phase and its thermal stability {#stablity}
===============================================

Bulk quark matter inside the neutron star should be neutral with respect to the color charge as well as the electrical charge. The color superconducting phase in full QCD is automatically color neutral [@colorneutral]. In the Nambu–Jona-Lasinio (NJL) type model, the color neutrality can be satisfied by tuning the chemical potential $\mu_8$ for the color charge. The value of $\mu_8$ for a charge neutral 2SC phase is very small [@N-2SC; @g2SC-HS]. Correspondingly, the electrical charge neutrality can be satisfied by tuning the chemical potential $\mu_{e}$ for the electrical charge. The value of $\mu_e$ is determined by the electrical charge neutrality condition. The ground state is determined by solving the gap equation together with the charge neutrality condition.
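As a quick back-of-the-envelope check of the statement above that neutrality requires roughly twice as many $d$ as $u$ quarks and $\mu_e \approx \mu/4$ (a sketch assuming free massless quarks and neglecting the small electron density; it is not part of the original analysis): $$n_{u,d}=\frac{\mu_{u,d}^{3}}{\pi^{2}},\qquad
\frac{2}{3}\,n_u-\frac{1}{3}\,n_d-n_e\simeq 0
\;\Rightarrow\; n_d\simeq 2\,n_u
\;\Rightarrow\; \mu_d\simeq 2^{1/3}\mu_u ,$$ and $\beta$-equilibrium, $\mu_d=\mu_u+\mu_e$, then gives $\mu_e\simeq(2^{1/3}-1)\,\mu_u\approx 0.26\,\mu_u\approx\mu/4$.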
It is found that the ground state of charge neutral two-flavor quark matter is very sensitive to the diquark coupling constant $G_D$ [@g2SC-SH]: $$\begin{aligned} & G_D/G_S \gtrsim 0.8, & \Delta>\delta\mu, ~~{\rm 2SC}, \nonumber \\ & 0.7 \lesssim G_D/G_S \lesssim 0.8, & \Delta<\delta\mu, ~~{\rm g2SC}, \nonumber \\ & G_D/G_S \lesssim 0.7, & \Delta=0, ~~{\rm NQM}.\end{aligned}$$ Here $\delta\mu \equiv \mu_e/2$, $G_S$ is the quark-antiquark coupling constant, and “NQM” indicates the normal phase of quark matter. The most interesting case is the g2SC phase, which exists in the diquark coupling regime $0.7 \lesssim G_D/G_S \lesssim 0.8$. Even though this regime is narrow, it is worth mentioning that, either from the Fierz transformation ($G_D/G_S=0.75$) or from fitting the vacuum baryon mass ($G_D/G_S\simeq 2.26/3$) [@dnjl2], the value of the ratio $G_D/G_S$ is inside this regime. The g2SC phase, indicated by the order parameter $\Delta<\delta\mu$, resembles the unstable Sarma state [@Sarma]. For the flavor asymmetric $ud$ quark system, i.e., when $\mu_e$ is a free parameter and there is no constraint from the charge neutrality condition, the solution $\Delta<\delta\mu$ of the gap equation indeed corresponds to a maximum of the thermodynamical potential $\Omega_{u,d,e}$. However, bulk quark matter inside neutron stars should be charge neutral. A nonzero net electrical charge density $n_Q$ will cause an extra energy $\Omega_{Coulomb} \sim n_Q^2 V^{2/3}$ ($V$ is the volume of the system) due to the repulsive Coulomb interaction. The total thermodynamical potential of the whole system is given by $\Omega = \Omega_{Coulomb} + \Omega_{u,d,e}$. The energy density grows with increasing volume of the system; as a result, it is impossible for matter inside stars to remain charged over macroscopic distances. So, the proper way to find the ground state of the homogeneous neutral $u, d$ quark matter is to minimize the thermodynamical potential along the neutrality line $\Omega|_{n_Q=0} = \Omega_{u,d,e}|_{n_Q=0}$ with $\Omega_{Coulomb}|_{n_Q=0}=0$. The g2SC phase corresponds to the global minimum of the thermodynamical potential along the charge neutrality line, thus it is a stable state under the restriction of the charge neutrality condition.

The g2SC phase at zero and nonzero temperatures {#g2SC-T}
===============================================

As we already mentioned, at zero temperature, in the g2SC phase, the pairing quarks have different number densities [@N-2SC; @g2SC-SH]. This is different from the 2SC phase when $\delta\mu < \Delta$. It is the quasi-particle spectrum that makes the g2SC phase different from the 2SC phase. The excitation spectrum for the ideal 2SC phase ($\delta\mu=0$) includes two free blue quarks, which do not participate in the Cooper pairing, and four quasi-particle excitations (linear superpositions of $u_{r,g}$ and $d_{r,g}$) with an energy gap $\Delta$. If there is a small mismatch ($\delta\mu < \Delta$) between the Fermi surfaces of the pairing $u$ and $d$ quarks, $\delta \mu$ induces two different branches of quasi-particle excitations. One branch moves up with a larger energy gap $\Delta + \delta\mu$, the other branch moves down with a smaller energy gap $\Delta - \delta\mu$. If the mismatch $\delta\mu$ is larger than the gap parameter $\Delta$, the lower dispersion relation for the quasi-particle crosses the zero-energy axis. Thus we call the phase with $\Delta < \delta\mu$ the gapless 2SC (g2SC) phase.
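For concreteness, the two branches described above can be written in the standard form used in the g2SC literature (a sketch with free quark dispersions, where $\bar\mu=(\mu_u+\mu_d)/2$ denotes the average chemical potential of the pairing quarks): $$E^{\pm}_{\Delta}(k)=\sqrt{(k-\bar\mu)^{2}+\Delta^{2}}\;\pm\;\delta\mu ,\qquad \delta\mu=\frac{\mu_d-\mu_u}{2}=\frac{\mu_e}{2},$$ so that the effective gaps at $k=\bar\mu$ are $\Delta\pm\delta\mu$, and for $\Delta<\delta\mu$ the lower branch crosses zero at the momenta $k_{\pm}=\bar\mu\pm\sqrt{\delta\mu^{2}-\Delta^{2}}$; these are the gapless modes that give the phase its name.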
In the g2SC phase, there are only two gapped fermionic quasiparticles, and the other four quasiparticles are gapless. In a superconducting system, when one increases the temperature at a given chemical potential, thermal motion will eventually break up the quark Cooper pairs. In the weakly interacting Bardeen-Cooper-Schrieffer (BCS) theory, the transition between the superconducting and normal phases is usually of second order. The ratio of the critical temperature $T_c^{\rm BCS}$ to the zero temperature value of the gap $\Delta_0^{\rm BCS}$ is a universal value [@ratio-in-BCS], $r_{\rm BCS}={T_c^{\rm BCS}}/{\Delta_0^{\rm BCS}} \approx 0.567$. In the ideal 2SC phase, the ratio of the critical temperature to the zero temperature value of the gap is also the same as in the BCS theory [@PR-sp1]. However, the g2SC phase has very different properties at nonzero temperatures [@g2SC-HS]. The ratio $T_c/\Delta_0$ is not a universal value. It is infinity when $G_D/G_S \lesssim 0.68$ and approaches $r_{\rm BCS}$ when $G_D/G_S$ increases. The temperature dependence of the gap reveals a nonmonotonic behavior. In some cases, the diquark gap could have sizable values at finite temperature even if it is exactly zero at zero temperature.

Chromomagnetic instability in the g2SC phase {#g2SC-chromo}
============================================

Because the g2SC phase has four gapless modes and two gapped modes, one may think that the low energy (large distance scale) properties of the g2SC phase should interpolate between those of the normal phase and those of the 2SC phase. However, its color screening properties do not fit this picture. One of the most important properties of an ordinary superconductor is the Meissner effect, i.e., the superconductor expels the magnetic field. Using linear response theory, the induced current $j^{ind}_{i}$ is related to the external vector potential $A_{j}$ as $j^{ind}_{i} = \Pi_{ij} A^{j}$, where the response function $\Pi_{ij}$ is the magnetic part of the photon polarization tensor. The response function has two components, a diamagnetic and a paramagnetic part. In the static and long-wavelength limit, for the normal metal, the paramagnetic component cancels exactly the diamagnetic component. In the superconducting phase, the paramagnetic component is quenched by the energy gap and produces a net diamagnetic response. Thus the ordinary superconductor is a perfect diamagnet. In cold dense quark matter, the gauge bosons connected with the broken generators obtain masses in the ideal 2SC phase [@Meissner2SC] as well as in the CFL phase [@Meissner-CFL], which indicates the Meissner screening effect in these phases. However, in the g2SC phase, it is found that the Meissner screening masses of the five gluons, corresponding to the five broken generators of the $SU(3)_c$ group, are ${\it imaginary}$ [@Chromo-HS]. This is because, in the static and long-wavelength limit, the paramagnetic contribution to the magnetic part of these five gluon polarization tensors becomes dominant. In condensed matter, this phenomenon is called the paramagnetic Meissner effect (PME) [@PME]. The imaginary Meissner screening mass indicates a chromomagnetic instability of the g2SC phase. There are several possibilities to resolve the instability. One is through a gluon condensate, which may not change the structure of the g2SC phase. It is also possible that the instability drives the homogeneous system to an inhomogeneous phase, like the crystalline phase or the vortex lattice phase.
This problem remains to be clarified in the future. It is also very interesting to know whether the chromomagnetic instability develops in the gapless CFL phase [@gCFL-AKR].

**Acknowledgements** The work of M. H. was supported by the Alexander von Humboldt-Foundation, and by the NSFC under Grant Nos. 10105005, 10135030. The work of I.A.S. was supported by Gesellschaft für Schwerionenforschung (GSI) and by Bundesministerium für Bildung und Forschung (BMBF).

R. Rapp, T. Schäfer, E. V. Shuryak and M. Velkovsky, Phys. Rev. Lett. [**81**]{}, 53 (1998); M. Alford, K. Rajagopal, and F. Wilczek, Phys. Lett. B [**422**]{}, 247 (1998). M. G. Alford, K. Rajagopal and F. Wilczek, Nucl. Phys. [**B537**]{}, 443 (1999). M. Alford and K. Rajagopal, JHEP [**0206**]{}, 031 (2002). A.W. Steiner, S. Reddy and M. Prakash, Phys. Rev. D [**66**]{}, 094007 (2002). M. Huang, P.F. Zhuang and W.Q. Chao, Phys. Rev. D [**67**]{}, 065015 (2003). K. Rajagopal and F. Wilczek, Phys. Rev. Lett. [**86**]{}, 3492 (2001). G. Sarma, J. Phys. Chem. Solids [**24**]{}, 1029 (1963). I. Shovkovy and M. Huang, Phys. Lett. B [**564**]{}, 205 (2003). A. Gerhold and A. Rebhan, Phys. Rev. D [**68**]{}, 011502 (2003); D. D. Dietrich and D. H. Rischke, Prog. Part. Nucl. Phys. [**53**]{}, 305 (2004). D. Ebert, L. Kaschluhn and G. Kastelewicz, Phys. Lett. [**B264**]{}, 420 (1991). J. R. Schrieffer, [*Theory of Superconductivity*]{} (Benjamin, New York, 1964). R.D. Pisarski and D.H. Rischke, Phys. Rev. D [**61**]{}, 051501 (2000). M. Huang and I. Shovkovy, Nucl. Phys. A [**729**]{}, 835 (2003). D. H. Rischke, Phys. Rev. D [**62**]{}, 034007 (2000); D. H. Rischke and I. A. Shovkovy, Phys. Rev. D [**66**]{}, 054019 (2002). D. H. Rischke, Phys. Rev. D [**62**]{}, 054017 (2000). M. Huang and I. A. Shovkovy, hep-ph/0407049, to appear in Phys. Rev. D; M. Huang and I. A. Shovkovy, hep-ph/0408268. A. K. Geim, S. V. Dubonos, J. G. S. Lok, M. Henini, and J. C. Maan, Nature [**396**]{}, 144 (1998). M. Alford, C. Kouvaris and K. Rajagopal, Phys. Rev. Lett. [**92**]{}, 222001 (2004); M. Alford, C. Kouvaris and K. Rajagopal, hep-ph/0406137. [^1]: Talk given by Mei Huang.
---
abstract: 'The large socio-economic impact of Indian Summer Monsoon (ISM) extremes has motivated numerous attempts at its long-range prediction over the past century. However, the rather low estimated potential predictability limit (PPL) of seasonal prediction of the ISM, to which internal interannual variability contributes significantly, was considered insurmountable. Here we show that the internal variability contributed by the ISM sub-seasonal (synoptic + intra-seasonal) fluctuations, so far considered chaotic, is partly predictable, as it is found to be tied to slowly varying forcing (e.g. the El Niño and Southern Oscillation). This provides a scientific basis for predictability of the ISM rainfall beyond the conventional estimates of the PPL. We establish a much higher actual limit of predictability (r$\sim$0.82) through an extensive re-forecast experiment (1920 years of simulation) by improving two major physics components in a global coupled climate model, which raises hope for very reliable dynamical seasonal ISM forecasting in the near future.'
title: 'Unraveling the Mystery of Indian Summer Monsoon Prediction: Improved Estimate of Predictability Limit'
---

The observed link between synoptic variability and predictable modes suggests a high predictability of the ISMR. CFSv2 with improved physics shows an ISMR prediction skill higher than the current estimate of the potential predictability limit. The model shows that $\sim$70% of the interannual variability of the ISMR is predictable, which is much higher than earlier estimates ($\sim$45%).

Introduction {#sec:intro}
============

The livelihood of about one fifth of the world’s population living in South Asia thrives on the regular arrival of the summer monsoon rainfall. The quantum of Indian summer monsoon rainfall (ISMR), however, varies from year to year, which has a significant impact on the country’s economy, food production [@Gadgil06], and availability of fresh water for drinking and industrial uses. Therefore, a reliable prediction of the ISMR one season in advance has tremendous value not only to the country’s policy makers, but also to the farmers for planning crop management strategy. However, the seasonal prediction of the ISMR has remained one of the grand challenge problems in climate science over the past century [@Blanford1884; @Gadgil05; @Rajeevan12], with the skill of the most current generation of climate models being sub-optimal [@Rajeevan12]. In the absence of the new knowledge that we are presenting here, various studies, including some of our own, have indicated that the ISMR inherently has a low predictability [@Webster98; @Goswami98; @Sperber13; @Goswami05; @Goswami06c]. The low potential predictability of the ISMR estimated by previous studies [e.g. @Kumar05; @Rajeevan12] remained unquestioned because the skill of prediction of the ISMR by all models has, so far, remained below this limit. While the skill of seasonal prediction of the ISMR by climate models has improved from an older generation of models [@Kumar05] to a newer generation of models [@Rajeevan12], it still remained low (the correlation between observations and predictions being r$\sim$0.4) and significantly below the conventional potential predictability limit (PPL, r$\sim$0.65). Here we report that, with the availability of new improved coupled ocean-atmosphere models, the skill of large ensembles of retrospective forecasts of the ISMR can be higher than the conventional estimates of the PPL. This led us to question the sacrosanctness of the current estimate of the PPL in the context of ISMR prediction.
While the skill of prediction of the tropical climate, in general, is much higher than its extra-tropical counterpart [@Charney81], the Indian summer monsoon (ISM) is considered a special tropical climate system, strongly influenced by sub-seasonal (intra-seasonal and synoptic) variability [@Webster98; @Goswami05; @Goswami06c], which potentially limits its skill of seasonal prediction. A major component of the sub-seasonal variability is the Monsoon Intra-seasonal Oscillations (MISOs; also known as active and break phases), which contribute significantly to the seasonal mean, are often referred to as the building blocks of the ISM, and are of a larger spatial scale ($\sim$8,000 km) [@Goswami01; @Goswami06c]. On the other hand, the synoptic systems (lows, depressions and cyclonic storms) are of smaller spatial scale ($\sim$1,000 km) and have a relatively smaller contribution to the seasonal mean rainfall [@Sikka80a]. The contribution of the sub-seasonal variability to the seasonal mean is considered climate noise and perceived to be unpredictable at long lead times (e.g. a season) due to its origin in the higher-frequency chaotic component of the monsoon system, leading to the low potential predictability limit of the ISMR [@Palmer94; @Webster98; @Goswami06c]. Nevertheless, a recent study shows that a fraction of the sub-seasonal variability may be predictable, as the statistics of the MISOs are also modulated by the ENSO [@Dwivedi15]. Ocean-atmosphere interactions being key to both the inter-annual as well as the intra-seasonal variability of the ISM [@Webster98; @Wang05; @Sperber13], a coupled atmosphere-ocean general circulation model (AOGCM) is a necessary tool for seasonal prediction of the ISMR. The simulation of the regional ISM climate has remained a major challenge, with hardly any improvement of the persistent dry bias over the Indian landmass and wet bias over the Indian Ocean in the latest CMIP (Coupled Model Inter-comparison Project) models as compared to the earlier generation of models [@Sperber13]. Therefore, improvement of AOGCMs in simulating the south Asian monsoon precipitation climate is key to improving the skill of seasonal prediction. While a fraction of the dry bias in CMIP5 models may arise from underestimation of the variance of high-frequency daily precipitation (likely to be related to underestimation of mesoscale and synoptic variances), most of the dry bias over land and wet bias over the ocean is likely to be related to the poor simulation of the northward propagation and intensity of the MISOs [@Goswami17]. Furthermore, the global teleconnections associated with slowly varying boundary conditions (i.e. sea surface temperature, snow, soil moisture) contribute significantly to the mean and variability of the ISMR, and models often have difficulty in simulating these relationships with reasonable skill. The paradigm shift in our understanding of ISMR predictability evolved with our finding that the monsoon climate noise is not unpredictable, as thought previously, but that a significant part of it is actually linked with the predictable modes (e.g. ENSO). We use an extensive re-forecast experiment (equivalent to 1920 years of simulation) using four versions of an AOGCM and demonstrate that improvements in the simulation of sub-seasonal variability, particularly the synoptic variability, increase the ISMR forecast skill. Here we provide much-awaited robust evidence in favor of the hypothesis that the ISM is a highly predictable system [@Charney81].
Section \[sec:method\] describes the model, data and methods used in this study. The main results are given in Section \[sec:result\] and the results are summarized in Section \[sec:summary\].

Data and Methods {#sec:method}
================

Model and Re-forecast experiments
---------------------------------

Under the Monsoon Mission project (http://www.tropmet.res.in/monsoon/) of the Ministry of Earth Sciences, Government of India, the NCEP climate forecast system version 2 [CFSv2; @SuSaha14] has been selected as a base model for future improvement towards a reliable ISMR prediction. While the standard version of the CFSv2 has a reasonable skill in seasonal prediction (r$\sim$0.55) [@Ramu16], it is also known to have some significant systematic biases [@Saha13; @Saha14; @Hazra17]. A version of the model has been developed aiming to improve sub-seasonal simulations with an improved convection and microphysics parameterization [@Hazra17], while another version has been developed aiming to improve the teleconnections with an improved snow scheme in the land surface model [@Saha17]. Further, the improvements in snow and microphysics are combined in order to reap the benefits of both. Similar to NCEP [i.e. @SuSaha14], the model is initialized on 15th, 20th and 25th February and 3rd March, with four cycles per day (i.e. 00, 06, 12 and 18 GMT), for the years 1981 to 2010 (30 years). Therefore, for each year, 16-member ensemble simulations are performed. Each ensemble member is integrated for a total of 9 months. In this study, initial conditions (1981-2010) are taken from the NCEP Climate Forecast System Reanalysis [CFSR; @SuSaha10] (http://cfs.ncep.noaa.gov). Re-forecast experiments are carried out by employing the following four versions of the model:

1. the original NCEP CFSv2 [termed here as CONT; @SuSaha14]
2. CFSv2 with improved snow physics [termed here as SNOW; @Saha17]
3. CFSv2 with improved microphysics [termed here as MPHY; @Hazra17]
4. CFSv2 with combined SNOW and MPHY (termed here as SNMP)

Therefore, altogether 4 models, 30 years of simulation and 16 ensemble members each result in $(4\times 30\times 16)=1920$ years of simulation, which provide the data for exploring the ISMR skill as well as the limit of predictability.

Estimates of the Limit of Predictability
----------------------------------------

Two different methods are used for estimating the potential predictability limit of the ISMR with the above-mentioned model re-forecast data.

*Perfect Model Correlation:* In the perfect model correlation method, the model is considered perfect and each ensemble member deviates from the others due to errors in the initial conditions. The ISMR from an ensemble member is correlated with that from all the remaining ensemble members, and so on [e.g. @Kumar05; @Rajeevan12].

*Analysis of Variance Method:* The potential predictability is also derived based on the analysis of variance [ANOVA; @Rowell95; @Rowell98] technique. The ANOVA is one of the basic approaches to study the predictability of the seasonal mean, with a large number of studies relying on this technique for estimating predictability [e.g. @Kang06; @Saha16b].

Low pass filter
---------------

Fourier analysis is used for eliminating/filtering the desired frequencies from the time series of the all-India (66.5-100.5$^\circ$E, 6.5-32.5$^\circ$N, land region only) and central-India (74.5-86.5$^\circ$E, 16.5-26.5$^\circ$N, land region only) averaged rainfall.
The harmonics 0, 1, 2 and 3 represent the mean and the variations on the 1, 1/2 and 1/3 year time scales, respectively. While the mean and the first three harmonics together represent the annual cycle, the remaining harmonics together represent the sub-seasonal anomaly. The filtered anomaly is constructed in the following steps:\
a) All-India (AI) and central-India (CI) averaged daily observed rainfall from the India Meteorological Department (IMD) is constructed.\
b) The time series of the daily rainfall anomaly are reconstructed after removing the harmonics one by one in succession. Harmonics up to the 150th (i.e. 2.4 days periodicity) are removed successively. Thus, the time series without the 0-3rd, 0-4th, 0-5th, ..., 0-150th harmonics represent anomalies with periodicities shorter than 91.2, 73.0, 60.8, ..., 2.4 days, respectively.

Observed data
-------------

The Hadley Centre Global Sea Ice and Sea Surface Temperature (HadISST, $1^\circ \times 1^\circ$) daily global data for the period 1981-2010 are used here [@Rayner03]. Global monthly precipitation at $2.5^\circ \times 2.5^\circ$ resolution obtained by merging satellite and gauge observed rainfall [@Adler03] and daily gridded ($1^\circ \times 1^\circ$) IMD rainfall [@Rajeevan06] for the period 1981-2010 are used.

Results {#sec:result}
=======

We investigate the contribution of the sub-seasonal variability to the interannual variability of the ISMR and its association (if any) with the predictable component of the natural modes of variability, such as ENSO. In the tropics, cloud clusters are in general formed through small-scale updrafts (a few hundred kilometers), which are often organized through invisible planetary-scale circulations. However, the invisible link between the smaller- and larger-scale systems is not understood clearly. Nevertheless, the synoptic activities, which are of smaller scale and bring intense rainfall, are found to be associated with planetary-scale circulations like the Madden-Julian Oscillation [@Liebmann94; @Maloney00] and the MISOs [@Goswami03]. In principle, the predictable modes, which are of planetary scale and evolve on longer time scales, may leave their signature on the smaller-scale events. As the ISMR has strong teleconnections with the global predictable modes and the sub-seasonal variability is the building block of the monsoon, the signature of these predictable modes may also be evident in the synoptic and MISO events. Considering the sub-seasonal fluctuations in three time bands: i) 3-7 days (synoptic), ii) 10-20 days (super-synoptic or higher MISO), and iii) 25-60 days (lower MISO), we use the variance (i.e. vigor of fluctuations) of the observed daily AI-averaged rainfall in the different bands as a metric of sub-seasonal activity. Low- to high-frequency components are removed one by one gradually, and the seasonal (JJAS) variance of the filtered anomaly is correlated with the ISMR anomaly. An important revelation is that the ISMR has the strongest correlation (a maximum of 0.63) with the variance in the band of 3-5 day periods (Figure \[figone\]). Since the maximum number of synoptic disturbances happens to be over the CI, the correlation of the CI-averaged variance with the ISMR is also consistent with that of the AI. The fact that it is the vigor of the synoptic disturbances, and not only their frequency or distribution, that is strongly linked with the year-to-year variability of the ISMR is a new understanding. It is to be noted that the ISMR is also moderately (weakly) correlated with the MISO variance in the 10-20 day (25-60 day) band.
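To make the filtering and correlation procedure described in the Methods concrete, the following is a minimal sketch in Python (not the authors' code): it removes the leading harmonics from each year of daily rainfall, computes the JJAS variance of the remaining sub-seasonal anomaly for each cutoff, and correlates that variance with the seasonal mean across years. The synthetic gamma-distributed rainfall, the array names and the exact JJAS indexing are placeholders used only for illustration.

```python
import numpy as np

def remove_harmonics(daily_rain, n_remove):
    """Reconstruct one year of daily rainfall (length 365) with harmonics 0..n_remove removed."""
    coeffs = np.fft.rfft(daily_rain)
    coeffs[:n_remove + 1] = 0.0          # harmonic 0 = annual mean, 1-3 = annual cycle, ...
    return np.fft.irfft(coeffs, n=daily_rain.size)

rng = np.random.default_rng(1)
n_years = 30
rain = rng.gamma(shape=0.6, scale=8.0, size=(n_years, 365))   # stand-in for AI-averaged daily rainfall
jjas = slice(151, 273)                                        # roughly 1 June - 30 September

ismr = rain[:, jjas].mean(axis=1)                             # seasonal (JJAS) mean rainfall per year

best_r, best_period = 0.0, None
for n_remove in range(3, 151):                                # remove harmonics 0-3 up to 0-150
    var_jjas = [np.var(remove_harmonics(rain[y], n_remove)[jjas]) for y in range(n_years)]
    r = np.corrcoef(ismr, var_jjas)[0, 1]
    if abs(r) > abs(best_r):
        best_r, best_period = r, 365.0 / (n_remove + 1)       # period of the lowest retained harmonic
print("strongest correlation r = %.2f for periods shorter than %.1f days" % (best_r, best_period))
```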
We also note that the synoptic and the 10-20 day MISO bands each explain about 25% of the total variance. Despite the fact that the spatial structure of the synoptic systems and their contributions to the mean ISMR are relatively smaller (as compared to those of the MISO), the ISMR anomaly is primarily affected by the year-to-year variations of the synoptic activities. Are the variances of one or more of these bands tied to any predictable climate signals like the ENSO? Correlations between the variances of the filtered rainfall anomalies in various bands and Niño3 SST reveal a strong inverse relationship on the synoptic scale (a minimum correlation of -0.51) as well as in the 10-20 day MISO band (a minimum correlation of -0.57; Figure \[figone\]). Similarly, the filtered variance of CI rainfall has the strongest correlation in the 3-5 day band (a minimum of -0.58), suggesting a strong association between Pacific SST and the predominantly synoptic rainfall over central India. The spatial structure of the correlations between SST (2 m air temperature) over the ocean (land) and the CI/AI variance in the strongest correlation bands ($<$5.2 days and $<$14.6 days) indicates a large-scale ENSO-like teleconnection pattern (Figure \[figS1\]). Therefore, the statistics of the sub-seasonal fluctuations, which are so far considered climate noise, are actually associated with the predictable component of the global climate variability. This revelation provides a scientific basis for a much higher predictability of the ISMR. If the ISM is a highly predictable system, then why have all the dynamical forecast systems so far failed miserably or shown only limited success? Large and persisting systematic biases in simulating the South Asian monsoon by climate models [@Sperber13] have made the current models not good enough for achieving the potentially high skill. Also, it is a rather common problem in almost all global coupled models that, over the ISM region, intense (light) rainfall events, primarily associated with the synoptic systems, are highly underestimated (overestimated) [@Goswami17]. However, not enough attention has been given to improving the rainfall distribution pattern, because the synoptic systems and MISOs are considered unimportant as far as seasonal prediction of the ISMR is concerned. Therefore, a coupled atmosphere-ocean general circulation model (AOGCM) with high fidelity in simulating the sub-seasonal variability, the seasonal mean ISM and its global teleconnections is essential for a skillful prediction of the ISMR. We aimed at achieving this goal by improving the Climate Forecast System, version 2 (CFSv2) model [@SuSaha14] selected under the Monsoon Mission, based on extensive predictability studies and diagnosis of biases (described in Section \[sec:method\]). Conventional wisdom makes us believe that it is impossible for a dynamical prediction system to cross the PPL [@Rajeevan12; @Kumar05]. Results from our model experiments are quite contrary to that. The estimate of the PPL based on the perfect model correlation method and that based on the analysis of variance (ANOVA) method are rather close to each other, and the actual skill of the ISMR (Figure \[figtwo\]a) either lies at the upper edge (CONT=0.56; SNOW=0.62) or exceeds the PPL (MPHY=0.71; SNMP=0.63). We note that, despite the models used here being imperfect, the prediction skill of the models exceeds the conventional PPL. This implies that the actual PPL could be much higher than 0.71.
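For reference, the following is a minimal sketch (an assumption, not the authors' code) of the perfect model correlation estimate of the PPL referred to above: each ensemble member is treated in turn as a synthetic observation and correlated with the remaining members. Correlating against the mean of the remaining members and then averaging the skills is one common variant and is assumed here; the synthetic re-forecast array is a stand-in.

```python
import numpy as np

def perfect_model_skill(fcst):
    """fcst: (n_years, n_members) array of ISMR re-forecasts from a single model."""
    n_members = fcst.shape[1]
    skills = []
    for m in range(n_members):
        truth = fcst[:, m]                                # one member plays the role of "observations"
        others = np.delete(fcst, m, axis=1).mean(axis=1)  # ensemble mean of the remaining members
        skills.append(np.corrcoef(truth, others)[0, 1])
    return float(np.mean(skills))

rng = np.random.default_rng(2)
signal = rng.standard_normal(30)                          # common (boundary-forced) component
fcst = signal[:, None] + rng.standard_normal((30, 16))    # stand-in for a 16-member re-forecast
print("perfect-model PPL estimate: %.2f" % perfect_model_skill(fcst))
```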
Therefore, neither the ANOVA nor the perfect model correlation method is able to estimate the limit of predictability, and the actual PPL is likely to be much higher than the re-forecast skill of MPHY (i.e. $>$ 0.71). What then is the actual PPL? Here we propose a new measure of predictability, which is based on the actual forecast skill of a particular model. It is well known that forecast error grows due to the imperfect physics and numerical methods used as well as due to errors in the initial conditions [ICs; @Lorenz63; @Lorenz69]. Further, the initial errors vary among the ICs. Therefore, some of the ICs may lead to a very accurate/erroneous forecast. As a result, different forecasters with the same model and ICs from the same source may end up with different skills just because of the choice of ICs and the number of ensemble members used for the forecast. In other words, a large number of initial conditions can generate a distribution of all possible forecast skills of a model. It may be noted that perfect initial conditions are never achievable. Therefore, the maximum skill achieved by a forecast system with a large set of initial conditions is likely to always be below the skill with perfect ICs. As these are actual correlation skills (correlations between the observed and ensemble-averaged ISMR), we define the maximum of these skills as the actual PPL. In order to demonstrate this, all possible subsets of $n$ ensemble members out of the 16 ensemble members are constructed (i.e. $^{16}C_n$ subsets, each with $n$ ensemble members, where $n$=2 to 16). As an example, using 16 ensemble members only one combination (or subset) is possible ($^{16}C_{16}$). Similarly, using 15 ensemble members, 16 combinations/subsets are possible ($^{16}C_{15}$). The ensemble-averaged rainfall of each possible combination is correlated with the observed rainfall individually. Hence, all possible combinations of 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ensemble members (out of 16) will eventually generate 120, 560, 1820, 4368, 8008, 11440, 12870, 11440, 8008, 4368, 1820, 560, 120, 16, 1 subsets respectively, and hence a similar number of correlation skills with the observed rainfall can be obtained. The variations of the minimum and maximum correlation skill as a function of the number of ensemble members (Figure \[figtwo\]b) lead to the following noteworthy conclusions: (i) the maximum actual ISMR skill, or the actual PPL, in the improved model (i.e. MPHY) can reach up to 0.82 with a 3-8 member ensemble average; (ii) systematic improvement of a model leads to a shift of both the maximum and the minimum of the potential skill to the higher side; (iii) the 16-member ensemble-averaged re-forecast skill of each model lies well within the maximum and minimum of the actual limit of predictability (ALP). These results clearly demonstrate that right now about 70% of the inter-annual variability of the ISMR is predictable by the model, a much higher limit than the earlier estimate of the limit of predictability (i.e. about 45%). We note that the increase in the minimum skill due to an increase in the number of ensemble members is much faster than the change in the maximum skill with ensemble size. This implies that the use of a larger ensemble may increase the confidence in the forecast by ensuring a minimum level of skill for a given number of ensemble members. It is notable from Figure \[figtwo\]b that the maximum skill, or the PPL, is a stronger function of model improvement than of ensemble size.
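The subset construction described above can be sketched as follows (a hedged illustration with synthetic stand-in data, not the authors' code): for every $^{16}C_n$ combination the ensemble-mean ISMR is correlated with the observed ISMR, and the minimum and maximum skill are tracked as a function of the subset size.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_years, n_ens = 30, 16
obs = rng.standard_normal(n_years)                            # stand-in for observed ISMR (1981-2010)
fcst = obs[:, None] + rng.standard_normal((n_years, n_ens))   # stand-in 16-member re-forecasts

min_skill, max_skill = {}, {}
for n in range(2, n_ens + 1):
    skills = [np.corrcoef(obs, fcst[:, list(sub)].mean(axis=1))[0, 1]
              for sub in itertools.combinations(range(n_ens), n)]
    min_skill[n], max_skill[n] = min(skills), max(skills)

# the maximum over all subset sizes is taken as the actual limit of predictability (ALP)
print("ALP estimate: %.2f" % max(max_skill.values()))
```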
However, whether this increase in the minimum skill with ensemble size would continue beyond the size used here (i.e. 16) remains to be explored. The observed relationship between the ISMR/Niño3 and the variance of the filtered rainfall anomaly should be faithfully reproduced by the model in order to achieve such a high PPL. On the synoptic time scale (3-7 days) the model is able to capture the relationship quite well, with MPHY outperforming all the other models (Figures \[figthree\], \[figS1\]). However, the performance of all models in the 10-20 and 25-60 day MISO bands is limited. It is to be noted that the synoptic events have the maximum contribution to the seasonal ISMR skill, followed by the MISOs of the 10-20 and 25-60 day bands. Improvement in the contribution of the MISOs to the seasonal anomaly will be key to further improvements in the ISMR skill of the model. The relationship between the sub-seasonal variance and Niño3 SST in the models also corroborates the observations. At a three-month lead time, the ISMR is predictable with a correlation skill of 0.71, which is in fact higher than the prediction skill of Niño3/Niño3.4 (r$\sim$0.6; Table \[tab:cor\]). This finding suggests that non-ENSO sources of predictability are also important contributors to the improved ISMR skill in the model. We find that the seasonal forecast skill on smaller spatial scales (i.e. state or district level), which is more valuable to the farmers for planning crop and water management, also increases with the improvement of the ISMR skill. The area with grid-point correlation exceeding $\sim$0.5 increases significantly with the ISMR skill (from 0.56 to 0.82; Figure \[figS3\]).

Discussion and Conclusions {#sec:summary}
==========================

The tropics are known to be more predictable on the seasonal time scale than their extra-tropical counterpart, with the clear exception of the Indian summer monsoon. It has been believed that a significant part of the ISMR is unpredictable due to the strong influence of the sub-seasonal variability, which is chaotic in nature. @Goswami05 suggested that only about 50% of the interannual variability of the ISMR is predictable and the remaining part is climate noise (i.e. synoptic and MISO). Several past studies using state-of-the-art climate models have estimated a rather low PPL (r$\sim$0.65) of the ISMR. Therefore, it was thought that it is not possible for a dynamical forecast system to cross this limit in terms of actual prediction skill. However, our model with improved physics has achieved an ISMR correlation skill of 0.71, which is above the PPL. We found that the synoptic variability, which contributes about 25-30% of the total sub-seasonal variance, is predictable (Figure \[figone\]). As the synoptic events are also known to be clustered within the active phases of the MISO [@Goswami03], and a maximum predictability of the ISMR at less than 15 days is evident (Figure \[figone\]), it is very likely that a part of the high-frequency MISO (the 10-20 day band, also known as super-synoptic) is also predictable. Therefore, in principle 70-80% of the interannual variability of the ISMR is likely to be predictable. The newly found relationship is illustrated through a schematic diagram (Figure \[figfive\]). The octopus is a Big Brother which represents all the natural predictable modes and controls the synoptic and a part of the MISO variability in its own way. By keeping a tight leash on the statistics (e.g.
variances) of the sub-seasonal fluctuations and their contribution to the seasonal mean, the ENSO effectively enhances the ISMR predictability significantly. As the contribution of the sub-seasonal variance to the ISMR improves, the models show improvements in the ISMR re-forecast skill (Figure \[figthree\]). While observations show a very weak relationship between the ISMR and the variance of the lower MISOs (30-60 days), it is strongly positive in some models (CONT, SNOW). As a result, the reasonable contribution of the synoptic events is nullified by the unrealistic contributions of the MISOs. This is also reflected in the actual skill (Figure \[figtwo\]). Almost all global climate models show very poor performance in simulating the distribution of heavy and moderate to low rainfall events over the ISM region [@Goswami17]. Furthermore, the synoptic systems, which are primarily responsible for heavy rainfall events, are likely to change under the future global warming scenario [@Sandeep18]. Therefore, the fidelity of a model in simulating the relationship between the ISMR and the sub-seasonal variability will define the reliability of the future scenario of the mean as well as the predictability of the ISM. In many previous studies it has been found that the skill of a model is often greater than the PPL, which shows the limitations of the methods for estimating the PPL [e.g. @Kumar14; @Saha16]. Here we propose a new method for calculating the actual limit of predictability of the ISMR, which has a binding relationship with the actual observations. As the model improves (i.e. the ISMR skill increases), the distribution of the actual correlation skill also moves to the higher side (Figure \[figtwo\]) and the PPL scales up to 0.82. Therefore, right now the model is able to predict about 70% of the interannual variability (IAV), which is much higher than earlier estimates (i.e. about 45%). Further improvements in the contribution of the sub-seasonal anomaly to the ISMR anomaly in the model may raise the PPL to around 0.9 ($\sim$80% of the observed variance), a target that seems within reach in the near future. It appears that further developments in model physics and ICs will take the forecast to such a level that at least the sign of the ISMR anomaly (dry or wet year) is predicted without failure and its amplitude with much greater confidence.

This work results from extensive model development activities at IITM under the Monsoon Mission project of the MoES, Govt. of India. We thank the MoES and the Director, IITM, and HPCS for all the support to carry out this work. We also thank NCEP for providing initial conditions and modeling support. BNG is grateful to the Science and Engineering Research Board (SERB), Govt. of India, for a Fellowship. The authors duly acknowledge Mrs. Yashashri Rohan Jadav, Mantri Avenue-1, Panchavati, Pune for preparing the schematic (Figure \[figfive\]). The observational data sets used in the study are the Hadley Centre Sea Ice and Sea Surface Temperature dataset [HadISST, $1^\circ \times 1^\circ$; @Rayner03], gridded rainfall data from the India Meteorological Department [IMD; with $1^\circ \times 1^\circ$ horizontal resolution @Rajeevan06], and the Global Precipitation Climatology Project version 2 [GPCP; $2.5^\circ \times 2.5^\circ$ @Adler03]. Model re-forecast data are archived at IITM and can be accessed from the corresponding author upon request.

Adler, R. F., G. J. Huffman, A. Chang, R. Ferraro, P. Xie, J. Janowiak, B. Rudolf, U. Schneider, S. Curtis, D. Bolvin, A. Gruber, J. Susskind, P. Arkin, and E.
Nelkin (2003), The version 2 global precipitation climatology project (gpcp) monthly precipitation analysis (1979- present), *J. Hydro. Meteorol.*, *4*, 1147–1167. Blanford, H. F. (1884), On the connection of the himalayan snowfall with dry winds and seasons of droughts in india, *Proc. Roy. Soc. London*, *37*, 3–22. Charney, J. G., and J. Shukla (1981), Predictability of monsoons, in *Monsoon Dynamics*, edited by J. Lighthill and R. P. Pearce, pp. 99–108, Cambridge University Press, Cambridge. Dwivedi, S., B. N. Goswami, and F. Kucharski (2015), Unraveling the missing link of enso control over the indian monsoon rainfall, *Geophys. Res. Lett.*, *42*, 8201–8207. Gadgil, S., and K. R. Kumar (2006), The asian monsoon – agriculture and economy, in *The Asian Monsoon*, edited by B. Wang, pp. 651–683, Praxis, Springer, Berlin, Heidelberg. Gadgil, S., M. Rajeevan, and R. Nanjundiah (2005), Monsoon prediction- why yet another failure?, *Curr. Sci.*, *88*, 1389 – 1400. Goswami, B., G. Wu, and T. Yasunari (2006), The annual cycle, intraseasonal oscillations, and roadblock to seasonal predictability of the asian summer monsoon, *J. Clim.*, *19*, 5078–5099. Goswami, B. B., and B. N. Goswami (2017), A road map for improving dry-bias in simulating the south asian monsoon precipitation by climate models, *Clim. Dyn.*, *49*, 2025–2034. Goswami, B. N. (1998), Interannual variations of indian summer monsoon in a gcm: External conditions versus internal feedbacks, *J. Clim.*, *11*, 501 – 522. Goswami, B. N., and R. S. Ajayamohan (2001), Intraseasonal oscillation and interannual variability of the indian summer monsoon, *J. Clim.*, *14*, 1180 – 1198. Goswami, B. N., and P. K. Xavier (2005), Dynamics of internal interannual variability of the indian summer monsoon in a gcm, *J. Geophys. Res.*, *110*, D24,104, doi:10.1029/2005JD006,042. Goswami, B. N., R. S. A. Mohan, P. K. Xavier, and D. Sengupta (2003), Clustering of low pressure systems during the indian summer monsoon by intraseasonal oscillations, *Geophys. Res. Lett.*, *30,8*, doi: 10.1029/2002GL016,734. Hazra, A., H. S. Chaudhari, S. K. Saha, S. Pokhrel, and B. N. Goswami (2017), Progress towards achieving the challenge of indian summer monsoon climate simulation in a coupled ocean‐atmosphere model, *J. Adv. Model. Eart. Syst.*, *9*, 2268–2290. Kang, I.-S., and J. Shukla (2006), Dynamic seasonal prediction and predictability of the monsoon, in *The Asian Monsoon*, edited by B. Wang, pp. 585–612, Praxis, Springer, Berlin, Heidelberg. Kumar, A., P. Peng, and M. Chen (2014), Is there a relationship between potential and actual skill?, *Mon. Weather Rev.*, *142*, 2220–2227. Kumar, K. K., M. Hoerling, and B. Rajagopalan (2005), Advancing dynamical prediction of indian monsoon rainfall, *Geophys. Res. Lett.*, *32*, L08,704, doi:10.1029/2004GL021,979. Liebmann, B., H. H. Hendon, and J. D. Glick (1994), The relationship between tropical cyclones of the western pacific and indian oceans and the madden-julian oscillation, *J. Meteor. Soc. Jap.*, *72*, 401–412. Lorenz, E. N. (1963), Deterministic nonperiodic flow, *J. Atmos. Sci.*, *20*, 130 – 141. Lorenz, E. N. (1969), Three approaches to atmospheric predictability, *Bul. Amer. Meteor. Soc.*, *50*, 345 – 351. Maloney, E. D., and D. L. Hartmann (2000), Modulation of hurricane activity in the gulf of mexico by the madden-julian oscillation, *Science*, *287*, 2002–2004. Palmer, T. N. (1994), Chaos and predictability in forecasting the monsoons, *Proc. Indian nat. Sci. Acad*, *60*, 57–66. Rajeevan, M., J. Bhate, J. 
D. Kale, and B. Lal (2006), High resolution daily gridded rainfall data for indian region: Analysis of break and active monsoon spells, *Curr. Sci.*, *9(3)*, 296 – 306. Rajeevan, M., C. K. Unnikrishnan, and B. Preethi (2012), Evaluation of the ensembles multi-model seasonal forecasts of indian summer monsoon variability, *Clim. Dyn.*, *38*, 2257–2274. Ramu, D. A., C. T. Sabeerali, R. Chattopadhyay, D. N. Rao, G. George, A. R. Dhakate, K. Salunke, A. Srivastava, and S. A. Rao (2016), Indian summer monsoon rainfall simulation and prediction skill in the cfsv2 coupled model: Impact of atmospheric horizontal resolution, *J. Geophys. Res.*, *121*, 2205–2221. Rayner, N. A., D. E. Parker, E. B. Horton, C. K. Folland, L. V. Alexander, D. P. Rowell, E. C. Kent, and A. Kaplan (2003), Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century, *J. Geophys. Res.*, *108*, 2156 – 2202. Rowell, D. P. (1998), Assessing potential seasonal predictability with an ensemble of multidecadal gcm simulation, *J. Clim.*, *11*, 109–120. Rowell, D. P., C. K. Folland, K. Maskell, and M. N. Ward (1995), Variability of summer rainfall over tropical north africa (1906–92): Observations and modeling, *Q. J. R. Meteorol. Soc.*, *121*, 669–704. Saha, S., S. Moorthi, H.-L. Pan, X. Wu, J. Wang, S. Nadiga, P. Tripp, R. Kistler, J. Woollen, D. Behringer, H. Liu, D. Stokes, R. Grumbine, G. Gayno, J. Wang, Y. T. Hou, H. Y. Chuang, H.-M. H. Juang, J. Sela, M. Iredell, R. Treadon, D. Kleist, P. V. Delst, D. Keyser, J. Derber, M. Ek, J. Meng, H. Wei, R. Yang, S. Lord, H. V. D. Dool, A. Kumar, W. Wang, C. Long, M. Chelliah, Y. Xue, B. Huang, J. K. Schemm, W. Ebisuzaki, R. Lin, P. Xie, M. Chen, S. Zhou, W. Higgins, C. Z. Zou, Q. Liu, Y. Chen, Y. Han, L. Cucurull, R. W. Reynolds, G. Rutledge, and M. Goldberg (2010), The ncep climate forecast system reanalysis, *Bul. Amer. Meteor. Soc.*, *91*, 1015 – 1057. Saha, S., S. Moorthi, X. Wu, J. Wang, H.-L. Pan, J. Wang, S. Nadiga, P. Tripp, D. Behringer, Y. T. Hou, H. Y. Chuang, M. Iredell, M. Ek, J. Meng, R. Yang, M. P. Mensez, H. V. D. Dool, Q. Zhang, W. Wang, M. Chen, and E. Becker (2014), The ncep climate forecast system version 2, *J. Clim.*, *27*, 2185 – 2208. Saha, S. K., S. Pokhrel, , and H. S. Chaudhari (2013), Influence of eurasian snow on indian summer monsoon in ncep cfsv2 freerun, *Clim. Dyn.*, *41*, 1801–1815. Saha, S. K., S. Pokhrel, , H. S. Chaudhari, A. Dhakate, S. Shewale, C. T. Sabeerali, K. Salunke, A. Hazra, S. Mahaptra, and A. S. Rao (2014), Improved simulation of indian summer monsoon in latest ncep climate forecast system free run, *Int. J. Climatol.*, *35*, 1628–1641. Saha, S. K., K. Sujith, S. Pokhrel, H. S. Chaudhari, and A. Hazra (2016), Predictability of global monsoon rainfall in ncep cfsv2, *Clim. Dyn.*, *47*, 1693–1715. Saha, S. K., S. Pokhrel, K. Salunke, A. Dhakate, H. S. Chaudhari, H. Rahaman, K. Sujith, A. Hazra, and D. R. Sikka (2016), Potential predictability of indian summer monsoon rainfall in ncep cfsv2, *J. Adv. Model. Eart. Syst.*, p. DOI: 10.1002/2015MS000542. Saha, S. K., K. Sujith, S. Pokhrel, H. S. Chaudhari, and A. Hazra (2017), Effects of multilayer snow scheme on the simulation of snow: Offline noah and coupled with ncepcfsv2, *J. Adv. Model. Eart. Syst.*, *9*, 271–290. Sandeep, S., R. S. Ajayamohan, W. R. Boos, T. P. Sabin, and V. Praveen (2018), Decline and poleward shift in indian summer monsoon synoptic activity in a warming climate, *Proc. Natl. Acad. Sci. 
USA*, *201709031*, DOI: 10.1073/pnas.1709031115. Sikka, D. R. (1980), Some aspects of the large-scale fluctuations of summer monsoon rainfall over india in relation to fluctuations in the planetary and regional scale circulation parameters, *Proc. Ind. Acad. Sci. (Earth Planet Sci.)*, *89*, 179 – 195. Sperber, K. R., H. Annamalai, I.-S. Kang, A. Kitoh, A. Moise, A. Turner, B. Wang, and T. Zhou (2013), The asian summer monsoon: an intercomparison of cmip5 vs. cmip3 simulations of the late 20th century, *Clim. Dyn.*, *41*, 2711 – 2744. Wang, B., Q. Ding, X. Fu, I. Kang, K. Jin, J. Shukla, and F. Doblas‐Reyes (2005), Fundamental challenge in simulation and prediction of summer monsoon rainfall, *Geophys. Res. Lett.*, *32*, L15711, doi:10.1029/2005GL022734. Webster, P. J., V. O. Magaña, T. N. Palmer, J. Shukla, R. A. Tomas, M. Yanai, and T. Yasunari (1998), Monsoons: Processes, predictability, and the prospects for prediction, *J. Geophys. Res.*, *103*, 14,451 – 14,510.

         Niño3   Niño3.4
------ ------- ---------
CONT      0.53      0.59
SNOW      0.52      0.57
MPHY      0.57      0.61
SNMP      0.52      0.58

  : Correlations between observed (HadISST) and model SST over Niño3 ($150^\circ$-$90^\circ$W, $5^\circ$S-$5^\circ$N) and Niño3.4 ($170^\circ$-$120^\circ$W, $5^\circ$S-$5^\circ$N) regions and ensemble mean ISMR predicted by the four models. \[tab:cor\]

![Seasonal (June to September average) ISMR and Niño3 SST anomaly correlated with the sub-seasonal variance of rainfall in various time bands (or periods). The correlation between the variance of AI (CI) averaged filtered rainfall and the ISMR anomaly is shown by the solid deep blue (light blue) line. Correlations between the variance of AI (CI) rainfall and the seasonal anomaly of Niño3 SST are shown by the red (orange) line. The dotted dark blue (light blue) line shows the percentage of filtered variance relative to the total variance of AI (CI) averaged rainfall.[]{data-label="figone"}](corr_OBSanom_vs_OBSanom_harmwise_1981-2010_V02.jpg){height="3.5in"}

![The AI averaged sub-seasonal rainfall variance below a particular periodicity is correlated with SST (2m temperature) at every grid point of the global ocean (land). a, Using the seasonal variance of central India (CI) averaged rainfall with $<$ 5.2 days periodicity. b, Using the seasonal variance of all India (AI) averaged rainfall with $<$ 14.6 days periodicity. Correlations significant at the 95% level are stippled. The similarity of the patterns of correlation with the canonical ENSO SST confirms the modulation of the sub-seasonal fluctuations over the ISM region through the ENSO teleconnection.[]{data-label="figS1"}](Figure_S1.jpg){height="5.0in"}

![Limits of ISMR predictability and actual re-forecast skill. a, Potential skill of the ISMR based on perfect model correlation (gray open circle), the ANOVA method (blue closed circle) and the actual correlation skill between the 16-member ensemble mean and Global Precipitation Climatology Project (GPCP) observations (red closed circle). b, The maximum (solid line) and minimum (dotted line) of the actual correlation skill using all combinations of n ensemble averaged ISMR (i.e. $^{16}C_n$, where n varies from 2 to 16) from CONT (black), SNOW (purple), MPHY (red) and SNMP (yellow).[]{data-label="figtwo"}](actual_perfect_corr_V03.jpg){height="3.0in"}

![Same as Figure 1, but also using two model simulations. The relationship between the filtered variances and the seasonal anomaly of AI rain/Niño3 SST from the models having the minimum (i.e. CONT, with black line) and maximum (i.e.
MPHY, with red line) ISMR skill is presented along with the same from observations (blue line).[]{data-label="figthree"}](F_03_cor_Nino_AI_CFS_vs_CFS_harmwise_1981-2010_V03.jpg){height="3.5in"}

![Same as Figure 1, but using observations and all four models. Blue, black, red, purple, and yellow lines correspond to observations (IMD, HadISST), CONT, MPHY, SNOW and SNMP respectively. The green line is for MPHY with the best 8 ensemble members, resulting in the maximum ISMR re-forecast skill (correlation = 0.82). The dotted lines represent the percentage of total variance in the various bands. Grey dashed lines represent correlations significant at the 99% level using a two-tailed t-test.[]{data-label="figS1"}](Fig_Sxx.jpg){height="3.5in"}

![Grid point correlations between JJAS averaged GPCP rainfall and the ensemble mean re-forecast. Correlation skill in (a) CONT, (b) MPHY and (c) MPHY with maximum ALP using 8 ensemble members (MPHY\_Max). Correlations significant at the 95% level (two-tailed test) are stippled.[]{data-label="figS3"}](grid_corr_Max_E08_v02.jpg){height="2.5in"}

![Schematic diagram illustrating the newly discovered association of the large-scale predictor (represented by a giant Octopus) with the sub-seasonal components (synoptic + MISO) of the Indian summer monsoon. The predictor consists of all natural modes of variability, i.e. ENSO, Indian Ocean Dipole (IOD), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), snow, soil moisture etc., each representing an arm of the predictor. The two long arms of the predictor aiming towards the synoptic and MISO indicate its influence on these sub-seasonal processes. The large-scale Walker circulation is shown by colored (red, blue) arrows.[]{data-label="figfive"}](schematic_octopas.jpg){height="1.8in"}
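The ensemble-subset analysis described in the caption of Figure \[figtwo\] (correlation skill of the mean of every $^{16}C_n$ combination of $n$ members against observations) amounts to a simple piece of bookkeeping. The sketch below is a minimal illustration of that computation using synthetic stand-in arrays, not the IITM re-forecast archive; the array shapes, variable names and numbers are assumptions made purely for illustration.

```python
import itertools
import numpy as np

def subset_skill_range(obs, members, n):
    """Min and max correlation between observed ISMR and the mean of every
    n-member subset of the ensemble (all 16Cn combinations when members has 16 rows).

    obs     : (n_years,) observed seasonal ISMR anomalies (e.g. GPCP).
    members : (n_members, n_years) re-forecast anomalies, one row per member.
    """
    skills = []
    for idx in itertools.combinations(range(members.shape[0]), n):
        ens_mean = members[list(idx)].mean(axis=0)
        skills.append(np.corrcoef(obs, ens_mean)[0, 1])
    return min(skills), max(skills)

# Synthetic data standing in for a 16-member, 30-year re-forecast set.
rng = np.random.default_rng(0)
obs = rng.standard_normal(30)
members = 0.6 * obs + rng.standard_normal((16, 30))
for n in (2, 8, 16):
    lo, hi = subset_skill_range(obs, members, n)
    print(f"n={n:2d}: min skill {lo:+.2f}, max skill {hi:+.2f}")
```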
---
abstract: 'A propositional logic program $P$ may be identified with a $P_fP_f$-coalgebra on the set of atomic propositions in the program. The corresponding $C(P_fP_f)$-coalgebra, where $C(P_fP_f)$ is the cofree comonad on $P_fP_f$, describes derivations by resolution. That correspondence has been developed to model first-order programs in two ways, with lax semantics and saturated semantics, based on locally ordered categories and right Kan extensions respectively. We unify the two approaches, exhibiting them as complementary rather than competing, reflecting the theorem-proving and proof-search aspects of logic programming. While maintaining that unity, we further refine lax semantics to give finitary models of logic programs with existential variables, and to develop a precise semantic relationship between variables in logic programming and worlds in local state.'
address:
- 'Department of Computer Science, Heriot-Watt University, Edinburgh, UK'
- 'Department of Computer Science, University of Bath, BA2 7AY, UK'
author:
- Ekaterina Komendantskaya
- John Power
bibliography:
- 'CMCS.bib'
title: 'Logic programming: laxness and saturation'
---

Logic programming, coalgebra, coinductive derivation tree, Lawvere theories, lax transformations, saturation

Introduction
============

Over recent years, there has been a surge of interest in category theoretic semantics of logic programming. Research has focused on two ideas: lax semantics, proposed by the current authors and collaborators [@KoPS], and saturated semantics, proposed by Bonchi and Zanasi [@BonchiZ15]. Both ideas are based on coalgebra, agreeing on variable-free logic programs. Both ideas use subtle, well-established category theory, associated with locally ordered categories and with right Kan extensions respectively [@K]. And both elegantly clarify and extend established logic programming constructs and traditions, for instance [@GC] and [@BMR]. Until now, the two ideas have been presented as alternatives, competing with each other rather than complementing each other. A central thesis of this paper is that the competition is illusory, the two ideas being two views of a single, elegant body of theory, those views reflecting different but complementary aspects of logic programming, those aspects broadly corresponding with the notions of theorem proving and proof search. Such reconciliation has substantial consequences. In particular, it means that whenever one further refines one approach, as we shall do to the original lax approach in two substantial ways here, one should test whether the proposed refinement also applies to the other approach, and see what consequences it has from the latter perspective. The category theoretic basis for both lax and saturated semantics is as follows. It has long been observed, e.g., in [@BM; @CLM], that logic programs induce coalgebras, allowing coalgebraic modelling of their operational semantics. Using the definition of logic program in Lloyd’s book [@Llo], given a set of atoms $At$, one can identify a variable-free logic program $P$ built over $At$ with a $P_fP_f$-coalgebra structure on $At$, where $P_f$ is the finite powerset functor on $Set$: each atom is the head of finitely many clauses in $P$, and the body of each clause contains finitely many atoms.
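As an informal, executable illustration of that identification, the sketch below encodes a small, hypothetical propositional program as a map sending each atom to the finite set of finite clause bodies with that head, i.e. as a function $At \to P_fP_f(At)$, and unfolds the corresponding and-or derivation tree to a finite depth. It is a toy sketch under those assumptions, not the paper’s formal construction.

```python
import json

# A toy propositional program as a P_f P_f-coalgebra p : At -> P_f(P_f(At)):
# each atom is mapped to the list of bodies of clauses with that head.
program = {
    "A": [{"B", "C"}, {"B", "D"}],   # A <- B,C   and   A <- B,D
    "B": [],                         # no clauses with head B
    "C": [],
    "D": [{"A", "C"}],               # D <- A,C
}

def and_or_tree(atom, depth):
    """Unfold the and-or derivation tree rooted at `atom` to a finite depth,
    mimicking the finite approximants used to build the cofree comonad."""
    if depth == 0:
        return {"atom": atom, "or": "..."}          # truncated branch
    return {
        "atom": atom,
        "or": [[and_or_tree(b, depth - 1) for b in sorted(body)]
               for body in program[atom]],
    }

print(json.dumps(and_or_tree("A", 3), indent=2))
```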
It was shown in [@KMP] that if $C(P_fP_f)$ is the cofree comonad on $P_fP_f$, then, given a logic program $P$ qua $P_fP_f$-coalgebra, the corresponding $C(P_fP_f)$-coalgebra structure characterises the and-or derivation trees generated by $P$, cf. [@GC]. That fact has formed the basis for our work on lax semantics [@KoPS; @KSH14; @JohannKK15; @FK15; @FKSP16] and for Bonchi and Zanasi’s work on saturation semantics [@BZ; @BonchiZ15]. In attempting to extend the analysis to arbitrary logic programs, both groups followed the tradition of [@ALM; @BM; @BMR; @KP96]: given a signature $\Sigma$ of function symbols, let $\mathcal{L}_{\Sigma}$ denote the Lawvere theory generated by $\Sigma$, and, given a logic program $P$ with function symbols in $\Sigma$, consider the functor category $[\mathcal{L}_{\Sigma}^{op},Set]$, extending the set $At$ of atoms in a variable-free logic program to the functor from $\mathcal{L}_{\Sigma}^{op}$ to $Set$ sending a natural number $n$ to the set $At(n)$ of atomic formulae with at most $n$ variables generated by the function symbols in $\Sigma$ and the predicate symbols in $P$. We all sought to model $P$ by a $[{\mathcal{L}_{\Sigma}}^{op},P_fP_f]$-coalgebra $p:At\longrightarrow P_fP_fAt$ that, at $n$, takes an atomic formula $A(x_1,\ldots ,x_n)$ with at most $n$ variables, considers all substitutions of clauses in $P$ into clauses with variables among $x_1,\ldots ,x_n$ whose head agrees with $A(x_1,\ldots ,x_n)$, and gives the set of sets of atomic formulae in antecedents, naturally extending the construction for variable-free logic programs. However, that idea is too simple for two reasons. We all dealt with the second problem in the same way, so we shall discuss it later, but the first problem is illustrated by the following example.

\[ex:listnat\] ListNat (for lists of natural numbers) denotes the logic program\
$1.\ \mathtt{nat(0)} \gets$\
$2.\ \mathtt{nat(s(x))} \gets \mathtt{nat(x)}$\
$3.\ \mathtt{list(nil)} \gets$\
$4.\ \mathtt{list(cons (x, y))} \gets \mathtt{nat(x), list(y)}$\
ListNat has nullary function symbols $\mathtt{0}$ and $\mathtt{nil}$, a unary function symbol $\mathtt{s}$, and a binary function symbol $\mathtt{cons}$. So the signature $\Sigma$ of ListNat contains four elements. There is a map in $\mathcal{L}_{\Sigma}$ of the form $0\rightarrow 1$ that models the nullary function symbol $0$. So, naturality of the map $p:At\longrightarrow P_fP_fAt$ in $[\mathcal{L}_{\Sigma}^{op},Set]$ would yield commutativity of the diagram
$$\begin{array}{ccc}
At(1) & \stackrel{p_1}{\longrightarrow} & P_fP_fAt(1)\\
\big\downarrow & & \big\downarrow\\
At(0) & \stackrel{p_0}{\longrightarrow} & P_fP_fAt(0)
\end{array}$$
But consider $\mathtt{nat(x)}\in At(1)$: there is no clause of the form $\mathtt{nat(x)}\gets \, $ in ListNat, so commutativity of the diagram would imply that there cannot be a clause in ListNat of the form $\mathtt{nat(0)}\gets \, $ either, but in fact there is one. Thus $p$ is not a map in the functor category $[\mathcal{L}_{\Sigma}^{op},Set]$. Proposed resolutions diverged: at CALCO in 2011, we proposed lax transformations [@KoP], then at CALCO 2013, Bonchi and Zanasi proposed saturation semantics [@BZ]. First we shall describe our approach. Our approach was to relax the naturality condition on $p$ to a subset condition, following [@Ben; @BKP; @Kelly], so that, given a map in $\mathcal{L}_{\Sigma}$ of the form $f:n \rightarrow m$, the diagram
$$\begin{array}{ccc}
At(m) & \stackrel{p_m}{\longrightarrow} & P_fP_fAt(m)\\
\big\downarrow & & \big\downarrow\\
At(n) & \stackrel{p_n}{\longrightarrow} & P_fP_fAt(n)
\end{array}$$
need not commute, but rather the composite via $P_fP_fAt(m)$ need only yield a subset of that via $At(n)$.
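To see the failure of naturality concretely in executable form, the following minimal sketch compares the two routes around the square for $\mathtt{nat(x)}$, handling clause heads and the substitution $x \mapsto 0$ as plain strings. It is an informal illustration under those simplifying assumptions, not the categorical construction itself.

```python
# ListNat clause heads and bodies as strings; a clause applies to an atom only
# when its head is literally equal to the atom, as strict naturality would require.
clauses = [
    ("nat(0)", []),
    ("nat(s(x))", ["nat(x)"]),
    ("list(nil)", []),
    ("list(cons(x,y))", ["nat(x)", "list(y)"]),
]

def p(atom):
    """The putative p_n at one atom: bodies of clauses whose head equals the atom."""
    return {frozenset(body) for head, body in clauses if head == atom}

def subst_zero(formula):
    """At(f) for the map 0 -> 1 of L_Sigma picking out the constant 0."""
    return formula.replace("x", "0")

upper = {frozenset(subst_zero(b) for b in body) for body in p("nat(x)")}  # via P_fP_fAt(1)
lower = p(subst_zero("nat(x)"))                                           # via At(0)
print(upper, lower, upper == lower)   # set() {frozenset()} False: the square does not commute
```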
So, for example, $p_1(\mathtt{nat(x)})$ could be the empty set while $p_0(\mathtt{nat(0)})$ could be non-empty in the semantics for ListNat as required. We extended $Set$ to $Poset$ in order to express such laxness, and we adopted established category theoretic research on laxness, notably that of [@Kelly], in order to prove that a cofree comonad exists and, on programs such as ListNat, behaves as we wish. This agrees with, and is indeed an instance of, He Jifeng and Tony Hoare’s use of laxness to model data refinement [@HH; @HH1; @KP1; @P]. Bonchi and Zanasi’s approach was to use saturation semantics [@BZ; @BonchiZ15], following [@BM]. The key category theoretic result that supports it asserts that, regarding $ob({\mathcal{L}_{\Sigma}})$, equally $ob({\mathcal{L}_{\Sigma}})^{op}$, as a discrete category with inclusion functor $I:ob({\mathcal{L}_{\Sigma}})\longrightarrow {\mathcal{L}_{\Sigma}}$, the functor $$[I,Set]:[{\mathcal{L}_{\Sigma}}^{op},Set]\longrightarrow [ob({\mathcal{L}_{\Sigma}})^{op},Set]$$ that sends $H:{\mathcal{L}_{\Sigma}}^{op}\longrightarrow Set$ to the composite $HI:ob({\mathcal{L}_{\Sigma}})^{op} \longrightarrow Set$ has a right adjoint, given by right Kan extension. The data for $p$, although not forming a map in $[{\mathcal{L}_{\Sigma}}^{op},Set]$, may be seen as a map in $[ob({\mathcal{L}_{\Sigma}})^{op},Set]$. So, by the adjointness, the data for $p$ corresponds to a map $\bar{p}:At\longrightarrow R(P_fP_fAtI)$ in $[{\mathcal{L}_{\Sigma}}^{op},Set]$, thus to a coalgebra on $At$ in $[{\mathcal{L}_{\Sigma}}^{op},Set]$, where $R(P_fP_fAtI)$ is the right Kan extension of $P_fP_fAtI$ along the inclusion $I$. The right Kan extension is defined by $$R(P_fP_fAtI)(n) = \prod_{m \in {\mathcal{L}_{\Sigma}}} (P_fP_fAt(m))^{{\mathcal{L}_{\Sigma}}(m,n)}$$ and the function $$\bar{p}(n):At(n) \longrightarrow \prod_{m \in {\mathcal{L}_{\Sigma}}} (P_fP_fAt(m))^{{\mathcal{L}_{\Sigma}}(m,n)}$$ takes an atomic formula $A(x_1,\ldots,x_n)$, and, for every substitution for $x_1,\ldots,x_n$ generated by the signature $\Sigma$, gives the set of sets of atomic formulae in the tails of clauses with head $A(t_1,\ldots,t_n)$, where the $t_i$’s are determined by the substitution. By construction, $\bar{p}$ is natural, but one quantifies over all possible substitutions for $x_1,\ldots,x_n$ in order to obtain that naturality, and one ignores the laxness of $p$. As we shall show in Section \[sec:sat\], the two approaches can be unified. If one replaces $$[I,Set]:[{\mathcal{L}_{\Sigma}}^{op},Set]\longrightarrow [ob({\mathcal{L}_{\Sigma}})^{op},Set]$$ by the inclusion $$[{\mathcal{L}_{\Sigma}}^{op},Poset]\longrightarrow Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$$ ($[{\mathcal{L}_{\Sigma}}^{op},Set]$ being a full subcategory of $[{\mathcal{L}_{\Sigma}}^{op},Poset]$), one obtains exactly Bonchi and Zanasi’s correspondence between $p$ and $\bar{p}$, with exactly the same formula, starting from lax transformations as we proposed. Thus, from a category theoretic perspective, saturation can be seen as complementary to laxness rather than as an alternative to it. This provides a robustness test for future refinements to models of logic programming: a refinement of one view of category theoretic semantics can be tested by its effect on the other. We now turn to such refinements. Recently, we have refined lax semantics in two substantial ways, the first of which was the focus of the workshop paper [@KoP1] that this paper extends, and the second of which we introduce here.
For the first, a central contribution of lax semantics has been the inspiration it provided towards the development of an efficient logic programming algorithm [@KoPS; @KSH14; @JohannKK15; @FK15; @FKSP16]. That development drew our attention to the semantic significance of *existential* variables: such variables do not appear in ListNat, and they are not needed for a considerable body of logic programming, but they do appear in logic programs such as the following, which is a leading example in Sterling and Shapiro’s book [@SS]. \[ex:lp\] GC (for graph connectivity) denotes the logic program\ $1.\ \mathtt{connected(x,x)} \gets $\ $2.\ \mathtt{connected(x,y)} \gets \mathtt{edge(x,z)}, \mathtt{connected(z,y)}$\ There is a variable $z$ in the tail of the second clause of GC that does not appear in its head, whereas no such variable appears in ListNat. Such a variable is called an existential variable, the presence of which challenges the algorithmic significance of lax semantics. In describing the putative coalgebra $p:At\longrightarrow P_fP_fAt$ just before Example \[ex:listnat\], we referred to *all* substitutions of clauses in $P$ into clauses with variables among $x_1,\ldots ,x_n$ whose head agrees with $A(x_1,\ldots ,x_n)$. If there are no existential variables, that amounts to term-matching, which is algorithmically efficient; but if existential variables do appear, the mere presence of a unary function symbol generates an infinity of such substitutions, creating algorithmic difficulty, which, when first introducing lax semantics, we, also Bonchi and Zanasi, avoided modelling by replacing the outer instance of $P_f$ by $P_c$, thus allowing for countably many choices. That is the second of the two problems mentioned just before Example \[ex:listnat\]. We have long sought a more elegant resolution to that, one that restricts the construction of $p$ to finitely many substitutions. We finally found and presented such a resolution in the workshop paper [@KoP1] that this paper extends. We both refine it a little more, as explained later, and give more detail here. The conceptual key to the resolution was to isolate and give finitary lax semantics to the notion of *coinductive tree* [@KoP2; @KoPS]. Coinductive trees arise from term-matching resolution [@KoP2; @KoPS], which is a restriction of SLD-resolution. It captures the theorem proving aspect of logic programming, which is distinct from, but complementary with, its problem solving aspect, which is captured by SLD-resolution  [@FK15; @JohannKK15]. We called the derivation trees arising from term-matching coinductive trees in order to mark their connection with coalgebraic logic programming, which we also developed. Syntactically, one can observe the difference between lax semantics and saturation semantics in that lax semantics models coinductive trees, which are finitely branching, whereas saturation involves infinitely many possible substitutions, leading Bonchi and Zanasi to model different kinds of trees, their focus being on proof search rather than on theorem proving. 
Chronologically, we introduced lax semantics in 2011 as above [@KoP]; lax semantics inspired us to investigate term-matching and to introduce the notion of coinductive tree [@KoP2]; because of the possibility of existential variables, our lax semantics for coinductive trees, despite inspiring the notion, was potentially infinitary [@KoPS]; so we have now refined lax semantics to ensure finitariness of the semantics for coinductive trees, even in the presence of existential variables [@KoP1], introducing it in the workshop paper that this paper extends. We further refine lax semantics here to start to build a precise relationship with the semantics of local variables [@PP2], which we plan to develop further in future. We regard it as positive that lax semantics brings to the fore, in semantic terms, the significance of existential variables, and allows a precise semantic relationship between the role of variables in logic programming and local variables as they arise in programming more generally. The paper is organised as follows. In Section \[sec:backr\], we set logic programming terminology, explain the relationship between term-rewriting and SLD-resolution, and introduce the notion of coinductive tree. In Section \[sec:parallel\], we give semantics for variable-free logic programs. This semantics could equally be seen as lax semantics or saturated semantics, as they agree in the absence of variables. In Section \[sec:recall\], we model coinductive trees for logic programs without existential variables and explain the difficulty in modelling coinductive trees for arbitrary logic programs. In Section \[sec:sat\], we recall saturation semantics and make precise the relationship between it and lax semantics. We devote Section \[sec:derivation\] of the paper to refining lax semantics, while maintaining the relationship with saturation semantics, to model the coinductive trees generated by logic programs with existential variables, and in Section \[sec:local\], we start to build a precise relationship with the semantics of local state [@PP2]. Theorem proving in logic programming {#sec:backr} ==================================== A *signature* $\Sigma$ consists of a set $\mathcal{F}$ of function symbols $f,g, \ldots$ each equipped with an arity. Nullary (0-ary) function symbols are constants. For any set $\mathit{Var}$ of variables, the set $Ter(\Sigma)$ of terms over $\Sigma$ is defined inductively as usual: - $x \in Ter(\Sigma)$ for every $x \in \mathit{Var}$. - If $f$ is an n-ary function symbol ($n\geq 0$) and $t_1,\ldots ,t_n \in Ter(\Sigma) $, then $f(t_1,\ldots ,t_n) \in Ter(\Sigma)$. A *substitution* over $\Sigma$ is a (total) function $\sigma: \mathit{Var} \to \mathbf{Term}(\Sigma)$. Substitutions are extended from variables to terms as usual: if $t\in \mathbf{Term}(\Sigma)$ and $\sigma$ is a substitution, then the [*application*]{} $\sigma(t)$ is a result of applying $\sigma$ to all variables in $t$. A substitution $\sigma$ is a *unifier* for $t, u$ if $\sigma(t) = \sigma(u)$, and is a *matcher* for $t$ against $u$ if $\sigma(t) = u$. A substitution $\sigma$ is a [*most general unifier*]{} ([*mgu*]{}) for $t$ and $u$ if it is a unifier for $t$ and $u$ and is more general than any other such unifier. A [*most general matcher*]{} ([*mgm*]{}) $\sigma$ for $t$ against $u$ is defined analogously. In line with logic programming (LP) tradition [@Llo], we consider a set $\mathcal{P}$ of predicate symbols each equipped with an arity. 
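As a small executable illustration of the definitions above, terms may be encoded as nested tuples and variables as plain strings, with substitution application and most general matchers written as follows. This is an informal sketch under those representation choices, not code from the paper.

```python
# Terms are nested tuples ("f", t1, ..., tn); a constant is a 1-tuple like ("0",);
# variables are plain strings such as "x".
def apply(sigma, t):
    """Apply a substitution (dict from variables to terms) to a term."""
    if isinstance(t, str):                       # a variable
        return sigma.get(t, t)
    return (t[0],) + tuple(apply(sigma, s) for s in t[1:])

def match(t, u, sigma=None):
    """Most general matcher of t against u: sigma with apply(sigma, t) == u, or None."""
    sigma = dict(sigma or {})
    if isinstance(t, str):                       # variable: bind, or check consistency
        if t in sigma and sigma[t] != u:
            return None
        sigma[t] = u
        return sigma
    if isinstance(u, str) or t[0] != u[0] or len(t) != len(u):
        return None
    for ti, ui in zip(t[1:], u[1:]):
        sigma = match(ti, ui, sigma)
        if sigma is None:
            return None
    return sigma

# cons(x, y) matches against cons(0, nil) with x -> 0, y -> nil
print(match(("cons", "x", "y"), ("cons", ("0",), ("nil",))))
```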
It is possible to define logic programs over terms only, in line with the term-rewriting (TRS) tradition [@Terese], as in [@JohannKK15], but we will follow the usual LP tradition here. That gives us the following inductive definitions of the sets of atomic formulae, Horn clauses and logic programs (we also include the definition of terms for convenience). \[df:syntax\] Terms $Ter \ ::= \ Var \ | \ \mathcal{F}(Ter,..., Ter)$ Atomic formulae (or atoms) $At \ ::= \ \mathcal{P}(Ter,...,Ter)$ (Horn) clauses $HC \ ::= \ At \gets At,..., At$ Logic programs $Prog \ ::= HC, ... , HC$ In what follows, we will use letters $A,B,C,D$, possibly with subscripts, to refer to elements of $At$. Given a logic program $P$, we may ask whether a given atom is logically entailed by $P$. E.g., given the program ListNat we may ask whether $\mathtt{list(cons(0,nil))}$ is entailed by ListNat. The following rule, which is a restricted form of SLD-resolution, provides a semi-decision procedure to derive the entailment. \[def:resolution\] [$$\begin{array}{c} \infer[] {P \vdash [\ ] } { } \ \ \ \ \ \ \ \ \infer[\text{if}~( A \gets A_1, \ldots, A_n) \in P] {P \vdash \sigma A } { P \vdash \sigma A_1 \quad \cdots \quad P \vdash \sigma A_n } \end{array}$$]{} In contrast, the SLD-resolution rule could be presented in the following form: $$B_1, \ldots , B_j, \ldots , B_n \leadsto_P \sigma B_1, \ldots, \sigma A_1, \ldots, \sigma A_n, \ldots , \sigma B_n$$ if $(A \gets A_1, \ldots, A_n) \in P$, and $\sigma$ is the mgu of $A$ and $B_{j}$. The derivation for $A$ succeeds when $A \leadsto_P [\ ]$; we use $\leadsto_P^*$ to denote several steps of SLD-resolution. At first sight, the difference between TM-resolution and SLD-resolution may seem only to be notational. Indeed, both $ListNat \vdash \mathtt{list(cons(0,nil))}$ and\ $ \mathtt{list(cons(0,nil))} \leadsto^*_{ListNat} [\ ]$ by the above rules (see also Figure \[pic:tree\]). However, $ListNat \nvdash \mathtt{list(cons(x,y))}$ whereas $ \mathtt{list(cons(x,y))} \leadsto^*_{ListNat} [\ ]$. And, even more mysteriously, $GC \nvdash \mathtt{connected(x,y)}$ while $\mathtt{connected(x,y)} \leadsto_{GC} [\ ]$. In fact, TM-resolution reflects the *theorem proving* aspect of LP: the rules of Definition \[def:resolution\] can be used to semi-decide whether a given term $t$ is entailed by $P$. In contrast, SLD-resolution reflects the *problem solving* aspect of LP: using the SLD-resolution rule, one asks whether, for a given $t$, a substitution $\sigma$ can be found such that $P \vdash \sigma(t)$. There is a subtle but important difference between these two aspects of proof search. For example, when considering the successful derivation $ \mathtt{list(cons(x,y))}$ $ \leadsto^*_{ListNat} [\ ]$, we assume that $\mathtt{list(cons(x,y))}$ holds only relative to a computed substitution, e.g. $\mathtt{x \mapsto 0, \ y \mapsto nil}$. Of course this distinction is natural from the point of view of theorem proving: $\mathtt{list(cons(x,y))}$ is not a “theorem" in this generality, but its special case, $\mathtt{list(cons(0,nil))}$, is. Thus, $ListNat \vdash \mathtt{list(cons(0,nil))}$ but $ListNat \nvdash \mathtt{list(cons(x,y))}$ (see also Figure \[pic:tree\]). Similarly, $\mathtt{connected(x,y)} \leadsto_{GC} [\ ]$ should be read as: $\mathtt{connected(x,y)}$ holds relative to the computed substitution $\mathtt{y\mapsto x}$. According to the soundness and completeness theorems for SLD-resolution [@Llo], the derivation $\leadsto$ has *existential* meaning, i.e. 
when $\mathtt{list(cons(x,y))} \leadsto^*_{ListNat} [\ ]$, the successful goal $\mathtt{list(cons(x,y))}$ is not meant to be read as universally quantified over $\mathtt{x}$ and $\mathtt{y}$. In contrast, TM-resolution proves a universal statement. So $GC \vdash \mathtt{connected(x,x)}$ reads as: $\mathtt{connected(x,x)}$ is entailed by GC for any $\mathtt{x}$. Much of our recent work has been devoted to formal understanding of the relation between the theorem proving and problem solving aspects of LP [@JohannKK15; @FK15]. The type-theoretic semantics of TM-resolution, given by “Horn clauses as types, $\lambda$-terms as proofs" is given in [@FK15; @FKSP16]. Definition \[def:resolution\] gives rise to derivation trees. E.g. the derivation (or, equivalently, the proof) for $ListNat \vdash \mathtt{list(cons(0,nil))}$ can be represented by the following derivation tree: (root) [$\mathtt{list(cons(0,nil))}$]{} child [ node [$\mathtt{nat(0)}$]{} child [ node [$[\ ]$]{} ]{}]{} child [ node [$\mathtt{list(nil)}$]{} child [ node [$[\ ]$]{}]{}]{}; In general, given a term $t$ and a program $P$, more than one derivation for $P \vdash t$ is possible. For example, if we add a fifth clause to the program $ListNat$:\ $5. \ \mathtt{list(cons(0,x)) \gets list(x)}$\ then yet another, alternative, proof is possible for the extended program: $ListNat^+ \vdash \mathtt{list(cons(0,nil))}$ via Clause $5$: (root) [$\mathtt{list(cons(0,nil))}$]{} child [ node [$\mathtt{list(nil)}$]{} child [ node [$[\ ]$]{}]{}]{}; To reflect the choice of derivation strategies at every stage of the derivation, we introduce a new kind of node called an *or-node*. In our example, this would give us the tree shown in Figure \[pic:tree\]: note the $\bullet$-nodes. (root) [$\mathtt{list(cons(0,nil))}$]{} child [\[fill\] circle (2pt) child [ node [$\mathtt{nat(0)}$]{} child [\[fill\] circle (2pt) child [ node [$[\ ]$]{} ]{}]{}]{} child [ node [$\mathtt{list(nil)}$]{} child [\[fill\] circle (2pt) child [ node [$[\ ]$]{}]{}]{}]{}]{} child [\[fill\] circle (2pt) child [ node [$\mathtt{list(nil)}$]{} child [\[fill\] circle (2pt) child [ node [$[\ ]$]{}]{}]{}]{}]{};      (root) [$\mathtt{list(cons(x,y))}$]{} child [\[fill\] circle (2pt) child [ node [$\mathtt{nat(x)}$]{}]{} child [ node [$\mathtt{list(y)}$]{}]{}]{}; This intuition is made precise in the following definition of a *coinductive tree*, which first appeared in [@KoP; @KoPS] and was refined in [@JohannKK15] under the name of a rewriting tree. Note the use of mgms (rather than mgus) in the last item. \[def:cointree\] Let P be a logic program and $A$ be an atomic formula. The *coinductive tree* for $A$ is the possibly infinite tree T satisfying the following properties. - $A$ is the root of $T$ - Each node in $T$ is either an and-node or an or-node - Each or-node is given by $\bullet$ - Each and-node is an atom - For every and-node $A'$ occurring in $T$, if there is a clause $C_i$ in P of the form $B_i \gets B^i_1,\ldots ,B^i_{n_i}$ for some $n_i$, such that $A' = \theta B_i$ for the mgm $\theta$, then $A'$ has an or-node, and that or-node has children given by and-nodes $\theta(B^i_j), \ldots ,\theta(B^i_{k})$, where $B_j, \ldots , B_k \subseteq B_1, \ldots , B_{n_i}$ and $B_j, \ldots , B_k$ is the maximal such set for which $\theta(B^i_j), \ldots ,\theta(B^i_{k})$ are distinct. Coinductive trees provide a convenient model for proofs by TM-resolution. Let us make one final observation on TM-resolution. 
Generally, given a program $P$ and an atom $t$, one can prove that $ t \leadsto^*_P [\ ]$ with computed substitution $\sigma$ if and only if $P \vdash \sigma t$. This simple fact may leave the impression that proofs (and correspondingly coinductive trees) for TM-resolution are in some sense fragments of reductions by SLD-resolution. Compare, for example, the right-hand tree of Figure \[pic:tree\] before substitution with the larger left-hand tree obtained after the substitution. In this case, we could emulate the problem solving aspect of SLD-resolution by using coinductive trees and allowing the application of substitutions within coinductive trees, as was proposed in [@KoP2; @JohannKK15; @FK15]. That works perfectly for programs such as ListNat, but not for existential programs: although there is a one step SLD-derivation for $ \mathtt{connected(x,y)} \leadsto_{GC} [\ ]$ (with $\mathtt{y \mapsto x}$), the TM-resolution proof for $ \mathtt{connected(x,y)}$ diverges and gives rise to the following infinite coinductive tree: child [\[fill\] circle (2pt) child [ node [$\mathtt{edge(x,z)}$]{}]{} child [ node [$\mathtt{connected(z,y)}$]{} child [\[fill\] circle (2pt) child [ node[$\mathtt{edge(x,w)}$]{}]{} child [ node[$\mathtt{connected(w,y)}$]{} child [node[$\vdots$]{}]{}]{}]{} ]{}]{}; Not only is the proof for $GC \vdash \mathtt{connected(x,y)}$ not a fragment of the derivation $\mathtt{connected(x,y)} \leadsto_{GC} [\ ]$, but it also requires more (infinitely many) variables. Thus, the operational semantics of TM-resolution and SLD-resolution can be very different for existential programs, in regard both to termination and to the number of variables involved. This issue is largely orthogonal to that of non-termination. Consider the non-terminating (but not existential) program Bad: $\mathtt{bad(x)} \gets \mathtt{bad(x)}$\ For Bad, the operational behaviours of TM-resolution and SLD-resolution are similar: in both cases, derivations do not terminate, and both require only finitely many variables. Moreover, such programs can be analysed using similar coinductive methods in TM- and SLD-resolution [@FKSP16; @SimonBMG07]. The problems caused by existential variables are known in the literature on theorem proving and term-rewriting [@Terese]. In TRS [@Terese], existential variables are not allowed to appear in rewriting rules, and in type inference based on term rewriting or TM-resolution, the restriction to non-existential programs is common [@Jones97]. So theorem-proving, in contrast to problem-solving, is modelled by term-matching; term-matching gives rise to coinductive trees; and as explained in the introduction and, in more detail, later, coinductive trees give rise to laxness. So in this paper, we use laxness to model coinductive trees, and thereby theorem-proving in LP, and we relate our semantics with Bonchi and Zanasi’s saturated semantics, which we believe primarily models the problem-solving aspect of logic programming. Categorical semantics for existential programs, which are known to be challenging for theorem proving, is a central contribution of Section \[sec:derivation\] and of this paper. Semantics for variable-free logic programs {#sec:parallel} ========================================== In this section, we recall and develop the work of [@KMP], in regard to variable-free logic programs, i.e., we take $Var = \emptyset$ in Definition \[df:syntax\]. 
Variable-free logic programs are operationally equivalent to propositional logic programs, as substitutions play no role in derivations. In this (propositional) setting, coinductive trees coincide with the and-or derivation trees known in the LP literature [@GC], and this semantics appears as the ground case of both lax semantics [@KoPS] and saturated semantics [@BonchiZ15]. \[const:coal\] For any set ${\mathrm{\textrm{At}}}$, there is a bijection between the set of variable-free logic programs over the set of atoms ${\mathrm{\textrm{At}}}$ and the set of $P_fP_f$-coalgebra structures on ${\mathrm{\textrm{At}}}$, where $P_f$ is the finite powerset functor on $Set$. \[constr:Gcoalg\] Let $C(P_fP_f)$ denote the cofree comonad on $P_fP_f$. Then, given a logic program $P$ over ${\mathrm{\textrm{At}}}$, equivalently $p: {\mathrm{\textrm{At}}}\longrightarrow P_f P_f({\mathrm{\textrm{At}}})$, the corresponding $C(P_fP_f)$-coalgebra $\overline{p}: {\mathrm{\textrm{At}}}\longrightarrow C(P_fP_f)({\mathrm{\textrm{At}}})$ sends an atom $A$ to the coinductive tree for $A$. Applying the work of [@W] to this setting, the cofree comonad is in general determined as follows: $C(P_fP_f)({\mathrm{\textrm{At}}})$ is the limit of the diagram $$\ldots \longrightarrow {\mathrm{\textrm{At}}}\times P_fP_f({\mathrm{\textrm{At}}}\times P_fP_f({\mathrm{\textrm{At}}})) \longrightarrow {\mathrm{\textrm{At}}}\times P_fP_f({\mathrm{\textrm{At}}}) \longrightarrow {\mathrm{\textrm{At}}}$$ with maps determined by the projection $\pi_0:At\times P_fP_f(At)\longrightarrow At$, with applications of the functor $At \times P_fP_f(-)$ to it. Putting ${\mathrm{\textrm{At}}}_0 = {\mathrm{\textrm{At}}}$ and ${\mathrm{\textrm{At}}}_{n+1} = {\mathrm{\textrm{At}}}\times P_fP_f{\mathrm{\textrm{At}}}_n$, and defining the cone $$\begin{aligned} p_0 & = & id: {\mathrm{\textrm{At}}}\longrightarrow {\mathrm{\textrm{At}}}( = {\mathrm{\textrm{At}}}_0)\\ p_{n+1} & = & \langle id, P_fP_f(p_n) \circ p \rangle : {\mathrm{\textrm{At}}}\longrightarrow {\mathrm{\textrm{At}}}\times P_fP_f {\mathrm{\textrm{At}}}_n ( = {\mathrm{\textrm{At}}}_{n+1}) \end{aligned}$$ the limiting property of the diagram determines the coalgebra $\overline{p}: {\mathrm{\textrm{At}}}\longrightarrow C(P_fP_f)({\mathrm{\textrm{At}}})$. The image $\overline{p}(A)$ of an atom $A$ is given by an element of the limit, equivalently a map from $1$ into the limit, equivalently a cone of the diagram over $1$. To give the latter is equivalent to giving an element $A_0$ of $At$, specifically $p_0(A) = A$, together with an element $A_1$ of $At\times P_fP_f(At)$, specifically $p_1(A) = (A,p_0(A)) = (A,p(A))$, together with an element $A_2$ of $At\times P_fP_f(At\times P_fP_f(At))$, etcetera. The definition of the coinductive tree for $A$ is inherently coinductive, matching the definition of the limit, and with the first step agreeing with the definition of $p$. Thus it follows by coinduction that $\overline{p}(A)$ can be identified with the coinductive tree for $A$. \[ex:free\] Let $At$ consist of atoms $\mathtt{A,B,C}$ and $\mathtt{D}$. Let $P$ denote the logic program $$\begin{aligned} \mathtt{A} & \gets & \mathtt{B,C} \\ \mathtt{A} & \gets & \mathtt{B,D} \\ \mathtt{D} & \gets & \mathtt{A,C}\\ \end{aligned}$$ So $p(\mathtt{A}) = \{ \{ \mathtt{B,C}\} , \{ \mathtt{B,D} \} \}$, $p(\mathtt{B}) = p(\mathtt{C}) = \emptyset$, and $p(\mathtt{D}) = \{ \{ \mathtt{A,C}\} \}$. Then $p_0(\mathtt{A}) = \mathtt{A}$, which is the root of the coinductive tree for $\mathtt{A}$. 
Then $p_1(\mathtt{A}) = (\mathtt{A},p(\mathtt{A})) = (\mathtt{A},\{ \{ \mathtt{B,C}\} , \{ \mathtt{B,D} \} \})$, which consists of the same information as in the first three levels of the coinductive tree for $\mathtt{A}$, i.e., the root $\mathtt{A}$, two or-nodes, and below each of the two or-nodes, nodes given by each atom in each antecedent of each clause with head $\mathtt{A}$ in the logic program $P$: nodes marked $\mathtt{B}$ and $\mathtt{C}$ lie below the first or-node, and nodes marked $\mathtt{B}$ and $\mathtt{D}$ lie below the second or-node, exactly as $p_1(\mathtt{A})$ describes. Continuing, note that $p_1(\mathtt{D}) = (\mathtt{D},p(\mathtt{D})) = (\mathtt{D},\{ \{\mathtt{A,C}\} \})$. So $$\begin{array}{ccl} p_2(\mathtt{A}) & = & (\mathtt{A},P_fP_f(p_1)(p(\mathtt{A})))\\ & = & (\mathtt{A},P_fP_f(p_1)( \{ \{ \mathtt{B,C}\} , \{ \mathtt{B,D} \} \}))\\ & = & (\mathtt{A}, \{ \{( \mathtt{B},\emptyset),(\mathtt{C},\emptyset)\} , \{( \mathtt{B},\emptyset),(\mathtt{D},\{ \{\mathtt{A,C}\} \}) \} \}) \end{array}$$ which is the same information as that in the first five levels of the coinductive tree for $\mathtt{A}$: $p_1(\mathtt{A})$ provides the first three levels of $p_2(\mathtt{A})$ because $p_2(\mathtt{A})$ must map to $p_1(\mathtt{A})$ in the cone; in the coinductive tree, there are two and-nodes at level 3, labelled by $\mathtt{A}$ and $\mathtt{C}$. As there are no clauses with head $\mathtt{B}$ or $\mathtt{C}$, no or-nodes lie below the first three of the and-nodes at level 3. However, there is one or-node lying below $\mathtt{D}$, it branches into and-nodes labelled by $\mathtt{A}$ and $\mathtt{C}$, which is exactly as $p_2(\mathtt{A})$ tells us. For picture of this tree, see Figure \[fig:gtree\]. child [ \[fill\] circle (2pt) child [ node [$\mathtt{B}$]{}]{} child [ node [$\mathtt{C}$]{}]{}]{} child [\[fill\] circle (2pt) child [node [$\mathtt{B}$]{}]{} child [node [$\mathtt{D}$]{} child [\[fill\] circle (2pt) child [ node [$A$]{} child [ node [$\ldots$]{} ]{}]{} child [ node [$C$]{} ]{}]{}]{}]{}; Lax semantics for logic programs {#sec:recall} ================================ We now lift the restriction on $Var = \emptyset$ in Definition \[df:syntax\] and consider first-order terms and atoms in full generality. There are several equivalent ways in which to describe the Lawvere theory generated by a signature. So, for precision, in this paper, we define the *Lawvere theory* $\mathcal{L}_{\Sigma}$ *generated by* a signature $\Sigma$ as follows: $\texttt{ob}(\mathcal{L}_{\Sigma})$ is the set of natural numbers. For each natural number $n$, let $x_1,\ldots ,x_n$ be a specified list of distinct variables. Define $\mathcal{L}_{\Sigma}(n,m)$ to be the set of $m$-tuples $(t_1,\ldots ,t_m)$ of terms generated by the function symbols in $\Sigma$ and variables $x_1,\ldots ,x_n$. Define composition in $\mathcal{L}_{\Sigma}$ by substitution. One can readily check that these constructions satisfy the axioms for a category, with $\mathcal{L}_{\Sigma}$ having strictly associative finite products given by the sum of natural numbers. The terminal object of $\mathcal{L}_{\Sigma}$ is the natural number $0$. There is a canonical identity-on-objects functor from $Nat^{op}$ to ${\mathcal{L}_{\Sigma}}$, just as there is for any Lawvere theory, and it strictly preserves finite products. \[ex:arrows\] Consider ListNat. 
The constants $\mathtt{0}$ and $\mathtt{nil}$ are maps from $0$ to $1$ in $\mathcal{L}_{\Sigma}$, $\mathtt{s}$ is modelled by a map from $1$ to $1$, and $\mathtt{cons}$ is modelled by a map from $2$ to $1$. The term $\mathtt{s(0)}$ is the map from $0$ to $1$ given by the composite of the maps modelling $\mathtt{s}$ and $\mathtt{0}$. Given an arbitrary logic program $P$ with signature $\Sigma$, we can extend the set $At$ of atoms for a variable-free logic program to the functor $At:{\mathcal{L}_{\Sigma}}^{op} \rightarrow Set$ that sends a natural number $n$ to the set of all atomic formulae, with variables among $x_1,\ldots ,x_n$, generated by the function symbols in $\Sigma$ and by the predicate symbols in $P$. A map $f:n \rightarrow m$ in ${\mathcal{L}_{\Sigma}}$ is sent to the function $At(f):At(m) \rightarrow At(n)$ that sends an atomic formula $A(x_1, \ldots,x_m)$ to $A(f_1(x_1, \ldots ,x_n)/x_1, \ldots ,f_m(x_1, \ldots ,x_n)/x_m)$, i.e., $At(f)$ is defined by substitution. As explained in the Introduction and in [@KMP], we cannot model a logic program by a natural transformation of the form $p:At\longrightarrow P_fP_fAt$ as naturality breaks down, e.g., in ListNat. So, in [@KoP; @KoPS], we relaxed naturality to lax naturality. In order to define it, we extended $At:{\mathcal{L}_{\Sigma}}^{op}\rightarrow Set$ to have codomain $Poset$ by composing $At$ with the inclusion of $Set$ into $Poset$. Mildly overloading notation, we denote the composite by $At:{\mathcal{L}_{\Sigma}}^{op}\rightarrow Poset$. Given functors $H,K:{\mathcal{L}_{\Sigma}}^{op} \longrightarrow Poset$, a [*lax transformation*]{} from $H$ to $K$ is the assignment, to each object $n$ of ${\mathcal{L}_{\Sigma}}$, of an order-preserving function $\alpha_n: Hn \longrightarrow Kn$ such that for each map $f:n \longrightarrow m$ in ${\mathcal{L}_{\Sigma}}$, one has $(Kf)(\alpha_m) \leq (\alpha_{n})(Hf)$, pictured as follows:
$$\begin{array}{ccc}
Hm & \stackrel{\alpha_m}{\longrightarrow} & Km\\
{\scriptstyle Hf}\big\downarrow & \leq & \big\downarrow{\scriptstyle Kf}\\
Hn & \stackrel{\alpha_n}{\longrightarrow} & Kn
\end{array}$$
Functors and lax transformations, with pointwise composition, form a locally ordered category denoted by $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$. Such categories and generalisations have been studied extensively, e.g., in [@Ben; @BKP; @Kelly; @KP1]. \[def:poset\] Define $P_f:Poset\longrightarrow Poset$ by letting $P_f(P)$ be the partial order given by the set of finite subsets of $P$, with $A\leq B$ if for all $a \in A$, there exists $b \in B$ for which $a\leq b$ in $P$, with behaviour on maps given by image. Define $P_c$ similarly but with countability replacing finiteness. We are not interested in arbitrary posets in modelling logic programming, only those that arise, albeit inductively, by taking subsets of a set qua discrete poset. So we gloss over the fact that, for an arbitrary poset $P$, Definition \[def:poset\] may yield factoring, with the underlying set of $P_f(P)$ being a quotient of the set of subsets of $P$. It does not affect the line of development here. \[ex:listnat2\] Modelling Example \[ex:listnat\], ListNat generates a lax transformation of the form $p:At\longrightarrow P_fP_fAt$ as follows: $At(n)$ is the set of atomic formulae in $ListNat$ with at most $n$ variables. For example, $At(0)$ consists of $\mathtt{nat(0)}$, $\mathtt{nat(nil)}$, $\mathtt{list(0)}$, $\mathtt{list(nil)}$, $\mathtt{nat(s(0))}$, $\mathtt{nat(s(nil))}$, $\mathtt{list(s(0))}$, $\mathtt{list(s(nil))}$, $\mathtt{nat(cons(0, 0))}$, $\mathtt{nat(cons( 0, nil))}$,\ $\mathtt{nat(cons (nil, 0))}$, $\mathtt{nat(cons( nil, nil))}$, etcetera.
Similarly, $At(1)$ includes all atomic formulae containing at most one (specified) variable $x$, thus all the elements of $At(0)$ together with $\mathtt{nat(x)}$, $\mathtt{list(x)}$, $\mathtt{nat(s(x))}$, $\mathtt{list(s(x))}$, $\mathtt{nat(cons( 0, x))}$, $\mathtt{nat(cons (x, 0))}$, $\mathtt{nat(cons (x, x))}$, etcetera. The function $p_n:At(n)\longrightarrow P_fP_fAt(n)$ sends each element of $At(n)$, i.e., each atom $A(x_1,\ldots ,x_n)$ with variables among $x_1,\ldots ,x_n$, to the set of sets of atoms in the antecedent of each unifying substituted instance of a clause in $P$ with head for which a unifying substitution agrees with $A(x_1,\ldots ,x_n)$. Taking $n=0$, $\mathtt{nat(0)}\in At(0)$ is the head of one clause, and there is no other clause for which a unifying substitution will make its head agree with $\mathtt{nat(0)}$. The clause with head $\mathtt{nat(0)}$ has the empty set of atoms as its tail, so $p_0(\mathtt{nat(0)}) = \{ \emptyset \}$. Taking $n=1$, $\mathtt{list(cons( x, 0))}\in At(1)$ is the head of one clause given by a unifying substititution applied to the final clause of ListNat, and accordingly $p_1(\mathtt{list(cons (x, 0))}) = \{ \{ \mathtt{nat(x)},\mathtt{list(0)} \} \}$. The family of functions $p_n$ satisfy the inequality required to form a lax transformation precisely because of the allowability of substitution instances of clauses, as in turn is required to model logic programming. The family does not satisfy the strict requirement of naturality as explained in the introduction. \[ex:lp2\] Attempting to model Example \[ex:lp\], that of graph connectedness, GC, by mimicking the modelling of ListNat in Example \[ex:listnat2\], i.e., defining the function by sending each element of $At(n)$, i.e., each atom $A(x_1,\ldots ,x_n)$ with variables among $x_1,\ldots ,x_n$, to the set of sets of atoms in the antecedent of each unifying substituted instance of a clause in $P$ with head for which a unifying substitution agrees with $A(x_1,\ldots ,x_n)$, fails. Consider the clause $$\mathtt{connected(x,y)} \gets \mathtt{edge(x,z)}, \mathtt{connected(z,y)}$$ Modulo possible renaming of variables, the head of the clause, i.e., the atom $\mathtt{connected(x,y)}$, lies in $At(2)$ as it has two variables. There is trivially only one substituted instance of a clause in GC with head for which a unifying substitution agrees with $\mathtt{connected(x,y)}$, and the singleton set consisting of the set of atoms in its antecedent is $\{ \{ \mathtt{edge(x,z)},\mathtt{connected(z,y)} \} \}$, which does not lie in $P_fP_fAt(2)$ as it has three variables appear in it rather than two. See Section \[sec:backr\] for a picture of the coinductive tree for $\mathtt{connected(x,y)}$. We dealt with that inelegantly in [@KoP]: in order to force $p_2(\mathtt{connected(x,y)})$ to lie in $P_fP_fAt(2)$ and model GC in any reasonable sense, we allowed substitutions for $z$ in $\{ \{ \mathtt{edge(x,z)},\mathtt{connected(z,y)} \} \}$ by any term on $x,y$ on the basis that there is no unifying such, so we had better allow all possibilities. So, rather than modelling the clause directly, recalling that $At(2)\subseteq At(3)\subseteq At(4)$, etcetera, modulo renaming of variables, we put etcetera: for $p_2$, as only two variables $x$ and $y$ appear in any element of $P_fP_fAt(2)$, we allowed substitution by either $x$ or $y$ for $z$; for $p_3$, a third variable may appear in an element of $P_fP_fAt(3)$, allowing an additional possible subsitution; for $p_4$, a fourth variable may appear, etcetera. 
Countability arises if a unary symbol $s$ is added to GC, as in that case, for $p_2$, not only did we allow $x$ and $y$ to be substituted for $z$, but we also allowed $s^n(x)$ and $s^n(y)$ for any $n>0$, and to do that, we replaced $P_fP_f$ by $P_cP_f$, allowing for the countably many possible substitutions. Those were inelegant decisions, but they allowed us to give some kind of model of all logic programs. We shall revisit this in Section \[sec:derivation\]. We shall refine lax semantics to account for existential variables later, so for the present, we shall ignore Example \[ex:lp2\] and only analyse semantics for logic programs without existential variables such as in Example \[ex:listnat2\]. Specifically, we shall analyse the relationship between a lax transformation $p:At\longrightarrow P_fP_fAt$ and $\overline{p}:At\longrightarrow C(P_fP_f)At$, the corresponding coalgebra for the cofree comonad $C(P_fP_f)$ on $P_fP_f$. We recall the central abstract result of [@KoP], the notion of an “oplax” map of coalgebras being required to match that of lax transformation. Notation of the form $H\mbox{-}coalg$ refers to coalgebras for an endofunctor $H$, while notation of the form $C\mbox{-}Coalg$ refers to coalgebras for a comonad $C$. The subscript $oplax$ refers to oplax maps and, given an endofunctor $E$ on $Poset$, the notation $Lax({\mathcal{L}_{\Sigma}}^{op},E)$ denotes the endofunctor on $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$ given by post-composition with $E$; similarly for a comonad. \[main\][@KoP] For any locally ordered endofunctor $E$ on $Poset$, if $C(E)$ is the cofree comonad on $E$, then there is a canonical isomorphism $$Lax({\mathcal{L}_{\Sigma}}^{op},E)\mbox{-}coalg_{oplax} \simeq Lax({\mathcal{L}_{\Sigma}}^{op},C(E))\mbox{-}Coalg_{oplax}$$ Theorem \[main\] tells us that for any endofunctor $E$ on $Poset$, the relationship between $E$-coalgebras and $C(E)$-coalgebras extends pointwise from $Poset$ to $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$ provided one matches lax natural transformations by oplax maps of coalgebras. It follows that, given an endofunctor $E$ on $Poset$ with cofree comonad $C(E)$, the cofree comonad for the endofunctor on $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$ sending $H:{\mathcal{L}_{\Sigma}}^{op}\longrightarrow Poset$ to the composite $EH:{\mathcal{L}_{\Sigma}}^{op}\longrightarrow Poset$ sends $H$ to the composite $C(E)H$. Taking the example $E = P_fP_f$ allows us to conclude the following.
Then, given a logic program $P$ with no existential variables on $At$, defining $p_n(A(x_1,\ldots ,x_n))$ to be the set of sets of atoms in each antecedent of each unifying substituted instance of a clause in $P$ with head for which a unifying substitution agrees with $A(x_1,\ldots ,x_n)$, the corresponding $Lax({\mathcal{L}_{\Sigma}}^{op},C(P_fP_f))$-coalgebra $\overline{p}:At\longrightarrow C(P_fP_f)At$ sends an atom $A(x_1,\ldots ,x_n)$ to the coinductive tree for $A(x_1,\ldots ,x_n)$. The absence of existential variables ensures that any variable that appears in the antecedent of a clause must also appear in its head. So every atom in every antecedent of every unifying substituted instance of a clause in $P$ with head for which a unifying substitution agrees with $A(x_1,\ldots ,x_n)$ actually lies in $At(n)$. Moreover, there are only finitely many sets of sets of such atoms. So the construction of each $p_n$ is well-defined, i.e., the image of $A(x_1,\ldots ,x_n)$ lies in $P_fP_fAt(n)$. The $p_n$’s collectively form a lax transformation from $At$ to $P_fP_fAt$ as substitution preserves the truth of a clause. By Corollary \[oldcor\], $\overline{p}$ is determined pointwise. So, to construct it, we may fix $n$ and follow the proof of Theorem \[constr:Gcoalg\], consistently replacing $At$ by $At(n)$. To complete the proof, observe that the construction of $p$ from a logic program $P$ matches the construction of the coinductive tree for an atom $A(x_1,\ldots ,x_n)$ if $P$ has no existential variables. So following the proof of Theorem \[constr:Gcoalg\] completes this proof. Theorem \[constr:Gcoalgcount\] models the coinductive trees generated by ListNat as the latter has no existential variables, but for GC, as explained in Example \[ex:lp2\], the natural construction of $p$ did *not* model the clause $$\mathtt{connected(x,y)} \gets \mathtt{edge(x,z)}, \mathtt{connected(z,y)}$$ directly, and so its extension *a fortiori* could *not* model the coinductive trees generated by $\mathtt{connected(x,y)}$. For arbitrary logic programs, the way we defined $\overline{p}(A(x_1,\ldots ,x_n))$ in earlier papers such as [@KoPS] was in terms of a variant of the coinductive tree generated by $A(x_1,\ldots ,x_n)$ in two key ways: 1. coinductive trees allow new variables to be introduced as one passes down the tree, e.g., with $$\mathtt{connected(x,y)} \gets \mathtt{edge(x,z)}, \mathtt{connected(z,y)}$$ appearing directly in it, whereas, if we extended the construction of $p$ in Example \[ex:lp2\], $\overline{p_1}(\mathtt{connected(x,y)})$ would not model such a clause directly, but would rather substitute terms on $x$ and $y$ for $z$, continuing inductively as one proceeds. 2. coinductive trees are finitely branching, as one expects in logic programming, whereas $\overline{p}(A(x_1,\ldots ,x_n))$ could be infinitely branching, e.g., for GC with an additional unary operation $s$. Saturated semantics for logic programs {#sec:sat} ====================================== Bonchi and Zanasi’s saturated semantics approach to modelling logic programming in [@BZ] was to consider $P_fP_f$ as we did in [@KoP], sending $At$ to $P_fP_fAt$, but to ignore the inherent laxness, replacing $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$ by $[ob({\mathcal{L}_{\Sigma}}),Set]$, where $ob({\mathcal{L}_{\Sigma}})$ is the set of objects of ${\mathcal{L}_{\Sigma}}$ treated as a discrete category, i.e., as a category containing only identity maps. Their central construction may be seen in a more axiomatic setting as follows. 
For any small category $C$, let $ob(C)$ denote the discrete subcategory with the same objects as $C$, with inclusion $I:ob(C)\longrightarrow C$. Then the functor $$[I,Set]:[C,Set]\longrightarrow [ob(C),Set]$$ has a right adjoint given by right Kan extension, and that remains true when one extends from $Set$ to any complete category, and it all enriches, e.g., over $Poset$ [@K]. As $ob(C)$ has no non-trivial arrows, the right Kan extension is a product, given by $$(ran_I H)(c) = \prod_{d \in C} Hd^{C(c,d)}$$ By the Yoneda lemma, to give a natural transformation from $K$ to $(ran_I H)(-)$ is equivalent to giving a natural, or equivalently in this setting, a “not necessarily natural", transformation from $KI$ to $H$. Taking $C = {\mathcal{L}_{\Sigma}}^{op}$ gives exactly Bonchi and Zanasi’s formulation of saturated semantics [@BZ]. It was the fact of the existence of the right adjoint, rather than its characterisation as a right Kan extension, that enabled Bonchi and Zanasi’s constructions of saturation and desaturation, but the description as a right Kan extension informed their syntactic analysis. Note for later that products in $Poset$ are given pointwise, so agree with products in $Set$. So if we replace $Set$ by $Poset$ here, and if $C$ is an ordinary category without any non-trivial $Poset$-enrichment, the right Kan extension would yield the same set as above, with an order on it determined by that on $H$. In order to unify saturated semantics with lax semantics, we need to rephrase Bonchi and Zanasi’s formulation a little. Upon close inspection, one can see that, in their semantics, they only used objects of $[ob({\mathcal{L}_{\Sigma}})^{op},Set]$, equivalently $[ob({\mathcal{L}_{\Sigma}}),Set]$, of the form $HI$ for some $H:{\mathcal{L}_{\Sigma}}^{op}\longrightarrow Set$ [@BZ]. That allows us, while making no substantive change to their body of work, to reformulate it a little, in axiomatic terms, as follows. Let $[C,Set]_d$ denote the category of functors from $C$ to $Set$ and “not necessarily natural" transformations between them, i.e., a map from $H$ to $K$ consists of, for all $c \in C$, a function $\alpha_c:Hc\longrightarrow Kc$, without demanding a naturality condition. The functor $[I,Set]:[C,Set]\longrightarrow [ob(C),Set]$ factors through the inclusion of $[C,Set]$ into $[C,Set]_d$ as follows: $$[C,Set]\longrightarrow [C,Set]_d\longrightarrow [ob(C),Set]$$ In this decomposition, the functor from $[C,Set]_d$ to $[ob(C),Set]$ sends a functor $H:C\longrightarrow Set$ to its restriction $HI$ to $ob(C)$ and is fully faithful. Because it is fully faithful, it follows that the inclusion of $[C,Set]$ into $[C,Set]_d$ has a right adjoint also given by right Kan extension. Thus one can rephrase Bonchi and Zanasi’s work to assert that the central mathematical fact that supports saturated semantics is that the inclusion $$[{\mathcal{L}_{\Sigma}}^{op},Set]\longrightarrow [{\mathcal{L}_{\Sigma}}^{op},Set]_d$$ has a right adjoint that sends a functor $H:{\mathcal{L}_{\Sigma}}^{op}\longrightarrow Set$ to the right Kan extension $ran_I HI$ of the composite $HI:ob({\mathcal{L}_{\Sigma}})^{op}\longrightarrow Set$ along the inclusion $I:ob({\mathcal{L}_{\Sigma}}) \longrightarrow {\mathcal{L}_{\Sigma}}$.
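To see how this recovers saturation concretely, one may unwind the displayed Kan extension formula at $C={\mathcal{L}_{\Sigma}}^{op}$; the following instance is intended only as an illustration and introduces no new data: $$\bigl(ran_I\, HI\bigr)(n)\;=\;\prod_{m}\,(HI)(m)^{{\mathcal{L}_{\Sigma}}^{op}(n,m)}\;=\;\prod_{m}\,H(m)^{{\mathcal{L}_{\Sigma}}(m,n)}.$$ Taking $H=P_fP_fAt$, an element of the saturation at arity $n$ therefore assigns, to every arity $m$ and every substitution $\theta\in{\mathcal{L}_{\Sigma}}(m,n)$ (which acts on atoms as $At(\theta):At(n)\longrightarrow At(m)$), an element of $P_fP_fAt(m)$: the saturated object records all substitution instances at once.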
We can now unify lax semantics with saturated semantics by developing a precise body of theory that relates the inclusion $$J:[C,Set] \longrightarrow [C,Set]_d$$ which has a right adjoint that sends $H:C\longrightarrow Set$ to $ran_I HI$, with the inclusion $$J:[C,Poset] \longrightarrow Lax(C,Poset)$$ which also has a right adjoint, that right adjoint being given by a restriction of the right Kan extension $ran_I HI$ of the composite $HI:ob(C)\longrightarrow Poset$ along the inclusion $I:ob(C)\longrightarrow C$. The existence of the right adjoint follows from the main result of [@BKP], but we give an independent proof here and a description of it in terms of right Kan extensions in order to show that Bonchi and Zanasi’s explicit constructions of saturation and desaturation apply equally in this setting. Consider the inclusions $$[C,Poset] \longrightarrow Lax(C,Poset) \longrightarrow [C,Poset]_d$$ As we have seen, the composite has a right adjoint sending $H:C\longrightarrow Poset$ to $ran_I HI$. So, to describe a right adjoint to , we need to restrict $ran_I HI$ so that to give a natural transformation from $K$ into the restriction $R(H)$ of $ran_I HI$ is equivalent to giving a map from $H$ to $K$ in $[C,Poset]_d$ that satisfies the condition that, for all $f:c\longrightarrow d$, one has $Hf.\alpha_c \leq \alpha_d.Kf$. This can be done by defining $R(H)$ to be an *inserter*, which is a particularly useful kind of limit that applies to locally ordered categories and is a particular kind of generalisation of the notion of equaliser. \[def:inserter\][@BKP] Given parallel maps $f,g:X\longrightarrow Y$ in a locally ordered category $K$, an *inserter* from $f$ to $g$ is an object $Ins(f,g)$ of $K$ together with a map $i:Ins(f,g)\longrightarrow X$ such that $fi \leq gi$ and is universal such, i.e., for any object $Z$ and map $z:Z\longrightarrow X$ for which $fz\leq gz$, there is a unique map $k:Z\longrightarrow Ins(f,g)$ such that $ik = z$. Moreover, for any such $z$ and $z'$ for which $z\leq z'$, then $k\leq k'$, where $k$ and $k'$ are induced by $z$ and $z'$ respectively. An inserter is a form of limit. Taking $K$ to be $Poset$, the poset $Ins(f,g)$ is given by the full sub-poset of $X$ determined by $\{ x\in X|f(x)\leq g(x)\}$. Being limits, inserters in functor categories are determined pointwise. \[thm:R\] The right adjoint $R$ to the inclusion $J:[C,Poset]\longrightarrow Lax(C,Poset)$ sends $H:C\longrightarrow Poset$ to the inserter in $[C,Poset]$ from $\delta_1$ to $\delta_2$ $$\delta_1,\delta_2:(ran_I H)(-) = \prod_{d \in C} Hd^{C(-,d)}\longrightarrow \prod_{d,d' \in C} Hd'^{C(-,d)\times C(d,d')}$$ where $\delta_1$ and $\delta_2$ are defined to be equivalent, by Currying, to $(d,d')$-indexed collections of maps of the form $$(\delta_1)_{(d,d')},(\delta_2)_{(d,d')}:C(-,d)\times C(d,d')\times \prod_{d \in C} Hd^{C(-,d)}\longrightarrow Hd'$$ which, in turn, are defined as follows: 1. the $(d,d')$-component of $\delta_1c$ is determined by composing $$\circ_C\times id:C(c,d)\times C(d,d') \times \prod_{d \in C} Hd^{C(c,d)} \longrightarrow C(c,d') \times \prod_{d \in C} Hd^{C(c,d)}$$ with the evaluation of the product at $d'$ 2. the $(d,d')$-component of $\delta_2c$ is determined by evaluating the product at $d$ $$C(c,d)\times C(d,d') \times \prod_{d \in C} Hd^{C(c,d)} \longrightarrow C(d,d') \times Hd$$ then composing with C(d,d’) Hd & \^[Hid]{} & Hd’\^[Hd]{} Hd & \^[eval]{} & Hd’ Although the statement of the theorem is complex, the proof is routine. 
One simply needs to check that $\delta_1$ and $\delta_2$ are natural, which they routinely are, and that the inserter satisfies the universal property we seek, which it does by construction. Bonchi and Zanasi’s saturation and desaturation constructions remain exactly the same: the saturation of $p:At\longrightarrow P_fP_fAt$ is a natural transformation $\overline{p}:At\longrightarrow ran_I P_fP_fAtI$ that factors through $Ins(\delta_1,\delta_2)$ without any change whatsoever to its construction, that being so because of the fact of $p$ being lax. With this result in hand, it is routine to work systematically through Bonchi and Zanasi’s papers, using their saturation and desaturation constructions exactly as they had them, without discarding the inherent laxness that logic programming, cf data refinement, possesses. So this unifies lax semantics, which flows from, and may be seen as an instance of, Tony Hoare’s semantics for data refinement [@HH; @HH1; @KP1], with saturated semantics and its more denotational flavour [@BM]. Lax semantics for logic progams refined: existential variables {#sec:derivation} ============================================================== In Section \[sec:recall\], following [@KoP], we gave lax semantics for logic programs without existential variables, such as ListNat. In particular, we modelled the coinductive trees they generate. Restriction to non-existential examples such as ListNat is common for implementational reasons [@KoPS; @JohannKK15; @FK15; @FKSP16], so Section \[sec:recall\] allowed the modelling of coinductive trees for a natural class of logic programs. Nevertheless, we would like to model coinductive trees generated by logic programming in full generality, including examples such as that of GC. We need to refine the lax semantics of Section \[sec:recall\] in order to do so, and, having just unified lax semantics with saturated semantics in Section \[sec:sat\], we would like to retain that unity in making such a refinement. So that is what we do in this section. We initially proposed such a refinement in the workshop paper [@KoP1] that this paper extends, but since the workshop, we have found a further refinement that strengthens the relationship with the modelling of local state [@PP2]. So our constructions here are a little different to those in [@KoP1]. In order to model coinductive trees, it follows from Example \[ex:lp2\] that the endofunctor $Lax({\mathcal{L}_{\Sigma}}^{op},P_fP_f)$ on $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$ that sends $At$ to $P_fP_fAt$, needs to be refined as $\{ \{\mathtt{edge(x,z),connected(z,y)}\}\}$ is not an element of $P_fP_fAt(2)$ as it involves three variables $x$, $y$ and $z$. In general, we need to allow the image of $p_n$ to lie in the set given by applying $P_fP_f$ to a superset of $At(n)$, one that includes $At(m)$ for all $m\geq n$. However, we do not want to double-count: there are six injections of $2$ into $3$, inducing six inclusions $At(2)\subseteq At(3)$, and one only wants to count each atom in $At(2)$ once. So we refine $P_fP_fAt(n)$ to become $P_fP_f(\int\! At(n))$, where $\int\! At$ is defined as follows. Letting $Inj$ denote the category of natural numbers and injections, for any Lawvere theory $L$, there is a canonical identity-on-objects functor $J:Inj^{op}\longrightarrow L$. We define $\int\! 
At(n)$ to be the colimit of the composite functor $$n/Inj \stackrel{cod}{\longrightarrow} Inj \stackrel{J}{\longrightarrow} {\mathcal{L}_{\Sigma}}^{op} \stackrel{At}{\longrightarrow} Poset$$ This functor sends an injection $j:n\longrightarrow m$ to $At(m)$, with the $j$-th component of the colimiting cocone being of the form $\rho_j:At(m)\longrightarrow \int\! At(n)$. The colimiting property is precisely the condition required to ensure no double-counting (see [@Mac] or, for the enriched version, [@K], for this construction in a general setting). It is not routine to extend the construction of $\int\! At(n)$ to be functorial in $Inj$. So we mimic the construction on arrows used to define the monad for local state in [@PP2]. We first used this idea in [@KoP1] and we refine our use of it in this paper to make for a closer technical relationship with the semantics of local state in [@PP2]: we do not fully understand the relationship yet, but there seems considerable potential, based on the work here, for a precise comparison between the role of variables in logic programming and that of worlds in modelling local state. In detail, the definition of $\int\! At(n)$ extends canonically to become a functor that sends a map $f:n\longrightarrow n'$ in ${\mathcal{L}_{\Sigma}}$ to the order-preserving function $$\int\! At(f):\int\! At(n') \longrightarrow \int\! At(n)$$ determined by the colimiting property of $\int\! At(n')$ as follows: each $j'\in n'/Inj$ is, up to coherent isomorphism, the canonical injection $j':n'\longrightarrow n'+k$ for a unique natural number $k$; that induces a cocone $$At(n'+k) \stackrel{At(f+k)}{\longrightarrow} At(n+k) \stackrel{\rho_j}{\longrightarrow} \int\! At(n)$$ where $j:n\longrightarrow n+k$ is the canonical injection of $n$ into $n+k$. It is routine to check that this assignment respects composition and identities, thus is functorial. There is nothing specific about $At$ in the above construction. So it generalises without fuss from $At$ to apply to an arbitrary functor $H:{\mathcal{L}_{\Sigma}}^{op}\longrightarrow Poset$. In order to make the construction $\int\! H$ functorial in $H$, i.e., in order to make it respect maps $\alpha:H\Rightarrow K$, we need to refine $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$. Specifically, we need to restrict its maps to allow only those lax transformations $\alpha:H\Rightarrow K$ that are strict with respect to maps in $Inj$, i.e., those $\alpha$ such that for any injection $i:n\longrightarrow m$, the diagram $$\begin{array}{ccc} Hn & \stackrel{\alpha_n}{\longrightarrow} & Kn\\ {\scriptstyle Hi}\big\downarrow & & \big\downarrow{\scriptstyle Ki}\\ Hm & \stackrel{\alpha_m}{\longrightarrow} & Km \end{array}$$ commutes. The reason for the restriction is that the colimit that defines $\int\! H(n)$ strictly respects injections, so we need a matching condition on $\alpha$ in order to be able to define $\int\! \alpha (n)$. Summarising this discussion yields the following: \[def:laxinj\] Let $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$ denote the category with objects given by functors from ${\mathcal{L}_{\Sigma}}^{op}$ to $Poset$, maps given by lax transformations that strictly respect injections, and composition given pointwise. \[prop:ff\] cf [@PP2] Let $J:Inj^{op}\longrightarrow {\mathcal{L}_{\Sigma}}$ be the canonical inclusion. Define $$\int :Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)\longrightarrow Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$$ on objects as above. Given $\alpha:H\Rightarrow K$, define $\int\! \alpha(n)$ by the fact that $j\in n/Inj$ is coherently isomorphic to the canonical inclusion $j:n\longrightarrow n+k$ for a unique natural number $k$, and applying the definition of $\int\!
H(n)$ as a colimit to the cocone given by composing $$\alpha_{n+k}:H(m) = H(n+k)\longrightarrow K(n+k) = K(m)$$ with the canonical map $K(m)\longrightarrow \int\! K(n)$ exhibiting $\int\! K(n)$ as a colimit. Then $\int\!(-)$ is an endofunctor on $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$. The proof is routine, albeit after lengthy calculation involving colimits. We can now model an arbitrary logic program by a map $p:At\longrightarrow P_fP_f\int\! At$ in $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$, modelling ListNat as we did in Example \[ex:listnat2\] but now modelling the clauses of GC directly rather than using the awkward substitution instances of Example \[ex:lp2\]. \[ex:listnat3\] Except for the restriction of $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$ to $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$, ListNat is modelled in exactly the same way here as it was in Example \[ex:listnat2\], the reason being that no clause in ListNat has a variable in the tail that does not already appear in the head. We need only observe that, although $p$ is not strictly natural in general, it does strictly respect injections. For example, if one views $\mathtt{list(cons( x, 0))}$ as an element of $At(2)$, its image under $p_2$ agrees with its image under $p_1$. \[ex:lp3\] In contrast to Example \[ex:lp2\], using $P_fP_f\int$, we can emulate the construction of Examples \[ex:listnat2\] and \[ex:listnat3\] for ListNat to model GC. Modulo possible renaming of variables, $\mathtt{connected(x,y)}$ is an element of $At(2)$. The function $p_2$ sends it to the element $\{ \{ \mathtt{edge(x,z)},\mathtt{connected(z,y)}\}\}$ of $(P_fP_f\int\! At)(2)$. This is possible by taking $n=2$ and $m=3$ in the formula for $\int\! At$. In contrast, $\{ \{ \mathtt{edge(x,z)},\mathtt{connected(z,y)}\}\}$ is not an element of $P_fP_fAt(2)$, hence the failure of Example \[ex:lp2\]. The behaviour of $P_fP_f\int\! At$ on maps ensures that the lax transformation $p$ strictly respects injections. For example, if $\mathtt{connected(x,y)}$ is seen as an element of $At(3)$, the additional variable is treated as a fresh variable $w$, so does not affect the image of $\mathtt{connected(x,y)}$ under $p_3$. \[constr:Gcoalgrefined\] The functor $P_fP_f\!\int:Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)\longrightarrow Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$ induces a cofree comonad $C(P_fP_f\!\int)$ on $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$. Moreover, given a logic progam $P$ qua $P_fP_f\!\int\!$-coalgebra $p:At\longrightarrow P_fP_f\!\int\!At$, the corresponding $C(P_fP_f\!\int )$-coalgebra $\overline{p}:At\longrightarrow C(P_fP_f\!\int )(At)$ sends an atom $A(x_1,\ldots ,x_n)\in At(n)$ to the coinductive tree for $A(x_1,\ldots ,x_n)$. If one restricts $P_fP_f\!\int$ to $[Inj,Poset]$, there is a cofree comonad on it for general reasons, $[Inj,Poset]$ being locally finitely presentable and $P_fP_f\!\int$ being an accessible functor [@W]. However, as we seek a little more generality than that, and for completeness, we shall construct the cofree comonad. Observe that products in the category $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$ are given pointwise, with pointwise projections. Moreover, those projections are strictly natural, as one can check directly but which is also an instance of the main result of [@BKP]. 
We can describe the cofree comonad $C(P_fP_f\!\int)$ on $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$ pointwise as the same limit as in the proof of Theorem \[constr:Gcoalg\], similarly to Theorem \[constr:Gcoalgcount\]. In particular, replacing $At$ by $At(n)$ and replacing $P_fP_f$ by $P_fP_f\!\int$ in the diagram in the proof of Theorem \[constr:Gcoalg\], one has $$\ldots \longrightarrow {\mathrm{\textrm{At}}}(n) \times (P_fP_f\!\int)({\mathrm{\textrm{At}}}\times P_fP_f\!\int\!{\mathrm{\textrm{At}}})(n) \longrightarrow {\mathrm{\textrm{At}}}(n) \times (P_fP_f\!\int\!{\mathrm{\textrm{At}}})(n) \longrightarrow {\mathrm{\textrm{At}}}(n)$$ with maps determined by the projection $\pi_0:At(-)\times (P_fP_f\!\int)At(-)\longrightarrow At(-)$, with the endofunctor $P_fP_f\!\int$ applied to it. One takes the limit, potentially transfinite [@W], of the diagram. The limit property routinely determines a functor $C(P_fP_f\!\int)At$. It is routine, albeit tedious, to use the limiting property to verify functoriality of $C(P_fP_f\!\int )$ with respect to all maps, to define the counit and comultiplication, and to verify their axioms and the universal property. The construction of $\overline{p}$ is given pointwise, with it following from its coinductive construction that it yields the coinductive trees as required: because of our construction of $\int\! At$ to take the place $At$ in Theorem \[constr:Gcoalgcount\], the image of $p$ lies in $P_fP_f\!\int\! At$. The lax naturality in respect to general maps $f:m\longrightarrow n$ means that a substitution applied to an atom $A(x_1,\ldots ,x_n)\in At(n)$, i.e., application of the function $At(f)$ to $A(x_1,\ldots ,x_n)$, followed by application of $\overline{p}$, i.e., taking the coinductive tree for the substituted atom, or application of the function $(C(P_fP_f\!\int )At)f)$ to the coinductive tree for $A(x_1,\ldots ,x_n)$ potentially yield different trees: the former substitutes into $A(x_1,\ldots ,x_n)$, then takes its coinductive tree, while the latter applies a substitution to each node of the coinductive tree for $A(x_1,\ldots ,x_n)$, then prunes to remove redundant branches. \[ex:lp4\] Extending Example \[ex:lp3\], consider $\mathtt{connected(x,y)}\in At(2)$. In expressing GC as a map $p:At\longrightarrow P_fP_f\!\int\!At$ in Example \[ex:lp3\], we put $$p_2(\mathtt{connected(x,y)}) = \{ \{ \mathtt{edge(x,z)},\mathtt{connected(z,y)}\}\}$$ Accordingly, $\overline{p}_2(\mathtt{connected(x,y)})$ is the coinductive tree for $\mathtt{connected(x,y)}$, thus the infinite tree generated by repeated application of the same clause modulo renaming of variables. If we substitute $x$ for $y$ in the coinductive tree, i.e., apply the function $(C(P_fP_f\!\int )At)(x,x)$ to it (see the definition of $L_{\Sigma}$ at the start of Section \[sec:recall\] and observe that $(x,x)$ is a $2$-tuple of terms generated trivially by the variable $x$), we obtain the same tree but with $y$ systematically replaced by $x$. However, if we substitute $x$ for $y$ in $\mathtt{connected(x,y)}$, i.e., apply the function $At(x,x)$ to it, we obtain $\mathtt{connected(x,x)}\in At(1)$, whose coinductive tree has additional branching as the first clause of GC, i.e., $\mathtt{connected(x,x)}\gets \,$ may also be applied. 
In contrast to this, we have strict naturality with respect to injections: for example, an injection $i:2\longrightarrow 3$ yields the function $At(i):At(2)\longrightarrow At(3)$ that, modulo renaming of variables, sends $\mathtt{connected(x,y)}\in At(2)$ to itself seen as an element of $At(3)$, and the coinductive tree for $\mathtt{connected(x,y)}$ is accordingly also sent by $(C(P_fP_f\!\int )At)(i)$ to itself seen as an element of $(C(P_fP_f\!\int )At)(3)$. Example \[ex:lp4\] illustrates why, although the condition of strict naturality with respect to injections holds for $P_fP_f\!\int$, it does not hold for $Lax({\mathcal{L}_{\Sigma}}^{op},P_fP_f)$ in Example \[ex:lp2\] as we did not model the clause $$\mathtt{connected(x,y)} \gets \mathtt{edge(x,z)}, \mathtt{connected(z,y)}$$ directly there, but rather modelled all substitution instances into all available variables. Turning to the relationship between lax semantics and saturated semantics given in Section \[sec:sat\], we need to refine our construction of the right adjoint to the inclusion $$[{\mathcal{L}_{\Sigma}}^{op},Poset]\longrightarrow Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$$ to give a construction of a right adjoint to the inclusion $$[{\mathcal{L}_{\Sigma}}^{op},Poset]\longrightarrow Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$$ As was the case in Section \[sec:sat\], such a right adjoint exists for general reasons as an example of the main result of [@BKP]. An explicit construction of it arises by emulating the construction of Theorem \[thm:R\]. In the statement of Theorem \[thm:R\], putting $C = {\mathcal{L}_{\Sigma}}^{op}$, we described a parallel pair of maps in $[{\mathcal{L}_{\Sigma}}^{op},Poset]$ and constructed their inserter, the inserter being exactly the universal property corresponding to the laxness of the maps in $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$. Here, we use the same technique but with equaliser replacing inserter, to account for the equalities in $Lax_{Inj}(C,Poset)$. Thus we take an equaliser of two variants of $\delta_1$ and $\delta_2$ seen as maps in $[{\mathcal{L}_{\Sigma}}^{op},Poset]$ with domain $Ins(\delta_1,\delta_2)$ Again, the constructions of saturation and desaturation remain the same, allowing us to maintain the relationship between lax semantics and saturated semantics. That said, our refinement of lax semantics constitutes a refinement of saturated semantics too, as, just as we now model $GC$ by a lax transformation $p:At\longrightarrow P_fP_f\!\int\!At$, one can now consider the saturation of this definition of $p$ rather than that of the less subtle map with codomain $P_cP_fAt$ used in previous papers such as [@KoPS] and [@BonchiZ15]. Semantics for variables in logic programs: local variables {#sec:local} ========================================================== The relationship between the semantics of logic programming we propose here and that of local state is yet to be explored fully, and we leave the bulk of it to future work. However, as explained in Section \[sec:derivation\], the definition of $\int$ was informed by the semantics for local state in [@PP2], and we have preliminary results that strengthen the relationship. \[prop:monad\] The endofunctor $\int\!(-)$ on $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$ canonically supports the structure of a monad, with unit $\eta_H:H\Rightarrow \int\! H$ defined, at $n$, by the $id_n$ component $\rho_{id_n}:Hn\longrightarrow \int\! Hn$ of the colimiting cocone, and with multiplication $\mu_H:\int\! \int\! H \Longrightarrow \int\! 
H$ defined, at $n$, by observing that if $m=n+k$ and $p=m+l$, then $p=n+(k+l)$ with canonical injections $j_k$, $j_l$ and $j_{k+l}$ coherent with each other, and applying the doubly indexed colimiting property of $\int\! \int\! H$ to $\rho_{j_{k+l}}:H(p)\longrightarrow \int\! H(n)$. This bears direct comparison with the monad for local state in the case where one has only one value, as studied by Stark [@Stark]. The setting is a little different. Stark does not consider maps in ${\mathcal{L}_{\Sigma}}$ or laxness, and his base category is $Set$ rather than $Poset$. However, if one restricts our definition of $\int$ and the other data for the monad of Proposition \[prop:monad\] to $[Inj,Set]$, one obtains Stark’s construction. The monad for local state in [@PP2] also extends Stark’s construction but in a different direction: for local state, neither $Inj$ nor $Set$ is extended, but state, which is defined by a functor into $Set$, is interpolated into the definition of the functor $\int$, which restricts to $[Inj,Set]$. That interpolation of state is closely related to our application of $P_fP_f$ to $\int$: just as the former gives rise to a monad for local state on $[Inj,Set]$, the latter bears the ingredients for a monad as follows. \[prop:monad2\] For any endofunctor $P$ on $Set$, there is a canonical distributive law $$\int\! P(-)\longrightarrow P\!\int\! (-)$$ of the endofunctor $P\circ -$ over the monad $\int$ on $[Inj,Set]$. The canonicity of the distributive law arises as $\int$ is defined pointwise as a colimit, and the distributive law is the canonical comparison map determined by applying $P$ pointwise to the colimiting cocone defining $\int$. The functor $P_fP_f$ does not quite satisfy the axioms for a monad [@Engeler] (see also [@BonchiZ15]), but variants of $P_fP_f$, in particular $P_fM_f$, where $M_f$ is the finite multiset monad on $Set$, do [@Engeler] (also see [@BonchiZ15]). Putting $P = P_fM_f$, the distributive law of Proposition \[prop:monad2\] respects the monad structure of $P_fM_f$, yielding a canonical monad structure on the composite $P_fM_f\int$. The full implications of that are yet to be investigated, but, trying to emulate the analysis of local state in [@PP2], we believe we have a natural set of operations and equations that generate the monad $P_fM_f\int$. That encourages us considerably towards the possibility of seeing the semantics for logic programming, both lax and saturated, as an example of a general semantics of local effects. We have not yet fully understood the significance of the specific combination of operations and equations generating the monad, but we are currently investigating it. Conclusions and Further Work {#sec:concl} ============================ Let $P_f$ be the covariant finite powerset functor on $Set$. Then, to give a variable-free logic program $P$ is equivalent to giving a $P_fP_f$-coalgebra structure $p:At\longrightarrow P_fP_fAt$ on the set $At$ of atoms in the program. Now let $C(P_fP_f)$ be the cofree comonad on $P_fP_f$. Then, the $C(P_fP_f)$-coalgebra $\overline{p}:At\longrightarrow C(P_fP_f)At$ corresponding to $p$ sends an atom to the coinductive tree it generates. This fact is the basis for both our lax semantics and Bonchi and Zanasi’s saturated semantics for logic programming.
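As a concrete illustration of this variable-free case, the following small Python sketch (ours, built around a hypothetical three-clause ground program; it is not drawn from any of the cited implementations) represents such a coalgebra $p$ as a dictionary from atoms to their sets of clause bodies, and unfolds finite truncations of the associated and-or tree:

\begin{verbatim}
# A ground (variable-free) logic program as a P_f P_f-coalgebra:
# each atom is sent to the collection of bodies (sets of atoms) of the
# clauses whose head it is.  A Python list stands in for the outer
# finite powerset; the program and atom names are hypothetical.
p = {
    "q": [{"a", "b"}],   # q <- a, b
    "a": [set()],        # a <-   (a fact: one clause with empty body)
    "b": [{"a"}],        # b <- a
}

def unfold(atom, depth):
    # Depth-bounded unfolding: returns the bare atom at depth 0,
    # otherwise the atom together with, for each clause body for it,
    # the list of unfolded subtrees of the atoms in that body.
    if depth == 0:
        return atom
    return (atom, [[unfold(a, depth - 1) for a in sorted(body)]
                   for body in p.get(atom, [])])

print(unfold("q", 3))
\end{verbatim}

Here an atom with no clauses unfolds to a node with no or-branches, and iterating the unfolding approximates the tree assigned to the atom by $\overline{p}$ in the $P_fP_f$-coalgebra reading described above.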
Two problems arise when, following standard category-theoretic practice, one tries to extend this semantics to model logic programs in general by extending from $Set$ to $[{\mathcal{L}_{\Sigma}}^{op},Set]$, where ${\mathcal{L}_{\Sigma}}$ is the free Lawvere theory generated by a signature $\Sigma$. The first is that the natural construction $p:At\longrightarrow P_fP_fAt$ does not form a natural transformation, so is not a map in $[{\mathcal{L}_{\Sigma}}^{op},Set]$. Two resolutions were proposed to that: lax semantics [@KoPS], which we have been developing in the tradition of semantics for data refinement [@HH], and saturated semantics [@BonchiZ15], which Bonchi and Zanasi have been developing. In this paper, we have shown that the two resolutions are complementary rather than competing, the former modelling the theorem-proving aspect of logic programming, while the latter models proof search. In modelling theorem-proving, lax semantics led us to identify and develop the notion of coinductive tree. To express the semantics, we extended $[{\mathcal{L}_{\Sigma}}^{op},Set]$ to $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$, the category of strict functors and lax transformations between them. We followed standard semantic practice in extending $P_f$ from $Set$ to $Poset$ and we postcomposed the functor $At:{\mathcal{L}_{\Sigma}}^{op}\longrightarrow Poset$ by $P_fP_f$. Bonchi and Zanasi also postcomposed $At$ by $P_fP_f$, but then saturated. We showed that their saturation and desaturation constructions are generated exactly by starting from $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$ rather than from $[ob({\mathcal{L}_{\Sigma}})^{op},Set]$ as they did, thus unifying the underlying mathematics of the two developments, supporting their computational coherence. The second problem mentioned above relates to existential variables, those being variables that appear in the antecedent of a clause but not in its head. The problem of existential clauses is well-known in the literature on theorem proving and within communities that use term-rewriting, TM-resolution or their variants. In TRS [@Terese], existential variables are not allowed to appear in rewriting rules, and in type inference, the restriction to non-existential programs is common [@Jones97]. In logic programming, the problem of handling existential variables when constructing proofs with TM-resolution marks the boundary between its theorem-proving and problem-solving aspects. Existential variables are not present in many logic programs, but they do occasionally occur in important examples, such as those developed by Sterling and Shapiro [@SS]. The problem for us was that, in the presence of existential variables, the natural model $p:At\longrightarrow P_fP_fAt$ of a logic program might escape its codomain, i.e., $p_n(A)$ might not lie in $P_fP_fAt(n)$ because of the new variables. On the one hand, we want to model them, but on the other, the fact of the difficulty for us means that we have semantically identified the concept of existential variable, which is positive. In this paper, we have resolved the problem by refining $Lax({\mathcal{L}_{\Sigma}}^{op},Poset)$ to $Lax_{Inj}({\mathcal{L}_{\Sigma}}^{op},Poset)$, insisting upon strict naturality for injections, and by refining the construction $P_fP_fAt$ to $P_fP_f\!\int\! At$, thus allowing for additional variables in the tail of a clause in a logic program. That has allowed us to model coinductive trees for arbitrary logic programs, in particular those including existential variables.
We have also considered the effect of such refinement on saturated semantics. In order to refine $P_fP_f(-)$, we followed a technique developed in the semantics of local state [@PP2]. That alerted us to the relationship between variables in logic programming with the use of worlds in modelling local state. So, as ongoing work, we are now relating our semantics for logic programming with that for local state. For the future, we shall continue to develop that, with the hope of being able to locate our semantics of logic programming within a general semantics for local effects. Beyond that, a question that we have not considered semantically at all yet but which our applied investigations are encouraging is that of modelling recursion. There are fundamentally two kinds of recursion that arise in logic programming as there may be recursion in terms and recursion in proofs. For example, $\mathtt{stream(scons(x,y)) \gets stream(y)}$ is a standard (co-)recursive definition of infinite streams in logic programming literature. More abstractly, the following program $P_1$: $\mathtt{p(f(x)) \gets p(x)}$ defines an infinite data structure $p$ with constructor $f$. For such cases, proofs given by coinductive trees will be finite. An infinite sequence of (finite) coinductive trees will be needed to approximate the intended operational semantics of such a program, as we discuss in detail in [@KoPS; @KJ15]. In contrast, there are programs like $P_2$: $\mathtt{p(x) \gets p(x)}$ or $P_3$: $\mathtt{p(x) \gets p(f(x))}$ that are also recursive, but additionally their proofs as given by coinductive trees will have an infinite size. In [@KJS16; @FK16] programs like $P_1$ were called productive, for producing (infinite) data, and programs like $P_2$ and $P_3$ – non-productive, for recursing without producing substitutions. The productive case amounts to a loop in the Lawvere theory ${\mathcal{L}_{\Sigma}}$, while the non-productive case amounts to repetition within a coinductive tree, possibly modulo a substitution. This paper gives a close analysis of trees. That should set the scene for investigation of recursion, as it seems likely to yield more general kinds of graph that arise by identifying loops in ${\mathcal{L}_{\Sigma}}$ and by equating nodes in trees. The lax semantics we presented here has recently inspired investigations into importance of TM-resolution (as modelled by the coinductive trees) in programming languages. In particular, TM-resolution is used in type class inference in Haskell [@Jones97]. In [@FKSP16; @FKHF16] we showed applications of nonterminating TM-resolution in Haskell type classes. We plan to continue looking for applications of this work in programming language design beyond logic programming. Acknowledgements {#acknowledgements .unnumbered} ================ Ekaterina Komendantskaya would like to acknowledge the support of EPSRC Grant EP/K031864/1-2. John Power would like to acknowledge the support of EPSRC grant EP/K028243/1 and Royal Society grant IE151369 Bibliography {#bibliography .unnumbered} ============
--- abstract: 'We consider the moduli space ${\cal M}_r$ of polygons with fixed side lengths in five-dimensional Euclidean space. We analyze the local structure of its singularities and exhibit a real-analytic equivalence between ${\cal M}_r$ and a weighted quotient of $n$-fold products of the quaternionic projective line $\H\P^1$ by the diagonal $PSL(2,\H)$-action. We explore the relation between ${\cal M}_r$ and the fixed point set of an anti-symplectic involution on a GIT quotient $Gr_{\C}(2,4)^n/SL(4,\C)$. We generalize the Gelfand-MacPherson correspondence to more general complex Grassmannians and to the quaternionic context, and realize our space ${\cal M}_r$ as a quotient of a subspace in the quaternionic Grassmannian $Gr_{\mathbb H}(2,n)$ by the action of the group $Sp(1)^n$. We also give analogues of the Gelfand-Tsetlin coordinates on the space of quaternionic Hermitean matrices and briefly describe generalized action-angle coordinates on ${\cal M}_r$.' author: - Philip Foth and Guadalupe Lozano bibliography: - 'FLQuatPol.bib' date: - - title: 'The Geometry of Polygons in $\R^5$ and Quaternions' --- Introduction ============ Spaces of polygons and linkages provide a useful class of geometric examples, as their structure can be effectively visualized using linkages themselves. For example, the spaces of polygons ${\cal M}_r^{(3)}$ with fixed side lengths in the Euclidean space ${\mathbb E}^3$ (studied by Klyachko [@Kly], Kapovich and Millson [@KM], Hausmann and Knutson [@HK] among others), carry interesting symplectic and Kähler structures. These spaces also possess natural action-angle coordinates, which can be obtained using symplectic quotients of complex Grassmannians by the action of maximal tori. Moreover, for a rational choice of side lengths, these spaces can be identified with the Mumford quotients of $({{{\mathbb C}{\mathbb P}}^1})^n$ by the diagonal action of $SL(2, \C)$. By a theorem of Kapranov [@Kapr], the Deligne-Mumford compactification $\overline{M}_{0,n}$ of the moduli space of $n$-marked projective lines dominates all of ${\cal M}_r^{(3)}$. This map was explicitly constructed in [@F2]. Hu in [@Hu] used polygon spaces to study the Kähler cone of the space $\overline{M}_{0,n}$. He also constructed an explicit sequence of blow-ups leading from ${\cal M}_r^{(3)}$ to $\overline{M}_{0,n}$. In the present paper we study the moduli spaces $\M_r$ of polygons in ${\mathbb E}^5$ and relate them to certain quotients of products of quaternionic projective lines by the diagonal action of $SL(2, {\mathbb H})$. As in the case of ${{{\cal M}_r}^{(3)}}$, each ${{\cal M}_r}$ can be naturally associated to the $r$-weighted quotient of $(S^4)^n$ by the diagonal action of $SO(5)$, where $r\in R^n_{>0}$ is the weight vector prescribing the side lengths. As constructed, the spaces ${{\cal M}_r}$ turn out to be singular, with the singular locus isomorphic to a quotient of ${\cal M}_r^{(3)}$ by an involution. In Section 2, we study the local structures of these singularities and show that they come in two types. A neighborhood of a planar $n$-gon is analytically isomorphic to $\R^{n-3}\times [(\R^3)^{n-3}/SO(3)]$ and a neighborhood of a spatial $n$-gon is isomorphic to $\R^{2n-6}\times [(\R^2)^{n-4}/SO(2)]$. In both cases, the first factor corresponds to deformations along the smooth singular locus. 
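As a quick dimension bookkeeping check on these local models (our own remark; the counts follow directly from the construction of ${\cal M}_r$ given in Section 2), at a non-degenerate polygon one expects $$\dim {\cal M}_r = \underbrace{4n}_{(S^4)^n} - \underbrace{5}_{\mbox{\scriptsize closing condition}} - \underbrace{10}_{\dim SO(5)} = 4n-15,$$ and both local models above have exactly this dimension: since the generic $SO(2)$- (respectively $SO(3)$-) orbits in the second factors are $1$- (respectively $3$-) dimensional, $(2n-6)+\bigl(2(n-4)-1\bigr)=(n-3)+\bigl(3(n-3)-3\bigr)=4n-15$.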
In Section 3 we establish a real-analytic isomorphism between ${{\cal M}_r}$ and the weighted quotient of $n$-copies of the quaternionic projective line, $({{\mathbb H}{\mathbb P}}^1)^n$, by $PSL(2,\H)$ for a weight vector $r$. We use stable measures due to Leeb and Millson [@LeebM] and apply them to $S^4$, the geometric boundary of the symmetric space $B^5\simeq SL(2, {\mathbb H})/Sp(2)$. We also use properties of the conformal barycenter described by Douady and Earle in [@DE]. The main result in Section 4 generalizes the Gelfand-MacPherson correspondence to the quaternionic context and shows that our spaces ${\cal M}_r$ may be realized as certain quotients of the quaternionic Grassmannian $Gr_{\mathbb H}(2,n)$ by the action of the group $Sp(1)^n$. These correspondence together with the analogue of the Gelfand-Tsetlin coordinates on the space of quaternionic Hermitean matrices, described in Section 5, allows us to use the results of [@HK] and introduce a certain generalization of action-angle coordinates on ${\cal M}_r$, as shown in Section 7. One of the major results of the paper appears in Section 6. There we relate the space ${\cal M}_r$ with a fixed point set of an involution on the Mumford quotient $Gr_{\C}(2,4)^n/SL(4,\C)$ for an appropriate choice of linearization. The involution in question arises both from multiplication by ${\bf j}$ on the space $\C^{2m}$ identified with ${\mathbb H}^m$, and the involution of $SL(4,\C)$ defining the real form $SL(2, {\mathbb H})$. In this section we also extend the Gelfand-MacPherson correspondence for projective spaces studied in [@Kapr] to more general complex Grassmannians. As we show, a certain number of methods for studying ${\cal M}_r$ can essentially be extended from those for the polygon spaces studied previously. However, we always keep our focus on a number of new phenomena, pertinent to our specific objects and settings. [**Acknowledgments.**]{} We would like to thank Sam Evens, Yi Hu, Jiang-Hua Lu, John Millson, and Reyer Sjamaar for useful conversations and correspondence. The first author is partially supported by NSF grant DMS-0072520. Real analytic structure of ${{\cal M}_r}$ ========================================= Our goal in this section is to prove two main results related to the singularities of ${{\cal M}_r}$. First, we will show that, for generically chosen $r$, the singular locus, $D_r$, of ${{\cal M}_r}$ is a real orbifold of dimension $2n-6$. It is naturally isomorphic to ${{{\cal M}_r}^{(3)}}/({\mathbb Z}/{2{\mathbb Z}})$, where ${{{\cal M}_r}^{(3)}}$ denotes the moduli space of closed polygons in ${\mathbb E}^3$. We further show that, for $n\geq 4$, ${{\cal M}_r}$ possesses two types of singularities, each one of which is locally equivalent to a neighborhood of zero in $\R^{2n-6}\times [(\R^2)^{n-4}/SO(2)]$ or in $\R^{n-3}\times [(\R^3)^{n-3}/SO(3)]$. We conclude the section with a geometric description of singularities for $n=5,6$. Fix an ordered $n$-tuple $r=(r_1,...,r_n)$ of positive real numbers and let ${{\cal L}_r}$ denote the collection of $n$-sided polygonal linkages in ${\mathbb E}^5$ with fixed side lengths $r_i$, $i=1,...,n$. Clearly, a polygonal linkage $P$ lying in ${{\cal L}_r}$ determines an $n$-tuple of unit vectors $U=(u_1,...,u_n)$ in $(S^4)^n$. Conversely, such an $n$-tuple uniquely determines a linkage $P$ with side lengths prescribed by the $r_i$. 
Hence, by considering the $SO(5)$-equivariant map: $$\phi:(S^4)^n\rightarrow {\mathbb R}^5, \ U\mapsto \sum_{i=1}^n {r_iu_i}$$ we can realize the space ${{\cal P}_r}$ of $5$-dimensional (closed) $n$-gons as the zero locus, ${{\cal U}_r}:=\phi^{-1}(0)$ of the above map and obtain ${{\cal M}_r}$ as the quotient of ${{\cal U}_r}$ by the diagonal action of $SO(5)$ on ${{\cal U}_r}$: $${{\cal M}_r}={{\cal U}_r}/SO(5).$$ A polygon $P\in {{\cal P}_r}$ (or its representative, $U\in{{\cal U}_r}$) is called degenerate if it is stabilized by a non-trivial subgroup of $SO(5)$. We first note that if $P$ is a non-degenerate $n$-gon then it has trivial stabilizer in $SO(5)$ or, equivalently, any collection of unit vectors $U=(u_1,...,u_n)$ in $(S^4)^n$ representing $P$ spans at least ${\mathbb R}^4$. It follows that any infinitesimal deformation of $P$ in ${{\cal P}_r}$ yields another non-degenerate polygon, $P_{{\delta}}$, in ${{\cal P}_r}$. Of course, $P_{{\delta}}$ and $P$ will be equivalent if and only if they can be joined by an integral curve of some fundamental vector field of the $SO(5)$-action on ${{\cal P}_r}$. Thus, any non-degenerate polygon has a sufficiently small neighborhood in ${{\cal P}_r}$ in which $SO(5)$ acts freely. Consequently, ${{\cal M}_r}$ is locally smooth at $[P]$ as long as $P$ is non-degenerate. A weight vector $r$ is called $admissible$ if the corresponding space ${{\cal P}_r}$ is non-empty or, equivalently, $2r_j\leq \sum r_i$. Unless noted otherwise, we will restrict our attention to those admissible $r$ which also satisfy the trivial non-degeneracy condition, namely $r_1\pm r_2\pm\cdots\pm r_n\neq 0$. This last condition eliminates degenerate polygons contained in a straight line, that is, all those stabilized by a subgroup of $SO(5)$ isomorphic to $SO(4)$. It is easy, however, to see that such trivial polygons do not exhaust the collection of degenerate ones. For example, if $P\in {{\cal P}_r}$ is such that its edges merely span a linear subspace equivalent to $\R^3$ or $\R^2$, then $P$ is stabilized by $H\subseteq SO(5)$ where $H\simeq SO(2)$ or $H\simeq SO(3)$, respectively and thus is degenerate. In turns out that these two are, in fact, the only types of degeneracies allowed by our choice of $r$. A degenerate polygon is of type $k$, if it is stabilized by $H_k\subseteq SO(5)$ where $H_k\simeq SO(k)$. Accordingly, a type $k$ singularity in ${{\cal M}_r}$ is one which lifts to a degenerate polygon of type $k$ in ${{\cal U}_r}$. The only degenerate $n$-gons in ${{\cal U}_r}$ are of type $2$ and $3$. [[*Proof.*]{}]{}If $U=(u_1,...,u_n)$ is stabilized by $H$, then, perhaps after a change of basis, any element of $H$ may be represented by a block-diagonal matrix $h$ having a $k\times k$ identity block, where $k=\dim{\langle}u_1,...,u_n{\rangle}$. The fact that $h$ must lie in $SO(5)$ forces the second $(n-k)\times (n-k)$ block to lie in $SO(n-k)$. ${\hfill{\Box}\medskip}$ EXCISED. (Extended proof of Lemma) Suppose a polygon $P$ is degenerate. Then its edges fail to span ${\mathbb R^5}$. Let $A$ be the subspace of ${\mathbb R^5}$ spanned by the edges of $P$ and let $\{a_1,..., a_k\}$ be an basis for $A$. (Note that $k$ is necessarily less than $5$). Then $B\in SO(5)$ stabilizes $P$ if and only if $B|_A=id$. So, with respect to the basis $\{a_1,....,a_k,b_1,...b_{5-k}\}$ of ${\mathbb R^5}$, the first $k$ columns of $B$ looks like $(0,...0,1,0,...0)^T$, where the $1$is in the $k^{\mbox{{\small th}}}$ spot. 
Since $B$ satisfies $BB^T=I$, then the first $k$ rows of $B$ are the transposes of the first $k$ columns. This says that $B$ is block diagonal with the top-left block being just a $k\times k$ identity matrix. The second $n-k\times n-k$ block, call it $D$, must also satisfy $DD^T=I$ since this is necessary for $B$ to satisfy this same orthogonality condition. The above orthogonality condition implies $\det(D)=\pm 1$. We know $\det(B)=1$ so $D$ must either belong to $SO(5)$ or to $O(5)$ at worst, depending on the value of $k$. If $k=1$, $\det(B)=1(\det(D))$ which implies $\det(D)=1$ and hence $D\in SO(4)$ (this is the trivial degeneracy case). If $k=2$, $\det(B)=(\det(upper block=2\times 2 id)(\det(D))=1(\det(D))$ which implies $\det(D)=1$ and hence $D\in SO(3)$. Since the upper block is always and identity matrix, and the total determinant is the product of the determinant of the blocks, we see that $D$ must alway have determinant 1, hence lie in $SO(n-k)$.// $$B= \left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & a & c\\ 0 & 0 & 0 & c & b \end{array}\right)\ , ab-c^2=1$$ Recall that ${{{\cal M}_r}^{(3)}}$ denotes the moduli space of closed polygons in $\E^3$. The singular points in ${{\cal M}_r}$ form a $2n-6$ dimensional orbifold, $D_r$, which is naturally isomorphic to ${{{\cal M}_r}^{(3)}}/({\mathbb Z}/{2{\mathbb Z}})$. [[*Proof.*]{}]{}Define an involution ${\vartheta}$ in ${{{\cal M}_r}^{(3)}}$ by ${\vartheta}([P])=[\s(P)]$, where $P$ is any representative of $[P]\in {{{\cal M}_r}^{(3)}}$ and $\s$ is an arbitrary reflection about a plane in $\R^3$. Note that ${\vartheta}$ is well defined, as for any two reflections $\s_1$, $\s_2$ in $\R^3$ $[\s_1(P)]=[\s_2(P)]$. Also $[\s_1(P)]=[\s_1(Q)]$, if $[P]=[Q]$. So ${\vartheta}$ defines a ${\mathbb Z}/{2{\mathbb Z}}$-action on ${{{\cal M}_r}^{(3)}}$ fixing all planar polygons. Now suppose $[P]$ and $[Q]$ share a ${\mathbb Z}/{2{\mathbb Z}}$-orbit. If $[P]$ and $[Q]$ are planar, then $[P]=[Q]$, that is, $P$ and $Q$ lie in the same $SO(5)$-orbit. Otherwise, assume that $P$ and $Q$ have been chosen so that they span the same $3$-dimensional subspace $T\in \R^5$. As $P$ and $Q$ differ by a reflection in $T$, we may choose an orthonormal basis $\{e_1, e_2\}$ for the plane fixed by the reflection and extend it to a basis of $T$ by choosing $e_3$ to be normal to the fixed plane. Then, in any basis $\{e_1, e_2, e_3, \cdot, \cdot\}$ of $\R^5$, the matrix $$B= \left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & -1 & 0 & 0\\ 0 & 0 & 0 & a & c\\ 0 & 0 & 0 & c & b \end{array}\right)\ , ab-c^2=-1$$ is an element of $SO(5)$ mapping $P$ to $Q$. ${\hfill{\Box}\medskip}$ EXCISED. Shorter pf: It is now straight-forward to check that if $[P]$ and $[Q]$ lie in the same ${\mathbb Z}/{2{\mathbb Z}}$-orbit, corresponding $P$ and $Q$ lie in the same $SO(5)$-orbit. At this point, we note that for $n=4$, ${{\cal M}_r}=D_r$, as every $4$-sided polygon must lie entirely within a $3$-dimensional subspace in $\R^5$. Since the moduli space of $4$-gons in $\R^3$ is homeomorphic to a $2$-sphere (for $r$ satisfying the trivial non-degeneracy condition), it follows that in this case ${{\cal M}_r}\simeq S^2/({\mathbb Z}/{2{\mathbb Z}})$; that is, ${{\cal M}_r}$ is isomorphic to a closed disk. We also remark that Kamiyama in [@Kami2] showed that for equilateral hexagons, the space ${{\cal M}_r}$ is homeomorphic to $S^9$. 
In [@Kami1] he computed the Euler characteristic of the space of equilateral septagons, which turns out to be equal to $-7$. Also, Schoenberg in [@Schoen] showed that, for the case of equilateral septagons, the quotient of ${{\cal M}_r}$ by a natural involution (defined as in Proposition 2.4 above) is homeomorphic to $S^{13}$. We now turn to the analysis of the local structure of ${{\cal M}_r}$ near singular points. Our strategy makes use of slices. These are essentially generalized cross-sections which allow us to understand the structure of ${{\cal M}_r}$ near its singularities by studying the action of point stabilizers along directions transverse to local orbits. Recall that for a compact Lie group $G$, a $G$-space $M$ and $q\in M$, a $slice$, $S$, around $q$ is a subset of $M$ containing $q$ and satisfying the following conditions: - $S$ is closed in $G\cdot S$; - $G\cdot S$ is an open neighborhood of the orbit of $q$, $G\cdot q$; - $G_q\cdot S=S$, ($G_q$ denotes the stabilizer of $q$ in $G$); - $gS\cap S \neq \varnothing \Rightarrow g \in G_q$. We will begin by considering the type of singularity arising from degenerate polygons of type 2. The case of type 3 degeneracies is analogous and will be outlined at the end of this section. Let $P_0$ denote a degenerate polygon of type 2. Since we are interested in a local result, we may, if necessary, permute the edges of $P_0$ so that the first three span some 3-dimensional subspace of $\R^3$. Assume next that $U_0=(u_1^0,...,u_n^0)$, $u_i^0 \in S^4$ represents $P_0$, so that $\dim({\langle}u_1,u_2,u_3{\rangle})=3$. Because $[U_0]$ is defined up to the action of $SO(5)$, we may assume that $u_1^0=e_1$, $u_2^0\in {\langle}e_1,e_2{\rangle}^+$ and $u_3^0\in {\langle}e_1,e_2,e_3{\rangle}^+$, where $e_i, \ i=1,...,5$ denote the standard basis vectors of $\R^5$. (Here, the $+$ sign selects vectors with positive coordinates in the $e_2, \ e_3$ directions.) Define, $$\begin{aligned} S=\{U=(u_1,...,u_n)&\in& (S^4)^n:u_1=e_1, u_2\in {\langle}e_1,e_2{\rangle}, \nonumber \\ u_3&\in& {\langle}e_1,e_2, e_3{\rangle}, \ U \mbox{ spans }{\langle}e_1,e_2, e_3{\rangle}\}.\nonumber\end{aligned}$$ Then $$S_0=\{U\in S: r\cdot U=\sum_{i=1}^n{r_i u_i}=0\}$$ is a smooth $4n-14$ dimensional submanifold of ${{\cal U}_r}$ and a slice through $U_0$ for the $SO(5)$-action on ${{\cal U}_r}$. Note that the requirement that $U\in S$ spans $\R^3$ ensures that no degenerate polygons of type $3$ lie in $S$. Intuitively, a slice should contain at least one representative from each of the $SO(5)$-orbits near $u_0$ that is, we must make sure that each and all infinitesimal deformations of $U_0$ are represented in the slice. Observe that since $U_0$ spans $\R^3$ there exists a neighborhood of $U_0$ transversal to the local orbits (so just comprised of “non-equivalent" deformations) which does not contain any planar configurations. As mentioned above, this allows us to choose $S$ containing only type $2$ degeneracies. The fact that $S_0$ is a slice implies that the natural map $$\varphi: SO(5)\times_{H_2} S_0\rightarrow{{\cal U}_r}, \ [g,U]\mapsto gU$$ is a tube around the $SO(5)$-orbit of $U_0$ that is, $\varphi$ is an $SO(5)$-equivariant diffeomorphism onto a neighborhood of the orbit of $U_0$ in ${{\cal U}_r}$, where the $SO(5)$-action on $SO(5)\times_{H_2} S_0$ is just left translation $$g([g',U])=[gg',U], \ \forall g\in SO(5).$$ It follows that, $${{\cal U}_r}/SO(5)\simeq(SO(5)\times_{H_2} S_0)/SO(5)\simeq S_0/H_2$$ near $U_0$. 
Since left translation is a transitive action, the orbit of $[g,U]$ in $SO(5)\times_{H_2} S_0$ is $[SO(5),U]$. Now, $[SO(5),U]=[SO(5),V]$ if and only if there exist $h \in H_2$ such that $SO(5)h^{-1}=SO(5)$ (any $h$ will do), and $hU=V$, i.e., $U$ and $V$ share the same $H_2$-orbit in $S_0$. $SO(5)\times_{H_2} S_0$ denotes the [*twisted product*]{} of $SO(5)$ and $S_0$, i.e., the orbit space of the $H_2$-action on $SO(5)\times S_0$ given by $h(g,u)=(g h^{-1},hu)$. We can think of this space as a fiber bundle over $SO(5)/H_2$ with fiber $S_0$, hence the naturality of the map $\varphi$. In fact, a slice can also be characterized as $G_q$-invariant subspace of $M$ for which the map $\varphi$ is a tube around $G\cdot q$ (an equivariant embedding onto some open neighborhood of this orbit). Now, as $S$ is clearly smooth near $U_0$ and $0$ is a regular value of the map $\phi|_S$, it follows that $S_0=\phi|_S^{-1}(0)$ is also locally smooth near $U_0$. Thus, $H_2$ induces a linear action on $T_{U_0}S_0$ via the isotropy representation. Let $N(U_0)$ denote a sufficiently small neighborhood of $U_0$ in $S_0$. In the presence of an $H_2$-equivariant map $\psi:N(U_0)\rightarrow T_{U_0}S_0$, we have that $N(U_0)/H_2\simeq T_{U_0}S_0/H_2$. Thus, understanding the local structure near the singularity $[P_0]\in {{\cal M}_r}$ amounts to understanding the linear action of $H_2\simeq SO(2)$ on $T_{U_0}S_0$. If $[P_0]$ is a type $2$ singularity, then there exists a neighborhood of $[P_0]$ in ${{\cal M}_r}$ isomorphic to a neighborhood of $0$ in $$\R^{2n-6} \times [(\R^2)^{n-4}/SO(2)].$$ The factor $\R^{2n-6}$ corresponds to infinitesimal deformations along the singular locus, that is, those spanning three dimensions. [[*Proof.*]{}]{}The vector space $T_{U_0}S_0$ consists of vectors ${\epsilon}=({\epsilon}_1,...,{\epsilon}_n)$ in $(\R^5)^n$ satisfying the following conditions: - ${\epsilon}_i\cdot u_i^0=0$, $i=1,...,n$; - - ${\epsilon}_1=0$, - ${\epsilon}_2\in {\langle}e_1,e_2{\rangle}$, - ${\epsilon}_3\in {\langle}e_1,e_2,e_3{\rangle};$ - $\sum_{i=1}^n {\epsilon}_i=0.$ Conditions ([**ii**]{}) above may be regarded as infinitesimal “slice" conditions while ([**iii**]{}) is the infinitesimal closing condition. Let us write each of the component vectors of ${\epsilon}$ as a sum of vectors ${\delta}_i+\mu_i={\epsilon}_i$, where $\mu_i$ is the projection of ${\epsilon}_i$ onto ${\langle}e_4,e_5{\rangle}$ for each $i$. Then, conditions ([**ii**]{}) above imply that any ${\epsilon}$ in $T_{U_0}S_0$ has the form $${\epsilon}=(0,{\delta}_2,{\delta}_3,{\delta}_4+\mu_4,...,{\delta}_n+\mu_n).$$ At the same time, condition ([**i**]{}) implies ${\delta}_2$ has $1$ degree of freedom within ${\langle}e_1,e_2{\rangle}$ whereas the remaining ${\delta}_i$, $i=3,...,n$, have $2$ degrees of freedom within ${\langle}e_1,e_2,e_3{\rangle}$. Finally, condition ([**iii**]{}) says that $$\begin{aligned} \sum_{i=2}^n{\delta}_i=0 &\Leftrightarrow& -({\delta}_2+...+{\delta}_{n-1})={\delta}_n\in {\langle}e_1,e_2,e_3{\rangle}, \nonumber \\ \sum_{i=3}^n\mu_i=0 &\Leftrightarrow& -(\mu_3+...+\mu_{n-1})=\mu_n\in {\langle}e_4,e_5{\rangle}. \nonumber\end{aligned}$$ Now, the linear action of $H_2$ is clearly trivial on the $(2n-6)$-dimensional subspace of $T_{U_0}S_0$ spanned by the ${\delta}_i, \ i=2,...,n-1$. However, it is a standard diagonal circle action on each of the $n-4$, two-dimensional subspaces spanned by each pair of linearly independent $\mu_i, \ i=1,...,n-1$. The proposition follows. 
${\hfill{\Box}\medskip}$ The local structure of ${{\cal M}_r}$ at a singularity of type $3$ can be unveiled with a similar argument, as outlined below. Let $Q_0$ be a degenerate polygon of type $3$. As for the case of type 2 singularities, we may, if necessary, relabel the edges of $Q_0$ so that the first two span a 2-dimensional subspace of $\R^5$. We may also choose a canonical representative $V_0=(e_1,v_2^0...,v_n^0)\in (S^4)^n$ of $[Q_0]$, where $v_2^0\in {\langle}e_1,e_2{\rangle}^+$, and define a $(4n-12)$-dimensional slice through $V_0$ for the $SO(5)$-action on ${{\cal U}_r}$ by $$\begin{aligned} S_0=\{V=(v_1,...,v_n)\in (S^4)^n: v_1&=&e_1, v_2\in{\langle}e_1,e_2{\rangle}, \nonumber \\ r\cdot V&=&\sum_{i=1}^n r_iv_i=0\}. \nonumber\end{aligned}$$ The smooth nature of $S_0$ near $V_0$ allows us to linearize our problem by considering the induced action of $H_3\simeq SO(3)$ on $T_{V_0}S_0$ given by the isotropy representation. An argument parallel to the one offered in the previous proposition, shows that an arbitrary element ${\epsilon}$ of $T_{V_0}S_0$ may be written as ${\epsilon}=(0,\g_2,\g_3+\eta_3,...,\g_n+\eta_n)$, where $\g_i$, $i=2,...,n$, is the projection of ${\epsilon}_i$ onto ${\langle}e_1,e_2{\rangle}$ and has 1-degree of freedom within this span. Similarly, $\eta_i, \ i=3,...,n$, has $3$-degrees of freedom within ${\langle}e_3,e_4,e_5{\rangle}$. Taking into account the infinitesimal closing condition, we see that the fixed point set of $H_3$ is equivalent to $\R^{n-3}$. We obtain the following proposition. If $[Q_0]$ is a type $3$ singularity, then there exists a neighborhood of $[Q_0]$ in ${{\cal M}_r}$ isomorphic to a neighborhood of $0$ in $$\R^{n-3} \times [(\R^3)^{n-3}/SO(3)].$$ The linear factor $\R^{n-3}$ corresponds to planar deformations of $[Q_0]$. ${\hfill{\Box}\medskip}$ We finish this section with a geometric description of a singularity of type $2$ for the moduli space of $5$-gons and $6$-gons, respectively. For $n=5$, Proposition $2.6$ tells us that any type $2$ singularity has a neighborhood in ${{\cal M}_r}$ which is analytically isomorphic to a neighborhood of $0$ in $\R^4\times \R^{>0}$. That is, ${{\cal M}_r}$ looks like a $5$-dimensional smooth manifold with boundary near this type of singularity. The $4$-dimensional boundary component corresponds to the smooth, “horizontal" deformations, namely those generated by the linearly-independent vector fields which locally span the singular locus $D_r$. (Note that indeed, $D_r\simeq \R^4$ near any type 2 singularity as, in this case, no infinitesimal deformations lead to planar polygons). The fifth “transversal" direction corresponds, of course, to the non-degenerate deformations of the polygon. For $n=6$, any type $2$ singularity possesses a neighborhood in ${{\cal M}_r}$ isomorphic to $\R^6\times [(\R^2\times \R^2)/SO(2)]$. The first component corresponds to the infinitesimal deformations along the (locally) smooth 6-dimensional singular locus $D_r$. The transversal component is, in this case, equivalent to a 3-dimensional homogeneous quadratic cone. 
Indeed, the linear action of $H_2\simeq SO(2)$ along transverse directions, $$H_2\times(\R^2\times \R^2)\rightarrow(\R^2\times \R^2):(h,(X,Y))\mapsto (hX,h^{-1}Y),$$ induces an action on the polynomial ring $\R[X,Y]$, $X=(x_1,x_2)$, $Y=(y_1,y_2)$ given by $$H_2\times \R[X,Y]\rightarrow\R[X,Y]:(h,f(X,Y))\mapsto f(hX,h^{-1}Y).$$ It is then immediate that $p_1=x_1^2+x_2^2$, $p_2=y_1^2+y_2^2$, $p_3=x_1y_1-x_2y_2$ and $p_4=x_2y_1+x_1y_2$ lie in the ring of invariant functions, $\R[X,Y]^{H_2}$, and satisfy the relation $p_1p_2=p_3^2+p_4^2$. Let $V_p$ denote the (irreducible) $3$-dimensional semi-algebraic variety associated to the ring $\R[p_1,p_2,p_3,p_4]/\langle{p_1p_2-p_3^2-p_4^2}\rangle$, $p_1, p_2\geq 0$. Then $V_p$ is isomorphic to the orbit space $(\R^2\times \R^2)/H_2$. Indeed, any given $(p_1,p_2,p_3,p_4)$ in $V_p$ with $p_1\neq 0$ is the image of the $H_2$-orbit of $(X,Y)=(\sqrt{p_1},0,\frac{p_3}{\sqrt{p_1}},\frac{p_4}{\sqrt{p_1}})$, so that the standard map $(\R^2\times \R^2)/H_2\rightarrow V_p$ defined by the invariants is onto. Also, if two $H_2$-orbits map to a single $(p_1,p_2,p_3,p_4)$ in $V_p$, then the specific form of the invariant functions guarantees the existence of $h\in H_2$ mapping one orbit to the other, that is, the orbits are the same and the map is one-to-one. It follows that, in the transversal direction, ${{\cal M}_r}$ is equivalent to a homogeneous quadratic cone cut out by the equation $p_1p_2=p_3^2+p_4^2$. Similar arguments can be applied to reveal the geometric characteristics of a type 3 singularity. For instance, for $n=5$ the $H_3\simeq SO(3)$-action along transversal directions gives once more the usual $H_3$-action on $\R[X,Y]$, $X=(x_1,x_2,x_3)$, $Y=(y_1,y_2,y_3)$. It then follows that $|X|^2$, $|Y|^2$, $X\cdot Y$, $|X\times Y|$ are invariant functions subject to the relation $(X\cdot Y)^2+|X\times Y|^2=|X|^2|Y|^2$. Real analytic equivalence between ${{\cal M}_r}$ and the weighted quotient of $({{\mathbb H}{\mathbb P}}^1)^n$ by $PSL(2,\H)$ ============================================================================================================================= In this section, we construct the weighted quotient of the configuration space of $n$ points in ${{\mathbb H}{\mathbb P}}^1$ by ${PSL(2,\H)}$, denoted $Q_{st}$, and exhibit a real analytic equivalence between ${{\cal M}_r}$ and $Q_{st}$. Establishing this equivalence involves extending the natural action of ${PSL(2,\H)}$ on the rank-one symmetric space ${PSL(2,\H)}/SO(5)$ to its geometric boundary in a manner that is consistent with the $SO(5)$-action on $S^4$. In this sense, this construction can perhaps be seen as a special case of the results found in [@LeebM]. We begin by recalling that the real Lie group $GL(2,\H)$ may be viewed as the space of $2\times 2$ invertible matrices with entries in the skew field of quaternions, $\H$. The invertibility condition is equivalent to the requirement that the Dieudonné determinant, $D(A)$, of any $A$ in $GL(2,\H)$ be non-zero. If we impose the additional condition that $D(A)=1$, we obtain the (semi-simple) Lie group $SL(2,\H)$. Then, ${PSL(2,\H)}=SL(2,\H)/\{\pm 1\}$. Let ${{\mathbb H}{\mathbb P}}^1\simeq S^4$ denote the quotient space $\big(\H^2\setminus\{(0,0)\}\big)/\H^*$, where we assume the group of units $\H^*$ acts by right multiplication on $\H^2$.
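Explicitly, the right-multiplication convention identifies ${{\mathbb H}{\mathbb P}}^1$ with the one-point compactification of $\H$: for $q_2\neq 0$ one has $[q_1:q_2]=[q_1q_2^{-1}:1]$, while the only remaining class is $[1:0]$, so that $${{\mathbb H}{\mathbb P}}^1=\{[q:1]:q\in\H\}\cup\{[1:0]\}\simeq \H\cup\{\infty\}\simeq \R^4\cup\{\infty\}\simeq S^4,$$ which is the identification ${{\mathbb H}{\mathbb P}}^1\simeq S^4$ used throughout.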
We define a left action of ${PSL(2,\H)}$ on ${{\mathbb H}{\mathbb P}}^1$ by linear fractional transformations as follows: $${PSL(2,\H)}\times {{\mathbb H}{\mathbb P}}^1\rightarrow{{\mathbb H}{\mathbb P}}^1:\ g\cdot[q_1:q_2]\mapsto[aq_1+bq_2:cq_1+dq_2],$$ $$\mbox{where }g=\left( \begin{array}{cc} a & b\\ c & d \end{array}\right)\, \in {PSL(2,\H)}\mbox{, }(q_1,q_2)\in\H^2\setminus\{(0,0)\}.$$ Let $M\subseteq({{\mathbb H}{\mathbb P}}^1)^n$ be the collection of $n$-tuples of distinct points. Then the left diagonal action of ${PSL(2,\H)}$ on $({{\mathbb H}{\mathbb P}}^1)^n$ induces an action on $M$ such that the quotient space $M/{PSL(2,\H)}$ is a Hausdorff, real manifold. Consider an $n$-tuple of positive real numbers, $r=(r_1,...,r_n)$. As ${{\cal M}_r}\simeq \M_{\l r}$ for all $\l\in \R^+$, we may adopt the normalization $\sum_{i=1}^n{r_i}=2$. A point $p=(p_1,...,p_n)\in({{\mathbb H}{\mathbb P}}^1)^n$ is called stable if $$\sum_{p_i=q}r_i<1$$ for all $q\in {{\mathbb H}{\mathbb P}}^1$. It is immediate that if we restrict ourselves to (admissible) $r$ satisfying the non-degeneracy condition (see Section 2), all semi-stable $p\in({{\mathbb H}{\mathbb P}}^1)^n$ (those for which the above holds with $\le$ in place of $<$) are in fact stable. However, it is not hard to define semi-stable and nice semi-stable configurations as in [@LeebM]. Let $M_{st}$ denote the space of all stable points in $({{\mathbb H}{\mathbb P}}^1)^n$ and set $Q_{st}=M_{st}/{PSL(2,\H)}$ with the quotient topology. Recall from Section 2 that ${{\cal U}_r}\subseteq (S^4)^n$ represents the collection of closed $n$-gons, ${{\cal P}_r}$. The first step in establishing the correspondence between $Q_{st}$ and ${{\cal M}_r}={{\cal U}_r}/SO(5)$ is to note that the closing condition implies the stability condition in Definition 3.1. As $SO(5)$ is a maximal compact subgroup of ${PSL(2,\H)}$, the inclusion ${{\cal U}_r}\subseteq M_{st}$ induces the (injective) quotient map $$\xi:{{\cal M}_r}={{\cal U}_r}/SO(5)\rightarrow Q_{st}=M_{st}/{PSL(2,\H)}.$$ The quotient map $\xi$ gives a real analytic isomorphism between ${{\cal M}_r}$ and $Q_{st}$. Proving Theorem 3.2 essentially amounts to proving Lemma 3.4 below, which, modulo some preliminary results on the action of ${PSL(2,\H)}$ on a space of discrete stable measures on $S^4$, guarantees the surjectivity of the map $\xi$. The same ideas were used by Kapovich and Millson in [@KM] to establish a complex analytic equivalence between ${{{\cal M}_r}^{(3)}}$ and the weighted quotient of $(S^2)^n$ by $PSL(2,\C)$, constructed by Deligne and Mostow in [@DM]. As is also the case in this section, the arguments used in [@KM] to establish such equivalence rely on some special properties of the conformal barycenter constructed by Douady and Earle in [@DE] for stable measures on the sphere. A probability measure on $S^4$ is said to be stable if the mass of any atom is strictly less than $\frac{1}{2}$. For our choice of $r$, each point in ${{\cal M}_r}$ gives rise to a stable, finite probability measure of total mass 1 defined by $$\mu=\frac{1}{2}\sum_{i=1}^nr_i{\delta}_{u_i},$$ where ${\delta}_{u_i}$ denotes the delta function on $S^4$ centered at $u_i$.
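To see that $\mu$ is indeed stable, note that an atom of $\mu$ at $q\in S^4$ has mass $\frac12\sum_{u_i=q}r_i$, and, by the closing condition $\sum_{i=1}^n r_iu_i=0$ together with $|u_i|=1$, $$0=\Big|\sum_{i=1}^n r_iu_i\Big|\ \geq\ \sum_{u_i=q}r_i-\sum_{u_i\neq q}r_i\ =\ 2\sum_{u_i=q}r_i-2,$$ so that $\sum_{u_i=q}r_i\leq 1$; equality is excluded if, as we read the non-degeneracy condition of Section 2, no sub-collection of the $r_i$ sums to exactly $1$, and hence every atom has mass strictly less than $\frac12$.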
The center of mass, $C(\mu)$ of such a measure is given by $$C(\mu)=\frac{\sum_{i=1}^nr_iu_i}{\sum_{i=1}^nr_i}=\frac{1}{2}\sum_{i=1}^nr_iu_i.$$ Note that the ${PSL(2,\H)}$-action on $S^4$ by linear fractional transformations induces an action on probability measures on $S^4$ defined by $$g\cdot\mu(A)=g_*\mu(A):=\mu(g^{-1}(A)),\mbox{ where } A\subseteq S^4 \mbox{ is a Borel set}. \label{e:push}$$ For each finite stable measure $\mu$ on $S^4$ there exists $g\in{PSL(2,\H)}$ such that the center of mass $C(g\cdot \mu)=0$. [[*Proof.*]{}]{}Consider the Cartan decomposition of ${PSL(2,\H)}=SO(5) P,$ $$P=\left\{ \left( \begin{array}{cc} \rho_1 & q \\ \bar{q} & \rho_2 \end{array}\right),\ \rho_1\rho_2=1+|q|^2,\ \rho_1>0,\ \rho_2>0,\ q\in\H\right\},$$ and the natural action of ${PSL(2,\H)}$ on $P$ given by $${PSL(2,\H)}\times P\rightarrow P:(g, x)\mapsto g x g^*, \ g^*=\bar{g}^t.$$ We will first show that the geometric boundary of $P$ may be identified with $S^4$ and that the above ${PSL(2,\H)}$-action may be continuously extended to an action on this geometric boundary which coincides with the action on $S^4$ by linear fractional transformations. To this purpose, we identify the vector space $P$ with ${{\mathbb R}^5_{>0}}=\{(x_1,x_2,x_3,x_4,x_5), \ x_5>0\}$ by setting $$\begin{aligned} x_i&=&\frac{q_i}{\rho_2},\mbox{ }i=1,2,3,4; \nonumber \\ x_5&=&\frac{1}{\rho_2}>0 \nonumber\end{aligned}$$ where $q=q_1+iq_2+jq_3+kq_4\in \H$. The ${PSL(2,\H)}$-action can be written in terms of the $x_i$ coordinates as $$\begin{aligned} v_x & \mapsto & \frac{{|x|^2}a\bar{c}+b\bar{v_x}\bar{c}+av_x\bar{d}+b\bar{d}}{{|x|^2}|c|^2+d\bar{v_x}\bar{c}+cv_x\bar{d}+|d|^2}\nonumber \\ x_5 & \mapsto & \frac{x_5}{{|x|^2}|c|^2+d\bar{v_x}\bar{c}+cv_x\bar{d}+|d|^2}>0\nonumber\end{aligned}$$ where $|x|^2=\sum_{i=1}^5{x_i^2}$,  $v_x=x_1+ix_2+jx_3+kx_4\in \H, \ g=\left( \begin{array}{cc} a & b\\ c & d \end{array}\right)$. Let us now continuously extend this ${PSL(2,\H)}$-action to the geometric boundary of ${{\mathbb R}^5_{>0}}$ consisting of $\{x\in \R^5: x_5=0\}\cup\{\infty\}$ by $$\begin{aligned} g\cdot (x_1,x_2,x_3,x_4,0)&:=&\lim_{x_5\rightarrow 0} g\cdot (x_1,x_2,x_3,x_4,x_5)\nonumber \\ &= &\lim_{x_5\rightarrow 0} \frac{{|x|^2}a\bar{c}+b\bar{v_x}\bar{c}+av_x\bar{d}+b\bar{d}}{{|x|^2}|c|^2+d\bar{v_x}\bar{c}+cv_x\bar{d}+|d|^2}\nonumber \\ &= &\frac{|v_x|^2a\bar{c}+b\bar{v_x} \bar{c}+av_x\bar{d}+b\bar{d}}{|v_x|^2|c|^2+d\bar{v_x}\bar{c}+cv_x\bar{d}+|d|^2}\nonumber \\ &= &(av_x+b)(cv_x+d)^{-1}\nonumber.\end{aligned}$$ Upon identifying each $v_x\in\H$ with the corresponding $[v_x:1]\in{{\mathbb H}{\mathbb P}}^1$, we obtain an action of ${PSL(2,\H)}$ on $\H\cup{\infty}={{\mathbb H}{\mathbb P}}^1$ which coincides with the original ${PSL(2,\H)}$-action on ${{\mathbb H}{\mathbb P}}^1$ by linear fractional transformations. Let $B^5$ denote the closed unit ball in $\R^5$. One can introduce coordinates $y_i,\ i=1,...,5$ in $\R^5$, and define an invertible map $\R^5_{> 0}\rightarrow B^5$ by $$\begin{aligned} y_i & = & \frac{2x_i}{1+|x|^2},\ i=1,...,4;\nonumber \\ y_5 & = & \frac{1-|x|^2}{1+|x|^2}. \nonumber\end{aligned}$$ If we then express the ${PSL(2,\H)}$-action in terms of the $y$-coordinates for the ball model, we may directly check that the map sending a finite stable measure $\mu$ on $S^4$ to its center of mass $C(\mu)\in B^5$ fails to be ${PSL(2,\H)}$-equivariant. However, in their paper [@DE], Douady and Earle construct a conformal barycenter $B\in B^n$ associated to every stable probability measure on $S^{n-1}$. 
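We note in passing that the change of variables $x\mapsto y$ above does take $\R^5_{>0}$ into $B^5$ and identifies the geometric boundary $\{x_5=0\}$ with $S^4$: a direct computation gives $$|y|^2=\frac{4\sum_{i=1}^4x_i^2+(1-|x|^2)^2}{(1+|x|^2)^2}=1-\frac{4x_5^2}{(1+|x|^2)^2},$$ so that $|y|<1$ precisely when $x_5>0$, and $|y|=1$ when $x_5=0$.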
For $n=5$, the conformality of $B$ amounts to the ${PSL(2,\H)}$-equivariance of the assignment $\mu\mapsto B(\mu)$, namely, $$B(g\cdot\mu)=g(B(\mu)),$$ where ${PSL(2,\H)}$ acts on $B^5$ as indicated above, and on measures by push-forward, as defined in (\[e:push\]). It turns out that, for stable $\mu$, the conformal barycenter $B(\mu)$ coincides with the unique zero of a vector field $\xi_{\mu}$ defined on (the appropriate) $B^n$. For $n=5$ and stable, finite $\mu$, we have that $$\xi_\mu(y)=\frac{1}{2}\sum_{i=1}^n\left({\frac{1-|y|^2}{|y-u_i|^2}}\right)^4r_i (y-u_i),$$ where $y\in B^5$, $u_i\in S^4$ for all $i$, and $\mu$ is defined by $u=(u_1,...,u_n)$. As $\xi_\mu(0)=\frac{1}{2}\sum_{i=1}^n r_iu_i$, one immediately verifies that: $$B(\mu)=0\Leftrightarrow C(\mu)=0.$$ Lemma 3.4 then follows at once from the transitivity of the ${PSL(2,\H)}$-action on $B^5$. ${\hfill{\Box}\medskip}$ The real analytic equivalence in Theorem 3.2 is thus established. ${\hfill{\Box}\medskip}$ Gelfand-MacPherson correspondence over the quaternions ====================================================== The classical Gelfand-MacPherson correspondence [@GMacP] asserts that the quotient of the generic part of $Gr_{\R}(p, q)$ by the Cartan subgroup of $PGL(p+q, \R)$ is diffeomorphic to the space of equivalence classes of generic configurations of $(p+q)$ points in $\RP^{q-1}$. Later on, it was realized (see e.g. [@Kapr]) that in the complex context, there is an isomorphism between appropriately chosen symplectic quotients of these spaces, as well as between the corresponding GIT quotients. The goal of this section is to show that a similar result holds over the quaternions. More precisely, let $n=p+q$, and let $P$ be the subgroup of $SL(n, \H)$ which preserves a $q$-dimensional subspace of $\H^n$. Then $Gr(p,q)=SL(n, \H)/P\simeq Sp(n)/Sp(p)Sp(q)$ is the quaternionic Grassmannian with the $PSL(n,\H)$-action. In this case, the maximal compact subgroup of $SL(n, \H)$ is $Sp(n)$, and $\S^n\simeq Sp(1)^n\subset Sp(n)$ is the diagonal subgroup of $Sp(n)$ and thus, a natural analog of the torus. We can then consider the tri-momentum map $\mu: Gr(p,q)\to \R^n$ defined similarly to the moment map in the complex case [@F1]. The level sets of $\mu$ are $\S^n$-invariant, so for any $x=(x_1, ..., x_n)\in \R^n$, one gets the reduced space $X_x:=\mu^{-1}(x)/\S^n$, which is not smooth in general, as the action of $\S^n$ on $\mu^{-1}(x)$ is free only generically. On the other hand, let us consider the space ${\tilde Y}=({{\mathbb H}{\mathbb P}}^{p-1})^n$ with the diagonal $PSL(p, \H)$-action. There is a map $\phi$ from ${\tilde Y}$ to the (real) projectivization of $\calH_p$ —the space of quaternionic Hermitean $p\times p$ matrices, denoted by $\P\calH_p$. In terms of the homogeneous coordinates $[w_1^{(i)}:\cdots : w_p^{(i)}]$ on the $i^{\mbox{{\small th}}}$ component ${{\mathbb H}{\mathbb P}}^{p-1}$, the map $\phi$ sends a point with these coordinates to the projectivization of the matrix whose $(i,j)^{\mbox{{\small th}}}$ entry is given by $\sum_{l=1}^p w_l^{(i)}{\bar w}_l^{(j)}$. There is an obvious $Sp(p)$-action on $\P\calH_p$ by conjugation, and if $\calO$ is an orbit of this action, then $\phi^{-1}(\calO)$ is preserved by $Sp(p)$. We have the following: For the appropriate choices of $x$ and ${\cal O}$, the corresponding quotients $X_x$ and $Y_{\calO}:=\phi^{-1}(\calO)/Sp(p)$ are homeomorphic, while their open dense smooth parts are in fact diffeomorphic. 
[[*Proof.*]{}]{}For both quotients, we can start with the same space $\H^{pn}$, with coordinates $(w_i^{(j)})$, $1\le i\le p$, $1\le j\le n$, from which we obtain two maps $\mu: \H^{pn}\to \R^n$ given by $x_j=\sum_{i=1}^p |w_i^{(j)}|^2$, $j=1,...,n$, and $\phi: \H^{pn}\to \calH_p$ given as above, prior to the projectivization. One can easily establish that the level sets of the first map are preserved by the action of $Sp(p)$ and the level sets of $\phi$ are preserved by $\S^n$. Our result then follows from a simple application of reduction in stages and the fact that $Gr(p,q)$ is obtained from $\H^{pn}$ by choosing a level set of $\phi$ and quotienting by $Sp(p)$, while $({{\mathbb H}{\mathbb P}}^{p-1})^n$ is obtained by choosing a level set of $\mu$ and quotienting by $\S^n$. Therefore, we see that for the appropriate choices of $x$ and ${\cal O}$ we obtain a bijection on the level of orbits. ${\hfill{\Box}\medskip}$ An alternate way of establishing the quaternionic version of the Gelfand-MacPherson correspondence by using a certain natural involution will be presented in Section 7. Gelfand-Tsetlin coordinates and application to $\M_r$ ===================================================== The relationship between the bending flows on the moduli spaces of polygons in $\R^3$ defined by Kapovich and Millson in [@KM] and the Gelfand-Tsetlin (GT) system on the Grassmannian $Gr_{\C}(2, n)$ was exhibited by Hausmann and Knutson in [@HK]. In this section we simply follow the road paved in [@HK] to show that one can obtain a similar relation in the quaternionic context. First of all, let us recall that for any $n\times n$ quaternionic Hermitean matrix $A\in \calH_n$, the set of $n$ real eigenvalues (and hence all real invariant polynomials) is well-defined by formulas analogous to the complex case. (Note that this is not the case for a general matrix with quaternionic entries.) Therefore, the whole system of Gelfand-Tsetlin coordinates $\{ \l_i^{(j)}\}$ described below is also well-defined on $\calH_n$. Recall that given a matrix $A\in \calH_n$, we may denote by $A^{(j)}$ the $j\times j$ upper-left corner submatrix of $A$. Then, by definition, $\l_i^{(j)}(A)$ is the $i^{\mbox{{\small th}}}$ eigenvalue of $A^{(j)}$ (these are arranged in non-increasing order). The whole system $\{ \l_i^{(j)}\}$ satisfies the interlacing property: $$\l_i^{(j)}\ge \l_i^{(j-1)}\ge \l_{i+1}^{(j)}.$$ Due to possible collisions of eigenvalues, the functions $\l_i^{(j)}$ are not smooth on all of $\calH_n$, but only continuous. However, their importance in the complex case, as was shown by Guillemin and Sternberg [@GSGT] [@GST] based on Thimm [@Th], is manifested by the fact that, generically, these functions form a set of complete integrals on the co-adjoint orbits of $U(n)$ (that is, on the level sets of the functions $\l_i^{(n)}$) for the Lie Poisson structure. Moreover, generically, the level sets of the rest of the (GT) coordinates are Lagrangian tori. Similarly, one can define (GT) coordinates on $\calH_n$ in the quaternionic case, as every element of $\calH_m$ has $m$ real eigenvalues. In this case, the orbits of the natural $Sp(n)$-action by conjugation are analogues of the co-adjoint orbits. To see that the interlacing property holds, one can use the embedding $\nu: \calH_n\hookrightarrow \calH_{2n}^{\C}$ defined in Equation (\[e:nu\]) in the next section, and use the interlacing property for the eigenvalues of the complex Hermitean sub-matrices.
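As a small illustration, consider a $2\times 2$ quaternionic Hermitean matrix $$A=\left( \begin{array}{cc} \alpha & q\\ \bar{q} & \beta \end{array}\right)\in\calH_2, \qquad \alpha,\beta\in\R,\ q\in\H.$$ A direct computation (for instance via the embedding $\nu$ just mentioned) gives the two real eigenvalues $$\l_{1,2}^{(2)}=\frac{\alpha+\beta}{2}\pm\sqrt{\Big(\frac{\alpha-\beta}{2}\Big)^2+|q|^2},$$ each of which occurs with multiplicity two among the eigenvalues of the corresponding complex Hermitean $4\times 4$ matrix, while $A^{(1)}=(\alpha)$ has the single eigenvalue $\l_1^{(1)}=\alpha$; the interlacing $\l_1^{(2)}\ge \l_1^{(1)}\ge \l_2^{(2)}$ is then evident.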
The main point is that each eigenvalue in the embedding defined by $\nu$ will appear twice. One can also see that the generic level sets of these functions on the orbits $\calO$ of the $Sp(n)$-action are diffeomorphic to $\S^m\simeq Sp(1)^m$, the diagonal subgroup of $Sp(m)$, for $m=\dim (\calO)/4$. This fact is also easy to prove by an inductive argument. Let $v_i, \ i=1,...,n$, denote the vertices of an arbitrary $n$-gon in ${{\cal M}_r}$ and $d_i=v_{i+2}-v_1, \ i=1,..., n-3$, its $n-3$ diagonals. Set $\ell_i=||d_i||,\ i=1,...,n-3$. We would like to show that an analogue of Theorem 5.2 in [@HK] holds in our case. Namely, that if one fixes a level set of the diagonal lengths, $L_\ell$, for a generic polygon in ${{\cal M}_r}$, then the transitive spheroid action on $L_{\bf \ell}$ comes from the residual action of $\S^{2n-4}$ on a level set of the (GT) system on $Gr(2, n)$. Indeed, an element of $Gr(2,n)$ can be represented by an $n\times 2$ matrix $M$ with quaternionic entries, whose columns span a $2$-dimensional subspace of $\H^n$. If $(a_1,..., a_n)^t$ and $(b_1, ..., b_n)^t$ are the columns, then we can also introduce the truncated $i\times 2$ submatrices $M_i$, where only the first $i$ rows remain. As in Section 3, let $M^*=\overline{M}^t$. Then the non-zero eigenvalues of the $2\times 2$ quaternionic Hermitean matrix $$M_i^*M_i=\sum_{j=1}^i \left( \begin{array}{cc} |a_j|^2 & {\bar a}_jb_j \\ {\bar b}_j a_j & |b_j|^2 \end{array} \right)$$ are the same as the non-zero eigenvalues of the matrix $M_iM_i^*\in\calH_i$. The proof of this fact is analogous to Section 5 of [@HK] and gives the result. Relations with complex geometry =============================== Let us identify $\C^{2m}$ with $\H^m$ as follows. The point $(z_1, ..., z_{2m})\in \C^{2m}$ corresponds to the point $(q_1, ..., q_m)\in\H^m$ if $q_i=z_{2i-1}+{\bf j}z_{2i}$ for $1\le i\le m$. Using this identification, let $J$ be the real operator on $\C^{2m}$ which comes from the right multiplication by ${\bf j}$ on the space $\H^m$. Since $J^2=-\mbox{Id}$, the action of $J$ on $\C^{2m}$ extends to an involution $\theta$ on all complex partial flag manifolds. However, this involution is only a real, and not a holomorphic, diffeomorphism. If all the dimensions of the subspaces are even, then the fixed point set of $\theta$ is clearly the quaternionic (partial) flag manifold: $$(F_{2m}^{\C}(2m_1, ..., 2m_k))^{\theta}=F_m^{\H}(m_1, ..., m_k).$$ Moreover, if $\omega$ is an invariant Kähler form on the complex flag manifold, then the fixed point set of $\theta$ is a Lagrangian submanifold with respect to $\omega$. Let us now restrict our attention to the case of the complex Grassmannian $Y=Gr_{\C}(2,4)$ of complex 2-planes in a 4-dimensional complex space. In particular, the above discussion implies that $Y^\theta={{\mathbb H}{\mathbb P}}^1$. We can always view the group $SL(n, \H)$ as a real form of the complex semi-simple group $SL(2n, \C)$, with the corresponding Satake diagram [@Araki] having odd numbered vertices painted and no arrows. The multiplicity of each restricted root is $4$. In terms of matrices, we have the embedding $$\nu: SL(n, \H)\hookrightarrow SL(2n, \C), \ A+B{\bf j}\ \mapsto \left( \begin{array}{cc} A & B \\ -{\bar B} & {\bar A} \end{array} \right). \label{e:nu}$$ If we let $J$ be the $2n\times 2n$ matrix $$\left( \begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array} \right),$$ written in $n\times n$ blocks, then one can define the involution $\t$ on $SL(2n, \C)$ by $\t(C)=-J{\bar C}J$, which defines the real form $SL(n, \H)$.
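For the reader's convenience, we record the straightforward check that the image of $\nu$ is fixed by $\t$: for $C=\nu(A+B{\bf j})$ one computes $$J\bar{C}=\left( \begin{array}{cc} -B & A \\ -\bar{A} & -\bar{B} \end{array} \right), \qquad J\bar{C}J=\left( \begin{array}{cc} -A & -B \\ \bar{B} & -\bar{A} \end{array} \right), \qquad\mbox{so that}\quad -J\bar{C}J=\left( \begin{array}{cc} A & B \\ -\bar{B} & \bar{A} \end{array} \right)=C.$$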
One can see that using the notation $\t$ for both the involution just defined and the involution on $\C^{2n}$ presents no harm, since both have the same origin and the action of $SL(n, \H)$ on the fixed point set of $\t$ in flag manifolds comes from the action of $SL(2n, \C)$. Let us now consider the $n$-fold product $Y^n=\underbrace{Y\times\cdots\times Y}_n$ with the diagonal action of the group $G=SL(4, \C)$. We will always assume that $n>4$, since we would like generic orbits to depend on continuous parameters. The space $Y^n$ has two different interpretations. On one hand, it can be viewed as the configuration space of $n$-tuples of ordered points on $Y=Gr_{\C}(2,4)$. Another point of view is that the space $Y^n$ is the configuration space of ordered lines in ${{\mathbb C}{\mathbb P}}^3$. In both situations the group $G$ acts by identifying projectively equivalent configurations. Consider a weighted configuration of lines $\{ l_1, ..., l_n\}$ and corresponding weights $(r_1, ..., r_n)$ subject to the normalization $\sum_{i=1}^n r_i=2$. Then, following [@GIT], pages 86-88, we say that such an $n$-tuple of lines in ${{\mathbb C}{\mathbb P}}^3$ is (properly) [*stable*]{} if the following three conditions hold: - For every point $p\in {{\mathbb C}{\mathbb P}}^3$, the sum of weights of lines passing through $p$ is $< 1$. - For every line $l\in{{\mathbb C}{\mathbb P}}^3$, the sum of weights of lines intersecting $l$ plus twice the sum of weights of lines coincident with $l$ is $< 2$. - For every plane $\Pi\subset{{\mathbb C}{\mathbb P}}^3$, the sum of weights of lines contained in $\Pi$ is $< 1$. As usual, for a semi-stable configuration, one replaces $<$ by $\le$ in the above definition. In particular, if a $G$-orbit contains a stable point, then it is entirely stable. Further, stable orbits are $15$-dimensional, and hence, as large as possible. We notice that there always exist semi-stable configurations, which are not stable. The easiest way to see this is to choose a semi-stable configuration of lines each of which meets a certain fixed line in $\C\P^3$ at a point. Then, in ([**ii**]{}) above, we will have a strict equality. We would now like to consider the involution $\t$ on $Y^n$ and show that it descends to GIT quotients by the action of $G$. We will see that the fixed point sets of this action on the quotients are related to our polygon spaces $\M_r$. More precisely, on the Grassmannian $Y$ there is an essentially unique line bundle $\L$, dual to the second exterior power of the tautological plane bundle. One can also view $\L$ as the pull-back of the canonical line bundle ${\cal O}(1)$ over projective space, in the Plücker embedding of $Y$. Therefore, all the choices for linearization on the product $Y^n$ are given by taking the tensor product of the pull-backs of the line bundles $\L^{\otimes a_1}, ..., \L^{\otimes a_n}$. We assume that all $a_i >0$ so that the corresponding line bundle is ample. Let $a=(a_1, ..., a_n)$ and let us set $r_i=2a_i/\sum_{i=1}^n a_i$. Whenever we need to specify the linearizing line bundle, we will denote the corresponding GIT quotient by $(Y^n/G)_r$, with $r=(r_1, ..., r_n)$. Now we would like to show that the action of $\t$ on $Y^n$ maps orbits of $G$ to orbits of $G$. Indeed, if we let $A$ be an element of $G$, a simple computation shows that $\t$ maps the element $A\cdot x$ to the element $\t(A)\cdot\t(x)$. In particular, if $x$ is in the fixed point set of $\t$ and $A\in SL(2, \H)$, then $\t$ will stabilize $A\cdot x$. 
We also note that $\theta$ maps semistable configurations to semistable configurations. This immediately shows that the action of $\t$ descends to the GIT quotients $(Y^n/G)_r$. Next, we give an interpretation of the quaternionic Gelfand-MacPherson correspondence using the involution $\t$. In complex algebraic geometry, for an $n$-tuple of positive integers $a=(a_1, ..., a_n)$, we have the following isomorphism of GIT quotients: $$Gr_{\C}(2,4)^n/SL(4,\C)\simeq Gr_{\C}(4,2n)/GL(2, \C)^n, \label{e:GM2}$$ where the corresponding linearizing line bundles are defined as follows. On the left hand side, the linearizing line bundle is the same as before, the tensor product of the pull-backs of $\L^{\otimes a_i}$. On the right hand side, we think of $GL(2, \C)^n$ as the block-diagonal subgroup of $GL(2n, \C)$. Notice that the one-dimensional center of the latter group is a subgroup of the former and acts trivially on $Gr_{\C}(4, 2n)$. In particular, the dimension count is correct. The linearization of the action of $GL(2, \C)^n$ on the right can be defined analogously to (2.4.3) of [@Kapr]. More precisely, let $Z_i$ be the center of the $i^{\mbox{{\small th}}}$ copy of $GL(2, \C)$ in the $n$-fold product. The subgroup $Z_i$ is identified with $\C^*$ using our matrix embedding. Our point is that both varieties in Equation (\[e:GM2\]) are the same as the projective spectrum of the ring $$R=\oplus R_d, \label{e:R_d}$$ where $R_d$ consists of polynomials $\Phi(M)$ in entries of a $(4\times 2n)$-matrix, such that - $\Phi(gM)=\Phi(M)$ for the left action of $g\in SL(4,\C)$, - $\Phi(M\cdot h)=\chi(h)\Phi(M)=t^{da}\Phi(M)$, where $h\in GL(2,\C)^n$. Here, $\chi$ is the character of $GL(2,\C)^n$ corresponding to the character $t^{da}$ of its center, where $t\in Z_1\times\cdots\times Z_n\simeq (\C^*)^n$, and $t^a=t_1^{a_1}\cdots t_n^{a_n}$ component-wise. The correspondence above can be easily extended to an equivalence between GIT quotients of Grassmannians $Gr_{\C}(k,d)^n/SL(d, \C)$ and $Gr_{\C}(d, nk)/GL(k, \C)^n$, and even to an equivalence between more general partial flag manifolds (for appropriate choices of linearizations). Now, one can see that the correspondence defined by Equation (\[e:GM2\]) is preserved by the action of the involution $\t$. This yields an alternate way of establishing the Gelfand-MacPherson correspondence in the quaternionic context. We summarize our discussion in the following proposition. For an admissible $n$-tuple of weights $r=(r_1, ..., r_n)$ the GIT quotient $(Y^n/G)_r$ is a projective variety of dimension $4n-15$, determined by the projective spectrum of the ring $R$, defined in Equation (\[e:R\_d\]) above. The GIT quotients $$Gr_{\C}(2,4)^n/SL(4, \C)\ \ \ \ {\rm and} \ \ \ \ Gr_{\C}(4,2n)/GL(2, \C)^n$$ are isomorphic for appropriate choices of linearizations. The action of the involution $\t$ on $Y^n$ descends to the quotient $(Y^n/G)_r$. $\hfill\Box$ Let us now consider symplectic reductions of $Y^n$ with respect to the $K=SU(4)$-action. The symplectic form on $Y^n$ is the sum of pull-backs of $K$-invariant forms on each multiple. These are proportional, with the corresponding coefficients $r_i$, to the one determined by the line bundle $\L$. Let us identify $\su_4^*$ (the dual of the Lie algebra $\su_4$) with the space of complex traceless Hermitean $4\times 4$ matrices. Then, the fixed point set of $\t$ is again given by the image of the space of traceless quaternionic $2\times 2$ Hermitean matrices under the embedding defined by the map $\nu$ in Equation (\[e:nu\]). 
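For illustration, writing $q=a+b{\bf j}$ with $a,b\in\C$, a traceless element of $\calH_2$ and its image under $\nu$ take the form $$B=\left( \begin{array}{cc} \alpha & q \\ \bar{q} & -\alpha \end{array} \right), \qquad \nu(B)=\left( \begin{array}{cccc} \alpha & a & 0 & b \\ \bar{a} & -\alpha & -b & 0 \\ 0 & -\bar{b} & \alpha & \bar{a} \\ \bar{b} & 0 & a & -\alpha \end{array} \right),$$ with $\alpha\in\R$; the right-hand side is a traceless complex Hermitean $4\times 4$ matrix, i.e., an element of $\su_4^*$ under the identification above.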
Since the involution $\t$ maps unitary matrices to unitary matrices, then it also maps $K$-orbits on $Y^n$ to $K$-orbits. Therefore, $\t$ descends to the symplectic quotient $(Y^n//K)_r$, which is homeomorphic to the GIT quotient $(Y^n/G)_r$ as explained in [@GIT], for example. Our convention is that we always reduce at the zero level set of the moment map, unless stated otherwise. The quotient $(Y^n//K)_r$ is not expected to be smooth even for a sufficiently general choice of weight vector $r$, as even then one can always produce a configuration with a non-trivial connected stabilizer as explained above. However, we have the generic smoothness on the symplectic side, since the $K$-action is, generically, locally free. In any case, the quotient space $(Y^n//K)_r$ has the structure of a stratified symplectic manifold, in the terminology of [@SjamLer]. Thus, we arrive at the following result: For an admissible $n$-tuple of weights $r=(r_1, ..., r_n)$, the symplectic quotient $(Y^n//K)_r$ is a stratified symplectic manifold. The action of $\t$ on $Y^n$ descends to a smooth anti-symplectic involution on each smooth symplectic stratum of $(Y^n//K)_r$, where its fixed point set is a Lagrangian submanifold. [[*Proof.*]{}]{}Recall that if $\tau$ is a smooth involution on a manifold $M$, then the fixed point set $M^{\tau}$ is a smooth submanifold of $M$. Indeed, one can use the fact that for each fixed point $p$ in $M^{\tau}$, the tangent space $T_pM$ splits as $V^+\oplus V^-$ according to the eigenvalues $1$ and $-1$ of the induced tangent space map $\tau_*(p)$. Then, one can use a $\tau$-invariant Riemannian metric on $M$ together with the exponential map on $T_pM$ to see that $p$ is a smooth point of $M^{\tau}$. The fact that $\t$ descends to the anti-symplectic involution on $(Y^n//K)_r$ immediately follows from the fact that $\t$ is an anti-symplectic involution on $Y^n$. It follows from Proposition 2.3 of [@OSS] that the fixed point set of $\t$ on each smooth symplectic stratum, is a Lagrangian submanifold. ${\hfill{\Box}\medskip}$ Note that the dimension of the fixed point set of $\t$ on the big smooth open subset of $(Y^n//K)_r$ is $4n-15$, half of the dimension of $(Y^n//K)_r$. Let us now recall a general Lagrangian reduction procedure that was considered in Section 7 of [@OSS]. One can interpret the space $(Y^n//K)_r$ as the space of solutions of the equation $A_1+\cdots +A_n=0$, where $A_i\in\su^*_4$ is in the orbit of ${\rm diag}(r_i, - r_i, r_i, -r_i)$, modulo the diagonal action of $SU(4)$. Our space of polygons $\M_r$ can also be interpreted as the space of solutions of the same equation $B_1+\cdots +B_n=0$ but with $B_i$ belonging to the space $\calH_2$ of quaternionic Hermitean traceless $2\times 2$ matrices, modulo the diagonal action of the group $Sp(2)$ by conjugation. To see that these two spaces are related by an involution, let us construct a map: $$\psi: \M_r\to (Y^n//K)_r^{\t},$$ and study its properties. The construction is already clear from the previous paragraph. Indeed, using the embedding $\nu$ defined in Equation (\[e:nu\]), one can map the $Sp(2)$-orbit of an $n$-tuple of matrices $(B_1$, ..., $B_n)\in\calH_2$ satisfying $B_1+\cdots+ B_n=0$ to the $SU(4)$-orbit of an $n$-tuple of matrices from $\su^*_4$ satisfying the same equation, where we let $A_i=\nu(B_i)$. The image of such orbit not only lies entirely in an $SU(4)$-orbit, but also belongs to the fixed point set of the involution $\t$. 
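If, as in the polygon-space description, each $B_i$ is taken with eigenvalues $\pm r_i$, i.e., in the $Sp(2)$-orbit of ${\rm diag}(r_i,-r_i)$, this is consistent with the description of $(Y^n//K)_r$ given above: since $\nu$ intertwines conjugation and $$\nu\big({\rm diag}(r_i,-r_i)\big)={\rm diag}(r_i,-r_i,r_i,-r_i),$$ the matrix $A_i=\nu(B_i)$ lies in the $SU(4)$-orbit of ${\rm diag}(r_i,-r_i,r_i,-r_i)$.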
Therefore, when we pass to the quotients, we get a well-defined map $\psi$ as above. The map $\psi$ is finite and generically four to one. [[*Proof.*]{}]{}The finiteness of $\psi$ follows from [@OSS Proposition 2.3 (iii)], which, applied to our case, implies that an $Sp(2)$-orbit through an $n$-tuple $(B_1$,...,$B_n)\in\calH_2$ satisfying $B_1+\cdots +B_n=0$ is open in the intersection of the $SU(4)$-orbit through $(B_1, ..., B_n)$ with $\calH_2$. Assume that two different $Sp(2)$-orbits, through the points $x_1$ and $x_2$ from the zero level set, are mapped to the same point by $\psi$. This would imply that there exists an element $g\in SU(4)$ such that $\nu(x_1)=g\cdot\nu(x_2)$. Now if we apply the involution $\theta$ defining the quaternionic subspace inside the complex one to both sides, we will see that $\nu(x_1)=\theta(\nu(x_1))=\theta(g)\theta(\nu(x_2))=\theta(g)\nu(x_2)$, where $\theta$ applied to an element from $SU(4)$ stands for the involution inside $SU(4)$ defining $Sp(2)$. Since, generically, points have central stabilizer and $\nu$ is an injection, we see that $g=c\cdot\theta(g)$, where $c$ is in the center of $SU(4)$, which consists of four elements. Thus, generically, each $SU(4)$-orbit will contain four $Sp(2)$-orbits, which are stabilized by $\theta$. ${\hfill{\Box}\medskip}$ Notice that both $\M_r$ and $(Y^n//K)_r^{\t}$ are real analytic varieties of dimension $4n-15$. Thus the map $\psi$ surjects onto a connected component of $(Y^n//K)_r^{\t}$, cf. [@OSS Corollary 7.2], and the image of $\psi$ consists of those $SU(4)$-orbits which are preserved by $\t$ and intersect its fixed point set. However, the map $\psi$ is not surjective. The question of surjectivity, as shown in [@F3], boils down to the question of how many conjugacy classes of involutions inner to $\theta$ there are in $PU(4)$. This fact holds in general for a compact group $K$ of adjoint type and an involution $\theta$. An involution $\tau$ is called [*inner*]{} to $\theta$ if there exists an element $g$ from $PU(4)$ such that $\tau={\rm Ad}_{g}\circ\theta$. Two involutions $\tau$ and $\tau'$ are called conjugate if there exists a group element $g$ such that $\tau'={\rm Ad}_{g}\circ\tau\circ{\rm Ad}_{g^{-1}}$. For the group $PU(4)$ and the involution $\theta$ defining the symplectic subgroup, there is another conjugacy class of involutions represented by $\tau$, the complex conjugation, whose fixed point set is the group $PO(4)$. The corresponding involution, also denoted by $\tau$, on $Y=Gr_{\C}(2,4)$ has the real Grassmannian $Y^\tau=Gr_{\R}(2,4)$ as its fixed point set. Therefore, we can also consider the Lagrangian quotient of the $n$-fold product of the real Grassmannian of two-planes in $\R^4$ by the diagonal action of the group $PO(4)$. This quotient is defined as the quotient of the intersection of $Gr_{\R}(2,4)^n$ with the zero level set of the momentum map inside $Y^n=Gr_{\C}(2,4)^n$ and denoted by ${\mathcal T}_r$. Similarly to the map $\psi$ defined above, we can define the map $$\varphi: \ \ {\mathcal T}_r\to (Y^n//SU(4))^\tau_r.$$ It was proved in [@F3] that the spaces $(Y^n//SU(4))^\tau_r$ and $(Y^n//SU(4))^\theta_r$ are naturally identified, and the images of the maps $\psi$ and $\varphi$ are actually disjoint. It is easy to see that for an admissible choice of $r$, the space ${\mathcal T}_r$ is non-empty (also of dimension $4n-15$). The following follows from [@F3]: The space $(Y^n//SU(4))^\theta_r$ is homeomorphic to the disjoint union of $\psi({\mathcal M}_r)$ and $\varphi({\mathcal T}_r)$.
Now we would like to consider the Chow quotient $Y^n//G$, as defined by Kapranov in [@Kapr] and extend the action of $\t$ to an involution on $Y^n//G$, whose fixed point set $\M_n$ will have a similar meaning to the Grothendieck-Knudson space $\overline{M}_{0,n}$ —a certain compactification of projective equivalence classes of $n$-tuples of distinct points on ${{\mathbb C}{\mathbb P}}^1$. In particular, we will construct surjective maps $\M_n\to\M_r$. Thus, in a sense, the space $\M_n$ will serve as a “universal” polygon space dominating all the $\M_r$ spaces at once. First of all, let us fix a weight vector $r$ and consider the bi-rational morphism $\phi_r: Y^n//G\to(Y^n/G)_r$ defined in [@Kapr]. Now let us denote by $\M_{n,r}$ the fiber product $\M_r\times_{(Y^n/G)_r}Y^n//G$, where the map $\M_r\to (Y^n/G)_r$ is defined using the map $\psi$ from Proposition 6.3. Since the fixed point set of the action of $\t$ on the corresponding Chow variety is closed, the action of $\t$ extends to an involution on the Chow quotient $Y^n//G$. There is natural map $\eta_r: \M_{n,r}\to (Y^n//G)^\t$, which is defined using the above construction of $\M_{n,r}$ and the fact that $\psi$ actually maps $\M_r$ to $(Y^n/G)_r^{\t}$. As in the above proposition, we can show that the map $\eta_r$ is generically injective. We define the space $\M_n$ as the common fiber product of the spaces $\M_{n,r}$ using the maps $\eta_r: \M_{n,r}\to(Y^n//G)^\t$. This product is actually finite for every $r$, because the isomorphism class of $\M_r$ is clearly determined by the combinatorial choice of $<0$, $>0$, or $=0$ for the expressions of the form $\sum_{i=1}^n\pm r_i$. The space $\M_n$ dominates all the spaces $\M_r$, but each $\M_r$ has a dense open subset which is smooth and diffeomorphic to a smooth dense open subset of $\M_n$. We do not know whether or not the real analytic space $\M_n$ or the Chow quotient $Y^n//G$ is smooth. As in [@Hu], a point in the space $\M_n$ can be represented as a “bubble” polygon. Another interesting observation, which emphasizes the importance of the involution $\t$ on the complex flag manifolds, is that, with just a little effort, one can recover the Schubert calculus for the quaternionic Grassmannians from that of the complex ones, restricting the attention to the cycles preserved by $\t$ and considering their $\t$-fixed point subsets. Previously, the quaternionic Schubert calculus was considered in [@PragaczR] and was dealt with using different methods. Generalized action-angle coordinates on ${{\cal M}_r}$ ====================================================== In this section we describe certain local “action-angle coordinates" in a open dense subset of ${{\cal M}_r}$, which are analogous to those defined in [@KM] for the ${{{\cal M}_r}^{(3)}}$ case. Let $\ell_i, \ i=3,...,n-1$, denote the length of each of the $n-3$ diagonals $d_i$ of $[P]$ in ${{\cal M}_r}$, as defined in Section $4$. We will call $[P]\in {{\cal M}_r}$ [*generic*]{} if $\ell_i\neq 0$ and $\ell_i+r_{i+2}\neq \ell_{i+1}$, for all $i=3,...,n-2$. For every generic $[P]\in{{\cal M}_r}$ we construct a canonical planar $n$-gon $[P_c]\in{{\cal M}_r}$ as follows. Choose any representative $P$ of $[P]$ having its first vertex at the origin of $\R^5$, and let $\Pi^P$ be the $2$-dimensional subspace of $\R^5$ spanned by the first three vertices of $P$. 
Then one may use the copy of $SO(4)$ fixing $d_3$ to rigidly move the fourth vertex of $P$ to $\Pi^P$ in such a way that the line segment joining $v_2$ and $v_4$ intersects the $1$-dimensional subspace of $\R^5$ containing $d_3$, the first diagonal of $P$. The generic character of $P$ ensures that one may repeat this procedure enough times so as to eventually obtain a unique planar polygon $P_c$ lying entirely on $\Pi^P$, and such that the segment connecting $v_i$ and $v_{i+2}$ always intersects the line in $\R^5$ containing $d_{i+1}$. Clearly, the correspondence $[P]\rightarrow[P_c]$ is well-defined, and $[P_c]$ is unique for generic $[P]$. Let $L_{\bf \ell}\subseteq{{\cal M}_r}$ be a level set of the $n-3$ length functions $\ell_i$. We now describe $3n-12$ local “angle" coordinates for $L_{\bf \ell}$ within the open subset of ${{\cal M}_r}$ consisting of generic $n$-gons. Let $[P_c]$ denote the canonical planar $n$-gon associated to some generic $[P]\in L_{\bf \ell}$. Assume that $P_c\subseteq\Pi^P$ is chosen so that $v_1$ lies at the origin of $\R^5$ and let $K_i$ denote the copy of $SO(4)\subseteq SO(5)$ fixing $d_i, \ i=3,...,n-1$. Consider all possible deformations of $P_c$ obtained by moving its second vertex, $v_2$, by $K_3$ and fixing its remaining vertices. Let $F_2$ denote the subgroup of $K_3$ fixing $v_2$ (and hence all of $P_c$). As $F_2$ is isomorphic to $SO(3)$, we see that we may parameterize the above family of deformations of $P_c$ in $\R^5$ by $K_3/F_2\simeq SO(4)/SO(3)\simeq S^3$. (Indeed, whatever the deformation of $P_c$, $v_2$ is constrained to the intersection of two $4$-spheres of radii $r_1$ and $r_2$, respectively). Note, however, that the collection of $\R^5$-polygons obtained by deforming $P_c$ as above is not in one-to-one correspondence with generic $[P]$ in ${{\cal M}_r}$; that is, there are polygons in this collection which differ by a rigid motion of $\R^5$. Given the nature of the deformations of $P_c$ being considered, any such rigid motion must fix the plane $\Pi^{P_c}$. Consequently, deformations of $[P_c]$ in ${{\cal M}_r}$ by bending along its first diagonal are parameterized by $SO(3)\setminus SO(4)/SO(3)\simeq S^3/SO(3)\simeq [0,\pi]$, and so we get a first angle coordinate for $L_{\bf \ell}$. Let $P_c^3$ be a generic deformation of $P_c$ obtained from $K_3$ by bending along $d_3$ as described above, and consider all possible deformations of $P_c^3$ obtained from $K_4$ by bending along $d_4$. Since the first four vertices of $P_c^3$ span a copy of $\R^3$ fixed by some $F_3\simeq SO(2)\subseteq K_4$, we see that these deformations of $P_c^3$ may be parameterized by $SO(4)/SO(2)$. As above, corresponding deformations of $[P_c^3]$ are then parameterized by $SO(3)\setminus SO(4)/SO(2)\simeq \overline{D^2}$, where $SO(3)$ fixes $\Pi^{P_c}$ and $\overline{D^2}$ denotes a closed disk. Thus we obtain two more (independent) “generalized" angle variables. Finally, let $P_c^i$ denote a generic deformation of $P_c^{i-1}$ obtained from $K_i$ by bending along $d_i$, $i=4,...,n-1$. Since the first $i+1$ vertices of $P_c^i$ span at least $\R^4$, we need all of $K_{i+1}\simeq SO(4)$ to parameterize all possible rigid deformations of $P_c^i$ by bending about $d_{i+1}$. Corresponding deformations of $[P_c^i], \ i=4,...,n-2$, are hence parameterized by $SO(3)\setminus SO(4)$, yielding $3(n-5)$ new generalized angle coordinates.
Hence, we obtain a total of $3n-12$ generalized angle variables, which, together with the $n-3$ action variables prescribed by the lengths $\ell_i$, yield a set of local generalized coordinates for the dense open subset of ${{\cal M}_r}$ consisting of generic polygons. We remark that it is easy to see from the above description how to stratify $L_{\bf \ell}$ as a disjoint union of smooth manifolds. Department of Mathematics\ University of Arizona\ Tucson, AZ 85721-0089 [[email protected]]{}\ [[email protected]]{}
--- abstract: 'In this paper we establish in the fast diffusion range the higher integrability of the spatial gradient of weak solutions to porous medium systems. The result comes along with an explicit reverse Hölder inequality for the gradient. The novel feature in the proof is a suitable intrinsic scaling for space-time cylinders combined with reverse Hölder inequalities and a Vitali covering argument within this geometry. The main result holds for the natural range of parameters suggested by other regularity results. Our result applies to general fast diffusion systems and includes both, nonnegative and signed solutions in the case of equations. The methods of proof are purely vectorial in their structure.' address: - | Verena Bögelein\ Fachbereich Mathematik, Universität Salzburg\ Hellbrunner Str. 34, 5020 Salzburg, Austria - | Frank Duzaar\ Department Mathematik, Universität Erlangen–Nürnberg\ Cauerstrasse 11, 91058 Erlangen, Germany - | Christoph Scheven\ Fakultät für Mathematik, Universität Duisburg-Essen\ 45117 Essen, Germany author: - Verena Bögelein - Frank Duzaar - Christoph Scheven title: | Higher integrability for the\ singular porous medium system --- Introduction and results ======================== In this paper we study regularity of solutions to second-order parabolic systems $$\label{general-PME} \partial_t u -\operatorname{div}\mathbf A\big(x,t,u,D\big(|u|^{m-1}u\big)\big) =\operatorname{div}F$$ on a space-time cylinder $\Omega_T:= \Omega\times (0,T)$ over a bounded domain $\Omega\subset\R^n$, $n\in\N$, and $T>0$. Precise structural assumptions for the vector field $\mathbf A$ are presented later. The principal prototype is the inhomogeneous porous medium system $$\label{PME} \partial_t u - \Delta\big(|u|^{m-1}u\big) = \operatorname{div}F,$$ with $m>0$. As usual, solutions to are taken in a weak sense, i.e. they are assumed to belong to a parabolic Sobolev space whose amount of integrability is determined by the growth of the vector field $\mathbf{A}$ with respect to the gradient variable, cf. Definition \[def:weak\_solution\]. With the choice $m=1$ we recover the heat equation. Equation has a different behavior when $m>1$ or $m<1$. The first case is called slow diffusion range, since disturbances propagate with finite speed and free boundaries may occur, while in the second case disturbances propagate with infinite speed and extinction in finite time may occur. This range is called fast diffusion range. For more information on the theory for the porous medium equation and related regularity results we refer to [@CaVaWo; @DiBenedetto_Holder; @Vazquez-1; @Vazquez-2] and the references therein. The main purpose of this paper is to establish a higher integrability result for the gradient of weak solutions of porous medium equations and systems of the type in the fast diffusion range. More precisely, we show that there exists a universal constant ${\varepsilon}>0$, such that $$\label{hi-int-PME} D\big(|u|^{m-1}u\big) \in L^{2+{\varepsilon}}_{\rm loc} ,$$ whenever $u$ is a weak solution to , thereby ensuring that for weak solutions $u$ of the porous medium system the spatial gradient of $|u|^{m-1}u$ belongs to a slightly better Lebesgue space than the natural energy space $L^2$. This implies that porous medium systems as in possess the self-improving property of integrability. Our result comes along with a quantitative local reverse Hölder type estimate for $|D(|u|^{m-1}u)|$; see Theorem \[thm:higherint\]. 
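For orientation, it is worth keeping the borderline case in mind: for $m=1$ the prototype system is the inhomogeneous heat system and the conclusion becomes the classical gradient higher integrability, $$\partial_t u-\Delta u=\operatorname{div}F, \qquad Du\in L^{2+{\varepsilon}}_{\rm loc},$$ in the spirit of the parabolic results recalled below.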
The higher integrability for porous medium systems as in has been an open problem for a long time, even in the case of equations and non-negative solutions. Here we give a positive answer in the fast diffusion range $$\label{lower-bound} m_c:=\frac{(n-2)_+}{n+2}<m\le 1.$$ The lower bound on $m$ is natural and appears also in other regularity results for porous medium equations, cf. the discussion in [@DBGV-book §6.21]. For example, solutions might be unbounded in the super-critical range $0<m\le m_c$. The central idea in the proof of our main result is a new kind of intrinsic geometry. Until now, variants of this idea have been successfully used in establishing the self-improving property of integrability for the parabolic $p$-Laplacian system [@Kinnunen-Lewis:1] and very recently in the slow diffusion range $m\ge 1$ for the porous medium equation [@Gianazza-Schwarzacher] and system [@BDKS-higher-int]. More precisely, the key point is the construction of suitable intrinsic cylinders $ Q_{r,s}(z_o):=B_r(x_o)\times(t_o-s,t_o+s)$ with $z_o=(x_o,t_o)$. Since the equation is nonlinear with respect to $u$, we use cylinders whose space-time scaling depends on the mean values of $|u|^{1+m}$. This choice is dictated by the leading term on the right-hand side in the energy estimate, which is of order $1+m$; cf. Lemma \[lem:energy\]. This heuristic argument motivates the consideration of space-time cylinders $Q_{r,s}(z_o)$, such that the quotient $\frac{s}{r^{\frac{1+m}{m}}}$ satisfies $$\label{scaling-intro} \frac{s}{r^{\frac{1+m}{m}}}=\theta^{1-m}, \mbox{ with}\quad \theta^{1+m} \approx \biint_{Q_{r,s}(z_o)} \frac{|u|^{1+m}}{r^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}.$$ In this geometry the only ingredients for the proof of parabolic Sobolev-Poincaré and reverse Hölder type inequalities are the standard energy estimate and a gluing lemma; see Section \[sec:energy\]. The construction of a system of such intrinsic cylinders is quite involved, since the cylinders on the right-hand side also depend on the parameter $\theta$. In fact, we have to distinguish between two regimes, the non-singular and the singular regime. The former is characterized by the fact that cylinders are intrinsic, the latter by the fact that cylinders are only sub-intrinsic, which means that $_2$ only holds as an inequality where the mean value integral is bounded from above by $\theta^{1+m}$. In both regimes we need to establish reverse Hölder type inequalities. In the actual construction of the cylinders, we modify the argument from [@Gianazza-Schwarzacher]; see also [@BDKS-higher-int] which is better suited to our purposes here. At this stage, a few words placing our result within the history of the higher integrability problem are appropriate. In the stationary case of elliptic systems the self-improving property was first observed by Elcrat & Meyers [@Meyers-Elcrat], see also the monographs [@Giaquinta:book Chapter V, Theorem 2.1] and [@Giusti:book Section 6.4] and the references therein. The first higher integrability result for parabolic systems goes back to Giaquinta & Struwe [@Giaquinta-Struwe Theorem 2.1]. For parabolic systems with $p$-growth, whose principal prototype is the parabolic $p$-Laplacian system, the higher integrability of the gradient of weak solutions was established by Kinnunen & Lewis [@Kinnunen-Lewis:1] in the range $p>\frac{2n}{n+2}$. This lower bound is natural and appears also in other contexts in the regularity theory of parabolic $p$-Laplace type systems; cf. the monograph [@DiBe].
In the meantime the result has been generalized in various directions, such as global results and higher order parabolic systems with $p$-growth; see [@Boegelein:1; @Boegelein-Parviainen; @Parviainen]. The corresponding problem for the porous medium equation turned out to be more involved and remained open for a long time, even in the scalar case for non-negative solutions. In addition to the obvious anisotropic behavior of the equation with respect to scalar multiplication of solutions, it is also not possible to add constants to a solution without destroying the property of being a solution. This difficulty has recently been overcome by Gianazza & Schwarzacher [@Gianazza-Schwarzacher] who proved in the slow diffusion range $m\ge1$ that non-negative weak solutions of admit the self-improving property of higher integrability of the gradient. The main novelty in their proof is the use of a new intrinsic scaling. Instead of scaling cylinders with respect to $|Du|$ as in the case of the parabolic $p$-Laplacian (cf. [@DiBe] and the references therein), they work with cylinders which are intrinsically scaled with respect to $u$. The proof, however, uses the method of expansion of positivity and therefore cannot be extended to signed solutions, porous medium type systems and the fast diffusion range. A simpler and more flexible proof, which does not rely on the expansion of positivity and which covers both signed solutions and porous medium systems, is given in [@BDKS-higher-int]. Finally, in [@BDKS-doubly] the higher integrability is shown for doubly nonlinear parabolic systems, whose prototype is $$\partial_t\big(|u|^{p-2}u\big)-\operatorname{div}(|Du|^{p-2}Du)=\operatorname{div}\big( |F|^{p-2}F\big).$$ In this equation aspects of both the porous medium equation and the parabolic $p$-Laplace equation play a role. Therefore the intrinsic scaling has to take into account the degeneracy of the system both with respect to the gradient variable and with respect to the solution itself. In [@BDKS-doubly] the higher integrability is established for exponents $p$ in the somewhat unexpected range $\max\{\frac{2n}{n+2},1\}<p<\frac{2n}{(n-2)_+}$. The lower bound also appears for the parabolic $p$-Laplace system [@Kinnunen-Lewis:1], while the upper bound corresponds exactly to the lower bound in  for the porous medium equation in the fast diffusion range. We point out that, independently of us, Gianazza & Schwarzacher proved the higher integrability result in the scalar case for nonnegative solutions in the fast diffusion range . In contrast to , we prove the higher integrability regardless of whether the solution is non-negative or signed in the scalar case, or vector-valued in the case of systems. Our results also differ in another respect. Instead of an inhomogeneity given by a bounded function $f$, we consider a right-hand side in divergence form $\operatorname{div}F$ with $F\in L^\sigma$ for some $\sigma>2$. In that work, the boundedness assumption on $f$ is imposed to ensure that weak solutions are bounded. Here, we are able to deal with unbounded solutions. Notation and main result ======================== Notations --------- To keep formulations as simple as possible, we define the [ *power of a vector*]{} or of a possibly [*negative number*]{} by $${\bm{u^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}} := |u|^{\alpha-1}u, \quad\mbox{for $u\in\R^N$ and $\alpha>0$,}$$ which in the case $u=0$ and $\alpha\in (0,1)$ we interpret as ${\bm{u^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}=0$.
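For instance, in the scalar case $N=1$ this is simply the odd, sign-preserving power function; e.g. for $\alpha=\frac12$ one has $$|u|^{-\frac{1}{2}}u\Big|_{u=-4}=-2, \qquad |u|^{-\frac{1}{2}}u\Big|_{u=4}=2,$$ so that ${\bm{u^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}$ always carries the sign of $u$ and $|{\bm{u^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}|=|u|^\alpha$.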
Throughout the paper we write $z_o=(x_o,t_o)\in \R^n\times\R$ for points in space-time. We use space-time cylinders $$\label{def-Q} Q_{\varrho}^{(\theta)}(z_o) := B_{\varrho}^{(\theta)}(x_o)\times\Lambda_{\varrho}(t_o),$$ where $$B_{\varrho}^{(\theta)}(x_o) := \Big\{x\in\R^n: |x-x_o|<\theta^{\frac{m(m-1)}{1+m}}{\varrho}\Big\}$$ and $$\Lambda_{\varrho}(t_o) := \big(t_o-{\varrho}^{\frac{1+m}{m}},t_o+{\varrho}^{\frac{1+m}{m}}\big)$$ with some scaling parameter $\theta >0$. In the case $\theta =1$, we simply omit the parameter in the notation and write $$Q_{\varrho}(z_o) := B_{\varrho}(x_o)\times\big(t_o-{\varrho}^{\frac{1+m}{m}},t_o+{\varrho}^{\frac{1+m}{m}}\big)$$ instead of $Q_{\varrho}^{(1)}(z_o)$. If the center $z_o$ is clear from the context we omit it in the notation. For a map $u\in L^1(0,T;L^1(\Omega,\R^N))$ and given measurable sets $A\subset\Omega$ and $E\subset\Omega_T$ with positive Lebesgue measure the slicewise mean $\langle u\rangle_{A}\colon (0,T)\to \R^N$ of $u$ on $A$ is defined by $$\langle u\rangle_{A}(t) := {- \mskip-19,5mu \int}_{A} u(t)\,{\mathrm{d}x}, \quad\mbox{for a.e.~$t\in(0,T)$,}$$ whereas the mean value $(u)_{E}\in \R^N$ of $u$ on $E$ is defined by $$(u)_{E} := \biint_{E} u\,{\mathrm{d}x}{\mathrm{d}t}.$$ Note that if $u\in C^0((0,T);L^2(\Omega,\R^N))$ the slicewise means are defined for any $t\in (0,T)$. If $A$ is a ball $B_{\varrho}^{(\theta)}(x_o)$, we write $\langle u\rangle_{x_o;{\varrho}}^{(\theta)}(t):=\langle u\rangle_{B_{\varrho}^{(\theta)}(x_o)}(t)$. Similarly, if $E$ is a cylinder of the form $Q_{\varrho}^{(\theta)}(z_o)$, we use the shorthand notation $(u)^{(\theta)}_{z_o;{\varrho}}:=(u)_{Q_{\varrho}^{(\theta)}(z_o)}$. General Setting and Results --------------------------- We consider porous medium type systems of the form $$\label{por-med-eq} \partial_t u -\operatorname{div}\mathbf A(x,t,u,D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}) =\operatorname{div}F \quad\mbox{in $\Omega_T$,}$$ where $\mathbf A\colon \Omega_T\times\R^N\times\R^{Nn}\to \R^{Nn}$ is a Carathéodory vector field satisfying the following ellipticity and growth conditions that are modeled after the prototype system . For structural constants $0<\nu\le L<\infty$, we assume that $$\label{growth} \left\{ \begin{array}{c} \mathbf A(x,t,u,\xi)\cdot\xi\ge \nu|\xi|^2\, ,\\[6pt] | \mathbf A(x,t,u,\xi)|\le L|\xi|, \end{array} \right.$$ for a.e. $(x,t)\in \Omega_T$ and any $(u,\xi)\in \R^N\times\R^{Nn}$. To formulate the main result, we introduce the notion of [*weak solution*]{}. \[def:weak\_solution\]Let $m>0$ and $\mathbf A\colon \Omega_T\times \R^N\times\R^{Nn}\to\R^{Nn}$ be a vector field satisfying and $F\in L^{2}(\Omega_T,\R^{Nn})$. A function $$\label{spaces} u\in C^0 \big((0,T); L^{1+m}(\Omega,\R^N)\big) \quad\mbox{with}\quad {\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}\in L^2\big(0,T;W^{1,2}(\Omega,\R^N)\big)$$ is a *weak solution* to the porous medium type system if and only if the identity $$\begin{aligned} \label{weak-solution} \iint_{\Omega_T}\big[u\cdot\partial_t\varphi - \mathbf A(x,t,u,D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}})\cdot D\varphi\big]{\mathrm{d}x}{\mathrm{d}t}= \iint_{\Omega_T} F\cdot D\varphi \,{\mathrm{d}x}{\mathrm{d}t}\end{aligned}$$ holds true, for any testing function $\varphi\in C_0^\infty(\Omega_T,\R^N)$. Our main result reads as follows: \[thm:higherint\] Assume that $$m_c:=\frac{(n-2)_+}{n+2}<m\le 1$$ and $\sigma>2$. 
Then, there exists ${\varepsilon}_o={\varepsilon}_o(n,m,\nu,L)\in (0,1]$ such that whenever $F\in L^\sigma(\Omega_T,\R^{Nn})$ and $u$ is a weak solution of Equation  in the sense of Definition \[def:weak\_solution\] under the assumptions , then with ${\varepsilon}_1:=\min\{{\varepsilon}_o,\sigma-2\}$ we have $$D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}} \in L^{2+{\varepsilon}_1}_{\rm loc}\big(\Omega_T,\R^{Nn}\big).$$ Moreover, for every ${\varepsilon}\in(0,{\varepsilon}_1]$ and every cylinder $ Q_{2R}(z_o)\subseteq\Omega_T $, we have the quantitative local higher integrability estimate $$\begin{aligned} \label{eq:higher-int} &\biint_{Q_{R}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2+{\varepsilon}} {\mathrm{d}}x{\mathrm{d}}t\nonumber \\ &\qquad \le c \Bigg[ 1+\biint_{Q_{2R}} \bigg[\frac{|u|^{1+m}}{R^{\frac{1+m}{m}}} + |F|^{2}\bigg] {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{{\varepsilon}d}{2}} \biint_{Q_{2R}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2} \,{\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\qquad\qquad+ c\,\biint_{Q_{2R}} |F|^{2+{\varepsilon}} \,{\mathrm{d}x}{\mathrm{d}t}. \end{aligned}$$ with $c=c(n,m,$ $\nu,L)\ge 1$. Here, $$\label{def:d} d := \frac{2(1+m)}{2(1+m)-n(1-m)}$$ denotes the scaling deficit. The quantitative local estimate can easily be converted into an estimate on standard parabolic cylinders $C_R(z_o):=B_{R}(x_o)\times(t_o-R^2,t_o+R^2)$. The precise statement is: \[cor:higher-int\] Under the assumptions of Theorem \[thm:higherint\], on any cylinder $C_{2R}(z_o)\subseteq\Omega_T$ and for every ${\varepsilon}\in(0,{\varepsilon}_1]$ we have $$\begin{aligned} R^{2+{\varepsilon}} & \biint_{C_{R}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2+{\varepsilon}} {\mathrm{d}}x{\mathrm{d}}t \\ &\le c\,R^2\bigg[ 1+\biint_{C_{2R}(z_o)} \big[{|u|}^{1+m} + R^2|F|^2\big] {\mathrm{d}}x{\mathrm{d}}t \bigg]^{\frac{{\varepsilon}d}{2}} \biint_{C_{2R}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2} {\mathrm{d}x}{\mathrm{d}t}\\ &\quad + c\, R^{2+{\varepsilon}} \biint_{C_{2R}(z_o)} |F|^{2+{\varepsilon}} {\mathrm{d}x}{\mathrm{d}t}, \end{aligned}$$ for a constant $c=c(n,m,\nu,L)$. Auxiliary Material ================== In this section we provide the necessary tools which will be used later. To “re-absorb” certain terms, we frequently shall use the following iteration lemma, cf. [@Giusti:book Lemma 6.1]. \[lem:tech\] Let $0<\vartheta<1$, $A,C\ge 0$ and $\alpha > 0$. Then there exists a constant $c = c(\alpha,\vartheta)$ such that for any non-negative bounded function $\phi\colon[r,{\varrho}]\to [0,\infty)$ with $0<r<{\varrho}$ satisfying $$\phi(t) \le \vartheta\, \phi(s) + \frac{A}{(s-t)^\alpha} + C \qquad \text{for all $r\le t<s\le \varrho$,}$$ we have $$\phi(r) \le c\, \bigg[\frac{A}{(\varrho - r)^\alpha} + C\bigg].$$ The following lemma can be deduced as in [@Giusti:book Lemma 8.3]. \[lem:Acerbi-Fusco\] For any $\alpha>0$, there exists a constant $c=c(\alpha)$ such that, for all $a,b\in\R^N$, $N\in\N$, the following inequality holds true: $$\begin{aligned} \tfrac1c\big|{\bm{b^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}} - {\bm{a^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}\big| \le \big(|a| + |b|\big)^{\alpha-1}|b-a| \le c \big|{\bm{b^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}} - {\bm{a^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}\big|.\end{aligned}$$ The next lemma is an immediate consequence of Lemma \[lem:Acerbi-Fusco\]. 
\[lem:a-b\] For any $\alpha\ge 1$, there exists a constant $c=c(\alpha)$ such that, for all $a,b\in\R^N$, $N\in\N$, the following inequality holds true: $$\begin{aligned} |b-a|^\alpha \le c\big|{\bm{b^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}} - {\bm{a^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}\big|.\end{aligned}$$ It is well known that mean values over subsets $A\subset B$ are quasi-minimizers of the mapping $ \R^N\ni a\mapsto \int_B |u-a|^p {\mathrm{d}x}$. The following lemma shows that this also applies to powers ${\bm{u^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}$ of $u$, provided $\alpha\ge\frac1p$. For $p=2$ and $A=B$, the lemma has been proved in [@Diening-Kaplicky-Schwarzacher Lemma 6.2]. The general version is established in [@BDKS-doubly Lemma 3.5]. \[lem:alphalemma\] For any $p\ge 1$ and $\alpha\ge\frac1p$, there exists a universal constant $c=c(\alpha,p)$ such that whenever $A\subset B\subset \R^k$, $k\in\N$, are two bounded domains with positive measure, then for any $u \in L^{\alpha p}(B,\R^N)$ and any $a\in\R^N$, we have $${- \mskip-19,5mu \int}_B \big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}-{\bm{(u)_A^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}\big|^p {\mathrm{d}x}\le \frac{c\,|B|}{|A|} {- \mskip-19,5mu \int}_B \big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}-{\bm{a^{\mbox{\unboldmath{\scriptsize$\alpha$}}}}}\big|^p {\mathrm{d}x}.$$ Energy bounds {#sec:energy} ============= In this section we state an energy inequality and a gluing lemma. Both follow by standard arguments from the weak form of the differential equation, testing with suitable test functions. Later on, they will be used in the proof of Sobolev-Poincaré and reverse Hölder type inequalities. At this point it should be emphasized that these two lemmas are the only places in the proof of the higher integrability where the porous medium system is utilized. The proof of the energy estimate is along the lines of [@BDKS-higher-int Lemma 3.1], taking into account [@BDKS-higher-int Lemma 2.3(i)] or [@BDKS-doubly Lemma 3.4] and the different definition of scaled cylinders. The latter means that the radii ${\varrho}$ and $r$ in [@BDKS-higher-int Lemma 3.1] have to be replaced by $\theta^{\frac{m(m-1)}{1+m}}{\varrho}\,$ and $\theta^{\frac{m(m-1)}{1+m}}r$. \[lem:energy\] Let $m>0$ and $u$ be a weak solution to in $\Omega_T$ in the sense of Definition [\[def:weak\_solution\]]{}. Then, on any cylinder $Q_{{\varrho}}^{(\theta)}(z_o)\subseteq\Omega_T$ with ${\varrho}, \theta>0$, for any $r\in[{\varrho}/2,{\varrho})$ and any $a \in\R^N$, we have $$\begin{aligned} & \sup_{t \in \Lambda_r (t_o)} {- \mskip-19,5mu \int}_{B_r^{(\theta)} (x_o)} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t) - {\bm{a^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\big|^2} {r^{\frac{1+m}{m}}} {\mathrm{d}x}+ \biint_{Q_r^{(\theta)}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}\\ &\qquad\leq c\,\biint_{Q_{\varrho}^{(\theta)}(z_o)} \bigg[ \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}-{\bm{a^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\big|^2} {{\varrho}^{\frac{1+m}{m}}-r^{\frac{1+m}{m}}} + \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}} - {\bm{a^{\mbox{\unboldmath{\scriptsize$m$}}}}}\big|^2} {\theta^{\frac{2m(m-1)}{1+m}}({\varrho}-r)^2} + |F|^{2} \bigg]{\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ where $c=c(m,\nu,L)$. The following lemma serves to compare the slice-wise mean values of a given weak solution at different times.
It is often called [*gluing lemma*]{}. Such an assertion is necessary and very useful since Poincaré’s and Sobolev’s inequality can only be applied slice-wise. The proof is exactly as in [@BDKS-higher-int Lemma 3.2], taking into account the different definition of scaled cylinders. \[lem:time-diff\] Let $m>0$ and $u$ be a weak solution to in $\Omega_T$ in the sense of Definition [\[def:weak\_solution\]]{}. Then, for any cylinder $Q_{{\varrho}}^{(\theta)}(z_o)\subseteq\Omega_T$ with ${\varrho},\theta>0$ there exists $\hat{\varrho}\in [\frac{{\varrho}}{2},{\varrho}]$ such that for all $t_1,t_2\in\Lambda_{\varrho}(t_o)$ we have $$\begin{aligned} \big|\langle u\rangle_{x_o;\hat{\varrho}}^{(\theta)}(t_2) - \langle u\rangle_{x_o;\hat{\varrho}}^{(\theta)}(t_1)\big| &\le c\,\theta^{\frac{m(1-m)}{1+m}}{\varrho}^{\frac{1}{m}} \biint_{Q_{{\varrho}}^{(\theta)}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}| + |F|\big] {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ for a constant $c=c(L)$. Sobolev-Poincaré type inequality {#sec:poin} ================================ In this section we consider cylinders $Q_{\varrho}^{(\theta)}(z_o)\subseteq\Omega_T$, where ${\varrho},\theta>0$, which satisfy a [*sub-intrinsic coupling*]{} in the sense that for some constant $K\ge 1$ we have $$\label{sub-intrinsic-poincare} \biint_{Q_{\varrho}^{(\theta)}(z_o)} \frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\le K \theta^{2m}.$$ Furthermore, we assume that either $$\label{super-intrinsic-poincare} \theta^{2m} \le K \biint_{Q_{\varrho}^{(\theta)}(z_o)} \frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\quad\mbox{or}\quad \theta^{2m} \le K \biint_{Q_{{\varrho}}^{(\theta)}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}$$ holds true. The principal goal of the section is to establish the following Sobolev-Poincaré type inequality. This inequality illustrates the significance of the lower bound $m>m_c$, since only in this case, we obtain an integrability exponent $2q<2$ on the right-hand side. \[lem:poin\] Let $m\in(m_c,1]$ and $u$ be a weak solution to in $\Omega_T$ in the sense of Definition [\[def:weak\_solution\]]{}. Then, on any cylinder $Q_{\varrho}^{(\theta)}(z_o)\subseteq\Omega_T$ satisfying , with ${\varrho},\theta>0$, and for any ${\varepsilon}\in(0,1]$, we have $$\begin{aligned} &\biint_{Q_{\varrho}^{(\theta)}(z_o)} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{z_o;{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\\ &\quad\le {\varepsilon}\Bigg[ \sup_{t\in\Lambda_{\varrho}(t_o)} \bint_{B_{\varrho}^{(\theta)}(x_o)} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t)- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{z_o;{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}+ \biint_{Q^{(\theta)}_{\varrho}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 \,{\mathrm{d}x}{\mathrm{d}t}\Bigg]\\ &\quad \phantom{\le\,}+ \frac{c}{{\varepsilon}^{\frac{2}{n}}} \Bigg[ \bigg[\biint_{Q_{\varrho}^{(\theta)}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1}{q}} + \biint_{Q_{\varrho}^{(\theta)}(z_o)} |F|^{2} \,{\mathrm{d}x}{\mathrm{d}t}\Bigg]\end{aligned}$$ for a constant $c=c(n,m,L,K)$. 
Here the integrability exponent $q$ is given by $$\label{def:q} q:=\max\bigg\{\frac{n(1+m)}{2(nm+1+m)},\frac12\bigg\}<1.$$ Throughout the proof we omit the center $z_o$ in our notation. By $\hat{\varrho}\in [\frac12{\varrho}, {\varrho}]$ we denote the radius introduced in Lemma \[lem:time-diff\]. We start our considerations by estimating $$\begin{aligned} \biint_{Q_{\varrho}^{(\theta)}} \! \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}&\le \biint_{Q_{\varrho}^{(\theta)}} \! \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[(u)_{\hat{\varrho}}^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\big|^2} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\le 2[\mathrm{I}+\mathrm{II}].\end{aligned}$$ Here we have abbreviated $$\begin{aligned} \mathrm{I} &:= \biint_{Q_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} \big|^2} {{\varrho}^{\frac{1+m}{m}}}\,{\mathrm{d}x}{\mathrm{d}t},\\[7pt] \mathrm{II} &:= \bint_{\Lambda_{\varrho}} \frac{\big|{\bm{\big[\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[(u)_{\hat{\varrho}}^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\big|^2} {{\varrho}^{\frac{1+m}{m}}}\,{\mathrm{d}t}.\end{aligned}$$ The first term can be estimated with Young’s inequality and Lemma \[lem:alphalemma\]. We obtain $$\begin{aligned} \label{estimate-of-I} \mathrm{I} &\le \frac{1}{{\varrho}^{\frac{1+m}m}} \sup_{t\in\Lambda_{\varrho}} \bigg[\bint_{B_{\varrho}^{(\theta)}} \big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} \big|^2 \,{\mathrm{d}x}\bigg]^{\frac{2}{n+2}} \nonumber\\ &\qquad\cdot \bint_{\Lambda_{\varrho}} \bigg[\bint_{B_{\varrho}^{(\theta)}} \big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} \big|^2 \,{\mathrm{d}x}\bigg]^{\frac{n}{n+2}} {\mathrm{d}t}\nonumber\\ &\le {\varepsilon}\sup_{t\in\Lambda_{\varrho}} \bint_{B_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t)- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}m}} \,{\mathrm{d}x}+ \frac{c}{{\varepsilon}^{\frac2n}{\varrho}^{\frac{1+m}m}} \, \mathrm{III}^{\frac{n+2}{n}} ,\end{aligned}$$ where $c=c(n,m)$ and $$\mathrm{III} := \bint_{\Lambda_{\varrho}} \bigg[\bint_{B_{\varrho}^{(\theta)}} \big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[\langle u^m\rangle_{{\varrho}}^{(\theta)}(t)\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2m}$}}}}} \big|^2 \,{\mathrm{d}x}\bigg]^{\frac{n}{n+2}} {\mathrm{d}t}.$$ If $m<1$ we estimate the integral $\mathrm{III}$ by means of Lemma \[lem:Acerbi-Fusco\] with $\alpha=\frac{1+m}{2m}$ and Hölder’s inequality in space with exponents $\frac{1+m}{1-m}$ and $\frac{1+m}{2m}$, which yields $$\begin{aligned} \mathrm{III} &\le \bint_{\Lambda_{\varrho}} \bigg[\bint_{B_{\varrho}^{(\theta)}} \big(|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|+|\langle 
{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}\rangle_{\varrho}^{(\theta)}(t)|\big)^{\frac{1-m}{m}} \big|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}-\langle{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}\rangle_{\varrho}^{(\theta)}(t)\big|^2 \,{\mathrm{d}x}\bigg]^{\frac{n}{n+2}} {\mathrm{d}t}\\ &\le \bint_{\Lambda_{\varrho}} \bigg[\bint_{B_{\varrho}^{(\theta)}} |u|^{1+m} \,{\mathrm{d}x}\bigg]^{\frac{n}{n+2}\frac{1-m}{1+m}} \bigg[\bint_{B_{\varrho}^{(\theta)}} \big|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}-\langle{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}\rangle_{\varrho}^{(\theta)}(t)\big|^{\frac{1+m}{m}} \,{\mathrm{d}x}\bigg]^{\frac{n}{n+2}\frac{2m}{1+m}} {\mathrm{d}t}.\end{aligned}$$ To proceed further, we recall the definition of $q$. Now, again in the case $m<1$, we apply Hölder’s inequality in time with exponents $\frac{(n+2)(1+m)}{n(1-m)}$ and $\frac{(n+2)(1+m)}{2(nm+1+m)}\le\frac{q(n+2)}{n}$, the sub-intrinsic coupling and Sobolev’s inequality on the time slices. This leads to $$\begin{aligned} \mathrm{III} &\le \bigg[ \biint_{Q_{\varrho}^{(\theta)}}|u|^{1+m} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{n(1-m)}{(n+2)(1+m)}}\\ &\qquad\qquad\cdot \Bigg[\bint_{\Lambda_{\varrho}} \bigg[\bint_{B_{\varrho}^{(\theta)}} \big|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}-\langle{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}\rangle_{\varrho}^{(\theta)}(t)\big|^{\frac{1+m}{m}} \,{\mathrm{d}x}\bigg]^{\frac{2mq}{1+m}}{\mathrm{d}t}\Bigg]^{\frac{n}{q(n+2)}}\\ &\le c\,\big(\theta^{2m}{\varrho}^{\frac{1+m}{m}}\big)^{\frac{n(1-m)}{(n+2)(1+m)}} \big(\theta^{\frac{m(m-1)}{1+m}}{\varrho}\big)^{\frac{2n}{n+2}} \bigg[\biint_{Q_{\varrho}^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{n}{q(n+2)}}\\ &= c\Bigg[{\varrho}^{\frac{1+m}{m}} \bigg[\biint_{Q_{\varrho}^{(\theta)}}|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1}{q}} \Bigg]^{\frac{n}{n+2}},\end{aligned}$$ where $c=c(n,m,K)$. Note that this inequality also holds true for $m=1$. In this case we directly apply Sobolev’s inequality on the time slices. In any case, the combination of the last inequality with yields $$\begin{aligned} \mathrm{I} \le {\varepsilon}\sup_{t\in\Lambda_{\varrho}} \bint_{B_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t)- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}m}}\,{\mathrm{d}x}+ \frac{c}{{\varepsilon}^{\frac{2}{n}}} \bigg[\biint_{Q_{\varrho}^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1}{q}}.\end{aligned}$$ It remains to estimate $\mathrm{II}$. To this end, we use the fact $\frac{1+m}{2}\le 1$ in Lemma \[lem:a-b\] and the gluing Lemma \[lem:time-diff\] to deduce $$\begin{aligned} \label{est-II-1} \mathrm{II} &\le c\, \bint_{\Lambda_{\varrho}} \frac{\big|\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)-(u)_{\hat{\varrho}}^{(\theta)}\big|^{1+m}} {{\varrho}^{\frac{1+m}{m}}}\,{\mathrm{d}t}\nonumber\\ &\le \frac{c}{{\varrho}^{\frac{1+m}m}} \bint_{\Lambda_{\varrho}}\bint_{\Lambda_{\varrho}} \big|\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)-\langle u\rangle_{\hat{\varrho}}^{(\theta)}(\tau)\big|^{1+m} \,{\mathrm{d}t}{\mathrm{d}}\tau \nonumber\\ &\le c\,\theta^{m(1-m)} \bigg[\biint_{Q_{\varrho}^{(\theta)}}\big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|+|F|\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^{1+m}\end{aligned}$$ for a constant $c=c(m,L)$. 
If either $_2$ is satisfied or if $m=1$, then we have $$\begin{aligned} \mathrm{II} &\le c\,\bigg[\biint_{Q_{\varrho}^{(\theta)}}\big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2+|F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1-m}{2}} \bigg[\biint_{Q_{\varrho}^{(\theta)}}\big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|+|F|\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^{1+m} \\ &\le {\varepsilon}\,\biint_{Q_{\varrho}^{(\theta)}}|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}+ \frac{c}{{\varepsilon}^{\frac{1-m}{1+m}}}\Bigg[ \bigg[\biint_{Q_{\varrho}^{(\theta)}}|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac1q} + \biint_{Q_{\varrho}^{(\theta)}} |F|^2 {\mathrm{d}x}{\mathrm{d}t}\Bigg],\end{aligned}$$ where the constant $c$ depends only on $m$, $L$ and $K$. Together with the estimate for $\mathrm{I}$, this proves the asserted inequality. Note that $\frac{1-m}{1+m}<\frac{2}{n}$ since $m>m_c$. Otherwise, if $m<1$ and $_1$ is satisfied, then we argue as follows. First, observe that $$\begin{aligned} \theta^{2m} &\le 2K\,\biint_{Q_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[(u)_{\hat{\varrho}}^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\big|^2} {{\varrho}^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}+ \frac{2K\,\big|(u)_{\hat{\varrho}}^{(\theta)}\big|^{1+m}}{{\varrho}^{\frac{1+m}{m}}} \,.\end{aligned}$$ Therefore, we have $$\begin{aligned} \mathrm{II} &= \frac{\theta^{\frac{2m(1-m)}{1+m}}\, \mathrm{II}}{\theta^{\frac{2m(1-m)}{1+m}}} \le c\big[\mathrm{II}_1+\mathrm{II}_2\big],\end{aligned}$$ with $$\begin{aligned} \mathrm{II}_1 &:= \frac{1}{\theta^{\frac{2m(1-m)}{1+m}}} \Bigg[\biint_{Q_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[(u)_{\hat{\varrho}}^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\big|^2} {{\varrho}^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{1-m}{1+m}}\cdot\mathrm{II}, \\[7pt] \mathrm{II}_2 &:= \frac{\big|(u)_{\hat{\varrho}}^{(\theta)}\big|^{1-m}}{\theta^{\frac{2m(1-m)}{1+m}}{\varrho}^{\frac{1-m}{m}}}\cdot\mathrm{II}.\end{aligned}$$ To estimate $\mathrm{II}_1$, we apply in turn , assumption , Lemma \[lem:alphalemma\] and Young’s inequality with exponents $\frac{2}{1-m}$, $\frac{2}{1+m}$. 
This gives $$\begin{aligned} \mathrm{II}_1 &\le \frac{c}{\theta^{\frac{m(1-m)^2}{1+m}}} \Bigg[\biint_{Q_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[(u)_{\hat{\varrho}}^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\big|^2} {{\varrho}^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{1-m}{1+m}}\\ &\qquad\qquad\qquad\qquad\cdot \bigg[ \biint_{Q_{\varrho}^{(\theta)}}\big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|+|F|\big]{\mathrm{d}x}{\mathrm{d}t}\bigg]^{1+m}\\ &\le c\Bigg[\biint_{Q_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[(u)_{\hat{\varrho}}^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\big|^2} {{\varrho}^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{1-m}{2}} \bigg[ \biint_{Q_{\varrho}^{(\theta)}}\big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|+|F|\big]{\mathrm{d}x}{\mathrm{d}t}\bigg]^{1+m}\\ &\le \tfrac12 \biint_{Q_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}}\,{\mathrm{d}x}{\mathrm{d}t}+ c\bigg[\biint_{Q_{\varrho}^{(\theta)}}\big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|+|F|\big]{\mathrm{d}}x{\mathrm{d}}t \bigg]^2,\end{aligned}$$ with a constant $c=c(m,L,K)$. For the term $\mathrm{II}_2$, we proceed as follows. We first insert the expression for the term $\mathrm{II}$, then use Lemma \[lem:Acerbi-Fusco\] with $\alpha:=\frac{2}{1+m}$, and finally apply the gluing Lemma \[lem:time-diff\]. This leads to $$\begin{aligned} \mathrm{II}_2 &\le \frac{c}{\theta^{\frac{2m(1-m)}{1+m}}{\varrho}^{\frac2m}}\, \bint_{\Lambda_{\varrho}} \big|(u)_{\hat{\varrho}}^{(\theta)}\big|^{1-m} \Big|{\bm{\big[\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- {\bm{\big[(u)_{\hat{\varrho}}^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}\Big|^2 \,{\mathrm{d}t}\\ &\le \frac{c}{\theta^{\frac{2m(1-m)}{1+m}}{\varrho}^{\frac2m}}\, \bint_{\Lambda_{\varrho}} \big|\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)-(u)_{\hat{\varrho}}^{(\theta)}\big|^2 \,{\mathrm{d}t}\\ &\le \frac{c}{\theta^{\frac{2m(1-m)}{1+m}}{\varrho}^{\frac2m}}\, \bint_{\Lambda_{\varrho}}\bint_{\Lambda_{\varrho}} \big|\langle u\rangle_{\hat{\varrho}}^{(\theta)}(t)-\langle u\rangle_{\hat{\varrho}}^{(\theta)}(\tau)\big|^2 \,{\mathrm{d}t}{\mathrm{d}}\tau\\ &\le c\bigg[ \biint_{Q^{(\theta)}_{\varrho}} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|+|F|\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^2,\end{aligned}$$ again with a constant $c=c(m,L,K)$. 
Collecting the estimates for $\mathrm{I}$, $\mathrm{II}_1$, and $\mathrm{II}_2$, we arrive at $$\begin{aligned} \biint_{Q_{\varrho}^{(\theta)}} & \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\\ &\le {\varepsilon}\sup_{t\in\Lambda_{\varrho}} \bint_{B_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t)- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} {\mathrm{d}x}+ \frac{c}{{\varepsilon}^{\frac{2}{n}}} \bigg[\biint_{Q_{\varrho}^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1}{q}}\\ &\phantom{\le\,} + \tfrac12 \biint_{Q_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}+ c\bigg[ \biint_{Q^{(\theta)}_{\varrho}} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|+|F|\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^2.\end{aligned}$$ Re-absorbing the second last term into the left-hand side, and applying in turn Hölder’s inequality we again obtain the asserted Sobolev-Poincaré inequality. Reverse Hölder inequality {#sec:revholder} ========================= The core of any proof of higher integrability of the gradient is a reverse Hölder inequality. In this section we establish such an inequality on certain intrinsic cylinders. Throughout this section we assume that $Q_{2{\varrho}}^{(\theta)}(z_o)\subseteq\Omega_T$ with ${\varrho},\theta> 0$ is a scaled cylinder satisfying a sub-intrinsic coupling $$\label{sub-intrinsic} \biint_{Q_{2{\varrho}}^{(\theta)}(z_o)} \frac{|u|^{1+m}}{(2{\varrho})^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\le K \theta^{2m},$$ for some constant $K\ge 1$. Furthermore, we assume that either $$\label{super-intrinsic} \theta^{2m} \le K \biint_{Q_{\varrho}^{(\theta)}(z_o)} \! \frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\quad\mbox{or}\quad \theta^{2m} \le K \biint_{Q_{{\varrho}}^{(\theta)}(z_o)} \! \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}.$$ This specifies the setup for the following reverse Hölder inequality. \[prop:revhoelder\] Let $m\in(m_c,1]$ and $u$ be a weak solution to in $\Omega_T$ in the sense of Definition [\[def:weak\_solution\]]{}. Then, on any cylinder $Q_{2{\varrho}}^{(\theta)}(z_o)\subseteq\Omega_T$ with ${\varrho},\theta>0$ satisfying and , we have $$\begin{aligned} \biint_{Q_{{\varrho}}^{(\theta)}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}\le c\bigg[\biint_{Q_{2{\varrho}}^{(\theta)}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1}{q}} + c\, \biint_{Q_{2{\varrho}}^{(\theta)}(z_o)} |F|^{2} {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ for a constant $c=c(n,m,\nu, L,K)$. Here, $q<1$ is the integrability exponent from . We omit the reference to the center $z_o$ in the notation and consider radii $r,s$ with ${\varrho}\le r<s\le 2{\varrho}$. Note that hypothesis and imply that the coupling conditions and are satisfied on $Q_s^{(\theta)}$ with constant $2^{n+2+\frac2m}K$ instead of $K$. 
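For the reader’s convenience, here is a quick sketch of the computation behind this constant (only the scaling of the cylinders $Q_{\varrho}^{(\theta)}$ in the spatial and temporal directions enters). Since ${\varrho}\le s\le 2{\varrho}$, we have $\big|Q_{2{\varrho}}^{(\theta)}\big|/\big|Q_{s}^{(\theta)}\big| = \big(\tfrac{2{\varrho}}{s}\big)^{n+\frac{1+m}m}\le 2^{n+1+\frac1m}$, and therefore $$\biint_{Q_{s}^{(\theta)}} \frac{|u|^{1+m}}{s^{\frac{1+m}m}} \,{\mathrm{d}x}{\mathrm{d}t}\le \frac{\big|Q_{2{\varrho}}^{(\theta)}\big|}{\big|Q_{s}^{(\theta)}\big|} \Big(\frac{2{\varrho}}{s}\Big)^{\frac{1+m}m} \biint_{Q_{2{\varrho}}^{(\theta)}} \frac{|u|^{1+m}}{(2{\varrho})^{\frac{1+m}m}} \,{\mathrm{d}x}{\mathrm{d}t}\le 2^{n+2+\frac2m}K\,\theta^{2m}.$$ The remaining conditions are obtained in the same way, with the roles of the smaller and the larger cylinder interchanged.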
From the energy estimate in Lemma \[lem:energy\], we obtain with a constant $c=c(m,\nu,L)$ that $$\begin{aligned} &\sup_{t \in \Lambda_r} {- \mskip-19,5mu \int}_{B_r^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t) - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_r^{(\theta)}\big|^2}{r^{\frac{1+m}{m}}} {\mathrm{d}x}+ \biint_{Q_r^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\quad\le c\,\biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_r^{(\theta)}\big|^2} {s^{\frac{1+m}{m}}-r^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}+ c\,\biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}} - {\bm{\big[({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_r^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{2m}{1+m}$}}}}} \big|^2} {\theta^{\frac{2m(m-1)}{1+m}}(s-r)^2} {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\quad\quad + c\, \biint_{Q_{s}^{(\theta)}} |F|^{2}{\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\quad =: \mbox{I} + \mbox{II} + \mbox{III},\end{aligned}$$ where the meaning of $\mathrm I$, $\mathrm{II}$ and $\mathrm{III}$ is clear in this context. We let $$\mathcal R_{r,s} := \frac{s}{s-r}.$$ To estimate the term $\mathrm I$ we first observe that $(s-r)^{\frac{1+m}{m}} \le s^{\frac{1+m}{m}}-r^{\frac{1+m}{m}}$. This, together with an application of Lemma \[lem:alphalemma\] implies $$\begin{aligned} \mbox{I} &\le c\,\mathcal R_{r,s}^{\frac{1+m}m} \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_s^{(\theta)}\big|^2} {s^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ again with a constant $c$ depending on $m,\nu ,L$ only. We now turn our attention to the term $\mbox{II}$, which we re-write as $$\begin{aligned} \mbox{II} = c\,\mathcal R_{r,s}^{2} \theta^{\frac{2m(1-m)}{1+m}} \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}} - {\bm{\big[(u^{\frac{1+m}{2}})_r^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{2m}{1+m}$}}}}} \big|^2} {s^2} {\mathrm{d}x}{\mathrm{d}t}.\end{aligned}$$ If $_2$ is satisfied, we apply Lemma \[lem:a-b\], Hölder’s inequality, Lemma \[lem:alphalemma\] and Young’s inequality to obtain for ${\varepsilon}\in(0,1]$ that $$\begin{aligned} \mbox{II} &\le c\mathcal R_{r,s}^{2} \theta^{\frac{2m(1-m)}{1+m}} \Bigg[\biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_r^{(\theta)}\big|^2} {s^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{2m}{1+m}} \\ &\le c\mathcal R_{r,s}^{2} \bigg[ \biint_{Q_{{\varrho}}^{(\theta)}}\! \! \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1-m}{1+m}} \Bigg[\biint_{Q_s^{(\theta)}}\! \! 
\frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_s^{(\theta)}\big|^2} {s^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{2m}{1+m}} \\ &\le {\varepsilon}\mathcal R_{r,s}^{2} \biint_{Q_{s}^{(\theta)}} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}+ \frac{c\,\mathcal R_{r,s}^{2}}{{\varepsilon}^{\frac{1-m}{2m}}} \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_s^{(\theta)}\big|^2} {s^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}\end{aligned}$$ with $c=c(m,\nu,L,K)$. In the case $m=1$, this estimate follows even without an application of Young’s inequality. Otherwise, if $_1$ is in force, we have that $$\begin{aligned} \theta^{2m} &\le 2^{n+2+\frac2m}K \biint_{Q_s^{(\theta)}} \frac{|u|^{1+m}}{s^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\\ &\le c\, \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{r}^{(\theta)}\big|^{2}} {s^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}+ \frac{c\,\big|({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_r^{(\theta)}\big|^2} {s^{\frac{1+m}m}} .\end{aligned}$$ This leads to $$\begin{aligned} \mbox{II} \le c\,\mathcal R_{r,s}^{2} [\mathrm{II}_1 + \mathrm{II}_2],\end{aligned}$$ where we have set $$\mathrm{II}_1 := \Bigg[\biint_{Q_s^{(\theta)}}\!\!\! \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{r}^{(\theta)}\big|^{2}} {s^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{1-m}{1+m}} \biint_{Q_s^{(\theta)}} \!\!\! \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}} - {\bm{\big[(u^{\frac{1+m}{2}})_r^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{2m}{1+m}$}}}}}\big|^2} {s^2} {\mathrm{d}x}{\mathrm{d}t}$$ and $$\begin{aligned} \mathrm{II}_2 &:= \frac{\big|({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_r^{(\theta)}\big|^{\frac{2(1-m)}{1+m}}} {s^{\frac{1-m}{m}}} \biint_{Q_s^{(\theta)}} \!\!\! 
\frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}} - {\bm{\big[(u^{\frac{1+m}{2}})_r^{(\theta)}\big]^{\mbox{\unboldmath{\scriptsize$\frac{2m}{1+m}$}}}}}\big|^2} {s^2} {\mathrm{d}x}{\mathrm{d}t}.\end{aligned}$$ To term $\mathrm{II}_1$ we apply in turn Lemma \[lem:a-b\], Hölder’s inequality and Lemma \[lem:alphalemma\], and obtain $$\begin{aligned} \mathrm{II}_1 \le c\, \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{r}^{(\theta)}\big|^{2}} {s^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t}\le c\, \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{s}^{(\theta)}\big|^{2}} {s^{\frac{1+m}m}}\,{\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ while to term $\mathrm{II}_2$ we apply Lemma \[lem:Acerbi-Fusco\] with $\alpha =\frac{1+m}{2m}$ and Lemma \[lem:alphalemma\] and find $$\begin{aligned} \mathrm{II}_2 \le c\, \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_r^{(\theta)}\big|^2} {s^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}\le c\, \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_s^{(\theta)}\big|^2} {s^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}.\end{aligned}$$ Combining both cases we have $$\begin{aligned} \mbox{II} &\le {\varepsilon}\,\mathcal R_{r,s}^{2} \biint_{Q_{s}^{(\theta)}} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}+ \frac{c\,\mathcal R_{r,s}^{2}}{{\varepsilon}^{\frac{1-m}{2m}}} \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{s}^{(\theta)}\big|^2}{s^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ with a constant $c=c(m,\nu,L,K)$. 
Inserting the estimates for I and II above and applying Lemma \[lem:poin\] with ${\varepsilon}$ replaced by ${\varepsilon}^{\frac{1+m}{2m}}$, we find for any ${\varepsilon}\in(0,1]$ that $$\begin{aligned} \sup_{t \in \Lambda_r}& {- \mskip-19,5mu \int}_{B_r^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t) - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_r^{(\theta)}\big|^2}{r^{\frac{1+m}{m}}} {\mathrm{d}x}+ \biint_{Q_r^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\le \frac{c\,\mathcal R_{r,s}^{\frac{1+m}{m}}}{{\varepsilon}^{\frac{1-m}{2m}}} \biint_{Q_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{s}^{(\theta)}\big|^2}{s^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}\\ &\qquad + {\varepsilon}\,\mathcal R_{r,s}^{2} \biint_{Q_{s}^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}+ c\mathcal R_{r,s}^{2} \biint_{Q_{s}^{(\theta)}} |F|^2{\mathrm{d}x}{\mathrm{d}t}\\ &\le c\,{\varepsilon}\mathcal R_{r,s}^{\frac{1+m}{m}} \Bigg[ \sup_{t \in \Lambda_s} {- \mskip-19,5mu \int}_{B_s^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t) - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_s^{(\theta)}\big|^2}{s^{\frac{1+m}{m}}} {\mathrm{d}x}+ \biint_{Q_s^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}\Bigg]\\ &\qquad + \frac{c\,\mathcal R_{r,s}^{\frac{1+m}{m}}} {{\varepsilon}^{\frac{(1+m)(n+2)}{2nm}-1}} \Bigg[ \bigg[\biint_{Q_s^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1}{q}} + \biint_{Q^{(\theta)}_s} |F|^2 {\mathrm{d}x}{\mathrm{d}t}\Bigg].\end{aligned}$$ Here we choose ${\varepsilon}=1/[2c\mathcal R_{r,s}^{\frac{1+m}{m}}]$. With this choice the first term on the right-hand side turns into $\frac12[\dots ]$, where $[\dots ]$ is the expression from the left-hand side with $r$ replaced by $s$. Moreover, the pre-factor in front of the second term on the right-hand side changes to $\mathcal R_{r,s}^{\alpha}$ with $\alpha=\frac{n+2}{2n}(\frac{1+m}m)^2$. To this inequality we apply the Iteration Lemma \[lem:tech\] to re-absorb the term $\frac12[\dots]$ (with radius $s$) from the right-hand side into the left. This leads to the claimed reverse Hölder type inequality $$\begin{aligned} \sup_{t \in \Lambda_{\varrho}}& {- \mskip-19,5mu \int}_{B_{\varrho}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t) - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{\varrho}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} {\mathrm{d}x}+ \biint_{Q_{\varrho}^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}\\ &\le c\bigg[\biint_{Q_{2{\varrho}}^{(\theta)}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1}{q}} + c\,\biint_{Q_{2{\varrho}}^{(\theta)}} |F|^{2} {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ and finishes the proof. At the end of this section we provide a technical auxiliary result, which essentially is a direct consequence of Lemma \[lem:poin\] and the energy estimate. \[lem:theta\] Let $m\in(m_c,1]$ and $u$ be a weak solution to in $\Omega_T$ in the sense of Definition [\[def:weak\_solution\]]{}.
Then, on any cylinder $Q_{2{\varrho}}^{(\theta)}(z_o)\subseteq\Omega_T$ with ${\varrho},\theta>0$ satisfying and $_1$ with $K=1$, we have $$\begin{aligned} \theta^m \le \tfrac1{\sqrt2} \Bigg[\biint_{Q_{{\varrho}/2}^{(\theta)}(z_o)} \frac{|u|^{1+m}}{({\varrho}/2)^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac12} + c\,\bigg[\biint_{Q_{2{\varrho}}^{(\theta)}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2}+|F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac12} ,\end{aligned}$$ where $c=c(n,m,\nu,L)$. We omit the reference to the center $z_o$ in the notation. We use $_1$ with $K=1$, Minkowski’s inequality and Lemma \[lem:alphalemma\] to deduce $$\begin{aligned} \theta^{m} &\le \bigg[\biint_{Q_{{\varrho}}^{(\theta)}} \frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}m}} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac12} \\ &\le \Bigg[\biint_{Q_{{\varrho}}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}/2}^{(\theta)}\big|^{2}} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac12} + \frac{{\big|({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}/2}^{(\theta)}\big|}} {{\varrho}^{\frac{1+m}{2m}}} \\ &\le c\,\Bigg[\biint_{Q_{{\varrho}}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^{2}} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac12} + \bigg[\biint_{Q_{{\varrho}/2}^{(\theta)}} \frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac12}.\end{aligned}$$ We estimate the first term on the right with Lemma \[lem:poin\] and Hölder’s inequality and get $$\begin{aligned} \biint_{Q_{{\varrho}}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}} - ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^{2}} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}&\le {\varepsilon}\sup_{t\in\Lambda_{{\varrho}}} \bint_{B_{{\varrho}}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t)- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}\\ &\quad+ \frac{c}{{\varepsilon}^{\frac{2}{n}}} \biint_{Q_{{\varrho}}^{(\theta)}} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2}+|F|^2\big] {\mathrm{d}x}{\mathrm{d}t}, \end{aligned}$$ for a constant $c=c(n,m,L)$ and an arbitrary ${\varepsilon}\in(0,1]$. 
In order to bound the $\sup$-term appearing in the last estimate, we apply the energy estimate from Lemma \[lem:energy\], combined with Lemma \[lem:alphalemma\] with $a=0$, Hölder’s inequality and hypothesis , with the result $$\begin{aligned} &\sup_{t\in\Lambda_{{\varrho}}} \bint_{B_{{\varrho}}^{(\theta)}} \frac{\big|{\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}}(t)- ({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{{\varrho}}^{(\theta)}\big|^2} {{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}\\ &\qquad\le c\,\biint_{Q_{2{\varrho}}^{(\theta)}} \bigg[\frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}{m}}} + \frac{|u|^{2m}+\big|({\bm{u^{\mbox{\unboldmath{\scriptsize$\frac{1+m}{2}$}}}}})_{\varrho}^{(\theta)}\big|^{\frac{4m}{1+m}}}{\theta^{\frac{2m(m-1)}{1+m}}{\varrho}^2} + |F|^{2}\bigg]{\mathrm{d}x}{\mathrm{d}t}\\ &\qquad\le c\,\biint_{Q_{2{\varrho}}^{(\theta)}} \frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}+ c\,\bigg[\biint_{Q_{2{\varrho}}^{(\theta)}} \frac{\theta^{1-m}|u|^{1+m}}{{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{2m}{1+m}} + c\,\biint_{Q_{2{\varrho}}^{(\theta)}} |F|^{2} \,{\mathrm{d}x}{\mathrm{d}t}\\ &\qquad\le c\,\theta^{2m} + c\,\biint_{Q_{2{\varrho}}^{(\theta)}} |F|^{2} \,{\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ where $c=c(m,\nu,L)$. Joining the preceding inequalities leads us to $$\begin{aligned} \theta^m \le c\,\sqrt{{\varepsilon}}\, \theta^m + \bigg[\biint_{Q_{{\varrho}/2}^{(\theta)}} \frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac12} + \frac{c}{{\varepsilon}^{\frac{1}{n}}} \bigg[\biint_{Q_{2{\varrho}}^{(\theta)}} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2}+|F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac12} .\end{aligned}$$ After choosing ${\varepsilon}={\varepsilon}(n,m,L)\in(0,1]$ so small that $c\sqrt{{\varepsilon}}\le 1-2^{-\frac{1}{2m}} $, we can re-absorb the first term of the right-hand side into the left. In this way, we obtain $$\begin{aligned} \theta^m \le 2^{\frac{1}{2m}} \bigg[\biint_{Q_{{\varrho}/2}^{(\theta)}} \frac{|u|^{1+m}}{{\varrho}^{\frac{1+m}{m}}} \,{\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac12} + c\,\bigg[\biint_{Q_{2{\varrho}}^{(\theta)}} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2}+|F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac12} ,\end{aligned}$$ which proves the asserted inequality. Proof of the higher integrability {#sec:hi} ================================= We consider a fixed cylinder $$Q_{8R}(y_o,\tau_o) \equiv B_{8R}(y_o)\times \big(\tau_o-(8R)^\frac{1+m}{m},\tau_o+(8R)^\frac{1+m}{m}\big) \subseteq\Omega_T$$ with $R>0$. Again, we omit the center in the notation and write $Q_{{\varrho}}:=Q_{{\varrho}}(y_o,\tau_o)$ for short, for any radius ${\varrho}\in(0,8R]$. We consider a parameter $$\label{lambda-0} \lambda_o \ge 1+\bigg[\biint_{Q_{4R}} \frac{{|u|}^{1+m}}{(4 R)^{\frac{1+m}m}}{\mathrm{d}}x{\mathrm{d}}t \bigg]^{\frac{d}{2m}},$$ which will be fixed later. Recall that the [*scaling deficit*]{} $d$ is defined in . We again point out that the assumption $m>m_c$ ensures $d>0$. Furthermore, for $n\ge 2$ the scaling deficit blows up when $m\downarrow m_c$. For $z_o\in Q_{2R}$, ${\varrho}\in(0,R]$, and $\theta\ge 1$, we consider space-time cylinders $Q_{\varrho}^{(\theta)}(z_o)$ as defined in .
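As an aside, we record the elementary computation behind the preceding remarks on the scaling deficit (a quick check, not needed in the sequel): since $m>0$, $$2(1+m)-n(1-m)>0 \quad\Longleftrightarrow\quad m(n+2)>n-2 \quad\Longleftrightarrow\quad m>\frac{(n-2)_+}{n+2}=m_c,$$ so that $d>0$ precisely in the admissible range; moreover, $d=1$ for $m=1$, while for $n\ge2$ the denominator $2(1+m)-n(1-m)$ tends to $0$ as $m\downarrow m_c$, which explains the blow-up of $d$.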
Note that these cylinders depend monotonically on $\theta$ in the sense that $Q_{\varrho}^{(\theta_2)}(z_o)\subset Q_{\varrho}^{(\theta_1)}(z_o)$ whenever $1\le \theta_1<\theta_2$, and that $Q_{\varrho}^{(\theta)}(z_o)\subset Q_{4R}$ for $z_o\in Q_{2R}$, ${\varrho}\in(0,R]$, and $\theta\ge 1$. Construction of a non-uniform system of cylinders {#sec:cylinders} ------------------------------------------------- The following construction of a non-uniform system of cylinders is similar to the one in [@Gianazza-Schwarzacher; @Schwarzacher]. Let $z_o\in Q_{2R}$. For a radius ${\varrho}\in (0,R]$ we define $$\widetilde\theta_{\varrho}\equiv \widetilde\theta_{z_o;{\varrho}} := \inf\bigg\{\theta\in[\lambda_o,\infty): \frac{1}{|Q_{\varrho}|} \iint_{Q^{(\theta)}_{\varrho}(z_o)} \frac{{|u|}^{1+m}}{{\varrho}^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\le \theta^{\frac{2m}{d}} \bigg\}.$$ We note that $\widetilde\theta_{\varrho}$ is well defined, since the infimum in the definition is taken over a non-empty set. In fact, in the limit $\theta\to\infty$ the integral on the left-hand side converges to zero (and is constant in the case $m=1$, respectively), while the right-hand side grows with speed $\theta^{\frac{2m}{d}}$. The choice of the exponent on the right-hand side becomes more clear after taking means in the integral condition, since then the condition takes the form $$\biint_{Q^{(\theta)}_{\varrho}(z_o)} \frac{{|u|}^{1+m}}{{\varrho}^{\frac{1+m}m}}{\mathrm{d}x}{\mathrm{d}t}\le \theta^{2m};$$ compare Sections \[sec:poin\] and \[sec:revholder\]. As an immediate consequence of the definition of $\widetilde\theta_{\varrho}$, we either have $$\widetilde\theta_{\varrho}=\lambda_o \quad\mbox{and}\quad \biint_{Q_{{\varrho}}^{(\widetilde\theta_{\varrho})}(z_o)} \frac{{|u|}^{1+m}}{{\varrho}^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\le \widetilde\theta_{\varrho}^{2m} = \lambda_o^{2m},$$ or $$\label{theta>lambda} \widetilde\theta_{\varrho}>\lambda_o \quad\mbox{and}\quad \biint_{Q_{{\varrho}}^{(\widetilde\theta_{\varrho})}(z_o)} \frac{{|u|}^{1+m}}{{\varrho}^{\frac{1+m}m}}{\mathrm{d}x}{\mathrm{d}t}= \widetilde\theta_{\varrho}^{2m}.$$ In the case ${\varrho}=R$, we have $\widetilde \theta_{R}\ge \lambda_o\ge 1$. Moreover, in the case $\widetilde\theta_{R}>\lambda_o$, property , the inclusion $Q_{R}^{(\widetilde\theta_{R})}(z_o)\subset Q_{4R}$ and yield that $$\begin{aligned} \widetilde\theta_{R}^{\frac{2m}{d}} &= \frac{1}{|Q_{R}|} \iint_{Q_{R}^{(\widetilde\theta_{R})}(z_o)} \frac{{|u|}^{1+m}}{R^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\\ &\le \frac{4^{\frac{1+m}m}}{|Q_{R}|} \iint_{Q_{4R}} \frac{{|u|}^{1+m}}{(4R)^{\frac{1+m}m}}{\mathrm{d}}x{\mathrm{d}}t \le 4^{n+2+\frac2m}\lambda_o^{\frac{2m}{d}},\end{aligned}$$ from which we infer the bound $$\begin{aligned} \label{bound-theta-R} \widetilde\theta_{R} \le 4^{\frac d{2m}(n+2+\frac2m)}\lambda_o.\end{aligned}$$ Our next goal is to prove the continuity of the mapping $(0,R]\ni{\varrho}\mapsto \widetilde\theta_{\varrho}$. For ${\varrho}\in(0,R]$ and ${\varepsilon}>0$, we abbreviate $\theta_+:=\widetilde\theta_{\varrho}+{\varepsilon}$. 
We first observe that there exists $\delta=\delta({\varepsilon},{\varrho})>0$ such that for all radii $r\in(0,R]$ with $|r-{\varrho}|<\delta$ there holds $$\label{claim-cont} \frac{1}{|Q_r|} \iint_{Q_{r}^{(\theta_+)}(z_o)} \frac{{|u|}^{1+m}}{r^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}< \theta_+^{\frac{2m}{d}}.$$ In fact, if $r={\varrho}$, this is a consequence of the definition of $\widetilde\theta_{\varrho}$, since $\theta_+^{\frac{2m}{d}}>\widetilde\theta_{\varrho}^{\frac{2m}{d}}$. By the absolute continuity of the integral, the inequality continues to hold for radii $r$ sufficiently close to ${\varrho}$. Hence, the definition of $\widetilde\theta_r$ implies that $\widetilde\theta_r<\theta_+=\widetilde\theta_{\varrho}+{\varepsilon}$, provided $|r-{\varrho}|<\delta$. For the corresponding lower bound $\widetilde\theta_r>\theta_-:=\widetilde\theta_{\varrho}-{\varepsilon}$, we proceed similarly. First, we note that we can assume $\theta_-\ge\lambda_o$ and hence $\widetilde\theta_{\varrho}>\lambda_o$, since otherwise the claim immediately follows from the property $\widetilde\theta_r\ge\lambda_o$. Now, we claim that $$\label{claim-cont-2} \frac{1}{|Q_r|} \iint_{Q_{r}^{(\theta_-)}(z_o)} \frac{{|u|}^{1+m}}{r^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}> \theta_-^{\frac{2m}{d}}$$ for all $r\in(0,R]$ with $|r-{\varrho}|<\delta$, after diminishing $\delta=\delta({\varepsilon},{\varrho})>0$ if necessary. Again, we first consider the case $r={\varrho}$, in which the claim follows from the definition of $\widetilde\theta_{\varrho}$. In fact, if the claim did not hold, we would arrive at the contradiction $\widetilde\theta_{\varrho}\le\theta_-$. Now, for radii $r$ with $|r-{\varrho}|<\delta$ the assertion follows from the continuous dependence of the left-hand side upon $r$. Having established , we can conclude from the definition of $\widetilde\theta_r$ that $\widetilde\theta_r>\theta_-=\widetilde\theta_{\varrho}-{\varepsilon}$. Altogether we have shown that $\widetilde\theta_{\varrho}-{\varepsilon}< \widetilde\theta_r< \widetilde\theta_{\varrho}+{\varepsilon}$ for all radii $r\in(0,R]$ with $|r-{\varrho}|<\delta$, which completes the proof of the continuity of $(0,R]\ni{\varrho}\mapsto \widetilde\theta_{\varrho}$. Unfortunately, the mapping $(0,R]\ni{\varrho}\to \widetilde\theta_{\varrho}$ might not be decreasing. For this reason we work with a modified version of $\widetilde\theta_{\varrho}$, which we denote by $\theta_{\varrho}$. This modification is done by a rising sun type construction. More precisely, we define $$\theta_{\varrho}\equiv \theta_{z_o;{\varrho}} := \max_{r\in[{\varrho},R]} \widetilde\theta_{z_o;r}\,.$$ As an immediate consequence of the construction, the mapping $(0,R]\ni{\varrho}\mapsto \theta_{\varrho}$ is continuous and monotonically decreasing. In general, the modified cylinders $Q_{{\varrho}}^{(\theta_{\varrho})}(z_o)$ cannot be expected to be intrinsic in the sense of . However, we can show that the cylinders $Q_{s}^{(\theta_{\varrho})}(z_o)$ are sub-intrinsic for all radii $s\ge{\varrho}$. 
More precisely, we have $$\begin{aligned} \label{sub-intrinsic-2} \biint_{Q_{s}^{(\theta_{{\varrho}})}(z_o)} \frac{{|u|}^{1+m}}{s^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\le \theta_{\varrho}^{2m} \quad\mbox{for any $0<{\varrho}\le s\le R$.}\end{aligned}$$ For the proof of this inequality, we use the chain of inequalities $\widetilde\theta_s\le \theta_{s}\le \theta_{{\varrho}}$, which implies $Q_{s}^{(\theta_{{\varrho}})}(z_o)\subseteq Q_{s}^{(\widetilde\theta_{s})}(z_o)$, and the fact that the latter cylinder is sub-intrinsic. In this way, we deduce $$\begin{aligned} \biint_{Q_{s}^{(\theta_{{\varrho}})}(z_o)} \frac{{|u|}^{1+m}}{s^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}&\le \Big(\frac{\theta_{{\varrho}}}{\widetilde\theta_{s}}\Big)^{2m-\frac{2m}{d}} \biint_{Q_{s}^{(\widetilde\theta_{s})}(z_o)} \frac{{|u|}^{1+m}}{s^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\\ &\le \Big(\frac{\theta_{{\varrho}}}{\widetilde\theta_{s}}\Big)^{2m-\frac{2m}{d}} \widetilde\theta_{s}^{2m} = \theta_{\varrho}^{2m-\frac{2m}{d}}\,\widetilde\theta_s^{\frac{2m}{d}} \le \theta_{\varrho}^{2m},\end{aligned}$$ which is exactly assertion . Next, we define $$\label{rho-tilde} \widetilde{\varrho}:= \left\{ \begin{array}{cl} R, & \quad\mbox{if $\theta_{\varrho}=\lambda_o$,} \\[5pt] \min\big\{s\in[{\varrho}, R]: \theta_s=\widetilde \theta_s \big\}, & \quad\mbox{if $\theta_{\varrho}>\lambda_o$.} \end{array} \right.$$ By definition, for any $s\in [{\varrho},\widetilde{\varrho}]$ we have $\theta_s=\widetilde\theta_{\widetilde{\varrho}}$. Our next goal is the proof of the upper bound $$\begin{aligned} \label{bound-theta} \theta_{\varrho}\le \Big(\frac{s}{{\varrho}}\Big)^{\frac d{2m}(n+2+\frac2m)} \theta_{s} \quad\mbox{for any $s\in({\varrho},R]$.}\end{aligned}$$ In the case $\theta_{\varrho}=\lambda_o$ this is immediate since $\theta_s\ge\lambda_o$. Another easy case is that of radii $s\in({\varrho},\widetilde{\varrho}]$, since then we have $\theta_s=\widetilde\theta_{\widetilde{\varrho}}=\theta_{\varrho}$. Therefore, it only remains to prove for the case $\theta_{\varrho}>\lambda_o$ and radii $s\in(\widetilde{\varrho},R]$. To this end, we use the monotonicity of ${\varrho}\mapsto\theta_{\varrho}$, and to conclude $$\begin{aligned} \theta_{\varrho}^{\frac{2m}{d}} &= \widetilde \theta_{\widetilde{\varrho}}^{\frac{2m}{d}} = \frac{1}{|Q_{\widetilde{\varrho}}|} \iint_{Q_{\widetilde{\varrho}}^{(\theta_{\widetilde{\varrho}})}(z_o)} \frac{{|u|}^{1+m}}{\widetilde{\varrho}^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\\ &\le \Big(\frac{s}{\widetilde{\varrho}}\Big)^{n+2+\frac2m} \frac{1}{|Q_{s}|} \iint_{Q_{s}^{(\theta_{s})}(z_o)} \frac{{|u|}^{1+m}}{s^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\le \Big(\frac{s}{{\varrho}}\Big)^{n+2+\frac2m} \theta_{s}^{\frac{2m}{d}} .\end{aligned}$$ This yields the claim  also in the remaining case. We now apply with $s=R$. Using moreover the fact $\theta_{R}=\widetilde\theta_{R}$ and estimate for $\widetilde\theta_{R}$, we deduce $$\begin{aligned} \label{bound-theta-2} \theta_{\varrho}\le \Big(\frac{R}{{\varrho}}\Big)^{\frac d{2m}(n+2+\frac2m)} \theta_{R} \le \Big(\frac{4R}{{\varrho}}\Big)^{\frac d{2m}(n+2+\frac2m)} \lambda_o\end{aligned}$$ for every ${\varrho}\in(0,R]$. In summary, for every $z_o\in Q_{2R}$, we have constructed a system of concentric sub-intrinsic cylinders $Q_{{\varrho}}^{(\theta_{z_o;{\varrho}})}(z_o)$ with radii ${\varrho}\in (0,R]$. 
As a consequence of the monotonicity of ${\varrho}\mapsto \theta_{z_o;{\varrho}}$, these cylinders are nested in the sense that $$\mbox{$Q_{r}^{(\theta_{z_o;r})}(z_o) \subset Q_{s}^{(\theta_{z_o;s})}(z_o)$ whenever $0<r<s\le R$.}$$ However, keep in mind that in general these cylinders are not intrinsic but only sub-intrinsic. Covering property {#sec:covering} ----------------- Our next goal is to establish the following Vitali type covering property for the cylinders constructed in the last section. \[lem:vitali\] There exists a constant $\hat c=\hat c(n,m)\ge 20$ such that, whenever $\mathcal F$ is a collection of cylinders $Q_{4r}^{(\theta_{z;r})}(z)$, where $Q_{r}^{(\theta_{z;r})}(z)$ is a cylinder of the form constructed in Section [\[sec:cylinders\]]{} with radius $r\in(0,\tfrac{R}{\hat c})$, then there exists a countable subfamily $\mathcal G$ of disjoint cylinders in $\mathcal F$ such that $$\label{covering} \bigcup_{Q\in\mathcal F} Q \subseteq \bigcup_{Q\in\mathcal G} \widehat Q,$$ where $\widehat Q$ denotes the $\frac{\hat c}{4}$-times enlarged cylinder $Q$, i.e. if $Q=Q_{4r}^{(\theta_{z;r})}(z)$, then $\widehat Q=Q_{\hat c r}^{(\theta_{z;r})}(z)$. For $j\in\N$ we subdivide $\mathcal{F}$ into the subfamilies $$\mathcal F_j := \big\{Q_{4r}^{(\theta_{z;r})}(z)\in \mathcal F: \tfrac{R}{2^j\hat c}<r\le \tfrac{R}{2^{j-1}\hat c} \big\}.$$ Then, we choose finite subfamilies $\mathcal G_j\subset \mathcal F_j$ according to the following scheme. We start by choosing $\mathcal G_1$ as an arbitrary maximal disjoint collection of cylinders in $\mathcal F_1$. The subfamily $\mathcal G_1$ is finite, since and the definition of $\mathcal F_1$ imply a lower bound on the volume of each cylinder in $\mathcal G_1$. Now, assuming that the subfamilies $\mathcal G_1, \mathcal G_2, \dots, \mathcal G_{k-1}$ have already been constructed for some integer $k\ge 2$, we choose $\mathcal G_k$ to be any maximal disjoint subcollection of $$\Bigg\{Q\in \mathcal F_k: Q\cap Q^\ast=\emptyset \mbox{ for any $ \displaystyle Q^\ast\in \bigcup_{j=1}^{k-1} \mathcal G_j $} \Bigg\}.$$ For the same reason as above, the collection $\mathcal G_k$ is finite. Hence, the family $$\mathcal G := \bigcup_{j=1}^\infty \mathcal G_j\subseteq\mathcal{F}$$ defines a countable collection of disjoint cylinders. It remains to prove that for each cylinder $Q\in\mathcal F$ there exists a cylinder $Q^\ast\in\mathcal G$ with $Q\subset \widehat {Q}^\ast$. To this end, we fix a cylinder $Q=Q_{4r}^{(\theta_{z;r})}(z)\in\mathcal F$. Let $j\in\N$ be such that $Q\in\mathcal F_j$. The maximality of $\mathcal G_j$ ensures the existence of a cylinder $Q^\ast=Q_{4r_\ast}^{(\theta_{z_\ast;r_\ast})}(z_\ast)\in \bigcup_{i=1}^{j} \mathcal G_i$ with $Q\cap Q^\ast\not=\emptyset$. We will show that this cylinder has the desired property $Q\subset \widehat {Q}^\ast$. First, we observe that the properties $r\le\tfrac{R}{2^{j-1}\hat c}$ and $r_\ast>\tfrac{R}{2^j\hat c}$ imply $r\le 2r_\ast$, which ensures $\Lambda_{4r}(t)\subseteq \Lambda_{20r_\ast}(t_\ast)$. For the proof of the corresponding spatial inclusion $B^{(\theta_{z,r})}_{4r}(x)\subseteq B_{\hat c r_\ast}^{(\theta_{z_\ast,r_\ast})}(x_\ast)$, we first shall derive the bound $$\label{control-theta-1} \theta_{z_\ast;r_\ast} \le 52^{\frac d{2m}(n+2+\frac2m)}\, \theta_{z;r}\,.$$ We recall the definition of the radius $\widetilde r_\ast\in [r_\ast,R]$ which is associated to the cylinder $Q_{r_\ast}^{(\theta_{z_\ast;r_\ast})}(z_\ast)$. 
According to the definition, we either have that $Q_{\widetilde r_\ast}^{(\theta_{z_\ast;r_\ast})}(z_\ast)$ is intrinsic or that $\widetilde r_\ast=R$ and $\theta_{z_\ast;r_\ast}=\lambda_o$. In the second alternative, the claim  is immediate, since $$\theta_{z_\ast;r_\ast} = \lambda_o \le \theta_{z;r}\,.$$ Therefore, it remains to consider the case that $Q_{\widetilde r_\ast}^{(\theta_{z_\ast;r_\ast})}(z_\ast)$ is intrinsic in the sense that $$\begin{aligned} \label{control-theta-2} \theta_{z_\ast;r_\ast}^{\frac{2m}{d}} = \frac{1}{|Q_{\widetilde r_\ast}|} \iint_{Q_{\widetilde r_\ast}^{(\theta_{z_\ast;r_\ast})}(z_\ast)} \frac{{|u|}^{1+m}}{\widetilde r_\ast^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}.\end{aligned}$$ We distinguish between the cases $\widetilde r_\ast\le \frac{R}{\mu}$ and $\widetilde r_\ast> \frac{R}{\mu}$, where $\mu:= 13$. We start with the latter case. Using and the definition of $\lambda_o$ and $\theta_{z;r}$, we estimate $$\begin{aligned} \theta_{z_\ast;r_\ast}^{\frac{2m}{d}} &\le \Big(\frac{4R}{\widetilde r_\ast}\Big)^{\frac{1+m}m} \frac{1}{|Q_{\widetilde r_\ast}|} \iint_{Q_{4R}} \frac{{|u|}^{1+m}}{(4R)^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\\ &\le \Big(\frac{4 R}{\widetilde r_\ast}\Big)^{n+2+\frac2m} \lambda_o^{\frac{2m}{d}}\\ &\le (4\mu)^{n+2+\frac2m} \theta_{z;r}^{\frac{2m}{d}},\end{aligned}$$ which can be rewritten in the form $$\begin{aligned} \theta_{z_\ast;r_\ast} \le (4\mu)^{\frac d{2m}(n+2+\frac2m)}\,\theta_{z;r}\,.\end{aligned}$$ This yields in the second case, and it only remains to consider the first case $\widetilde r_\ast\le \frac{R}{\mu}$. Here, the key step is to prove the inclusion $$\label{incl-cyl} Q_{\widetilde r_\ast}^{(\theta_{z_\ast;r_\ast})}(z_\ast) \subseteq Q_{\mu\widetilde r_\ast}^{(\theta_{z;\mu\widetilde r_\ast})}(z).$$ We first observe that $\widetilde r_\ast\ge r_\ast$ and $|t-t_\ast|<(4r)^{\frac{1+m}m}+(4r_\ast)^{\frac{1+m}m}\le (12r_\ast)^{\frac{1+m}m}$ implies $\Lambda_{\widetilde r_\ast}(t_\ast)\subseteq \Lambda_{\mu\widetilde r_\ast}(t)$. In addition, we have $$\label{x-x_ast} |x-x_\ast|\le \theta_{z;r}^{\frac{m(m-1)}{1+m}}4r + \theta_{z_\ast;r_\ast}^{\frac{m(m-1)}{1+m}}4r_\ast.$$ At this point, we may assume that $\theta_{z;r}\le\theta_{z_\ast;r_\ast}$, since clearly is satisfied in the alternative case. Then, the monotonicity of ${\varrho}\mapsto \theta_{z;{\varrho}}$ and $r\le 2r_\ast\le2\widetilde r_\ast\le\mu\widetilde r_\ast$ imply $$\theta_{z_\ast;r_\ast} \ge \theta_{z;r} \ge \theta_{z;\mu \widetilde r_\ast}.$$ Combining this with , we conclude that $$\begin{aligned} \theta_{z_\ast;r_\ast}^{\frac{m(m-1)}{1+m}}\widetilde r_\ast + |x-x_\ast| \le \theta_{z_\ast;r_\ast}^{\frac{m(m-1)}{1+m}}5\widetilde r_\ast+ \theta_{z;r}^{\frac{m(m-1)}{1+m}}4r \le \theta_{z;\mu\widetilde r_\ast}^{\frac{m(m-1)}{1+m}}\mu\widetilde r_\ast,\end{aligned}$$ from which we deduce the inclusion $$B_{\widetilde r_\ast}^{(\theta_{z_\ast;r_\ast})}(x_\ast) \subseteq B_{\mu\widetilde r_\ast}^{(\theta_{z;\mu\widetilde r_\ast})}(x).$$ This completes the proof of . 
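For completeness, here is a sketch of the elementary estimate behind the numerical constants $20$ and $\mu=13$ used for the temporal inclusions above; only $r\le 2r_\ast\le 2\widetilde r_\ast$ and the inequality $a^p+b^p\le(a+b)^p$ for $a,b\ge0$ and $p:=\frac{1+m}m\ge1$ enter. From $$|t-t_\ast| < (4r)^{p}+(4r_\ast)^{p} \le (8r_\ast)^{p}+(4r_\ast)^{p} \le (12 r_\ast)^{p}$$ we deduce, for $\tau\in\Lambda_{4r}(t)$, that $|\tau-t_\ast|<(4r)^{p}+(12r_\ast)^{p}\le(8r_\ast)^{p}+(12r_\ast)^{p}\le(20r_\ast)^{p}$, and, for $\tau\in\Lambda_{\widetilde r_\ast}(t_\ast)$, that $|\tau-t|<\widetilde r_\ast^{\,p}+(12r_\ast)^{p}\le\widetilde r_\ast^{\,p}+(12\widetilde r_\ast)^{p}\le(13\widetilde r_\ast)^{p}$, which is the reason for the choice $\mu=13$.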
Using , , and with ${\varrho}=s=\mu\tilde r_\ast$, we estimate $$\begin{aligned} \theta_{z_\ast;r_\ast}^{\frac{2m}{d}} \le \frac{\mu^{\frac{1+m}{m}}}{|Q_{\widetilde r_\ast}|} \iint_{Q_{\mu\widetilde r_\ast}^{(\theta_{z;\mu\widetilde r_\ast})}(z)} \frac{{|u|}^{1+m}}{(\mu\tilde r_\ast)^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}\le \mu^{n+2+\frac2m} \theta_{z;r}^{\frac{2m}{d}},\end{aligned}$$ which implies $$\begin{aligned} \theta_{z_\ast;r_\ast} \le \mu^{\frac d{2m}(n+2+\frac2m)}\,\theta_{z;r}.\end{aligned}$$ This yields also in the last case. Having established , it remains to prove the inclusion $Q_{4r}^{(\theta_{z;r})}(z)\subseteq Q_{\hat c r_\ast}^{(\theta_{z_\ast;r_\ast})}(z_\ast)$, which will complete the proof of the Vitali covering property. First, we note that for any choice of $\hat c$ with $\hat c\ge 20$, we have $\Lambda_{4r}(t)\subseteq\Lambda_{\hat c r_\ast}(t_\ast)$. Moreover, from the facts , $r\le 2r_\ast$, and we conclude $$\begin{aligned} \theta_{z;r}^{\frac{m(m-1)}{1+m}}4r + |x-x_\ast| &\le 2\theta_{z;r}^{\frac{m(m-1)}{1+m}}4r + \theta_{z_\ast;r_\ast}^{\frac{m(m-1)}{1+m}}4r_\ast \\ &\le 4\Big[4\cdot 52^{\frac{d(1-m)[m(n+2)+2]}{2m(1+m)}}+1\Big] \theta_{z_\ast;r_\ast}^{\frac{m(m-1)}{1+m}}r_\ast \\ &\le \theta_{z_\ast;r_\ast}^{\frac{m(m-1)}{1+m}}\hat c r_\ast,\end{aligned}$$ for a suitable choice of the constant $\hat c=\hat c(n,m)\ge 20$. This implies the spatial inclusion $B_{4r}^{(\theta_{z;r})}(x)\subseteq B_{\hat c r_\ast}^{(\theta_{z_\ast;r_\ast})}(x_\ast)$, which is the remaining piece of information to conclude that $$Q = Q_{4r}^{(\theta_{z;r})}(z) \subseteq Q_{\hat c r_\ast}^{(\theta_{z_\ast;r_\ast})}(z_\ast) = \widehat Q^\ast.$$ Thereby we have established the inclusion , which yields the desired Vitali type covering property. Stopping time argument ---------------------- Now, we fix the parameter $\lambda_o$ by letting $$\lambda_o := 1+\Bigg[\biint_{Q_{4R}} \bigg[\frac{{|u|}^{1+m}}{(4 R)^{\frac{1+m}m}} + |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^{2}\bigg]\,{\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{d}{2m}}.$$ For $\lambda>\lambda_o$ and $r\in(0,2R]$, we define the super-level set of the function $|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|$ by $$\boldsymbol E(r,\lambda) := \Big\{z\in Q_{r}: \mbox{$z$ is a Lebesgue point of $|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|$ and $|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|(z) > \lambda^{m}$}\Big\}.$$ In the definition of $\boldsymbol E(r,\lambda)$, the notion of Lebesgue points is to be understood with regard to the system of cylinders constructed in Section \[sec:cylinders\]. We point out that also with respect to these cylinders, $\mathcal L^{n+1}$-a.e. point is a Lebesgue point. This follows from [@Federer 2.9.1], since we already have verified the Vitali type covering property in Lemma \[lem:vitali\]. Now, we fix radii $R\le R_1<R_2\le 2R$. Note that for any $z_o\in Q_{R_1}$, $\kappa\ge1$ and ${\varrho}\in(0,R_2-R_1]$ we have $$Q_{\varrho}^{(\kappa)}(z_o) \subseteq Q_{R_2} \subseteq Q_{2R}.$$ For the following argument, we restrict ourselves to levels $\lambda$ with $$\label{choice_lambda} \lambda > B\lambda_o, \quad\mbox{where} \quad B := \Big(\frac{4\hat c R}{R_2-R_1}\Big)^{\frac{d(n+2)(1+m)}{(2m)^2}} >1,$$ and where $\hat c=\hat c(n,m)$ is the constant from the Vitali-type covering Lemma \[lem:vitali\]. We fix $z_o\in \boldsymbol E(R_1,\lambda)$ and abbreviate $\theta_s\equiv \theta_{z_o;s}$ for $s\in(0,R]$ throughout this section. 
By definition of $\boldsymbol E(R_1,\lambda)$, we have $$\label{larger-lambda} \liminf_{s\downarrow 0} \biint_{Q_{s}^{(\theta_{s})}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2 \big] {\mathrm{d}x}{\mathrm{d}t}\ge |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2(z_o) > \lambda^{2m}.$$ On the other hand, for any radius $s$ with $$\begin{aligned} \label{radius-s} \frac{R_2-R_1}{\hat c}\le s\le R\end{aligned}$$ the definition of $\lambda_o$, estimate , assumption and the definition of $d$ imply $$\begin{aligned} \label{smaller-lambda} \biint_{Q_{s}^{(\theta_s)}(z_o)} & \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2+|F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\le \frac{|Q_{4R}|}{|Q_{s}^{(\theta_s)}|} \biint_{Q_{4R}} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\le \frac{|Q_{4R}|}{|Q_{s}|}\, \theta_s^{\frac{nm(1-m)}{1+m}} \lambda_o^{\frac{2m}{d}} \nonumber\\ &\le \Big(\frac{4R}{s}\Big)^{n+1+\frac{1}m+\frac{d}{2m}(n+2+\frac2m)\cdot\frac{nm(1-m)}{1+m}} \lambda_o^{\frac{nm(1-m)}{1+m}+\frac{2m}d} \nonumber\\ &= \Big(\frac{4R}{s}\Big)^{\frac{d(n+2)(1+m)}{2m}} \lambda_o^{2m} \nonumber\\ &\le B^{2m} \lambda_o^{2m} < \lambda^{2m}.\end{aligned}$$ By the continuity of the mapping $s\mapsto\theta_s$ and the absolute continuity of the integral, the left-hand side of depends continuously on $s$. Therefore, in view of and , there exists a maximal radius $0<{\varrho}_{z_o} < \tfrac{R_2-R_1}{\hat c}$ for which the above inequality becomes an equality, i.e. ${\varrho}_{z_o}$ is the maximal radius with $$\begin{aligned} \label{=lambda} \biint_{Q_{{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}= \lambda^{2m}.\end{aligned}$$ The maximality of the radius ${\varrho}_{z_o}$ implies in particular that $$\begin{aligned} \biint_{Q_{s}^{(\theta_{s})}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}< \lambda^{2m} \quad \mbox{for any $s\in ({\varrho}_{z_o}, R]$.}\end{aligned}$$ Due to the monotonicity of the mapping ${\varrho}\mapsto \theta_{\varrho}$ and we have $$\begin{aligned} \theta_\sigma \le \theta_{s} \le \Big(\frac{\sigma}{s}\Big)^{\frac{d}{2m}(n+2+\frac2m)} \theta_{\sigma} \quad \mbox{for any ${\varrho}_{z_o}\le s<\sigma\le R$,}\end{aligned}$$ so that $$\begin{aligned} \label{<lambda} \biint_{Q_{\sigma}^{(\theta_{s})}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}&\le \Big(\frac{\theta_{s}}{\theta_\sigma}\Big)^{\frac{nm(1-m)}{1+m}} \biint_{Q_{\sigma}^{(\theta_{\sigma})}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &< \Big(\frac{\sigma}{s}\Big)^{\frac{dn(1-m)}{2(1+m)}(n+2+\frac2m)}\, \lambda^{2m}\end{aligned}$$ for any ${\varrho}_{z_o}\le s<\sigma\le R$. Finally, we recall that the cylinders are constructed in such a way that $$Q_{\hat c{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o) \subseteq Q_{\hat c{\varrho}_{z_o}}(z_o) \subseteq Q_{R_2}.$$ A Reverse Hölder Inequality --------------------------- For a level $\lambda$ as in and a point $z_o\in \boldsymbol E(R_1,\lambda)$, we consider the radius $\widetilde{\varrho}_{z_o}\in[{\varrho}_{z_o},R]$ as defined in . In the sequel we write $\theta_{{\varrho}_{z_o}}$ instead of $\theta_{z_o;{\varrho}_{z_o}}$. 
We recall that $\widetilde{\varrho}_{z_o}$ has been defined in such a way that for any $s\in [{\varrho}_{z_o}, \widetilde{\varrho}_{z_o}]$ we have $\theta_s=\theta_{{\varrho}_{z_o}}$, and, in particular, $\theta_{\widetilde{\varrho}_{z_o}}=\theta_{{\varrho}_{z_o}}$. The aim of this section is the proof of a reverse Hölder inequality on $Q_{2{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)$. To this end, we need to verify the assumptions of Proposition \[prop:revhoelder\]. First, we note that with $s=4{\varrho}_{z_o}$ implies $$\biint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} \frac{{|u|}^{1+m}}{(4{\varrho}_{z_o})^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\le \theta_{{\varrho}_{z_o}}^{2m},$$ which means that assumption is fulfilled for the cylinder $Q_{2{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)$ with $K=1$. For the estimate of $\theta_{{\varrho}_{z_o}}^{2m}$ from above, we distinguish between the cases $\widetilde{\varrho}_{z_o}\le 2{\varrho}_{z_o}$ and $\widetilde{\varrho}_{z_o}> 2{\varrho}_{z_o}$. In the former case, we use the fact $\theta_{{\varrho}_{z_o}}=\theta_{\widetilde{\varrho}_{z_o}}=\widetilde\theta_{\widetilde{\varrho}_{z_o}}$, which implies that $Q_{\widetilde{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)$ is intrinsic, and then the bound $\widetilde{\varrho}_{z_o}\le 2{\varrho}_{z_o}$, with the result that $$\theta_{{\varrho}_{z_o}}^{2m} = \biint_{Q_{\widetilde{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} \frac{{|u|}^{1+m}}{\widetilde{\varrho}_{z_o}^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}\le 2^{n+2+\frac2m} \biint_{Q_{2{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} \frac{{|u|}^{1+m}}{(2{\varrho}_{z_o})^{\frac{1+m}m}} {\mathrm{d}x}{\mathrm{d}t}.$$ This means that in this case, assumption $_1$ is satisfied with $K\equiv 2^{n+2+\frac2m}$. Next, we consider the remaining case $\widetilde{\varrho}_{z_o}>2{\varrho}_{z_o}$. Here, we claim that $$\label{claim-case-2} \theta_{{\varrho}_{z_o}}\le c(n,m,\nu,L)\lambda.$$ For the proof we treat the cases $\widetilde{\varrho}_{z_o}\in(2{\varrho}_{z_o},\frac R2]$ and $\widetilde{\varrho}_{z_o}\in(\frac R2,R]$ separately. In the latter case, we use with ${\varrho}=\widetilde{\varrho}_{z_o}$ and the bound $\widetilde{\varrho}_{z_o}>\frac R2$ in order to estimate $\theta_{{\varrho}_{z_o}}=\theta_{\widetilde{\varrho}_{z_o}}\le c\lambda_o \le c\lambda$, which yields . In the alternative case $\widetilde{\varrho}_{z_o}\in(2{\varrho}_{z_o},\frac R2]$, the cylinder $Q_{\widetilde{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)$ is intrinsic by definition of $\widetilde{\varrho}_{z_o}$, and the two times enlarged cylinder $Q_{2\widetilde{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)$ is sub-intrinsic by . Therefore, assumptions and $_1$ of Lemma \[lem:theta\] are satisfied for $Q_{\widetilde{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)$. The application of the lemma yields $$\theta_{{\varrho}_{z_o}}^m \le \tfrac1{\sqrt2} \Bigg[ \biint_{Q_{\widetilde{\varrho}_{z_o}/2}^{(\theta_{{\varrho}_{z_o}})}(z_o)}\! \frac{|u|^{1+m}}{(\widetilde{\varrho}_{z_o}/2)^{\frac{1+m}{m}}} {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac12} + c \bigg[\biint_{Q_{2\widetilde{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} \! \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2}+|F|^2\big] {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac12}$$ with a constant $c=c(n,m,L)$. For the first term, we exploit the sub-intrinsic coupling with radii ${\varrho}={\varrho}_{z_o}$ and $s=\frac12\widetilde{\varrho}_{z_o}>{\varrho}_{z_o}$. 
For the estimate of the last integral, we recall that $\theta_{\widetilde{\varrho}_{z_o}}=\theta_{{\varrho}_{z_o}}$, which allows to use with $s=\widetilde{\varrho}_{z_o}$ and $\sigma=2\widetilde{\varrho}_{z_o}\in(s,R]$. This leads to the upper bound $$\begin{aligned} \theta_{{\varrho}_{z_o}}^m \le \tfrac1{\sqrt2} \theta_{{\varrho}_{z_o}}^m + c\,\lambda^m .\end{aligned}$$ Here, we re-absorb $\tfrac1{\sqrt2} \theta_{{\varrho}_{z_o}}^m$ into the left and obtain the claim in any case. Combining this with the identity , we obtain the bound $$\begin{aligned} \theta_{{\varrho}_{z_o}}^{2m} \le c\,\lambda^{2m} \le c\,\biint_{Q_{2{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^2\big] {\mathrm{d}x}{\mathrm{d}t}.\end{aligned}$$ This means that in the case $\widetilde{\varrho}_{z_o}>2{\varrho}_{z_o}$ assumption $_2$ is satisfied on $Q_{2{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}$ with a constant $K\equiv K(n,m,\nu,L)$. In conclusion, in any case we have shown that all hypotheses of Proposition \[prop:revhoelder\] are satisfied. Consequently, the proposition yields the desired reverse Hölder inequality $$\begin{aligned} \label{rev-hoelder} \biint_{Q_{2{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} & |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\le c\bigg[\biint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\bigg]^{\frac{1}{q}} + c\, \biint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} |F|^2 {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ for a constant $c=c(n,m,\nu,L)$ and with exponent $q=\max\{\frac{n(1+m)}{2(nm+1+m)},\frac12\}<1$. Estimate on super-level sets ---------------------------- So far we have shown that for every $\lambda$ as in and every $z_o\in \boldsymbol E(R_1,\lambda)$, there exists a cylinder $Q_{{\varrho}_{z_o}}^{(\theta_{z_o;{\varrho}_{z_o}})}(z_o)$ with $Q_{\hat c{\varrho}_{z_o}}^{(\theta_{z_o;{\varrho}_{z_o}})}(z_o)\subseteq Q_{R_2}$, for which the properties and , and the reverse Hölder type estimate are satisfied. This allows us to establish a reverse Hölder inequality for the distribution function of $|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2$ by a Vitali covering type argument. The precise argument is as follows. We define the super-level set of the inhomogeneity $F$ by $$\boldsymbol F(r,\lambda) := \Big\{z\in Q_{r}: \mbox{$z$ is a Lebesgue point of $F$ and $|F|>\lambda^{m}$}\Big\},$$ where again, the Lebesgue points have to be understood with respect to the cylinders constructed in Section \[sec:cylinders\]. 
Using and , we estimate $$\begin{aligned} \label{level-set-1} \lambda^{2m} &= \biint_{Q_{{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} \big[|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 + |F|^{2}\big] {\mathrm{d}}x{\mathrm{d}}t \nonumber\\ &\le c\,\Bigg[\biint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{1}{q}} + c\, \biint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} |F|^{2} {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\le c\,\eta^{2m}\lambda^{2m} + c\,\Bigg[\frac{1}{\big|Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\big|} \iint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\cap \boldsymbol E(R_2,\eta\lambda)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{1}{q}} \nonumber\\ &\quad+ \frac{c}{\big|Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\big|} \iint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\cap \boldsymbol F(R_2,\eta\lambda)} |F|^{2} {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ for a constant $c=c(n,m,\nu ,L)$ and any $\eta\in(0,1)$. We choose the parameter $\eta\in(0,1)$ in dependence on $n,m,\nu,$ and $L$ in such a way that $\eta^{2m}=\frac{1}{2c}$. This allows us to re-absorb the term $\frac12\lambda^{2m}$ into the left-hand side. For the estimate of the second last term, we apply Hölder’s inequality and , with the result $$\begin{aligned} \Bigg[\frac{1}{\big|Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\big|} & \iint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\cap \boldsymbol E(R_2,\eta\lambda)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{1}{q}-1} \\ &\le \bigg[\biint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2} {\mathrm{d}x}{\mathrm{d}t}\bigg]^{1-q} \le c\,\lambda^{2m(1-q)}.\end{aligned}$$ We use this to estimate the right-hand side of . The resulting inequality is then multiplied by $\big|Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\big|$. In this way, we arrive at $$\begin{aligned} \lambda^{2m}\big|Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\big| &\le c\iint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\cap \boldsymbol E(R_2,\eta\lambda)} \lambda^{2m(1-q)}|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\\ &\quad+ c \iint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\cap \boldsymbol F(R_2,\eta\lambda)} |F|^{2} {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ where $c=c(n,m,\nu,L)$. On the other hand, we bound the left-hand side from below by the use of . This leads to the inequality $$\begin{aligned} \lambda^{2m} &\ge c\, \biint_{Q_{\hat c{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ where we relied on the fact that $\hat c$ is a universal constant depending only on $n$ and $m$. 
Combining the two preceding estimates and using again that $\hat c=\hat c(n,m)$, we arrive at $$\begin{aligned} \label{level-est} \iint_{Q_{\hat c{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}}x{\mathrm{d}}t &\le c \iint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\cap \boldsymbol E(R_2,\eta\lambda)} \lambda^{2m(1-q)}|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}\nonumber\\ &\quad + c \iint_{Q_{4{\varrho}_{z_o}}^{(\theta_{{\varrho}_{z_o}})}(z_o)\cap \boldsymbol F(R_2,\eta\lambda)} |F|^{2} {\mathrm{d}x}{\mathrm{d}t}\end{aligned}$$ for a constant $c=c(n,m,\nu ,L)$. Since the preceding inequality holds for every center $z_o\in \boldsymbol E(R_1,\lambda)$, we conclude that it is possible to cover the super-level set $\boldsymbol E(R_1,\lambda)$ by a family $\mathcal F\equiv\big\{Q_{4{\varrho}_{z_o}}^{(\theta_{z_o;{\varrho}_{z_o}})}(z_o)\big\}$ of parabolic cylinders with center $z_o\in \boldsymbol E(R_1,\lambda)$, such that each of the cylinders is contained in $Q_{R_2}$, and that on each cylinder estimate is valid. An application of the Vitali type Covering Lemma \[lem:vitali\] provides us with a countable disjoint subfamily $$\Big\{Q_{4{\varrho}_{z_i}}^{(\theta_{z_i;{\varrho}_{z_i}})}(z_i)\Big\}_{i\in\N} \subseteq \mathcal F$$ with the property $$\boldsymbol E(R_1,\lambda) \subseteq \bigcup_{i=1}^\infty Q_{\hat c{\varrho}_{z_i}}^{(\theta_{z_i;{\varrho}_{z_i}})}(z_i) \subseteq Q_{R_2}.$$ We apply for each of the cylinders $Q_{4{\varrho}_{z_i}}^{(\theta_{z_i;{\varrho}_{z_i}})}(z_i)$ and add the resulting inequalities. Since the cylinders $Q_{4{\varrho}_{z_i}}^{(\theta_{z_i;{\varrho}_{z_i}})}(z_i)$ are pairwise disjoint, we obtain $$\begin{aligned} \iint_{\boldsymbol E(R_1,\lambda)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}x}{\mathrm{d}t}&\le c\iint_{\boldsymbol E(R_2,\eta\lambda)} \lambda^{2m(1-q)}|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}+ c\iint_{\boldsymbol F(R_2,\eta\lambda)} |F|^{2} {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ with $c=c(n,m,\nu,L)$. In order to compensate for the fact that super-level sets of different levels appear on both sides of the preceeding estimate, we need an estimate on the difference $\boldsymbol E(R_1,\eta\lambda)\setminus \boldsymbol E(R_1,\lambda)$. However, on this set we can simply estimate $|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2\le\lambda^{2m}$, which leads to the bound $$\begin{aligned} \iint_{\boldsymbol E(R_1,\eta\lambda)\setminus \boldsymbol E(R_1,\lambda)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}}x{\mathrm{d}}t &\le \iint_{\boldsymbol E(R_2,\eta\lambda)} \lambda^{2m(1-q)}|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}.\end{aligned}$$ Adding the last two inequalities, we obtain a reverse Hölder type inequality for the distribution function of $|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2$ for levels $\eta\lambda$. In this inequality we replace $\eta\lambda$ by $\lambda$ and recall that $\eta\in(0,1)$ was chosen as a universal constant depending only on $n,m,\nu$, and $L$. 
In this way, we obtain for any $\lambda\ge \eta B\lambda_o =:\lambda_1$ that $$\begin{aligned} \label{pre-1} \iint_{\boldsymbol E(R_1,\lambda)} & |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^2 {\mathrm{d}}x{\mathrm{d}}t \nonumber\\ &\le c\iint_{\boldsymbol E(R_2,\lambda)} \lambda^{2m(1-q)}|D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2q} {\mathrm{d}x}{\mathrm{d}t}+ c \iint_{\boldsymbol F(R_2,\lambda)} |F|^{2} {\mathrm{d}x}{\mathrm{d}t}\end{aligned}$$ with a constant $c=c(n,m,\nu ,L)$. This is the desired reverse Hölder type inequality for the distribution function of $D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}$. Proof of the gradient estimate ------------------------------ Once is established the final higher integrability result follows by integrating over the range of possible values $\lambda$. Here, one has to take into account that the existence of certain integrals appearing in the proof are not guaranteed in advance. This technical point can be overcome by use of truncation methods. Since all arguments have been elaborated in detail for example in [@BDKS-higher-int; @BDKS-doubly] we omit the details, and only state the final outcome. There exists ${\varepsilon}_o={\varepsilon}_o(n,m,\nu,L)\in (0,1]$ such that for any $0<{\varepsilon}<{\varepsilon}_1:=\min\{{\varepsilon}_o, \sigma-2\}$ we have $$\begin{aligned} \biint_{Q_{R}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2+{\varepsilon}} {\mathrm{d}}x{\mathrm{d}}t \le c\, \lambda_o^{{\varepsilon}m} \biint_{Q_{2R}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2} {\mathrm{d}x}{\mathrm{d}t}+ c\,\biint_{Q_{2R}} |F|^{2+{\varepsilon}} {\mathrm{d}x}{\mathrm{d}t}.\end{aligned}$$ At this stage it remains to bound $\lambda_o$. This can be achieved by an application of the energy estimate from Lemma \[lem:energy\] with $\theta=1$ and $a=0$ and Young’s inequality. Indeed, we have $$\begin{aligned} \lambda_o \le c \Bigg[ 1+\biint_{Q_{8R}} \bigg[\frac{{|u|}^{1+m}}{R^{\frac{1+m}{m}}} + |F|^{2}\bigg] {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{d}{2m}} ,\end{aligned}$$ where $c=c(n,m,\nu,L)$. Plugging this into the preceding estimate, we arrive at $$\begin{aligned} \biint_{Q_{R}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2+{\varepsilon}} {\mathrm{d}}x{\mathrm{d}}t &\le c \Bigg[ 1+\biint_{Q_{8R}} \bigg[\frac{|u|^{1+m}}{R^{\frac{1+m}{m}}} + |F|^{2}\bigg] {\mathrm{d}x}{\mathrm{d}t}\Bigg]^{\frac{{\varepsilon}d}{2}} \biint_{Q_{2R}} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2} {\mathrm{d}x}{\mathrm{d}t}\\ &\qquad\qquad+ c\,\biint_{Q_{2R}} |F|^{2+{\varepsilon}} {\mathrm{d}x}{\mathrm{d}t},\end{aligned}$$ for a constant $c=c(n,m,\nu,L)$. The asserted reverse Hölder inequality now follows by a standard covering argument. More precisely, we cover $Q_R$ by finitely many cylinders of radius $\frac R8$, apply the preceding estimate on each of the smaller cylinders and take the sum. This yields the same estimate as above, but with integrals over $Q_{2R}$ instead of $Q_{8R}$. This completes the proof of Theorem \[thm:higherint\]. Proof of Corollary \[cor:higher-int\] ------------------------------------- We consider a standard parabolic cylinder $C_{2R}(z_o):= B_{2R}(x_o)\times(t_o-(2R)^2,t_o+(2R)^2)\subseteq\Omega_T$ and rescale the problem via $$\left\{ \begin{array}{c} v(x,t):=u(x_o+Rx,t_o+R^2t) \\[5pt] \mathbf B(x,t,u,\xi):=R\,\mathbf A\big(x_o+Rx,t_o+R^2t,u,\tfrac1{R}\xi\big) \\[5pt] G(x,t):=R\,F(x_o+Rx,t_o+R^2t), \end{array} \right.$$ whenever $(x,t)\in C_{2}$ and $(u,\xi)\in\R^N\times\R^{Nn}$. 
The rescaled function $v$ is a weak solution of the differential equation $$\partial_tv-\mathrm{div}\,\mathbf B(x,t,v,D{\bm{v^{\mbox{\unboldmath{\scriptsize$m$}}}}})=\mathrm{div}\,G \qquad\mbox{in $\widetilde Q:=Q_{2^\frac{2m}{1+m}}\subseteq C_{2}$}$$ in the sense of Definition \[def:weak\_solution\]. Moreover, the rescaled vector-field $\mathbf B$ satisfies assumptions . Consequently, we can apply estimate to $v$ on the cylinder $\widetilde Q$, which gives $$\begin{aligned} \biint_{\frac12\widetilde Q} & |D{\bm{v^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2+{\varepsilon}} {\mathrm{d}}x{\mathrm{d}}t \\ &\le c\bigg[ 1+\biint_{\widetilde Q} \big[{|v|}^{1+m} + |G|^2\big]{\mathrm{d}}x{\mathrm{d}}t \bigg]^{\frac{{\varepsilon}d}{2}} \biint_{\widetilde Q} |D{\bm{v^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2} {\mathrm{d}x}{\mathrm{d}t}+ c\,\biint_{\widetilde Q} |G|^{2+{\varepsilon}} {\mathrm{d}x}{\mathrm{d}t}, \end{aligned}$$ for every ${\varepsilon}\in(0,{\varepsilon}_1]$, where $c=c(n,m,\nu,L)$, and we abbreviated $\frac12\widetilde Q:=Q_{2^{\frac{m-1}{1+m}}}$. Scaling back and using the fact that the cylinder $C_{\gamma R}$ with $\gamma:=2^\frac{m-1}{2m}$ is contained in the re-scaled version of the cylinder $\frac12\widetilde Q$ we deduce $$\begin{aligned} R^{2+{\varepsilon}} & \biint_{C_{\gamma R}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2+{\varepsilon}} {\mathrm{d}}x{\mathrm{d}}t \\ &\le c\,R^2\bigg[ 1+\biint_{C_{2R}(z_o)} \big[{|u|}^{1+m} + R^2|F|^2\big] {\mathrm{d}}x{\mathrm{d}}t \bigg]^{\frac{{\varepsilon}d}{2}} \biint_{C_{2R}(z_o)} |D{\bm{u^{\mbox{\unboldmath{\scriptsize$m$}}}}}|^{2} {\mathrm{d}x}{\mathrm{d}t}\\ &\quad + c\, R^{2+{\varepsilon}} \biint_{C_{2R}(z_o)} |F|^{2+{\varepsilon}} {\mathrm{d}x}{\mathrm{d}t}. \end{aligned}$$ The asserted estimate on the pair $C_R$, $C_{2R}$ of standard parabolic cylinders now follows with a standard covering argument. This finishes the proof of Corollary \[cor:higher-int\]. [99]{} V. Bögelein. Higher integrability for weak solutions of higher order degenerate parabolic systems. , 33 (2008), no. 2, 387–412. V. Bögelein, F. Duzaar, J. Kinnunen, and C. Scheven. The higher integrability of weak solutions doubly nonlinear parabolic systems. 2018, arXiv:1810.06039. V. Bögelein, F. Duzaar, R. Korte, and C. Scheven. The higher integrability of weak solutions of porous medium systems. , DOI: https://doi.org/10.1515/anona-2017-0270. V. Bögelein and M. Parviainen. Self-improving property of nonlinear higher order parabolic systems near the boundary. , 17 (2010), no. 1, 21–54. L.A. Caffarelli, J. L. Vázquez, and N. I. Wolanski. Lipschitz continuity of solutions and interfaces of the N-dimensional porous medium equation. , 36(2) (1987), 373–401. E. DiBenedetto. . Springer-Verlag, Universitytext xv, 387, New York, NY, 1993. Emmanuele DiBenedetto and Avner Friedman. Hölder estimates for nonlinear degenerate parabolic systems. , 357:1–22, 1985. E. DiBenedetto, U. Gianazza, and V. Vespri. [*Harnack’s inequality for degenerate and singular parabolic equations*]{}. Springer Monographs in Mathematics, 2011. L. Diening, P. Kaplický, and S. Schwarzacher. BMO estimates for the $p$-Laplacian, , 75 (2012), no. 2, 637–650. H. Federer. Die Grundlehren der mathematischen Wissenschaften, Band 153, Springer-Verlag, New York, 1969. U. Gianazza and S. Schwarzacher. Self-improving property of degenerate parabolic equations of porous medium-type. , to appear. U. Gianazza and S. Schwarzacher. Self-improving property of the fast diffusion equation. 2018, arXiv:1810.04557. 
M. Giaquinta. Multiple Integrals in the Calculus of Variations and Nonlinear Elliptic Systems. Princeton University Press, Princeton, 1983. M. Giaquinta and M. Struwe. On the partial regularity of weak solutions of nonlinear parabolic systems. , 179 (1982), no. 4, 437–451. E. Giusti. . World Scientific Publishing Company, Tuck Link, Singapore, 2003. J. Kinnunen and J. L. Lewis. Higher integrability for parabolic systems of $p$-Laplacian type. , 102 (2000), no. 2, 253-271. N. G. Meyers and A. Elcrat. Some results on regularity for solutions of non-linear elliptic systems and quasi-regular functions. , 42 (1975), 121–136. M. Parviainen. Global gradient estimates for degenerate parabolic equations in nonsmooth domains. (4), 188 (2009), no. 2, 333–358. S. Schwarzacher. Hölder-Zygmund estimates for degenerate parabolic systems. , 256 (2014), no. 7, 2423–2448. J. Vázquez. Equations of porous medium type. Oxford Lecture Series in Mathematics and its Applications, 33. Oxford University Press, Oxford, 2006. J. Vázquez. Mathematical theory. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, Oxford, 2007.
--- abstract: 'We present a systematic analysis of molecular oxygen (O$_2$) adsorption trends on bimetallic Pt-Ni clusters and their monometallic counterparts supported on MgO(100), by means of periodic DFT calculations for sizes from 25 up to 58 atoms. O$_2$ adsorption was studied on a variety of inequivalent sites for different structural motifs, such as truncated octahedral (TO), cuboctahedral (CO), icosahedral (Ih) and decahedral (Dh) geometries. We found that O$_2$ prefers to bind on top of two metal atoms, parallel to the cluster, with an average chemisorption energy of 1.09 eV (Pt-Ni), 1.07 eV (Pt) and 2.09 eV (Ni), respectively. The largest adsorption energy values are found along the edges between two neighbouring (111)/(111) and (111)/(100) facets, while FCC and HCP sites located on the (111) facets may show a chemisorption value lower than 0.3 eV, where fast O$_2$ dissociation often occurs. Our results show that, even though it is difficult to disentangle the geometrical and electronic effects on the oxygen molecule adsorption, there is a strong correlation between the calculated generalized coordination number (GCN) and the chemisorption map. Finally, the inclusion of dispersion corrections (DFT-D) leads to an overall increase in the calculated adsorption energy values but with a negligible alteration of the general O$_2$ adsorption trends.' author: - 'Lauro Oliver Paz-Borbón$^{1,2,\*}$ and Francesca Baletto$^1$' title: 'O$_2$ adsorption trends on small supported PtNi clusters' --- [^1] [^2] Introduction ============ The development of novel mobility technologies for cleaner vehicle emissions is essential to mitigate the current high levels of pollution produced by internal combustion engines. One foreseeable technology involves the electrochemical conversion of energy using polymer electrolyte membrane fuel cells (PEMFC)[@ShaoCHEMICALREVIEW2016; @DebeNATURE2012]. Used in cars, fuel cells currently provide the power needed to travel even long distances. However, the costly use of platinum (Pt) as an electrocatalyst inside PEMFC has triggered a vast search for cheaper Pt-based alloys. One proposed solution is to combine it with other late transition metals such as nickel (Ni), cobalt (Co), chromium (Cr), copper (Cu) and iron (Fe)[@GreeleyNChem2009; @StrasserNChem2010; @StamenkovicNMAT2007]. The goal of the alloyed material is to improve the catalytic activity towards the oxygen reduction reaction (ORR) offered by commercial Pt cathode catalysts without the scaling costs. In this respect, recent studies have shown that a five-fold decrease in the amount of Pt currently used in PEMFC stacks is needed, in terms of Pt costs and scarcity, in order to reach levels suitable for mass production and commercialisation in light-duty vehicles [@GasteigerAPPLIEDCatalysis2005]. Among these alloys, a promising one corresponds to Pt-Ni. Recent experimental studies performed by Stamenkovic *et al.* have shown an exceptional catalytic activity towards the ORR for an extended Pt$_3$Ni(111) surface: 10 times higher than that of a monometallic Pt(111) surface, and up to 90 times higher than that of the carbon-supported Pt catalysts used in PEMFC [@StamenkovicSCIENCE2007]. This was attributed to an unusual shift in the Pt-Ni *d*-band centre along with a peculiar atomic segregation pattern involving a Pt-rich outer layer.
Furthermore, the large surface areas offered by nanoparticles (NPs) make them ideal as catalysts for fuel cell applications. Recent experimental works have focused on the preparation of small Pt-Ni NPs ($\leq$ 5 nm) having well-controlled octahedral geometries (FCC-type atomic arrangements) along with extensive (111) facets, aiming for catalytic properties similar to those of the extended surface[@StrasserSCIENCE2015; @CarpenterJACS2012; @CuiNATURE2013; @ChoiACSNANO2014; @ChoiNanoLetters2013; @CuiNanoLett2012; @HuangENERGY2014; @ZhangMATERIALSCHEM2014; @ZhangJACS2014; @BaoJPCC2013]. From a theoretical point of view, Monte Carlo (MC) simulations have previously addressed the structural stability of cuboctahedral Pt-Ni NPs[@WangMONTECARLO2005]. Using a many-body potential for particles ranging from 2.5 to 5 nm, MC simulations at 600 K have shown that Pt-Ni tends to form *surface-sandwich* structures, with a segregation pattern in which Pt atoms are enriched in the outermost and third shells while the Ni atoms are enriched in the second shell. These results suggest an economical catalyst design with Pt atoms located at the outermost shells and Ni occupying core positions. Similar mixed segregation patterns were obtained, though for small Pt-Ni clusters of less than 20 atoms, using a genetic algorithm global optimization approach via density functional theory (DFT) calculations [@GarzonPTNI2009], while in more recent work we have shown that sub-nanometre gas-phase Pt-Ni clusters of up to 55 atoms can bind O$_2$ too strongly (above 1 eV) due to substantial geometrical reconstruction of the metal-metal bridge underneath the molecule, making those clusters not ideal for catalysing the sluggish ORR compared to a Pt(111) surface[@ConoPCCP2011]. More recent DFT calculations performed by Fortunelli and co-workers have reported a fully-dealloyed Pt$_3$Ni$_7$ particle ($\sim$ 8 nm) whose surface exhibits triangulated arrangements, as on a regular Pt(111) surface, while significantly reducing the rate-determining ORR step[@FortunelliCHEMSCIE2015]. In this work we performed extensive periodic DFT calculations to quantify O$_2$ adsorption on a variety of Pt-Ni clusters supported on MgO(100) as well as on their monometallic (Pt, Ni) counterparts. Using the calculated adsorption energies (E$_{chem}$) we are able to construct an O$_2$ chemisorption map over supported Pt-Ni, Pt and Ni clusters, relevant to fuel cells and heterogeneous catalysis. Furthermore, the interfacial low-index surface displayed at the cluster/oxide interface - where the cluster can expose either (100) or (111) facets in contact with the oxide - was also analysed. For the bimetallic case, attention is placed on Ni doping at cluster core sites (*i.e.* core-shell clusters), though in some cases Ni atoms are inevitably located at surface sites due to cluster size (TO$_{25}$) or geometrical arrangement (Dh$_{58}$). Overall, we identify the (111)/(111) and (111)/(100) cluster edges as the sites where calculated E$_{chem}$ values are the strongest (above 1 eV), which tend to have a low GCN value ($<$ 6); while FCC and HCP sites located at (111) facets have the weakest E$_{chem}$ values (less than 0.64 eV), involving GCN $>$ 8. As the GCN can distinguish between otherwise equivalent sites, it allows us to rationalise calculated E$_{chem}$ values among different clusters, since weaker interactions occur at sites where GCN values are higher.
It has been recently shown that the GCN further establishes a link between geometry, adsorption and activity[@CallePHYSCHHEMLETT2014; @CalleANGEWANDTE014; @CalleNATURECHEM2015; @CalleSCIENCE2015]. Methodology =========== We considered 4 different structural motifs, ranging from FCC-type arrangements such as truncated octahedra (TO) and cuboctahedra (CO), to non-crystallographic arrangements such as icosahedra (Ih) and decahedra (Dh), with sizes ranging from 25 up to 58 atoms. These structures are constructed from highly symmetric models: 38(TO)$_{\alpha}$, 55(CO)$_{\alpha}$, 55(Ih)$_{\alpha}$ and 75(Dh)$_{\alpha}$. Half-cuts were performed on these structures to obtain a new set of clusters with exotic cluster/oxide interfaces: 25(TO)$_{\alpha}$, 33(Ih)$_{\alpha}$ and 58(Dh)$_{\alpha}$. Here, the subscript ${\alpha}$ stands for the Miller indexes of the cluster-oxide interface initially exposed. In the following, either (100) or (111) facets have been chosen. Pt-Ni clusters display a deliberate Pt$_{shell}$-Ni$_{core}$ chemical arrangement. Depending on size and structural motif, some Ni atoms are found to occupy surface sites, for example in Pt$_{20}$Ni$_{5}$(TO)$_{100}$, Pt$_{21}$Ni$_{12}$(Ih)$_{111}$ as well as Pt$_{47}$Ni$_{11}$(Dh)$_{111}$ (see Figs. \[fig:sites1\] and \[fig:sites2\]). After construction in the gas-phase, the metal clusters are subsequently placed over the MgO(100) substrate, maximising the number of metal-oxygen (M-O) bonds. An initial height of $\sim$ 2 Å between the cluster and the substrate is employed prior to the DFT relaxations (the optimal DFT heights of Pt and Ni adatoms adsorbed on surface O sites are 1.985 Å and 1.796 Å, respectively). The substrate thus acts as a geometrical constraint affecting the features of each metallic cluster. As part of the cluster/oxide interfacial geometry characterisation, we have counted the number of metal atoms at the interface. Furthermore, as a measure of the metal interface *roughness*, we have calculated the standard deviation of the corresponding interfacial atomic layer height, information which can be found in the Supplementary Material along with other calculated quantities. We must stress that the considered supported Pt, Ni and Pt-Ni clusters, though locally DFT-relaxed, do not correspond to global minima on the potential energy surface (PES). However, they act as geometrical scenarios to understand their O$_2$ adsorption properties. We employ the plane-wave implementation of Density Functional Theory (DFT) within the Quantum Espresso (QE) code [@QUANTUM_REF] with the PBE exchange-correlation (xc) functional[@PBEfunctional1996]. A combination of Rappe-Rabe-Kaxiras-Joannopoulos [@RappePSEUDOPOT1991] and Vanderbilt[@VanderbiltPSEUDOPOT2014] ultrasoft pseudopotentials - including non-linear core corrections - is used to describe the Pt(5d$^9$6s$^1$), Ni(3d$^9$4s$^1$), Mg(2p$^6$3s$^1$3p$^{0.75}$) and O(2s$^2$2p$^4$) atoms. Kinetic energy cutoffs of 45 and 360 Ry are used for the wave-functions and the charge density, respectively. To improve convergence, a Marzari-Vanderbilt smearing value of 0.001 Ry is used. The pristine MgO(100) surface is modelled using a 6$\times$6 three-layer slab at our calculated PBE bulk lattice constant (4.238 Å). Although this is slightly larger than the experimental value of 4.21 Å, it is, however, in close agreement with values calculated with PBE (4.26 and 4.30 Å) and hybrid (PBE0) functionals (4.21 Å)[@Broqvist2004; @Paier2006MgO].
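The computational setup described in this section lends itself to scripting. The sketch below is only an illustration of how a supported TO$_{38}$ cluster on a 6$\times$6 three-layer MgO(100) slab could be assembled and coupled to Quantum Espresso through ASE with the parameters quoted above; it is not the exact workflow used in this work, the pseudopotential file names, vacuum size, convergence threshold and cluster placement are placeholders, and the calculator interface details depend on the ASE version.

```python
# Illustrative sketch (not the production workflow of this work): build a
# supported TO38 cluster on MgO(100) and attach a Quantum Espresso calculator
# with the parameters quoted in the text. Pseudopotential names are placeholders.
from ase.build import bulk, surface
from ase.cluster import Octahedron
from ase.calculators.espresso import Espresso

# Rocksalt MgO (conventional cubic cell) at the PBE lattice constant;
# three-layer (100) slab repeated into a 6x6 supercell.
mgo_bulk = bulk('MgO', 'rocksalt', a=4.238, cubic=True)
slab = surface(mgo_bulk, (1, 0, 0), layers=3, vacuum=12.0)  # vacuum: placeholder
slab = slab.repeat((6, 6, 1))

# 38-atom truncated octahedron: length = 3*cutoff + 1 gives the regular TO.
cluster = Octahedron('Pt', 4, 1)

# Centre the cluster laterally and place it ~2 A above the topmost surface layer.
z_top = slab.positions[:, 2].max()
cluster.positions[:, 2] += (z_top + 2.0) - cluster.positions[:, 2].min()
cluster.positions[:, :2] += slab.cell.lengths()[:2] / 2.0 - cluster.positions[:, :2].mean(axis=0)
system = slab + cluster

calc = Espresso(
    pseudopotentials={'Pt': 'Pt.pbe-rrkjus.UPF',   # placeholder file names
                      'Mg': 'Mg.pbe-rrkjus.UPF',
                      'O': 'O.pbe-rrkjus.UPF'},
    input_data={
        'system': {'ecutwfc': 45, 'ecutrho': 360, 'nspin': 2,
                   'occupations': 'smearing',
                   'smearing': 'marzari-vanderbilt', 'degauss': 0.001},
        'electrons': {'conv_thr': 1.0e-6},   # illustrative convergence threshold
    },
    kpts=None,  # Gamma-point only sampling
)
system.calc = calc
# system.get_potential_energy()  # triggers the spin-polarised PBE calculation
```

The individual O$_2$ adsorption configurations of Figs. \[fig:sites1\] and \[fig:sites2\] would then be generated by adding the molecule parallel to the chosen metal-metal bond before relaxation.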
The MgO(100) substrate has been considered on the basis that it is often used in experiments, it is a well-characterized and rigid substrate that interacts strongly with metallic clusters, and it can be accurately treated with standard ab-initio methods. Its checkerboard (100) surface allows us to analyse any effects of flat versus corrugated oxide-cluster interfaces on the calculated E$_{chem}$ values. The size of the supercell (17.9803 Å$\times$ 17.9803 Å$\times$ 29.2380 Å) provides a sufficiently large vacuum region, necessary to avoid any spurious interactions between neighbouring periodic images. All DFT calculations have been performed spin-polarised. The Brillouin zone was sampled at the Gamma point only. Empirical dispersion corrections are included via single-point calculations on PBE-relaxed configurations using the DFT+D functional, which includes a pairwise additive C$_6$/R$^6$ correction term[@GrimmeCOMPCHEM2004]. O$_2$ adsorption (chemisorption) energies ($E_{chem}$) are calculated as total energy differences between the interacting configuration (E$_{O_2 + cluster/MgO}$), the lowest-energy supported bare cluster (E$_{cluster/MgO}$), and the O$_2$ molecule in the gas-phase: $$E_{chem} = - (E_{O_2 + cluster/MgO} - E_{cluster/MgO} - E_{O_{2, gas phase}}) \label{eq1}$$ where positive E$_{chem}$ values indicate an exothermic O$_2$ adsorption. We also quantify the *core* and *shell* strain of the Pt-Ni, Pt and Ni clusters as the percentage difference between the average nearest-neighbour (NN) distance of those atoms occupying core (d$_{NN}^{c}$) and shell (d$_{NN}^{s}$) positions with respect to the PBE-calculated bulk NN distances of Pt (2.812 Å) and Ni (2.487 Å) (d$_{NN}^{PBE}$), while for Pt-Ni we considered the Ni-Pt$_3$ phase with an experimental value of 2.718 Å, see ref.[@EngelkePHD2010]. Negative values in Eq. \[eq2\] indicate a compressive strain felt by the cluster: $$\begin{array}{cc} s_{core} = {\frac{d_{NN}^{c} - d_{NN}^{PBE}}{d_{NN}^{PBE}}} \times 100 \\ \\ s_{shell} = {\frac{d_{NN}^{s} - d_{NN}^{PBE}}{d_{NN}^{PBE}}} \times 100 \\ \end{array} \label{eq2}$$ We employ another quantity to geometrically characterise the adsorption site, the generalized coordination number (GCN). Introduced by Sautet and co-workers for monometallic Pt systems, the GCN is a descriptor as robust as the *d*-band center, which establishes a direct link between geometry, adsorption map and activity[@CallePHYSCHHEMLETT2014; @CalleANGEWANDTE014; @CalleNATURECHEM2015; @CalleSCIENCE2015]. It helps us differentiate each adsorption site regardless of the size, shape and composition of the cluster systems, in order to highlight adsorption trends across a wide variety of structures. The GCN can be defined as the coordination number at the adsorption site weighted by the overall coordination of the neighbouring atoms: $$GCN(i) = \sum_{j=1}^{n_i} CN(j) / CN_{max} \label{eq3}$$ where the sum runs over all $n_i$ neighbouring atoms of the atom (or site) $i$ and each neighbour is weighted according to its coordination number divided by the maximum coordination number of the adsorption site.
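To make the definitions above concrete, the following minimal sketch (Python/numpy) evaluates Eqs. \[eq1\]-\[eq3\] from quantities that are assumed to have been extracted from the DFT outputs beforehand (total energies and relaxed positions); the 3.1 Å neighbour cutoff and the array names are illustrative choices, not values prescribed by this work.

```python
# Minimal sketch of Eqs. [eq1]-[eq3]; energies in eV, distances in Angstrom.
# All inputs are assumed to be extracted from the DFT runs; the 3.1 A bond
# cutoff used to define nearest neighbours is an illustrative choice.
import numpy as np

def e_chem(e_o2_cluster_mgo, e_cluster_mgo, e_o2_gas):
    """O2 adsorption energy of Eq. [eq1]; positive values mean exothermic."""
    return -(e_o2_cluster_mgo - e_cluster_mgo - e_o2_gas)

def strain_percent(d_nn_avg, d_nn_pbe):
    """Core or shell strain of Eq. [eq2] (negative = compressive)."""
    return (d_nn_avg - d_nn_pbe) / d_nn_pbe * 100.0

def neighbours(positions, i, cutoff=3.1):
    """Indices of atoms within `cutoff` of atom i, excluding i itself."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    return [j for j in range(len(positions)) if j != i and d[j] < cutoff]

def gcn(positions, site_atoms, cn_max, cutoff=3.1):
    """Generalized coordination number of an adsorption site, Eq. [eq3].

    `site_atoms` holds the indices of the metal atoms defining the site
    (two atoms for the O2 bridge geometry); each neighbour of the site is
    weighted by its own coordination number and the sum is divided by cn_max.
    """
    site_nbrs = set()
    for i in site_atoms:
        site_nbrs.update(neighbours(positions, i, cutoff))
    site_nbrs -= set(site_atoms)
    return sum(len(neighbours(positions, j, cutoff)) for j in site_nbrs) / cn_max
```

For a single-atom top site on an FCC metal the natural normalisation would be CN$_{max}$ = 12 (the bulk coordination); the value appropriate for the O$_2$ bridge geometry considered in this work is specified next.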
We keep *CN$_{max}$* = 18 as the O$_2$ molecule is adsorbed at a bridge site (on top of 2 Pt$_{shell}$ atoms), as shown recently for larger supported Pt-Ni clusters[@AsaraACSCATALYSIS2016].\ Results and Discussion ====================== Due to the wide variety of Pt-Ni geometries and sizes we were able to identify several inequivalent sites for O$_2$ adsorption. These O$_2$ adsorption sites have been grouped into different *families*, as shown in Figs. \[fig:sites1\] and \[fig:sites2\]. Prior to the DFT relaxations, the O$_2$ molecule was always placed horizontally towards the cluster surface (*bridging* between two metal atoms), as recent theoretical studies show that this is the preferred adsorption configuration [@TaoJPCB2001; @JenningsNANOSCALE2014; @ShaoNANOLETTERS2011]. Our analysis thus focuses mostly on the differences between the strongest and weakest O$_2$ adsorption sites found for supported Pt-Ni clusters, using the bare Pt and Ni cases as references. Overall adsorption trends ------------------------- An O$_2$ chemisorption map over supported Pt, Ni and Pt-Ni clusters is shown in Fig. \[fig:Echems\]. Here, calculated E$_{chem}$ values are plotted as a function of all inequivalent sites considered. Previously calculated PBE reference values for O$_2$ adsorption on the flat Pt(111) surface are highlighted by the red area (0.46 up to 0.86 eV)[@SautetPRB1999; @EichlerPRL1997; @JenningsNANOSCALE2014; @McEwenPCCP2012], including experimental values (0.4 - 0.5 eV)[@GlandSURFSCIE1980; @SteiningerSURFSCIE1982A], as well as values for the stepped Pt(321) surface (1.23 to 1.56 eV)[@BrayLANGMUIR2011]. From Fig. \[fig:Echems\], we identify 4 Pt-Ni adsorption sites which lead to E$_{chem}$ values resembling those of the Pt(111) surface: the *FCC*, *HCP*, *between edges* (111)/(100) and *no vertex* sites. In some cases, calculated E$_{chem}$ values are $<$ 0.5 eV, particularly at *FCC* and *HCP* sites, where in most of the DFT relaxations the O$_2$ molecule undergoes fast dissociation. We must stress that we only analyse O$_2$ adsorption trends on a variety of inequivalent sites, in order to understand O$_2$ adsorption across a variety of Pt-Ni cluster structures. This represents the first step in the complex oxygen reduction reaction (ORR), which consists of a series of protonation processes of the dissociation products (O, OH, OOH and H$_2$O$_2$) to form the final H$_2$O product[@PEMFuelCells2008].\ The relevance of (111) facets on cluster structures as candidates for low-barrier O$_2$ dissociation has been recently discussed for truncated octahedral (TO) gas-phase clusters of 38, 79 and 116 atoms by Jennings *et al.*[@JenningsNANOSCALE2014; @JenningsPCCP2014; @JenningsJPCC2015]. According to their work, stronger $E_{chem}$ values are calculated for pure Pt$_{38}$ on both FCC and HCP sites (1.79 and 1.84 eV) compared to a core-shell platinum-titanium Pt$_{32}$Ti$_{6}$ bimetallic cluster (0.38 and 0.74 eV). However, they reported near barrier-free dissociation at the FCC/HCP sites for pure Pt clusters (0.00 and 0.04 eV) with slightly larger barriers for the bimetallic case (0.62 and 0.34 eV).
The behaviour of the monometallic case was explained by an easily distorted Pt(111) facet - namely the central Pt atom - facilitating O$_2$ dissociation, while for the bimetallic case the increase in the rigidity of this (111) facet was due to Ti-alloying at the core. Similar trends were also reported for larger TO$_{79}$ particles[@JenningsNANOSCALE2014]. Overall, the rigidity of the (111) facet was reported to decrease for 3d metals from groups 4-8 (Ti to Fe), while distortions similar to those of the pure Pt cluster were reported for groups 10-12 (Ni, Cu and Zn).\ We monitored in detail how the metal-metal (M-M) bond length at the O$_2$ adsorption site changes for the three different systems, Fig. \[fig:Data\] (a). Upon O$_2$ adsorption, the Pt-Pt bond is stretched as a general trend, due to the “softer” character of Pt, in order to accommodate the incoming O$_2$ molecule. This is noticeable at edge sites such as *along edges*, both (111)/(111) and (111)/(100), with $E_{chem}$ values of 1.60 - 1.94 eV. Particularly at the TO$_{38}$ (111) facet, the central Pt atom involved in O$_2$ adsorption on Pt-Ni clusters is “pulled up”. For Ni clusters, Ni-Ni bonds remain rather stable upon O$_2$ adsorption, clustered around 2.40 - 2.70 Å, while it is the O-O bond of the molecule which is elongated - in some cases, stretched to the point of dissociation - as shown in Fig. \[fig:Data\] (b). Having a Pt shell, the M-M bond lengths of bimetallic Pt-Ni clusters resemble the values calculated for monometallic Pt. From Fig. \[fig:Data\] (a), we notice that the M-M bond lengths in Pt-Ni, Pt and Ni have an “L”-shaped distribution. A linear behaviour is observed where shortening the M-M bond corresponds to an E$_{chem}$ increment; when a critical compression is reached there is an abrupt change, and small changes in E$_{chem}$ are associated with very different M-M bond lengths. However, M-M bond elongation is not the sole cause of larger E$_{chem}$; it is connected to both the stretching of the molecule O-O bond and the adsorption site. Fig. \[fig:Data\] (b) shows only the O-O bond length range for molecular adsorption. From this plot, there seem to be two regimes for Pt-Ni: one where the E$_{chem}$ values increase from 0.18 up to nearly 1.2 eV while the O-O bond decreases (from 1.43 Å down to 1.37 Å), namely at FCC/HCP sites for TO/CO structures and the *no-vertex* site of Ih clusters. A second regime, starting roughly at 1.3 eV up to 2.1 eV, is observed where the O-O bond length increases from 1.37 Å up to 1.41 Å, mostly at *along edges* and *5-fold vertex* sites. However, this behaviour is more erratic for monometallic Pt clusters, while Ni clusters involve O-O bond lengths with E$_{chem}$ above 1.5 eV.\ Fig. \[fig:Data\] (c) shows both the calculated *core* and *shell* strain within the cluster, as a direct link between nanoparticle (NP) reactivity and compressive strain has recently been shown, particularly in bimetallic cases[@StrasserNChem2010; @ShaoNANOLETTERS2011; @HernandezNATURECHEM2014]. The *core* strain for Pt and Pt-Ni bimetallic clusters is rather large, particularly for the bimetallic case where compressive values can reach around -10$\%$, as these clusters are still small enough that they have not yet developed the bond lengths of the bulk.
Ni cluster values show a modest compression ($<$ -2$\%$), except for the TO$_{25}$(100) structure, where the 12 Ni atoms at the cluster/oxide interface need to match 12 O surface sites, with average interface bond lengths of 2.52 Å compared to the average Ni-Ni cluster bond length of 2.45 Å; this enlarges the internal Ni-Ni core distances, leading to positive core strain values. Similar trends are seen for the Ih$_{33}$(111), which, due to its large and irregular cluster/oxide interface, needs to stretch the Ni-Ni bonds at the interface and core positions. Yet, it is difficult to infer a direct correlation with the calculated $E_{chem}$ values, except that we observe a trend where Pt-Ni clusters having a small number of atoms at the cluster/oxide interface ($<$ 8) tend to display a large core strain, while for those clusters with a larger number of interfacial atoms the *core* strain tends to decrease to approximately -4$\%$. Regarding the *shell* strain - see Fig. \[fig:Data\] (d) - we observe that Pt-Ni clusters display smaller strain values compared to monometallic Pt clusters. There is a small window where FCC and HCP sites on Pt$_{32}$Ni$_{6}$TO(111) as well as those on Pt$_{42}$Ni$_{13}$Ih(111) have a slightly smaller ($<$ -1 $\%$) *shell* strain, leading to E$_{chem}$ values lower than 0.5 eV.\ Bader charge transfer analysis was performed for Pt-Ni, Pt and Ni clusters at the *best* (strong) and *worst* (weak) O$_2$ adsorption sites. In particular, Fig. \[fig:Data\] (e) shows the amount of charge on the O$_2$ molecule calculated for the Pt-Ni, Pt and Ni systems. As a general trend, one can observe that the amount of charge transferred to the supported clusters varies as a function of the number of interfacial metal atoms (i.e. the number of O surface sites in contact), with less charge being transferred from the substrate to Ni clusters compared to Pt and Pt-Ni clusters. Overall, the O$_2$ molecule subtracts a small amount of charge ($\sim$ 0.35 e$^{-}$) per O atom at Pt-Ni and Pt sites, this amount being larger for Ni ($\sim$ 0.5 e$^{-}$), in line with the larger E$_{chem}$ values calculated for Ni clusters. This excess of charge on the O$_2$ molecule creates a metal-superoxo (M-O$_{2}^{-}$) type intermediate (one electron transferred)[@TaoJPCB2001]. The charge transfer is in line with the Pauling electronegativities of Pt (2.28), Ni (1.91) and O (3.44), and with those structural motifs and interfacial geometries which maximise the number of metal atoms in contact with O surface sites.\ There is a modest tendency of Pt-Ni clusters to reduce the overall *roughness* of the (100) and (111) cluster/oxide interface layers, as shown in Fig. \[fig:Data\] (f). In this plot, the Pt metal-metal (M-M) bond length of the interface layer in Pt-Ni clusters turns out to be shorter than the average M-M bond length in the cluster (Pt-Ni $\sim$ 2.65 Å), in contrast with the pure cases (Pt $\sim$ 2.73 Å, Ni $\sim$ 2.44 Å), which show an elongation of the metal distances at the interface. Even though low E$_{chem}$ values are calculated for TO and CO clusters in contact with the oxide through their large (100) facets at low *roughness* (less than 0.1 $\%$), the dispersion of the overall values is too wide to provide any conclusive analysis, suggesting a more local chemistry ruling the O$_2$ adsorption energies. The role of the support in our work is thus to facilitate charge transfer to the supported cluster (charge which is eventually redistributed among the cluster and the O$_2$ molecule) as well as to provide an “anchor” point for the cluster.
The substrates could eventually promote the controlled preparation of (111) facets as the Pt-Ni clusters grow in size, where tailored O$_2$ adsorption (and further dissociation) can take place[@AsaraACSCATALYSIS2016].\ Inclusion of van der Waals corrections and Bader charge transfer analysis ------------------------------------------------------------------------- The inclusion of a semi-empirical dispersion correction (DFT+D) via single-point (SCF) calculations on all the inequivalent configurations involving O$_2$ adsorption has, in most cases, modestly increased the calculated E$_{chem}$ values. In general, the predicted PBE energetic ordering between adsorption sites was preserved, with only a minor energetic rearrangement among the strongest chemisorption sites. Full DFT+D relaxations were also performed for all 5 chemisorption sites (Fig. \[fig:sites1\]), but only for the Pt$_{25}$, Ni$_{25}$ and Pt$_{20}$Ni$_{5}$ (TO)$_{100}$ structures, to confirm the overall preference for edge sites such as (100)/(111) and (111)/(111). For Pt-Ni clusters, the DFT+D relaxations predicted the *along edge* (111)/(100) as the strongest chemisorption site (1.62 eV), closely followed by the *along edge* (111)/(111) site (1.47 eV), just as in the PBE energetic ordering. Similar gains in energy are calculated for the FCC and HCP sites. For the Pt clusters, the best adsorption site is now the *along edge* (111)/(100), at 1.96 eV, instead of the *along edge* (111)/(111) predicted by PBE. With the O$_2$ molecule now in close proximity to the oxide surface (2.99 Å), DFT+D is able to capture a stronger overlap between the electron orbitals of O$_2$ and those of the oxide surface. This results in an E$_{chem}$ increase up to 1.96 eV, from the 1.38 eV single-point (DFT+D) calculation, an increase also calculated for the *between edge* (111)/(100) site, from 1.67 to 1.71 eV. Analogous trends are observed for Ni clusters, where the *along edge* (111)/(100) site remains the one having the largest E$_{chem}$ value (2.55 eV), while marginal gains are calculated for the *along edge* (111)/(111) and *between edge* (111)/(100) sites (2.25 and 1.79 eV, respectively).\ Electronic vs. geometric properties ----------------------------------- Upon O$_2$ adsorption, we calculated the projected density of states (PDOS) for the *best* (strongest) and *worst* (weakest) - namely the FCC site - adsorption sites, see Fig. \[fig:dcenters\]. Included within the figure is the calculated *d*-band center[@VojvodicDBAND2014]. This is done for those Pt (and Ni) atoms involved in the O$_2$ adsorption (which can occur both molecularly and dissociatively). Comparisons between different Pt-Ni clusters are difficult due to the size range considered, as well as the fact that some *worst* configurations involve dissociated O$_2$. Overall, we observe that the calculated *d*-band centers of the 2 Pt atoms from the *best* adsorption sites (molecularly adsorbed O$_2$) are positioned nearly at the same energy values (see vertical black lines, Fig. \[fig:dcenters\]). For all the *best* sites, this implies having their corresponding *d*-centers positioned lower in energy with respect to the Fermi energy ($E_{f}$), thus accounting for stronger interactions with the O atoms of the molecule, except for those configurations where O$_2$ is dissociated, namely the FCC sites of Pt$_{20}$Ni$_{5}$TO(100), Pt$_{32}$Ni$_{6}$TO(100) and Pt$_{42}$Ni$_{13}$CO(100).
For most of these dissociated configurations, we observe that the Pt *d*-center of the least coordinated Pt atom is strongly shifted to lower energies, thus implying a strong interaction with the O atom; see Pt$_{32}$Ni$_{6}$TO(100) and Pt$_{42}$Ni$_{13}$CO(100). For most of the remaining *worst* configurations, the Pt *d*-centers are closer to $E_{f}$, thus corresponding to weaker interactions between the O$_2$ and the cluster. However, Pt$_{32}$Ni$_{6}$TO(111) is an exception: although the O$_2$ is adsorbed in a molecular fashion, the *d*-center of the central Pt atom occupying the (111) facet where adsorption takes place is shifted to lower energy values, as the Pt atom is “lifted” from its position when it interacts with the O$_2$ molecule, a geometric effect highlighted in previous work[@JenningsNANOSCALE2014]. Larger clusters, such as Pt$_{42}$Ni$_{13}$ (both CO and Ih) and the decahedral Pt$_{47}$Ni$_{11}$, further follow similar *d*-center shift trends. However, close overlaps between the calculated *d*-centers of *worst* and *best* sites, particularly for Pt$_{47}$Ni$_{11}$, are observed as the $E_{chem}$ differences between the two sites become smaller. Thus, to disentangle electronic from geometric effects, we have calculated the corresponding GCN for both adsorption sites of all Pt-Ni clusters, as done for the *d*-band centers. The calculated GCN values, as described in Eq. \[eq3\], show for Pt-Ni clusters an overall linear relationship with respect to the calculated $E_{chem}$ values; see Fig. \[fig:GCN\]. As the GCN value increases, a reduction in the $E_{chem}$ values is seen, particularly above GCN $>$ 8, where most of the calculated values involving FCC and HCP sites resemble the experimental and calculated values for O$_2$ adsorption on the Pt(111) surface (0.46 up to 0.86 eV)[@GlandSURFSCIE1980; @SteiningerSURFSCIE1982A; @SautetPRB1999; @EichlerPRL1997; @JenningsNANOSCALE2014; @McEwenPCCP2012], and in some cases fall even below. There was only 1 case where the FCC site reported a lower GCN value (6.722), on the Pt$_{32}$Ni$_{6}$(111) cluster. Here, the central Pt atom of the (111) facet was lifted with respect to the facet, thus separating it from the rest of the core Pt atoms and thereby lowering its GCN value. In contrast, the strongest $E_{chem}$ values (above 0.64 eV) can be found at sites such as *along edges* (111)/(111), where GCN values between 4 and 5 are calculated. For monometallic Pt clusters, a similar trend is observed. However, the slope of the linear fitting is less pronounced due to the “soft” nature of the Pt-Pt bonds within the cluster, which can stretch to accommodate the incoming O$_2$ molecule, affecting the overall GCN and $E_{chem}$ values. Ni clusters, on the other hand, interact so strongly with O$_2$ that for GCN values between 5 and 6 the adsorption energy is $\sim$ 2 eV - leaving a narrow window from 6 to 8 for slightly weaker adsorption - while again at GCN above 8 the O$_2$ molecule is strongly adsorbed and in some cases rapidly dissociated, thus affecting the overall linear trend. Interestingly, we have reported similar trends where O$_2$ adsorption can take place at low $E_{chem}$ ($<$ 0.5 eV) for GCN values above 8, for the particular case of larger TO and CO clusters involving 82 and 86 atoms, respectively[@AsaraACSCATALYSIS2016].
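As a side note on how the electronic descriptor used above is obtained: the *d*-band center is the first moment of the *d*-projected density of states. A minimal sketch, assuming the PDOS has already been parsed (e.g. from a `projwfc.x` run) into arrays of energies referenced to $E_f$ and the corresponding *d*-PDOS, could read as follows; the integration window is a free choice and not a value prescribed by this work.

```python
# Minimal sketch: d-band centre as the first moment of the d-projected DOS.
# `energies` (eV, relative to the Fermi level) and `pdos_d` are assumed to be
# parsed from the PDOS output beforehand; the energy window is a free choice.
import numpy as np

def d_band_center(energies, pdos_d, window=(-10.0, 2.0)):
    """First moment of the d-PDOS over `window`, in eV relative to E_f."""
    mask = (energies >= window[0]) & (energies <= window[1])
    e, rho = energies[mask], pdos_d[mask]
    return np.trapz(e * rho, e) / np.trapz(rho, e)
```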
Conclusions =========== Using a wide variety of Pt-Ni, Pt and Ni geometries, sizes and compositions supported on MgO(100), we have shown that molecular O$_2$ adsorption in the sub-nanometre range (clusters from 25 up to 58 atoms) is a rather complex case. The corresponding electronic structure was analysed via the calculated position of the *d*-band center, and the local geometry at the adsorption site via the generalised coordination number (GCN). The calculation of these two quantities shows that the two effects are deeply intertwined, directly affecting the calculated E$_{chem}$ values. O$_2$ adsorption is regulated mainly by 2 types of inequivalent sites: (a) the strongest E$_{chem}$ values (above 1 eV) are calculated at cluster edges, such as the *along edge* (111)/(111) and (111)/(100) as well as the *between edge* (111)/(100) sites, which tend to have a low GCN value ($<$ 6); and (b) the FCC and HCP sites, where the *weakest* E$_{chem}$ values (less than 0.64 eV) are calculated, involving GCN values $>$ 8 and resembling the O$_2$ adsorption energies of the flat Pt(111) surface. In general, the calculated *d*-band center positions of those Pt atoms at the adsorption site are located at lower energies with respect to the Fermi level (E$_f$), thus indicating a stronger binding between the metal and the O$_2$ adsorbate. The opposite is seen for the weakest adsorption sites, where the *d*-band center positions are closer to E$_f$; this trend can, however, be affected if the O$_2$ is found to be dissociated, as Pt interactions with atomic O are stronger compared to the molecular case. Our calculations provide - as far as we are aware - the first O$_2$ chemisorption map across a wide range of supported Pt-Ni clusters. Clear O$_2$ adsorption trends provide a deeper understanding of potential adsorption sites for low-barrier O$_2$ dissociation, following the Sabatier principle. This implies that the best catalysts should be the ones that bind both atoms and molecules neither too weakly nor too strongly, in order to activate the reactants and to efficiently desorb the products[@NoskovCHEMREV2008]. We show from our results that it is theoretically possible to construct core-shell systems, where the core of the nanoparticle (cluster) is made of a cheaper metal (Ni) while a more expensive metal (Pt) forms the shell. This not only reduces the amount of Pt loading by alloying, but also preserves the Pt chemistry in a mere one-atomic-layer Pt shell, where the O$_2$ adsorption process can occur with calculated E$_{chem}$ values of the order of those seen for a full monometallic Pt nanoparticle and the bulk Pt(111) surface. Having systematically analysed O$_2$ adsorption over supported monometallic (Pt, Ni) and bimetallic (Pt-Ni) clusters ($\textless$ 1 nm), we foresee that the tailoring of chemisorption properties and the rational design of novel nanocatalysts will ultimately depend on a synergistic combination of controlled NP preparation methods (size, geometric and segregation effects) and a profound knowledge of the complex electronic properties of both the NPs and the oxide support. Acknowledgements ================ This work has been supported by the UK research council EPSRC under the Critical Mass TOUCAN project, grant number EP/J010812/1. FB benefited from financial support by EPSRC, under Grant No. EP/GO03146/1, and the Royal Society, grant No. RG120207.
Density Functional Theory (DFT) calculations have been performed on ARCHER (Cray XC30 system), the UK’s National High-Performance Computing (HPC) Facility. LOPB acknowledges Supercómputo UNAM (Miztli) for computational resources under projects SC16-1-IG-78 and SC15-1-IG-82 as well as financial support from PAPIIT-UNAM (Project IA102716). [^1]: Department of Physics, King’s College London, WC2R2LS, London [^2]: Instituto de Física, Universidad Nacional Autónoma de México, Apartado Postal 20-364, México DF 01000, México. email: oliver$\[email protected]
--- abstract: 'Physics informed neural networks (PINNs) have recently been widely used for robust and accurate approximation of PDEs. We provide rigorous upper bounds on the generalization error of PINNs approximating solutions of the forward problem for PDEs. An abstract formalism is introduced and stability properties of the underlying PDE are leveraged to derive an estimate for the generalization error in terms of the training error and number of training samples. This abstract framework is illustrated with several examples of nonlinear PDEs. Numerical experiments, validating the proposed theory, are also presented.' author: - 'Siddhartha Mishra [^1] and Roberto Molinaro [^2]' bibliography: - 'MMpaperI.bib' title: 'Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs.' --- Introduction ============ Deep learning has emerged as a central tool in science and technology in the last few years. It is based on using deep artificial neural networks (DNNs), which are formed by composing many layers of affine transformations and scalar non-linearities. These deep neural networks have been applied with tremendous success [@DLnat] in a variety of tasks such as image and text classification, speech and natural language processing, robotics, game intelligence and protein folding [@Dfold], among others. Partial differential equations (PDEs) model a vast array of natural and manmade phenomena in all areas of science and technology. Explicit solution formulas are only available for very specific types and examples of PDEs. Hence, numerical simulations are necessary for most practical applications featuring PDEs. A diverse set of methods for approximating PDEs numerically are available, such as finite difference, finite element, finite volume and spectral methods. Although very successful in practice, it is still challenging to numerically simulate problems such as Uncertainty quantification (UQ), multi-scale and multi-physics problems, Inverse and constrained optimization problems, PDEs in domains with very complex geometries and PDEs in very high dimensions. Could deep learning assist in the computations of these hitherto difficult to simulate problems involving PDEs? Deep learning techniques are being increasingly used in the numerical approximations of PDEs. A brief and very incomplete survey of this rapidly emerging literature follows: one approach in using deep neural networks (DNNs) for numerically approximating PDEs is based on explicit (or semi-implicit) representation formulas such as the Feynman-Kac formula for parabolic (and elliptic) PDEs, whose compositional structure is in turn utilized to facilitate approximation by DNNs. This approach is presented and analyzed for a variety of (parametric) elliptic and parabolic PDEs in [@E1; @HEJ1; @Jent1] and references therein, see a recent paper [@Pet1] for a similar approach to approximating linear transport equations with deep neural networks. Another strategy is to enhance existing numerical methods by adding deep learning inspired modules into them, for instance by learning free parameters of numerical schemes from data [@SM1; @DR1] and references therein. A third approach consists of using deep neural networks to learn *observables* or quantities of interest of the solutions of the underlying PDEs, from data. 
This approach has been described in [@LMR1; @LMM1; @MR1; @LMPR1] in the context of uncertainty quantification and PDE constrained optimization and [@QUAT1] for model order reduction, among others. Finally, deep neural networks possess the so-called *universal approximation property* [@Bar1; @Hor1; @Cy1], namely any continuous, even measurable, function can be approximated by DNNs, see [@YAR1] for very precise descriptions of the required neural network architecture for functions with sufficient Sobolev regularity. Hence, it is natural to use deep neural networks as ansatz spaces for the solutions of PDEs, in particular by collocating the PDE residual at training points (see section \[sec:2\], Algorithm \[alg:PINN\] for the detailed description of this approach). This approach was first proposed in [@Lag1; @Lag2]. However, it has been revived and developed in significantly greater detail in the pioneering contributions of Karniadakis and collaborators, starting with [@KAR1; @KAR2]. These authors have termed the underlying neural networks as *Physics Informed Neural Networks* (PINNs) and we will continue to use this, by now, widely accepted terminology in this paper. There has been an explosive growth of papers that present algorithms with PINNs for various applications to both forward and inverse problems for PDEs and a very incomplete list of references include [@KAR4; @KAR5; @KAR6; @KAR7] and references therein. Needless to say, PINNs have emerged as a very successful paradigm for approximating different aspects of solutions of PDEs. Why do PINNs approximate a wide variety of PDEs so well? Although many heuristic reasons have been proposed in some of the afore cited papers, particularly by highlighting the role played by (even small amounts of) data in driving the neural network towards the target and the role of the residual in changing training modes, there is very little rigorous justification of why PINNs work. With the exception of the very recent paper [@DAR1], there are few rigorous bounds on the approximation error due to PINNs. In [@DAR1], the authors prove consistency of PINNs by showing convergence, under reasonable hypothesis, to solutions of linear elliptic and parabolic PDEs as the number of training samples is increased. A detailed comparison between [@DAR1] and the current article is provided in section \[sec:6\]. The main goal of this paper is to provide a rigorous rationale of why PINNs are so efficient at approximating solutions for the *forward problem* for PDEs, under reasonable and verifiable hypothesis on the underlying PDE. To this end, we will present an abstract framework for PINNs that encompasses a wide variety of potential applications, including to nonlinear PDEs, and prove rigorous estimates on the so-called *generalization error* i.e, the error of the neural network on predicting unseen data. This abstract estimate bounds the generalization error in terms of the underlying training error, number of training (collocation) points and stability bounds for the underlying PDE. Our generalization error estimate will show that the error due to approximating the underlying PDE with a trained PINN will be sufficiently low as long as - The training error is low i.e, the PINN has been trained well. This error is computed and monitored during the training process. Hence, it is available *a posteriori*. - The number of training (collocation) points is sufficiently large. This number is determined by the error due to an underlying quadrature rule. 
- The solution of the underlying PDE is stable (with respect to perturbations of inputs) in a very precise manner. For nonlinear PDEs, these stability bounds might require that the solutions of the underlying PDEs (and the PINNs) are sufficiently regular. Thus, with the derived error estimate, we identify possible mechanisms by which PINNs are able to approximate PDEs so well and provide a firm mathematical foundation for approximations by PINNs. We will also provide three concrete examples to illustrate our abstract framework and error estimate, namely linear and semi-linear parabolic equations, one-dimensional scalar quasilinear parabolic (and hyperbolic) conservation laws and the incompressible Euler equations of fluid dynamics. The abstract error estimate is described in concrete terms for each example and numerical experiments are presented to illustrate and validate the proposed theory. The aim is to convince the reader of why PINNs, when correctly formulated, are successful at approximating the forward problem for PDEs numerically. The rest of the paper is organized as follows: in section \[sec:2\], we formulate PINNs for an abstract PDE and prove the estimate on the generalization error. In sections \[sec:3\], \[sec:4\] and \[sec:5\], the abstract framework and error estimate is worked out in the concrete examples of semi-linear Parabolic PDEs, viscous scalar conservation laws and the incompressible Euler equations of fluid dynamics. An abstract framework for Physics informed Neural Networks {#sec:2} ========================================================== The underlying abstract PDE {#sec:21} --------------------------- Let $X,Y$ be separable Banach spaces with norms $\| \cdot \|_{X}$ and $\|\cdot\|_{Y}$, respectively. For definiteness, we set $Y = L^p({\mathbb{D}};{\mathbb{R}}^m)$ and $X= W^{s,q}({\mathbb{D}};{\mathbb{R}}^m)$, for $m {\geqslant}1$, $1 {\leqslant}p,q < \infty$ and $s {\geqslant}0$, with ${\mathbb{D}}\subset {\mathbb{R}}^{\bar{d}}$, for some $\bar{d} {\geqslant}1$. In particular, we will also consider space-time domains with ${\mathbb{D}}= (0,T) \times D \subset {\mathbb{R}}^d$ with $d {\geqslant}1$. In this case $\bar{d} = d +1$. Let $X^{\ast} \subset X$ and $Y^{\ast} \subset Y$ be closed subspaces with norms $\|\cdot \|_{X^{\ast}}$ and $\|\cdot\|_{Y^{\ast}}$, respectively. We start by considering the following abstract formulation of our underlying PDE: $$\label{eq:pde} {\EuScript{D}}({{\bf u}}) = {\mathbf{f}}.$$ Here, the *differential operator* is a mapping, ${\EuScript{D}}: X^{\ast} \mapsto Y^{\ast}$ and the *input* ${\mathbf{f}}\in Y^{\ast}$, such that $$\label{eq:assm1} \begin{aligned} &(H1): \quad \|{\EuScript{D}}({{\bf u}})\|_{Y^{\ast}} < +\infty, \quad \forall~ {{\bf u}}\in X^{\ast}, ~{\rm with}~\|{{\bf u}}\|_{X^{\ast}} < +\infty. \\ &(H2):\quad \|{\mathbf{f}}\|_{Y^{\ast}} < +\infty. \end{aligned}$$ Moreover, we assume that for all ${\mathbf{f}}\in Y^{\ast}$, there exists a unique ${{\bf u}}\in X^{\ast}$ such that holds. Furthermore, the solutions of the abstract PDE satisfy the following stability bound, let $Z \subset X^{\ast} \subset X$ be a closed subspace with norm $\|\cdot\|_{Z}$. 
For any ${{\bf u}},{\mathbf{v}}\in Z$, the differential operator ${\EuScript{D}}$ satisfies $$\label{eq:assm2} (H3):\quad \|{{\bf u}}- {\mathbf{v}}\|_{X} {\leqslant}C_{pde}\left(\|{{\bf u}}\|_Z, \|{\mathbf{v}}\|_Z \right) \|{\EuScript{D}}({{\bf u}}) - {\EuScript{D}}({\mathbf{v}})\|_{Y}.$$ Here, the constant $C_{pde} > 0$ explicitly depends on $|{{\bf u}}\|_Z$ and $\|{\mathbf{v}}\|_Z$. As a first example of a PDE with solutions satisfying (H3) , consider the linear differential operator ${\EuScript{D}}: X \mapsto Y$ i.e ${\EuScript{D}}(\alpha {{\bf u}}+ \beta {\mathbf{v}}) = \alpha {\EuScript{D}}({{\bf u}}) + \beta {\EuScript{D}}({\mathbf{v}})$, for any $\alpha,\beta \in {\mathbb{R}}$. For simplicity, let $X^{\ast} = X$ and $Y^{\ast} = Y$. By the assumptions on the existence and uniqueness of the underlying linear PDE , there exists an *inverse* operator ${\EuScript{D}}^{-1}: Y \mapsto X$. Note that the assumption is satisfied if the inverse is bounded i.e, $\|{\EuScript{D}}^{-1}\| {\leqslant}C < +\infty$, with respect to the natural norm on linear operators from $Y$ to $X$. Thus, the assumption on stability boils down to the boundedness of the inverse operator for linear PDEs. Many well-known linear PDEs possess such bounded inverses [@DL1]. As a second example, we will consider a nonlinear PDE , but with a well-defined linearization i.e, there exists an operator $\overline{{\EuScript{D}}}: X^{\ast} \mapsto Y^{\ast}$, such that $$\label{eq:lin} {\EuScript{D}}({{\bf u}}) - {\EuScript{D}}({\mathbf{v}}) = \overline{{\EuScript{D}}}_{({{\bf u}},{\mathbf{v}})}\left({{\bf u}}- {\mathbf{v}}\right), \quad \forall {{\bf u}},{\mathbf{v}}\in X^{\ast}.$$ Again for simplicity, we will assume that $X^{\ast} = X$ and $Y^{\ast} = Y$. We further assume that the inverse of $\overline{{\EuScript{D}}}$ exists and is bounded in the following manner, $$\label{eq:lin1} \|\left(\overline{{\EuScript{D}}}_{({{\bf u}},{\mathbf{v}})}\right)^{-1}\| {\leqslant}C\left(\|{{\bf u}}\|_{X},\|{\mathbf{v}}\|_{X} \right) < +\infty, \quad \forall {{\bf u}},{\mathbf{v}}\in X,$$ with the norm of $\overline{{\EuScript{D}}}^{-1}$ being an operator norm, induced by linear operators from $Y$ to $X$. Then a straightforward calculation shows that suffices to establish the stability bound . Further and more concrete examples of PDEs satisfying the above assumptions will be provided in the following sections. We note that initial and boundary conditions are implicitly included in the PDE . Moreover, by assuming that $Y = L^p({\mathbb{D}})$, we are implicitly assuming some regularity for the solutions of the abstract PDE . In particular, depending on the order of derivatives in the differential operator ${\EuScript{D}}$, we can expect that the solution ${{\bf u}}$ satisfies the PDE in a classical sense i.e, pointwise. Quadrature rules {#sec:22} ---------------- In the following section, we need to consider approximating integrals of functions. Hence, we need an abstract formulation for quadrature. To this end, we consider a mapping $g: {\mathbb{D}}\mapsto {\mathbb{R}}^m$, such that $g \in Z^{\ast} \subset Y^{\ast}$. We are interested in approximating the integral, $$\overline{g}:= \int\limits_{{\mathbb{D}}} g(y) dy,$$ with $dy$ denoting the $\bar{d}$-dimensional Lebesgue measure. In order to approximate the above integral by a quadrature rule, we need the quadrature points $y_{i} \in {\mathbb{D}}$ for $1 {\leqslant}i {\leqslant}N$, for some $N \in {\mathbb{N}}$ as well as weights $w_i$, with $w_i \in {\mathbb{R}}_+$. 
Then a quadrature is defined by, $$\label{eq:quad} \overline{g}_N := \sum\limits_{i=1}^N w_i g(y_i),$$ for weights $w_i$ and quadrature points $y_i$. We further assume that the quadrature error is bounded as, $$\label{eq:assm3} \left|\overline{g} - \overline{g}_N\right| {\leqslant}C_{quad} \left(\|g\|_{Z^{\ast}},\bar{d} \right) N^{-\alpha},$$ for some $\alpha > 0$. As long as the domain ${\mathbb{D}}$ is in reasonably low dimension i.e $\bar{d} {\leqslant}4$, we can use standard (composite) Gauss quadrature rules on an underlying grid. In this case, the quadrature points and weights depend on the order of the quadrature rule [@SBbook] and the rate $\alpha$ depends on the regularity of the underlying integrand i.e, on the space $Z^{\ast}$. On the other hand, these grid based quadrature rules are not suitable for domains in high dimensions. For moderately high dimensions i.e $4 {\leqslant}\bar{d} \approx 20$, we can use *low discrepancy sequences*, such as the Sobol and Halton sequences, as quadrature points [@CAF1]. As long as the integrand $g$ is of bounded *Hardy-Krause variation* [@owen], the error in converges at a rate $(\log(N))^{\bar{d}}N^{-1}$. One can also employ sparse grids and Smolyak quadrature rules [@sgbook] in this regime. For problems in very high dimensions $\bar{d} \gg 20$, Monte-Carlo quadrature is the numerical integration method of choice [@CAF1]. In this case, the quadrature points are randomly chosen, independent and identically distributed (with respect to a scaled Lebesgue measure). The estimate holds in the root mean square (RMS) sense and the rate of convergence is $\alpha = \frac{1}{2}$. PINNs {#sec:23} ----- In this section, we will describe physics-informed neural networks (PINNs). We start with a description of neural networks which form the basis of PINNs. ### Neural Networks. Given an input $y \in {\mathbb{D}}$, a feedforward neural network (also termed as a multi-layer perceptron), shown in figure \[fig:1\], transforms it to an output, through a layer of units (neurons) which compose of either affine-linear maps between units (in successive layers) or scalar non-linear activation functions within units [@DLbook], resulting in the representation, $$\label{eq:ann1} {{\bf u}}_{\theta}(y) = C_K \circ\sigma \circ C_{K-1}\ldots \ldots \ldots \circ\sigma \circ C_2 \circ \sigma \circ C_1(y).$$ Here, $\circ$ refers to the composition of functions and $\sigma$ is a scalar (non-linear) activation function. A large variety of activation functions have been considered in the machine learning literature [@DLbook]. Popular choices for the activation function $\sigma$ in include the sigmoid function, the hyperbolic tangent function and the *ReLU* function. For any $1 {\leqslant}k {\leqslant}K$, we define $$\label{eq:C} C_k z_k = W_k z_k + b_k, \quad {\rm for} ~ W_k \in {\mathbb{R}}^{d_{k+1} \times d_k}, z_k \in {\mathbb{R}}^{d_k}, b_k \in {\mathbb{R}}^{d_{k+1}}.$$ For consistency of notation, we set $d_1 = \bar{d}$ and $d_K = m$. Thus in the terminology of machine learning (see also figure \[fig:1\]), our neural network consists of an input layer, an output layer and $(K-1)$ hidden layers for some $1 < K \in {\mathbb{N}}$. The $k$-th hidden layer (with $d_k$ neurons) is given an input vector $z_k \in {\mathbb{R}}^{d_k}$ and transforms it first by an affine linear map $C_k$ and then by a nonlinear (component wise) activation $\sigma$. A straightforward addition shows that our network contains $\left(\bar{d} + m + \sum\limits_{k=2}^{K-1} d_k\right)$ neurons. 
We also denote, $$\label{eq:theta} \theta = \{W_k, b_k\},~ \theta_W = \{ W_k \}\quad \forall~ 1 {\leqslant}k {\leqslant}K,$$ to be the concatenated set of (tunable) weights for our network. It is straightforward to check that $\theta \in \Theta \subset {\mathbb{R}}^M$ with $$\label{eq:ns} M = \sum\limits_{k=1}^{K-1} (d_k +1) d_{k+1}.$$ ![An illustration of a (fully connected) deep neural network. The red neurons represent the inputs to the network and the blue neurons denote the output layer. They are connected by hidden layers with yellow neurons. Each hidden unit (neuron) is connected by affine linear maps between units in different layers and then with nonlinear (scalar) activation functions within units.[]{data-label="fig:1"}](Images/ANN.png){width="8cm"} ### Training PINNs: Loss functions and optimization The neural network ${{\bf u}}_{\theta}$ depends on the tuning parameter $\theta \in \Theta$ of weights and biases. Within the standard paradigm of deep learning [@DLbook], one *trains* the network by finding tuning parameters $\theta$ such that the loss (error, mismatch, regret) between the neural network and the underlying target is minimized. Here, our target is the solution ${{\bf u}}\in X^{\ast}$ of the abstract PDE and we wish to find the tuning parameters $\theta$ such that the resulting neural network ${{\bf u}}_{\theta}$ approximates ${{\bf u}}$. Following standard practice of machine learning, one obtains training data ${{\bf u}}(y)$, for all $y \in {\EuScript{S}}$, with training set ${\EuScript{S}}\subset {\mathbb{D}}$ and then minimizes a loss function of the form $\sum\limits_{{\EuScript{S}}} \|{{\bf u}}(y) - {{\bf u}}_{\theta}(y)\|_{X}$ to find the neural network approximation for ${{\bf u}}$. However, obtaining this training data requires possibly expensive numerical simulations of the underlying PDE . In order to circumvent this issue, the authors of [@Lag1] suggest a different strategy. An abstract paraphrasing of this strategy runs as follows: we assume that for every $\theta \in \Theta$, the neural network ${{\bf u}}_{\theta} \in X^{\ast}$ and $\|{{\bf u}}_{\theta} \|_{X^{\ast}} < +\infty$. We define the following *residual*: $$\label{eq:res1} {\EuScript{R}}_{\theta} = {\EuScript{R}}({{\bf u}}_\theta):= {\EuScript{D}}\left({{\bf u}}_{\theta}\right) - {\mathbf{f}}.$$ By assumptions (H1),(H2) , we see that ${\EuScript{R}}\in Y^{\ast}$ and $\|{\EuScript{R}}\|_{Y^{\ast}} < +\infty$ for all $\theta \in \Theta$. Note that ${\EuScript{R}}({{\bf u}}) = {\EuScript{D}}({{\bf u}}) - {\mathbf{f}}\equiv 0$, for the solution ${{\bf u}}$ of the PDE . Hence, the term *residual* is justified for . The strategy of PINNs, following [@Lag1], is to minimize the *residual* , over the admissible set of tuning parameters $\theta \in \Theta$ i.e $$\label{eq:pinn1} {\rm Find}~\theta^{\ast} \in \Theta:\quad \theta^{\ast} = {\rm arg}\min\limits_{\theta \in \Theta} \|{\EuScript{R}}_{\theta}\|_{Y}.$$ Realizing that $Y = L^p({\mathbb{D}})$ for some $1 {\leqslant}p < \infty$, we can equivalently minimize, $$\label{eq:pinn2} {\rm Find}~\theta^{\ast} \in \Theta:\quad \theta^{\ast} = {\rm arg}\min\limits_{\theta \in \Theta} \|{\EuScript{R}}_{\theta}\|^p_{L^p({\mathbb{D}})} = {\rm arg}\min\limits_{\theta \in \Theta} \int\limits_{{\mathbb{D}}} |{\EuScript{R}}_{\theta}(y)|^p dy.$$ As it will not be possible to evaluate the integral in exactly, we need to approximate it numerically by a quadrature rule. 
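Before turning to the quadrature-based loss, the following minimal sketch illustrates how the network representation ${{\bf u}}_{\theta}$ and the pointwise evaluation of the residual ${\EuScript{R}}_{\theta}$ can be realized in practice. It assumes PyTorch and its automatic differentiation, the class and helper names are ours, and the one-dimensional heat operator $\partial_t u - \partial_{xx} u$ is used purely as a concrete stand-in for the abstract operator ${\EuScript{D}}$; it is not the implementation used for the numerical experiments reported later.

```python
import torch

class FeedForward(torch.nn.Module):
    """u_theta: composition of affine maps C_k and tanh activations."""
    def __init__(self, d_in=2, d_out=1, width=20, hidden_layers=4):
        super().__init__()
        sizes = [d_in] + [width] * hidden_layers + [d_out]
        self.linears = torch.nn.ModuleList(
            torch.nn.Linear(sizes[k], sizes[k + 1]) for k in range(len(sizes) - 1)
        )

    def forward(self, y):
        for lin in self.linears[:-1]:
            y = torch.tanh(lin(y))  # smooth activation, so PDE residuals are well defined
        return self.linears[-1](y)

def residual(model, x, t):
    """Pointwise residual, here u_t - u_xx as a stand-in for D(u_theta) - f."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(torch.cat([x, t], dim=1))
    grad = lambda out, var: torch.autograd.grad(out, var, torch.ones_like(out), create_graph=True)[0]
    u_t, u_x = grad(u, t), grad(u, x)
    return u_t - grad(u_x, x)
```

Since such residuals can only be evaluated at finitely many points, the norm in the above minimization problem has to be approximated, which is precisely the role of the quadrature-based loss function introduced next.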
To this end, we use the quadrature rules discussed earlier and select the *training set* ${\EuScript{S}}= \{y_n\}$ with $y_n \in {\mathbb{D}}$ for all $1 {\leqslant}n {\leqslant}N$ as the quadrature points for the quadrature rule and consider the following *loss function*: $$\label{eq:lf1} J(\theta):= \sum\limits_{n=1}^N w_n |{\EuScript{R}}_{\theta}(y_n)|^p = \sum\limits_{n=1}^N w_n \left| {\EuScript{D}}({{\bf u}}_{\theta}(y_n)) - {\mathbf{f}}(y_n) \right|^p.$$ It is common in machine learning [@DLbook] to regularize the minimization problem for the loss function i.e we seek to find, $$\label{eq:lf2} \theta^{\ast} = {\rm arg}\min\limits_{\theta \in \Theta} \left(J(\theta) + \lambda_{reg} J_{reg}(\theta) \right).$$ Here, $J_{reg}:\Theta \to {\mathbb{R}}$ is a *regularization* (penalization) term. A popular choice is to set $J_{reg}(\theta) = \|\theta_W\|^q_q$ for either $q=1$ (to induce sparsity) or $q=2$. The parameter $0 {\leqslant}\lambda_{reg} \ll 1$ balances the regularization term with the actual loss $J$ . The above minimization problem amounts to finding a minimum of a possibly non-convex function over a subset of ${\mathbb{R}}^M$ for possibly very large $M$. We will follow standard practice in machine learning and solve this minimization problem approximately by either (first-order) stochastic gradient descent methods such as ADAM [@adam] or even higher-order optimization methods such as LBFGS [@lbfgs]. For notational simplicity, we denote the (approximate, local) minimum in as $\theta^{\ast}$ and the underlying deep neural network ${{\bf u}}^{\ast}= {{\bf u}}_{\theta^{\ast}}$ will be our physics-informed neural network (PINN) approximation for the solution ${{\bf u}}$ of the PDE . The proposed algorithm for computing this PINN is given below, \[alg:PINN\] [**Finding a physics informed neural network to approximate the solution of the PDE** ]{}. - Underlying domain ${\mathbb{D}}$, differential operator ${\EuScript{D}}$ and input source term ${\mathbf{f}}$ for the PDE , quadrature points and weights for the quadrature rule , non-convex gradient based optimization algorithms. - Find PINN ${{\bf u}}^{\ast}= {{\bf u}}_{\theta^{\ast}}$ for approximating the PDE . - Choose the training set ${\EuScript{S}}= \{y_n\}$ for $y_n \in {\mathbb{D}}$, for all $1 {\leqslant}n {\leqslant}N$ such that $\{y_n\}$ are quadrature points for the underlying quadrature rule . - For an initial value of the weight vector $\overline{\theta} \in \Theta$, evaluate the neural network ${{\bf u}}_{\overline{\theta}}$ , the PDE residual , the loss function and its gradients to initialize the underlying optimization algorithm. - Run the optimization algorithm till an approximate local minimum $\theta^{\ast}$ of is reached. The map ${{\bf u}}^{\ast} = {{\bf u}}_{\theta^{\ast}}$ is the desired PINN for approximating the solution ${{\bf u}}$ of the PDE . \[rem:1\] The standard practice in machine learning is to approximate the solution ${{\bf u}}$ of from training data $\big(z_j,{{\bf u}}(z_j)\big)$, for $z_j \in {\mathbb{D}}$ and $1 {\leqslant}j {\leqslant}N_d$, then one would like to minimize the so-called *data loss*: $$\label{eq:ld} J_{d}(\theta) := \frac{1}{N_d} \sum\limits_{j=1}^{N_d} |{{\bf u}}(z_j) - {{\bf u}}_{\theta}(z_j)|^p.$$ Comparing the loss functions and reveals the essence of PINNs, i.e, for PINNs, one does not necessarily need any training data for the solution ${{\bf u}}$ of , only the residual needs to be evaluated, for which knowledge of the differential operator and inputs for suffice. 
Here, we distinguish between initial and boundary data, which is implicitly included into the formulation of the PDE and which are necessary for any numerical solution of , from other types of data, for instance values of the solution ${{\bf u}}$, from the interior of the domain ${\mathbb{D}}$. Thus, the PINN in this formulation, which is closer in spirit to the original proposal of [@Lag1], can be purely thought of as a numerical method for the PDE . In this form, one can consider PINNs as an example of *unsupervised learning* [@MLbook2], as no explicit data is necessary. On the other hand, the authors of [@KAR2] and subsequent papers, have added further flexibility to PINNs by also including training data by augmenting the loss function with the data loss and seeking neural networks with tuning parameters defined by, $$\label{eq:lf3} \theta^{\ast} = {\rm arg}\min\limits_{\theta \in \Theta} \left(J_d(\theta) + \lambda J(\theta) + \lambda_{reg} J_{reg}(\theta) \right),$$ with an additional hyperparameter $\lambda$ that balances the data loss with the residual. This paradigm has been very successful for inverse problems, where boundary and initial data may not be known [@KAR2; @KAR4]. However, for the rest of the paper, we focus on the forward problem and only consider PINNs, trained with algorithm \[alg:PINN\], that minimize the residual based loss function \[eq:lf2\], without needing additional data. Algorithm \[alg:PINN\] requires the residual , for the neural network ${{\bf u}}_{\theta}$, to be evaluated pointwise for every training step. Hence, one needs the neural network to be sufficiently regular. Depending on the order of derivatives in ${\EuScript{D}}$ of , this can be ensured by requiring sufficient smoothness for the activation function. Hence, the ReLU activation function, which is only Lipschitz continuous, might not be admissible in this framework. On the other hand, smooth activation functions such as logistic and $\tanh$ are always admissible. An abstract estimate on the generalization error ------------------------------------------------ In this section, we will estimate the error due to the PINN (generated by algorithm \[alg:PINN\]) in approximating the solution ${{\bf u}}$ of the PDE . The relevant concept of error is the so-called *generalization error* (see [@MLbook]): $$\label{eq:egen} {\EuScript{E}}_G= {\EuScript{E}}_{G} (\theta^{\ast};{\EuScript{S}}) := \|{{\bf u}}-{{\bf u}}^{\ast}\|_{X}$$ Clearly, the generalization error depends on the chosen training set ${\EuScript{S}}$ and the trained neural network with tuning parameters $\theta^{\ast}$, found by algorithm \[alg:PINN\]. However, we will suppress this dependence due to notational convenience. We remark that the generalization error is the error emanating from approximating the solution ${{\bf u}}$ of by the PINN ${{\bf u}}^{\ast}$, generated by the algorithm \[alg:PINN\]. Note that there is no computation of the generalization error during the training process. On the other hand, we monitor the so-called *training error* given by, $$\label{eq:train} {\EuScript{E}}_T:= \left(\sum\limits_{n=1}^N w_n |{\EuScript{R}}_{\theta^{\ast}}(y_n)|^p \right)^{\frac{1}{p}} = J(\theta^{\ast})^{\frac{1}{p}}.$$ Hence, the training error ${\EuScript{E}}_T$ can be readily computed, after training has been completed, from the loss function . The generalization error can be estimated in terms of the training error in the following theorem. 
\[thm:1\] Let ${{\bf u}}\in Z \subset X^{\ast}$ be the unique solution of the PDE and assume that the stability hypothesis holds. Let ${{\bf u}}^{\ast} \in Z \subset X^{\ast}$ be the PINN generated by algorithm \[alg:PINN\], based on the training set ${\EuScript{S}}$ of quadrature points corresponding to the quadrature rule . Further assume that the residual ${\EuScript{R}}_{\theta^{\ast}}$, defined in , be such that ${\EuScript{R}}_{\theta^{\ast}} \in Z^{\ast}$ and the quadrature error satisfies . Then the following estimate on the generalization error holds, $$\label{eq:egenb} {\EuScript{E}}_G {\leqslant}C_{pde}{\EuScript{E}}_T + C_{pde}C_{quad}^{\frac{1}{p}}N^{-\frac{\alpha}{p}},$$ with constants $C_{pde} = C_{pde}\left(\|{{\bf u}}\|_{Z},\|{{\bf u}}^{\ast}\|_{Z}\right)$ and $C_{quad} = C_{quad}\left(\left\||{\EuScript{R}}_{\theta^{\ast}}|^p\right\|\right)$ stemming from and , respectively. In the following, we denote ${\EuScript{R}}= {\EuScript{R}}_{\theta^{\ast}}$, the residual , corresponding to the trained neural network ${{\bf u}}^{\ast}$. As ${{\bf u}}$ solves the PDE and ${\EuScript{R}}$ is defined by , we easily see that, $$\label{eq:pf1} {\EuScript{R}}= {\EuScript{D}}({{\bf u}}^{\ast}) -{\EuScript{D}}({{\bf u}}).$$ Hence, we can directly apply the stability bound to yield, $$\begin{aligned} {\EuScript{E}}_G &= \|{{\bf u}}- {{\bf u}}^{\ast}\|_{X} \quad ({\rm by}~\eqref{eq:egen}), \\ &{\leqslant}C_{pde} \|{\EuScript{D}}({{\bf u}}^{\ast}) - {\EuScript{D}}({{\bf u}}) \|_{Y} \quad ({\rm by}~\eqref{eq:assm2}), \\ &{\leqslant}C_{pde} \|{\EuScript{R}}\|_{Y} \quad ({\rm by}~\eqref{eq:pf1}).\end{aligned}$$ By the fact that $Y = L^p({\mathbb{D}})$, the definition of the training error and the quadrature rule , we see that, $$\begin{aligned} \|{\EuScript{R}}\|^p_{Y} \approx \left(\sum\limits_{n=1}^N w_n |{\EuScript{R}}_{\theta^{\ast}}|^p \right) = {\EuScript{E}}_T^p.\end{aligned}$$ Hence, the training error is a quadrature for the residual and the resulting quadrature error, given by translates to, $$\begin{aligned} \|{\EuScript{R}}\|^p_{Y} {\leqslant}{\EuScript{E}}_T^p + C_{quad}N^{-\alpha}. \end{aligned}$$ Therefore, $$\begin{aligned} {\EuScript{E}}_G^p &{\leqslant}C^p_{pde} \|{\EuScript{R}}\|^p_{Y} \\ &{\leqslant}C^p_{pde} \left({\EuScript{E}}_T^p + C_{quad}N^{-\alpha} \right)\\ \Rightarrow \quad {\EuScript{E}}_G &{\leqslant}C_{pde}{\EuScript{E}}_T + C_{pde}C_{quad}^{\frac{1}{p}}N^{-\frac{\alpha}{p}},\end{aligned}$$ which is the desired estimate . We term a PINN, generated by the algorithm \[alg:PINN\] to be *well-trained* if the following condition hold, $$\label{eq:wtn} {\EuScript{E}}_T {\leqslant}C_{qaud}^{\frac{1}{p}}N^{-\frac{\alpha}{p}}.$$ Thus, a well-trained PINN is one for which the training errors are smaller than the so-called *generalization gap* (given by the rhs of ) which results from quadrature. The bound leads directly to the following *consistency or convergence* lemma for a well-trained network, \[lem:1\] Assume that for any $N$ (size of the training set ${\EuScript{S}}$), the PINNs generated by the algorithm \[alg:PINN\], denoted by ${{\bf u}}^{\ast}_{N}$ are well-trained i.e, they satisfy . 
Furthermore, assume that there exists constant ${\EuScript{C}}$, independent of $N$ such that the following holds, $$\label{eq:uconv} \max \left\{ \left\|{{\bf u}}^{\ast}_{N}\right\|_{Z},\left\|\left|{\EuScript{R}}\left({{\bf u}}^{\ast}_{N}\right)\right|^{p}\right\|_{Y}\right\} {\leqslant}{\EuScript{C}}< +\infty, \quad \forall N ,$$ with ${\EuScript{R}}$ being defined with respect to the trained PINN ${{\bf u}}^{\ast}_{N}$ by . Then we have, $$\label{eq:conv} \lim\limits_{N \rightarrow \infty} {{\bf u}}^{\ast}_{N} = {{\bf u}}, \quad {\rm in}~X$$ with ${{\bf u}}$ being the unique solution of the PDE . Thus, Lemma \[lem:1\] provides consistency for the approximation of the solution of the PDE by PINNs. It can be thought of as a variant of the celebrated *Lax equivalence theorem* of classical numerical analysis as stability (in the form of the conditional stability estimate ) and accuracy (in the form of small training errors and decay of quadrature errors through imply convergence of the PINN to the solution of the PDE . \[rem:2\] The estimate clearly indicates mechanisms that underpin possible efficient approximation of solutions of PDEs by PINNs as it breaks down the sources of error into the following three parts, - The PINN has to be well-trained i.e, the training error ${\EuScript{E}}_T$ has to be sufficiently small. Note that we have no a priori control on the training error but can compute it *a posteriori*. - The underlying solution and PINNs have to be sufficiently regular such that the residual can be approximated to high accuracy by the quadrature rule . The quadrature error has a constant $C_{quad}$ and a rate $\alpha$, both dependent on the underlying solution and quadrature rule used. The constant can be computed *a posteriori* and the (worst case) rate is known in advance. - Finally, the whole estimate relies on stability of solutions of the underlying PDE, measured by the stability estimate on the differential operator ${\EuScript{D}}$ and appearing as the constant $C_{pde}$ in . Thus, the estimate leverages stability of PDEs into efficient approximation by PINNs. In addition to the requirement that the approximating PINNs are well-trained , Lemma \[lem:1\] requires the uniform bound on the PINNs. Given the structure of the PINN , in principle this bound can be explicitly estimated in terms of the weights in by successive differentiation. Hence, one can penalize the resulting bound on weights, in a manner analogous to the regularization term in and ensure . However, given the complicated algebraic expressions that result in this process, even for a shallow (one hidden layer) neural network, we will not explore this further but rather verify these assumptions a posteriori. ### Case of random training points. {#sec:mc} The quadrature rule (and the quadrature error ) play an important role in the error estimate . In principle, as long as the underlying solution (and PINN) are sufficiently regular, one can use standard grid based (composite) Gauss quadrature rules and obtain low quadrature errors, resulting in a large $\alpha$ in the estimate . However, the constant $C_{quad}$, depends explicitly on the dimension $\bar{d}$ of ${\mathbb{D}}$ and suffers from the so-called *curse of dimensionality*. This can be alleviated for moderately high dimensions by using low-discrepancy sequences as the quadrature (and training) points. 
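As a practical aside, and independently of the analysis, the choice between low-discrepancy and random training points discussed here is straightforward to realize with standard scientific-computing tools. The sketch below assumes SciPy's quasi-Monte Carlo module; the function name and the box-shaped domain are ours and serve only to fix ideas.

```python
import numpy as np
from scipy.stats import qmc

def training_points(n, bounds, rule="sobol", seed=0):
    """Return n training/quadrature points in the box `bounds` (array of shape (d, 2))."""
    bounds = np.asarray(bounds, dtype=float)
    d = bounds.shape[0]
    if rule == "sobol":
        # low-discrepancy points for moderately high dimensions
        # (powers of two for n are preferable for Sobol sequences)
        pts = qmc.Sobol(d=d, scramble=True, seed=seed).random(n)
    else:
        # plain Monte Carlo points for very high dimensions
        pts = np.random.default_rng(seed).random((n, d))
    return bounds[:, 0] + pts * (bounds[:, 1] - bounds[:, 0])

# e.g. 2^12 space-time training points in [-1, 1] x [0, 1] for a one-dimensional problem:
# y_train = training_points(4096, [[-1.0, 1.0], [0.0, 1.0]], rule="sobol")
```

The convergence rates associated with these choices of points are discussed next.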
As long as the solution ${{\bf u}}$ (and PINN ${{\bf u}}^{\ast}$) are of bounded Hardy-Krause variation [@owen], we know that $\alpha =1$ and the constant $C_{quad} = (\log(N))^{\bar{d}}$ for the resulting Quasi-Monte Carlo quadrature error [@CAF1]. Note that the logarithmic correction can be dominated as long $N \approx e^{\bar{d}}$. However, if the underlying dimension is very high, using low-discrepancy sequences might not be efficient, see [@MR1] for a similar observation in the context of standard neural networks. Hence, for problems in very high dimensions, one has to use random quadrature points as the resulting Monte Carlo quadrature does not suffer from the curse of dimensionality. However, as we use the quadrature points as training points for the PINN in algorithm \[alg:PINN\], we cannot directly use error estimates for Monte Carlo quadrature (central limit theorem) for approximating the integral in $\|{\EuScript{R}}_{\theta^{\ast}}\|^p_Y$ as ${\EuScript{R}}_{\theta^{\ast}}(y_n)$ in the training error could be *correlated* (and are not independent) for different training points $y_n$. In general, advanced tools from statistical learning theory [@CS1; @MLbook] such as Rademacher complexity, VC dimension etc are used to circumvent these correlations and in practice, lead to large overestimates on the generalization error [@AR1]. Here, we follow the approach of [@LMM1] and references therein and adopt a pragmatic framework in estimating the generalization error in terms of the training error. For simplicity of notation, we set the domain ${\mathbb{D}}= [0,1]^{\bar{d}}$ and $Y=L^1({\mathbb{D}})$ and start by recalling that the points in the training set $\EuScript{S}$ are chosen randomly from domain ${\mathbb{D}}$, independently and identically distributed with the underlying Lebesgue measure $dy$. We will identify the training set ${\EuScript{S}}$ with the vector ${{\mathbf S}}\in {\mathbb{D}}^N$, defined by $${{\mathbf S}}= \left[y_1,y_2,\ldots y_N \right],$$ which is distributed according to the measure $d{{\mathbf S}}:= dy\otimes\cdot\cdot dy$ (a $N$-fold product Lebesgue measure). We also denote $(\Omega, \Sigma, {\mathbb{P}})$ as the underlying complete probability space from which random draws are made. Expectation with respect to this underlying probability measure ${\mathbb{P}}$ is denoted as ${\mathbb{E}}$ Note that as the training set is randomly chosen, the trained PINN ${{\bf u}}^{\ast}$, explicitly depends on the training set ${{\mathbf S}}$ i.e, ${{\bf u}}^{\ast} = {{\bf u}}^{\ast}({{\mathbf S}})$. 
Consequently, the generalization and training errors now explicitly depend on the training set i.e, $$\begin{aligned} {\EuScript{E}}_G({{\mathbf S}})= {\EuScript{E}}_{G} (\theta^{\ast};{{\mathbf S}}) := \|{{\bf u}}-{{\bf u}}^{\ast}({{\mathbf S}})\|_{X}, \quad {\EuScript{E}}_T({{\mathbf S}})= {\EuScript{E}}_T(\theta^{\ast};{{\mathbf S}}):= \frac{1}{N} \sum\limits_{n=1}^N |{\EuScript{R}}_{\theta^{\ast}}(y_n;{{\mathbf S}})| \end{aligned}$$ Next, we follow [@LMM1] and define the so-called *cumulative* (average over training sets) generalization and training errors as, $$\label{eq:ecgen} \bar{{\EuScript{E}}}_G = \int\limits_{{\mathbb{D}}^N} {\EuScript{E}}_{G} ({{\mathbf S}}) d{{\mathbf S}}= \int\limits_{{\mathbb{D}}^N} \|{{\bf u}}- {{\bf u}}^{\ast}({{\mathbf S}}) \|_X d{{\mathbf S}},$$ and $$\label{eq:ectrain} \bar{{\EuScript{E}}}_T = \int\limits_{{\mathbb{D}}^N} {\EuScript{E}}_{T} ({{\mathbf S}}) d{{\mathbf S}}.$$ Note that the cumulative errors $\bar{{\EuScript{E}}}_{G,T}$ are deterministic quantities. In [@LMM1] and references therein, one followed standard practice in machine learning and also computed the so-called *validation set*, $$\label{eq:val} {\EuScript{V}}= \{z_j \in {\mathbb{D}}, ~ 1{\leqslant}j {\leqslant}N, \quad z_j ~i.i.d~wrt~dy\}.$$ The validation set is chosen before the start of the training process and is independent of the training sets. We can define the *cumulative validation error* as, $$\label{eq:ecval} \bar{{\EuScript{E}}}_V= \frac{1}{N}\int_{{\mathbb{D}}^N}\sum\limits_{j=1}^N |{\EuScript{R}}_{\theta^{\ast}}(z_n;{{\mathbf S}}) | d{{\mathbf S}}.$$ We observe that as the set ${\EuScript{V}}$ is drawn randomly from ${\mathbb{D}}$ with underlying distribution $dy$, the cumulative validation error is a random quantity, $\bar{{\EuScript{E}}}_V = \bar{{\EuScript{E}}}_V(\omega)$ with $\omega \in \Omega$. We suppress this $\omega$-dependence for notational convenience. Finally, we introduce the *validation gap*: $$\label{eq:vgap} {\EuScript{E}}_{TV}:= {\mathbb{E}}\left(|\bar{{\EuScript{E}}}_{T} - \bar{{\EuScript{E}}}_V|\right):= \int\limits_{\Omega}|\bar{{\EuScript{E}}}_{T} - \bar{{\EuScript{E}}}_V(\omega)|d{\mathbb{P}}(\omega)$$ Then, we have the following lemma on estimating the cumulative generalization error, \[lem:11\] Let ${{\bf u}}\in X^{\ast} \subset X$ be the unique solution of the PDE . Let ${{\bf u}}^{\ast} = {{\bf u}}_{\theta^{\ast}}({{\mathbf S}})$ be a PINN generated by algorithm \[alg:PINN\], with $N$ randomly chosen training points, identified by ${{\mathbf S}}\in {\mathbb{D}}^N$. Assume that there exists a constant $C_{pde}$ such that the stability bound is uniformly satisfied for PINNs, generated by the algorithm \[alg:PINN\] for every training set ${\EuScript{S}}\subset {\mathbb{D}}$, then the cumulative generalization error is bounded by, $$\label{eq:ecgenb} \bar{{\EuScript{E}}}_G {\leqslant}C_{pde}\left( \bar{{\EuScript{E}}}_T + {\EuScript{E}}_{TV} + \frac{std(|{\EuScript{R}}_{\theta^{\ast}}|)}{\sqrt{N}}\right),$$ with cumulative training error $\bar{{\EuScript{E}}}_T$ , validation gap ${\EuScript{E}}_{TV}$ and $$\label{eq:std} std(|{\EuScript{R}}_{\theta^{\ast}}|):= \sqrt{{\mathbb{E}}\left(\int_{{\mathbb{D}}^N}|{\EuScript{R}}(z(\omega),{{\mathbf S}})| d{{\mathbf S}}- \int_{{\mathbb{D}}}\int_{{\mathbb{D}}^N}|{\EuScript{R}}(z,{{\mathbf S}})| d{{\mathbf S}}dz\right)^2}$$ We use the shorthand notation ${\EuScript{R}}({{\mathbf S}}) = {\EuScript{R}}_{\theta^{\ast}}({{\mathbf S}})$ for the residual . 
By definition of the cumulative generalization error , we have $$\begin{aligned} \bar{{\EuScript{E}}}_G &= {\mathbb{E}}\left(\|{{\bf u}}- {{\bf u}}^{\ast}\|_{X}\right) \\ &{\leqslant}C_{pde}{\mathbb{E}}\left(\|{\EuScript{R}}({{\mathbf S}})\|_{1}\right), \quad {\rm by}~\eqref{eq:assm2}, \\ &{\leqslant}C_{pde} {\mathbb{E}}\left(|{\mathbb{E}}\left(\|{\EuScript{R}}({{\mathbf S}})\|_{1}\right)-\bar{{\EuScript{E}}}_V + \bar{{\EuScript{E}}}_{V} -\bar{{\EuScript{E}}}_T+\bar{{\EuScript{E}}}_T| \right) \\ &{\leqslant}C_{pde}\left(\bar{{\EuScript{E}}}_T + {\EuScript{E}}_{TV} + \sqrt{{\mathbb{E}}\left(|{\mathbb{E}}(\|{\EuScript{R}}({{\mathbf S}})\|_{1}) - \bar{{\EuScript{E}}}_{V}|^2\right)}\right)\end{aligned}$$ As the residual evaluated on the validation points is independent, the term inside the square root can be easily estimated in terms of the Monte Carlo quadrature to obtain the desired estimate . We remark that the estimate on the cumulative generalization error requires the computation of the so-called validation set. This is standard practice in machine learning, albeit with a smaller validation set than the training set. An alternative approach to obtain bounds on the generalization error for random training points would be to use space-filling arguments for random points [@CALD] and some Hölder regularity of the underlying maps. The resulting bound is directly in the form and does not require the computation of a validation gap. However, this bound suffers from the curse of dimensionality and we do not investigate it in detail here. The error estimates (and ) are for the abstract PDE formulation and are meant to illustrate possible mechanisms for low approximation errors with PINNs. A lot of information is implicit in them, for instance the exact form of the function spaces $X,X^{\ast}, Y, Y^{\ast}$ and (initial) boundary conditions. These will be made explicit in each of the subsequent concrete examples. Semi-linear Parabolic equations {#sec:3} =============================== The underlying PDEs ------------------- Let $D \subset {\mathbb{R}}^d$ be a domain i.e, an open connected bounded set with a $C^1$ boundary ${\partial D}$. We consider the following model semi-linear parabolic equation, $$\label{eq:heat} \begin{aligned} u_t &= \Delta u + f(u), \quad \forall x\in D~ t \in (0,T), \\ u(x,0) &= \bar{u}(x), \quad \forall x \in D, \\ u(x,t) &= 0, \quad \forall x\in {\partial D}, ~ t \in (0,T). \end{aligned}$$ Here, $u_0 \in H^{\bar{s}}(D;{\mathbb{R}})$ is the initial data, $u \in H^s(((0,T)\times D);{\mathbb{R}})$ is the solution and $f:{\mathbb{R}}\times {\mathbb{R}}$ is the non-linear source (reaction) term. We assume that the non-linearity is globally Lipschitz i.e, there exists a constant $C_f$ (independent of $v,w$) such that $$\label{eq:assf} |f(v) - f(w)| {\leqslant}C_f|v-w|, \quad v,w \in {\mathbb{R}}.$$ In particular, the homogeneous linear heat equation with $f(u) \equiv 0$ and the linear source term $f(u) = c_f u$ are examples of . Semilinear heat equations with globally Lipschitz nonlinearities arise in several models in biology and finance [@Jent1]. The existence, uniqueness and regularity of the semi-linear parabolic equations with Lipschitz non-linearities such as can be found in classical textbooks such as [@Frdbook]. For our purposes here, we will choose $\bar{s}$ sufficiently large such that the initial data $\bar{u} \in C^k(D)$ and we obtain $u \in C^k([0,T] \times D)$, with $k{\geqslant}2$ as the classical solution of the semi-linear parabolic equation . 
PINNs {#pinns} ----- In order to complete the PINNs algorithm \[alg:PINN\], we need to specify the training set ${\EuScript{S}}$ and define the appropriate residual, which we do below. ### Training set {#sec:tset} Let $D_T = D \times (0,T)$ be the space-time domain. As in section \[sec:2\], we will choose the training set ${\EuScript{S}}\subset [0,T] \times \bar{D}$, based on suitable quadrature points. We have to divide the training set into the following three parts, - Interior training points ${\EuScript{S}}_{int}=\{y_n\}$ for $1 {\leqslant}n {\leqslant}N_{int}$, with each $y_n = (x,t)_n \in D_T$. These points can be the quadrature points, corresponding to a suitable space-time grid-based composite Gauss quadrature rule as long $d {\leqslant}3$ or correspond to low-discrepancy sequences for moderately high dimensions or randomly chosen points in very high dimensions. See figure \[fig:2\] for an example of randomly chosen training points for $d=1$. - Spatial boundary training points ${\EuScript{S}}_{sb} = \{z_n\}$ for $1 {\leqslant}n {\leqslant}N_{sb}$ with each $z_n = (x,t)_n$ and each $x_n \in {\partial D}$. Again the points can be chosen from a grid-based quadrature rule on the boundary, as low-discrepancy sequences or randomly. - Temporal boundary training points ${\EuScript{S}}_{tb} = \{x_n\}$, with $1 {\leqslant}n {\leqslant}N_{tb}$ and each $x_n \in D$, chosen either as grid points, low-discrepancy sequences or randomly chosen in $D$. The full training set is ${\EuScript{S}}= {\EuScript{S}}_{int} \cup {\EuScript{S}}_{sb} \cup {\EuScript{S}}_{tb}$. An example for the full training set is shown in figure \[fig:2\]. ### Residuals In the algorithm \[alg:PINN\] for generating PINNs, we need to define appropriate residuals. For the neural network $u_{\theta} \in C^k([0,T]\times \bar{D})$, with continuous extensions of the derivatives to the boundaries, defined by , with a smooth activation function such as $\sigma = \tanh$ and $\theta \in \Theta$ as the set of tuning parameters, we define the following residual, - Interior Residual given by, $$\label{eq:hres1} {\EuScript{R}}_{int,\theta}(x,t):= \partial_t u_{\theta}(x,t) - \Delta u_{\theta}(x,t) - f(u_{\theta}(x,t)).$$ Here $\Delta = \Delta_x$ is the spatial Laplacian. Note that the residual is well defined and ${\EuScript{R}}_{int,\theta} \in C^{k-2}([0,T]\times \bar{D})$ for every $\theta \in \Theta$. - Spatial boundary Residual given by, $$\label{eq:hres2} {\EuScript{R}}_{sb,\theta}(x,t):= u_{\theta}(x,t), \quad \forall x \in {\partial D}, ~ t \in (0,T].$$ Given the fact that the neural network is smooth, this residual is well defined. - Temporal boundary Residual given by, $$\label{eq:hres3} {\EuScript{R}}_{tb,\theta}(x):= u_{\theta}(x,0) - \bar{u}(x), \quad \forall x \in D.$$ Again this quantity is well-defined and ${\EuScript{R}}_{tb,\theta} \in C^k(D)$ as both the initial data and the neural network are smooth. ### Loss function As in section \[sec:2\], we need a loss function to train the PINN. 
To this end, we set the following loss function, $$\label{eq:hlf} J(\theta):= \sum\limits_{n=1}^{N_{tb}} w^{tb}_n|{\EuScript{R}}_{tb,\theta}(x_n)|^2 + \sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,\theta}(x_n,t_n)|^2 + \lambda \sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{int,\theta}(x_n,t_n)|^2 .$$ Here the residuals are defined by , , , $w^{tb}_n$ are the $N_{tb}$ quadrature weights corresponding to the temporal boundary training points ${\EuScript{S}}_{tb}$, $w^{sb}_n$ are the $N_{sb}$ quadrature weights corresponding to the spatial boundary training points ${\EuScript{S}}_{sb}$ and $w^{int}_n$ are the $N_{int}$ quadrature weights corresponding to the interior training points ${\EuScript{S}}_{int}$. Furthermore, $\lambda$ is a hyperparameter for balancing the residuals, on account of the PDE and the initial and boundary data, respectively. Estimate on the generalization error. ------------------------------------- The algorithm \[alg:PINN\] for training a PINN to approximate the semilinear parabolic equation is now completely specified and can be run to generate the required PINN, which we denote as $u^{\ast} = u_{\theta^{\ast}}$, where $\theta^{\ast} \in \Theta$ is the (approximate) minimizer of the optimization problem, corresponding to the loss function , . We are interested in estimating the generalization error for this PINN. In this case, $X = L^2(D \times (0,T))$ and the generalization error is concretely defined as, $$\label{eq:hegen} {\EuScript{E}}_{G}:= \left(\int\limits_0^T \int\limits_D |u(x,t) - u^{\ast}(x,t)|^2 dx dt \right)^{\frac{1}{2}}.$$ As for the abstract PDE , we are going to estimate the generalization error in terms of the *training error* that we define as, $$\label{eq:hetrain} {\EuScript{E}}^2_{T}:= \underbrace{\sum\limits_{n=1}^{N_{tb}} w^{tb}_n|{\EuScript{R}}_{tb,\theta^{\ast}}(x_n)|^2}_{({\EuScript{E}}_T^{tb})^2} + \underbrace{\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,\theta^{\ast}}(x_n,t_n)|^2}_{({\EuScript{E}}_T^{sb})^2} + \lambda\underbrace{\sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{int,\theta^{\ast}}(x_n,t_n)|^2}_{({\EuScript{E}}_T^{int})^2}.$$ Note that the training error can be readily computed *a posteriori* from the loss function ,. We also need the following assumptions on the quadrature error, analogous to . 
For any function $g \in C^k(D)$, the quadrature rule corresponding to quadrature weights $w^{tb}_n$ at points $x_n \in {\EuScript{S}}_{tb}$, with $1 {\leqslant}n {\leqslant}N_{tb}$, satisfies $$\label{eq:hquad1} \left| \int\limits_{D} g(x) dx - \sum\limits_{n=1}^{N_{tb}} w^{tb}_n g(x_n)\right| {\leqslant}C^{tb}_{quad}(\|g\|_{C^k}) N_{tb}^{-\alpha_{tb}}.$$ For any function $g \in C^k({\partial D}\times [0,T])$, the quadrature rule corresponding to quadrature weights $w^{sb}_n$ at points $(x_n,t_n) \in {\EuScript{S}}_{sb}$, with $1 {\leqslant}n {\leqslant}N_{sb}$, satisfies $$\label{eq:hquad2} \left| \int\limits_0^T \int\limits_{{\partial D}} g(x,t) ds(x) dt - \sum\limits_{n=1}^{N_{sb}} w^{sb}_n g(x_n,t_n)\right| {\leqslant}C^{sb}_{quad}(\|g\|_{C^k}) N_{sb}^{-\alpha_{sb}}.$$ Finally, for any function $g \in C^\ell(D \times [0,T])$, the quadrature rule corresponding to quadrature weights $w^{int}_n$ at points $(x_n,t_n) \in {\EuScript{S}}_{int}$, with $1 {\leqslant}n {\leqslant}N_{int}$, satisfies $$\label{eq:hquad3} \left| \int\limits_0^T \int\limits_{D} g(x,t) dx dt - \sum\limits_{n=1}^{N_{int}} w^{int}_n g(x_n,t_n)\right| {\leqslant}C^{int}_{quad}(\|g\|_{C^\ell}) N_{int}^{-\alpha_{int}}.$$ In the above, $\alpha_{int},\alpha_{sb},\alpha_{tb} > 0$ and in principle, different order quadrature rules can be used. We estimate the generalization error for the PINN in the following, \[thm:heat\] Let $u \in C^k(\bar{D} \times [0,T])$ be the unique classical solution of the semilinear parabolic euqation with the source $f$ satisfying . Let $u^{\ast} = u_{\theta^{\ast}}$ be a PINN generated by algorithm \[alg:PINN\], corresponding to loss function , . Then the generalization error can be estimated as, $$\label{eq:hegenb} {\EuScript{E}}_G {\leqslant}C_1 \left({\EuScript{E}}_T^{tb}+{\EuScript{E}}_T^{int}+C_2({\EuScript{E}}_T^{sb})^{\frac{1}{2}} + (C_{quad}^{tb})^{\frac{1}{2}}N_{tb}^{-\frac{\alpha_{tb}}{2}} + (C_{quad}^{int})^{\frac{1}{2}}N_{int}^{-\frac{\alpha_{int}}{2}} + C_2 (C_{quad}^{sb})^{\frac{1}{4}}N_{sb}^{-\frac{\alpha_{sb}}{4}} \right),$$ with constants given by, $$\label{eq:hct} \begin{aligned} C_1 &= \sqrt{T + (1+2C_f)T^2e^{(1+2C_f)T}}, \quad C_2 = \sqrt{C_{{\partial D}}(u,u^{\ast})T^{\frac{1}{2}}}, \\ C_{{\partial D}} &= |{\partial D}|^{\frac{1}{2}}\left(\|u\|_{C^1([0,T] \times {\partial D})} + \|u^{\ast}\|_{C^1([0,T] \times {\partial D})}\right), \\ \end{aligned}$$ and $C_{quad}^{tb} = C_{quad}^{tb}(\|{\EuScript{R}}_{tb,\theta^{\ast}}\|_{C^k})$, $C_{quad}^{sb} = C_{quad}^{tb}(\|{\EuScript{R}}_{sb,\theta^{\ast}}\|_{C^k})$ and $C_{quad}^{int} = C_{quad}^{int}(\|{\EuScript{R}}_{int,\theta^{\ast}}\|_{C^{k-2}})$ are the constants defined by the quadrature error , , , respectively. By the definitions of the residuals , , and the underlying PDE , we can readily verify that the error $\hat{u}: u^{\ast} - u$ satisfies the following (forced) parabolic equation, $$\label{eq:herr} \begin{aligned} \hat{u}_t &= \Delta \hat{u} + f(u^{\ast}) - f(u) + {\EuScript{R}}_{int}, \quad \forall x\in D~ t \in (0,T), \\ \hat{u}(x,0) &= {\EuScript{R}}_{tb}(x), \quad \forall x \in D, \\ u(x,t) &= {\EuScript{R}}_{sb}(x,t), \quad \forall x\in {\partial D}, ~ t \in (0,T). 
\end{aligned}$$ Here, we have denoted ${\EuScript{R}}_{int} = {\EuScript{R}}_{int,\theta^{\ast}}$ for notational convenience and analogously for the residuals ${\EuScript{R}}_{tb},{\EuScript{R}}_{sb}.$ Multiplying both sides of the PDE with $\hat{u}$, integrating over the domain and integrating by parts, denoting ${\mathbf{n}}$ as the unit outward normal, yields, $$\begin{aligned} \frac{1}{2} \frac{d}{dt}\int_{D}|\hat{u}(x,t)|^2 dx &= - \int_{D} |\nabla \hat{u}|^2 dx + \int_{{\partial D}} {\EuScript{R}}_{sb}(x,t) (\nabla \hat{u}\cdot {\mathbf{n}}) ds(x) + \int_{D} \hat{u} (f(u^{\ast}) - f(u)) dx + \int_{D} {\EuScript{R}}_{int} \hat{u} dx. \\ &{\leqslant}\int_{D} |\hat{u}||f(u^{\ast}) - f(u)| dx + \frac{1}{2} \int_{D} \hat{u}(x,t)^2 dx + \frac{1}{2}\int_{D} |{\EuScript{R}}_{int}|^2 dx \\ &+ \underbrace{|{\partial D}|^{\frac{1}{2}}\left(\|u\|_{C^1([0,T] \times {\partial D})} + \|u^{\ast}\|_{C^1([0,T] \times {\partial D})}\right)}_{C_{{\partial D}}(u,u^{\ast})}\left(\int_{{\partial D}} |{\EuScript{R}}_{sb}(x,t)|^2 ds(x) \right)^{\frac{1}{2}} \\ &{\leqslant}(C_f+\frac{1}{2}) \int_{D} |\hat{u}(x,t)|^2 dx + \frac{1}{2}\int_{D} |{\EuScript{R}}_{int}|^2 dx + C_{{\partial D}}(u,u^{\ast})\left(\int_{{\partial D}} |{\EuScript{R}}_{sb}(x,t)|^2 ds(x) \right)^{\frac{1}{2}}~({\rm by}~\eqref{eq:assf}). \end{aligned}$$ Integrating the above inequality over $[0,\bar{T}]$ for any $\bar{T} {\leqslant}T$ and the definition together with Cauchy-Schwarz inequality, we obtain, $$\begin{aligned} \int_{D}|\hat{u}(x,\bar{T})|^2 dx &{\leqslant}\int_{D} |{\EuScript{R}}_{tb}(x)|^2 dx + (1+C_f)\int_0^{\bar{T}} \int_{D} |\hat{u}(x,t)|^2 dx dt + \int_0^T\int_{D} |{\EuScript{R}}_{int}|^2 dx dt \\ &+ C_{{\partial D}}(u,u^{\ast})T^{\frac{1}{2}}\left(\int_0^T\int_{{\partial D}} |{\EuScript{R}}_{sb}(x,t)|^2 ds(x) dt \right)^{\frac{1}{2}}. \end{aligned}$$ Applying the integral form of the Grönwall’s inequality to the above, we obtain, $$\begin{aligned} &\int_{D}|\hat{u}(x,\bar{T})|^2 dx \\ &{\leqslant}\left(1 + (1+2C_f)Te^{(1+2C_f)T}\right)\left(\int_{D} |{\EuScript{R}}_{tb}(x)|^2 dx +\int_0^T\int_{D} |{\EuScript{R}}_{int}|^2 dx dt + C_{{\partial D}}(u,u^{\ast})T^{\frac{1}{2}}\left(\int_0^T\int_{{\partial D}} |{\EuScript{R}}_{sb}(x,t)|^2 ds(x) dt \right)^{\frac{1}{2}} \right).\end{aligned}$$ Integrating over $\bar{T} \in [0,T]$ yields, $$\label{eq:hpf1} \begin{aligned} &{\EuScript{E}}^2_{G} = \int_0^T \int_{D}|\hat{u}(x,\bar{T})|^2 dx dt \\ &{\leqslant}\left(T + (1+2C_f)T^2e^{(1+2C_f)T}\right)\left(\int_{D} |{\EuScript{R}}_{tb}(x)|^2 dx +\int_0^T\int_{D} |{\EuScript{R}}_{int}|^2 dx dt + C_{{\partial D}}(u,u^{\ast})T^{\frac{1}{2}}\left(\int_0^T\int_{{\partial D}} |{\EuScript{R}}_{sb}(x,t)|^2 ds(x) dt \right)^{\frac{1}{2}} \right). \end{aligned}$$ By the definitions of different components of the training error and applying the estimates , , on the quadrature error yields the desired inequality . The estimate bounds the generalization error in terms of each component of the training error and the quadrature errors. Clearly, each component of the training error can be computed from the loss function , , once the training has been completed. As long as the PINN is well-trained i.e, each component of the training error is small, the bound implies that the generalization error will be small for large enough number of training points. The estimate is not by any means sharp as triangle inequalities and the Grönwall’s inequality are used. Nevertheless, it provides interesting information. 
For instance, the error due to the boundary residual has a bigger weight in , relative to the interior and initial residuals. This is consistent with the observations of [@Lag1] and can also be seen in the recent papers such as [@DAR1] and suggests that the loss function could be modified such that the boundary residual is penalized more. In addition to the training errors, which could depend on the underlying PDE solution, the estimate shows explicit dependence on the underlying solution through the constant $C_{{\partial D}}$, which is based only on the value of the underlying solution on the boundary. Similarly, the dependence on dimension is only seen through the quadrature error. As in subsection \[sec:mc\], one has to modify the estimate for the generalization error , when random training points are used. Defining cumulative generalization error $\bar{{\EuScript{E}}}_G$, analogously to , by integrating over all training sets ${\EuScript{S}}$, identified with the vector ${{\mathbf S}}$ and the analogous concepts of cumulative training error $\bar{{\EuScript{E}}}_T$ and validation error $\bar{{\EuScript{E}}}_V$ , we can readily combine the arguments of Lemma \[lem:11\] and Theorem \[thm:heat\] to obtain the following estimate on the cumulative generalization error, $$\label{eq:hecgenb} \begin{aligned} \bar{{\EuScript{E}}}_G^2 &{\leqslant}C_1^2\left(\left(\bar{{\EuScript{E}}}^{tb}_T \right)^2 +\left({\EuScript{E}}^{tb}_{TV}\right)^2 +\left(\bar{{\EuScript{E}}}^{int}_T \right)^2 +\left({\EuScript{E}}^{tb}_{TV}\right)^2 + C_2^2\left(\bar{{\EuScript{E}}}_{T}^{sb} + \bar{{\EuScript{E}}}_{TV}^{sb} \right) \right) \\ &+ C_1^2\left(\frac{std\left({\EuScript{R}}_{tb}^2\right)}{N_{tb}^{\frac{1}{2}}}+ \frac{std\left({\EuScript{R}}_{int}^2\right)}{N_{int}^{\frac{1}{2}}} +C_2^2\frac{\sqrt{std\left({\EuScript{R}}_{sb}^2\right)}}{N_{sb}^{\frac{1}{4}}} \right), \end{aligned}$$ with constants $C_{1,2}$ defined in , standard deviation of the residuals, defined analogously to and validation gaps defined by $$\left({\EuScript{E}}^{\ell}_{TV}\right)^2 = \left |\left(\bar{{\EuScript{E}}}^{\ell}_T\right)^2 - \left(\bar{{\EuScript{E}}}^{\ell}_V\right)^2\right|,\quad \ell=sb,tb,int.$$ Numerical experiments. ---------------------- In this section, we present numerical experiments for the approximation of solutions of the parabolic equation by PINNs, generated with algorithm \[alg:PINN\]. We focus on the case of the linear heat equation here, i.e, by setting $f \equiv 0$ in as one has explicit formulas for the underlying solution and we use these formulas to explicitly compute the generalization error. ### One-dimensional Heat Equation. For the first numerical experiment, we consider the heat equation in one space dimension i.e, $d=1$ in with $f\equiv 0$, domain $D=[-1,1]$, $T=1$, and initial data $\bar{u}(x) = -\sin(\pi x)$. Then, the exact solution of the heat equation is given by, $$\label{eq:hex1} u(x,t) = -\sin(\pi x) e^{-\pi^2 t}.$$ Clearly both the initial data and the exact solution are smooth and Theorem \[thm:heat\] holds. In particular, we can use any grid based quadrature rule to generate training points in algorithm \[alg:PINN\]. However, to keep the presentation general and allow for very high-dimensional problems later, we simply choose random points for all three components of the training set ${\EuScript{S}}_{int},{\EuScript{S}}_{sb},{\EuScript{S}}_{tb}$. Each set of points is chosen randomly, independently and identically distributed with the underlying uniform distribution. 
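A minimal sketch of how such a random training set and the corresponding exact solution can be assembled for this example is given below. It assumes NumPy; the array names are ours and the sketch only serves to fix ideas, it does not reproduce the exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(42)
T, N_int, N_sb, N_tb = 1.0, 8192, 64, 64

# interior points (x, t) in (-1, 1) x (0, T), playing the role of S_int
x_int = rng.uniform(-1.0, 1.0, (N_int, 1))
t_int = rng.uniform(0.0, T, (N_int, 1))

# spatial boundary points: x = -1 or x = +1 with t in (0, T), playing the role of S_sb
x_sb = rng.choice([-1.0, 1.0], (N_sb, 1))
t_sb = rng.uniform(0.0, T, (N_sb, 1))

# temporal boundary points: t = 0 with initial data u_bar(x) = -sin(pi x), playing the role of S_tb
x_tb = rng.uniform(-1.0, 1.0, (N_tb, 1))
u_bar = -np.sin(np.pi * x_tb)

def u_exact(x, t):
    """Exact solution of this one-dimensional heat equation."""
    return -np.sin(np.pi * x) * np.exp(-np.pi ** 2 * t)
```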
An illustration of these points is provided in figure \[fig:2\], where the random points in ${\EuScript{S}}_{int}$ are shown as blue dots, whereas the points in ${\EuScript{S}}_{sb}$ and ${\EuScript{S}}_{tb}$ are shown as black crosses. ![An illustration of the training set ${\EuScript{S}}$ for the one-dimensional heat equation with randomly chosen training points. Points in ${\EuScript{S}}_{int}$ are depicted with blue dots and those in ${\EuScript{S}}_{tb} \cup {\EuScript{S}}_{sb}$ are depicted with black crosses.[]{data-label="fig:2"}](Images/points.png){width="70.00000%"} Our first aim in this experiment is to illustrate the estimate on the generalization error. To this end, we run algorithm \[alg:PINN\] with this random training set and with the following hyperparameters: we consider a fully connected neural network architecture , with the $\tanh$ activation function, with $4$ hidden layers and $20$ neurons in each layer, resulting in neural networks with $1761$ tuning parameters. Moreover, we use the loss function , , with $\lambda=1$ and with $q=2$, i.e., $L^2$-regularization, with regularization parameter $\lambda_{reg}=10^{-6}$. Finally, the optimizer is the second-order LBFGS method. This choice of hyperparameters is consistent with the ensemble training, presented in the next section. For this hyperparameter configuration, we vary the number of training points as $N_{int}=$ \[1000, 2000, 4000, 8000, 16000\], $N_{tb}=N_{sb}=$\[8, 16, 32, 64, 256\] concurrently, run the algorithm \[alg:PINN\] to obtain the corresponding trained neural network and evaluate the resulting errors, i.e., the cumulative training errors , and the upper bound in . The cumulative generalization error is computed by evaluating the error of the neural network with respect to the exact solution on a *randomly chosen test set* of $10^5$ points. We compute the averages and standard deviations by taking $K=30$ different random training sets. The results for this procedure are shown in figure \[fig:hconv1\]. We see from this figure that the cumulative generalization error is very low to begin with and decays with the number of boundary training points $N_{tb}=N_{sb}$. The effect of the number of interior training points seems to be minimal in this case. Similarly, the computable upper bound also decays with respect to increasing the number of boundary training points. However, this upper bound does appear to be a significant overestimate, as it is almost three orders of magnitude greater than the actual generalization error. This is not surprising, as we had used non-sharp estimates such as the triangle inequality and Grönwall’s inequality rather indiscriminately while deriving . Moreover, obtaining sharp bounds on generalization errors is a notoriously hard problem in machine learning [@NEYS1; @AR1], with overestimates of tens of orders of magnitude. More importantly, both the computed generalization error and the upper bound follow the same decay in the number of training samples. Surprisingly, the training errors are slightly larger than the computed generalization errors for this example. Note that this observation is still consistent with the bound . Given the fact that the training error is defined in terms of residuals and the generalization error is the error in approximating the solution of the underlying PDE by the PINN, there is no reason, a priori, to expect that the generalization error should be greater than the training error.
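As a hedged illustration of the setup just described, the sketch below assembles the network, the loss with $\lambda=1$ and $L^2$-regularization, and an LBFGS optimization step, evaluating the residuals by automatic differentiation and reusing the training sets from the previous sketch; with random points, the quadrature weights reduce to $1/N$, i.e. sample means. The layer bookkeeping, the iteration budget and all names are our own assumptions rather than the authors' implementation.

```python
import math
import torch
import torch.nn as nn

# fully connected tanh network u_theta(x, t) with 4 hidden layers of 20 neurons
layers = [nn.Linear(2, 20), nn.Tanh()]
for _ in range(3):
    layers += [nn.Linear(20, 20), nn.Tanh()]
layers += [nn.Linear(20, 1)]
net = nn.Sequential(*layers)

def interior_residual(xt):
    """R_int = u_t - u_xx for the linear heat equation, via automatic differentiation."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - u_xx

def loss_fn(s_int, s_sb, s_tb, lam=1.0, lam_reg=1e-6):
    r_int = interior_residual(s_int)
    r_sb = net(s_sb)                                       # zero Dirichlet data at x = -1, 1
    r_tb = net(s_tb) + torch.sin(math.pi * s_tb[:, 0:1])   # initial data u(x, 0) = -sin(pi x)
    reg = sum(p.pow(2).sum() for p in net.parameters())    # q = 2 regularization
    return (r_tb.pow(2).mean() + r_sb.pow(2).mean()
            + lam * r_int.pow(2).mean() + lam_reg * reg)

optimizer = torch.optim.LBFGS(net.parameters(), max_iter=2000, line_search_fn="strong_wolfe")

def closure():
    optimizer.zero_grad()
    loss = loss_fn(s_int, s_sb, s_tb)
    loss.backward()
    return loss

optimizer.step(closure)
```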
![Generalization error, training error and theoretical bound VS number of training samples $N_u = N_{sb} + N_{tb}$. Each color gradation corresponds to different values of the number of interior training points $N_{int}$ (from the smallest to the largest).[]{data-label="fig:hconv1"}](Images/Conv.png){width="70.00000%"} ### Ensemble training A PINN involves several hyperparameters, some of which are shown in Table \[tab:1\]. An user is always confronted with the question of which parameter to choose. The theory, presented in this paper and in the literature, offers very little guidance about the choice of hyperparameters. Instead, it is standard practice in machine learning to do a systematic hyperparameter search. To this end, we follow the *ensemble training* procedure of [@LMR1] with a randomly chosen training set ($N_{int} = 1024, N_{sb}=N_{tb}=64$) and compute the marginal distribution of the generalization error with respect to different choices of the hyperparameters (see Table \[tab:1\]). The ensemble training results in a total number of 360 configurations. For each of them, the model is retrained five times with different starting values of the trainable weights in the optimization algorithm and the one resulting in the smallest value of the training loss is selected. We plot the corresponding histograms, visualizing the marginal generalization error distributions, in figure \[fig:hhist\]. [.3]{} ![Marginal distributions of the generalization error to different network hyperparameters[]{data-label="fig:hhist"}]({Images/Sensitivity_neurons_2} "fig:"){width="1\linewidth"} [.3]{} ![Marginal distributions of the generalization error to different network hyperparameters[]{data-label="fig:hhist"}]({Images/Sensitivity_hidden_layers_2} "fig:"){width="1\linewidth"} [.3]{} ![Marginal distributions of the generalization error to different network hyperparameters[]{data-label="fig:hhist"}]({Images/Sensitivity_kernel_regularizer_2} "fig:"){width="1\linewidth"} [.3]{} ![Marginal distributions of the generalization error to different network hyperparameters[]{data-label="fig:hhist"}]({Images/Sensitivity_regularization_parameter_2} "fig:"){width="1\linewidth"} [.3]{} ![Marginal distributions of the generalization error to different network hyperparameters[]{data-label="fig:hhist"}]({Images/Sensitivity_residual_parameter_2} "fig:"){width="1\linewidth"} As seen from figure \[fig:hhist\], there is a large variation in the spread of the generalization error, often two to three orders of magnitude, indicating sensitivity to hyperparameters. However, even the worst case errors are fairly low for this example. Comparing different hyperparameters, we see that there is not much sensitivity to the network architecture (number of hidden layers and number of neurons per layer) and a slight regularization or no regularization in the loss function is preferable to large regularizations. The most sensitive parameter is $\lambda$ in where $\lambda = 1$ or $0.1$ are significantly better than larger values of $\lambda$. This can be explained in terms of the bounds , , where the boundary residual ${\EuScript{R}}_{sb}$ has a larger weight in error. A smaller value of $\lambda$ enforces this component of error, and hence the overall error, to be small, and we see exactly this behavior in the results. Finally, in figure \[fig:tg\], we plot the total training error on a logarithmic scale (x-axis) against the generalization error (in log scale) (y-axis) for all the hyperparameter configurations in the ensemble training. 
This plot clearly shows that the two errors are highly correlated and validates the fundamental point of the estimates and that if the PINNs are trained well, they generalize very well. In other words, low training errors imply low generalization errors. ### Heat equation in several space dimensions For this experiment, we consider the linear heat equation in domain $[0,1]^n$ and for the time interval $[0,1]$, for different space dimensions $n$. We consider the initial data $\bar{u}(x) = \frac{\|x\|^2}{n}$. In this case, the explicit solution of the linear heat equation is given by $$\label{eq:hex2} u(x,t)=\frac{\|x\|^2}{n} + 2t.$$ For different values of $n$ ranging up to $n=100$, we generate PINNs with algorithm \[alg:PINN\], by selecting randomly chosen training points, with respect to the underlying uniform distribution. Ensemble training, as outlined above, is performed in order to select the best performing hyperparameters among the ones listed in Table \[tab:1\] and resulting errors are shown in \[tab:h2\]. In particular, we present the relative percentage (cumulative) generalization error ${\EuScript{E}}^r_G$, readily computed from definition and normalized with the $L^2$-norm of the exact solution . We see from the table that the generalization errors are very low, less than $0.1\%$ upto 10 spatial dimensions and rise rather slowly (approximately linearly) with dimension, resulting in a low generalization error of $2.6\%$, even for $100$ space dimensions. This shows the ability of PINNs to possibly overcome the curse of dimensionality, at least for the heat equation. Note that we have not used *any explicit solution formulas* such as the Feynman-Kac formulas in algorithm \[alg:PINN\]. Still, PINNs were able to obtain low enough errors, comparable to supervised learning based neural networks that relied on the availability of an explicit solution formula [@HEJ1; @Jent1]. This experiment illustrates the ability of PINNs to overcome the *curse of dimensionality*, at least with random training points. ![Log of Training error (X-axis) vs Log of Generalization error (Y-axis) for each hyperparameter configuration during ensemble training. []{data-label="fig:tg"}](Images/et_vs_eg.png){width="8cm"} Viscous scalar conservation laws {#sec:4} ================================ The underlying PDE ------------------ In this section, we consider the following one-dimensional version of *viscous scalar conservation laws* as a model problem for quasilinear, convection-dominated diffusion equations, $$\label{eq:vscl} \begin{aligned} u_t + f(u)_x &= \nu u_{xx}, \quad \forall x\in (0,1),~t \in [0,T], \\ u(x,0) &= \bar{u}(x), \quad \forall x \in (0,1). \\ u(0,t) &= u(1,t) \equiv 0, \quad \forall t \in [0,T]. \end{aligned}$$ Here, $\bar{u} \in C^k([0,1])$, for any $k {\geqslant}1$, is the initial data and we consider zero Dirichlet boundary conditions. Note that $0 < \nu \ll 1$ is the viscosity coefficient. The flux function is denoted by $f\in C^k({\mathbb{R}};{\mathbb{R}})$. We emphasize that is a model problem that we present here for notational and expositional simplicity. The following results can be readily extended in the following directions: - Several space dimensions. - Other boundary conditions such as Periodic or Neumann boundary conditions. - More general forms of the viscous term, namely $\nu \left(B(u)u_x\right)_x$, for any $B \in C^k({\mathbb{R}};{\mathbb{R}})$ with $B(v) {\geqslant}c > 0$, for all $v \in {\mathbb{R}}$ and for some $c$. 
Moreover, we can follow standard textbooks such as [@GRbook] to conclude that as long as $\nu > 0$, there exists a classical solution $u \in C^k([0,T)\times [0,1])$ of the viscous scalar conservation law . PINNs for ---------- We realize the abstract algorithm \[alg:PINN\] in the following concrete steps, ### Training Set. Let $D =(0,1)$ and $D_T = (0,1) \times (0,T)$. As in section \[sec:3\], we divide the training set ${\EuScript{S}}= {\EuScript{S}}_{int} \cup {\EuScript{S}}_{sb} \cup {\EuScript{S}}_{tb}$ of the abstract PINNs algorithm \[alg:PINN\] into the following three subsets, - Interior training points ${\EuScript{S}}_{int}=\{y_n\}$ for $1 {\leqslant}n {\leqslant}N_{int}$, with each $y_n = (x_n,t_n) \in D_T$. These points can be the quadrature points corresponding to a suitable space-time grid-based composite Gauss quadrature rule, or generated from a low-discrepancy sequence in $D_T$. - Spatial boundary training points ${\EuScript{S}}_{sb} = (0,t_n) \cup (1,t_n)$ for $1 {\leqslant}n {\leqslant}N_{sb}$, with the points $t_n$ chosen either as Gauss quadrature points or low-discrepancy sequences in $[0,T]$. - Temporal boundary training points ${\EuScript{S}}_{tb} = \{x_n\}$, with $1 {\leqslant}n {\leqslant}N_{tb}$ and each $x_n \in (0,1)$, chosen either as Gauss quadrature points or low-discrepancy sequences. ### Residuals In the algorithm \[alg:PINN\] for generating PINNs, we need to define appropriate residuals. For the neural network $u_{\theta} \in C^k([0,T]\times [0,1])$, defined by , with a smooth activation function such as $\sigma = \tanh$ and $\theta \in \Theta$ as the set of tuning parameters, we define the following residuals, - Interior Residual given by, $$\label{eq:bres1} {\EuScript{R}}_{int,\theta}(x,t):= \partial_t (u_{\theta}(x,t)) + \partial_x (f(u_{\theta}(x,t))) - \nu \partial_{xx} (u_{\theta}(x,t)).$$ - Spatial boundary Residual given by, $$\label{eq:bres2} \left({\EuScript{R}}_{sb,0,\theta}(t),~{\EuScript{R}}_{sb,1,\theta}(t)\right) := \left(u_{\theta}(0,t),~u_{\theta}(1,t)\right), \quad \forall t \in (0,T].$$ - Temporal boundary Residual given by, $$\label{eq:bres3} {\EuScript{R}}_{tb,\theta}(x):= u_{\theta}(x,0) - \bar{u}(x), \quad \forall x \in [0,1].$$ All the above quantities are well defined for $k {\geqslant}2$ and ${\EuScript{R}}_{int,\theta} \in C^{k-2}([0,1] \times [0,T]),~ {\EuScript{R}}_{sb,\theta} \in C^k([0,T]),~ {\EuScript{R}}_{tb,\theta} \in C^k([0,1])$. ### Loss function We use the following loss function to train the PINN for approximating the viscous scalar conservation law , $$\label{eq:blf} J(\theta):= \sum\limits_{n=1}^{N_{tb}} w^{tb}_n|{\EuScript{R}}_{tb,\theta}(x_n)|^2 + \sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,0,\theta}(t_n)|^2 + \sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,1,\theta}(t_n)|^2 + \lambda \sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{int,\theta}(x_n,t_n)|^2 .$$ Here the residuals are defined by , , . $w^{tb}_n$ are the $N_{tb}$ quadrature weights corresponding to the temporal boundary training points ${\EuScript{S}}_{tb}$, $w^{sb}_n$ are the $N_{sb}$ quadrature weights corresponding to the spatial boundary training points ${\EuScript{S}}_{sb}$ and $w^{int}_n$ are the $N_{int}$ quadrature weights corresponding to the interior training points ${\EuScript{S}}_{int}$. Furthermore, $\lambda$ is a hyperparameter for balancing the residuals, on account of the PDE and the initial and boundary data, respectively. Estimate on the generalization error.
------------------------------------- As for the semilinear parabolic equation, we will try to estimate the following generalization error for the PINN $u^{\ast} = u_{\theta^\ast}$, generated through algorithm \[alg:PINN\], with loss functions , , for approximating the solution of the viscous scalar conservation law : $$\label{eq:begen} {\EuScript{E}}_{G}:= \left(\int\limits_0^T \int\limits_0^1 |u(x,t) - u^{\ast}(x,t)|^2 dx dt \right)^{\frac{1}{2}}.$$ This generalization error will be estimated in terms of the *training error*, $$\label{eq:betrain} \begin{aligned} {\EuScript{E}}^2_{T}&:= \lambda\underbrace{\sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{int,\theta^{\ast}}(x_n,t_n)|^2}_{({\EuScript{E}}_T^{int})^2} +\underbrace{\sum\limits_{n=1}^{N_{tb}} + w^{tb}_n|{\EuScript{R}}_{tb,\theta^{\ast}}(x_n)|^2}_{({\EuScript{E}}_T^{tb})^2} \\ &+ \underbrace{\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,0,\theta^{\ast}}(t_n)|^2}_{({\EuScript{E}}_T^{sb,0})^2} + \underbrace{\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,1,\theta^{\ast}}(t_n)|^2}_{({\EuScript{E}}_T^{sb,1})^2}, \end{aligned}$$ readily computed from the training loss *a posteriori*. We have the following estimate, \[thm:burg\] Let $\nu > 0$ and let $u \in C^k((0,T) \times (0,1))$ be the unique classical solution of the viscous scalar conservation law . Let $u^{\ast} = u_{\theta^{\ast}}$ be the PINN, generated by algorithm \[alg:PINN\], with loss function . Then, the generalization error is bounded by, $$\label{eq:begenb} \begin{aligned} {\EuScript{E}}_G^2 &{\leqslant}\left(T + {\bf C}T^2e^{{\bf C}T}\right)\left[ \left({\EuScript{E}}^{tb}_{T}\right)^2 + \left({\EuScript{E}}^{int}_{T}\right)^2 + 2\bar{{\bf C}}_b\left( \left({\EuScript{E}}^{sb,0}_{T}\right)^2 + \left({\EuScript{E}}^{sb,1}_{T}\right)^2 \right) + 2\nu{\bf C}_b T^{\frac{1}{2}}\left({\EuScript{E}}^{sb,0}_{T}+{\EuScript{E}}^{sb,1}_{T}\right) \right]\\ &+\left(T + {\bf C}T^2e^{{\bf C}T}\right)\left[C^{tb}_{quad}N^{-\alpha_{tb}}_{tb} + C^{int}_{quad}N^{-\alpha_{int}}_{int} + 2\bar{{\bf C}}_b\left(\left(C^{sb,0}_{quad}+ C^{sb,1}_{quad}\right)N^{-\alpha_{sb}}_{sb}\right) + 2\nu{\bf C}_b T^{\frac{1}{2}}\left(\left(C^{sb,0}_{quad}+ C^{sb,1}_{quad}\right)^{\frac{1}{2}}N_{sb}^{-\frac{\alpha_{sb}}{2}} \right)\right]. \end{aligned}$$ Here, the training errors are defined by and the constants are given by ${\bf C} = 1 + 2 C_{f,u,u^{\ast}}$, with $$\label{eq:bc1} \begin{aligned} C_{f,u,u^{\ast}} &= C\left(\|f\|_{C^2},\|u\|_{W^{1,\infty}},\|u^{\ast}\|_{L^\infty} \right) = \left|f^{\prime\prime}\left(\max\{\|u\|_{L^\infty},\|u^{\ast}\|_{L^\infty}\}\right)\right|\|u_x\|_{L^{\infty}}, \\ {\bf C}_b &= \left(\|u_x\|_{C([0,1]\times[0,T])} + \|u^{\ast}_x\|_{C([0,1]\times[0,T])}\right), \\ \end{aligned}$$ $\bar{{\bf C}}_b = \bar{{\bf C}}_b\left(\|f^{\prime}\|_{\infty},\|u^{\ast}\|_{C^0([0,1] \times [0,T])}\right)$ and $C^{tb}_{quad} = C^{tb}_{qaud}\left(\|{\EuScript{R}}_{tb,\theta^{\ast}}\|_{C^k}\right)$, $C^{int}_{quad} = C^{int}_{qaud}\left(\|{\EuScript{R}}_{int,\theta^{\ast}}\|_{C^{k-2}}\right)$, $C^{sb,0}_{quad} = C^{sb,0}_{qaud}\left(\|{\EuScript{R}}_{sb,0,\theta^{\ast}}\|_{C^k}\right)$, $C^{sb,1}_{quad} = C^{sb,1}_{qaud}\left(\|{\EuScript{R}}_{sb,1,\theta^{\ast}}\|_{C^k}\right)$ are constants are appear in the bounds on quadrature error -. We drop the $\theta^{\ast}$-dependence of the residuals - for notational convenience in the following. Define the *entropy flux function*, $$\label{eq:q} Q(u) = \int\limits_a^u s f^{\prime}(s) ds,$$ for any $a \in {\mathbb{R}}$. 
Let $\hat{u} = u^{\ast}-u$ be the error with the PINN. From the PDE and the definition of the interior residual , we have the following identities, $$\label{eq:bpf1} \begin{aligned} \partial_t \left(\frac{(u^{\ast})^2}{2}\right) + \partial_x Q(u^{\ast}) &= \nu u^{\ast}u^{\ast}_{xx} + {\EuScript{R}}_{int} u^{\ast} \\ \partial_t \left(\frac{u^2}{2}\right) + \partial_x Q(u) &= \nu uu_{xx} \end{aligned}$$ A straightforward calculation with and yields, $$\label{eq:bpf2} \partial_t (u\hat{u}) + \partial_x\left(u\left(f(u^{\ast}) - f(u) \right)\right) = \left[f(u^{\ast}) - f(u) - f^{\prime}(u)\hat{u}\right]u_x +{\EuScript{R}}_{int} u + \nu \left(u\hat{u}_{xx} + \hat{u}u_{xx}\right).$$ Subtracting the second equation of and from the first equation of yields, $$\label{eq:bpf3} \partial_t S(u,u^{\ast}) + \partial_x H(u,u^{\ast}) = {\EuScript{R}}_{int} \hat{u} + T_1 + T_2,$$ with, $$\begin{aligned} S(u,u^{\ast})&:= \frac{(u^{\ast})^2}{2} - \frac{u^2}{2} - \hat{u}u = \frac{1}{2}\hat{u}^2,\\ H(u,u^{\ast})&:= Q(u^{\ast}) - Q(u) - u(f(u^{\ast})-f(u)), \\ T_1 &= -\left[f(u^{\ast}) - f(u) - f^{\prime}(u)\hat{u}\right]u_x, \\ T_2 &= \nu \left(u^{\ast}u^{\ast}_{xx} - uu_{xx}-u\hat{u}_{xx} - \hat{u}u_{xx}\right) = \nu \hat{u} \hat{u}_{xx}. \end{aligned}$$ As the flux $f$ is smooth, by a Taylor expansion, we see that $$\label{eq:bpf4} T_1 = -f^{\prime \prime}(u + \gamma(u^{\ast}-u))\hat{u}^2 u_x,$$ for some $\gamma \in (0,1)$. Hence, a straightforward estimate for $T_1$ is given by, $$\label{eq:bpf5} |T_1| {\leqslant}C_{f,u,u^{\ast}} \hat{u}^2,$$ with $C_{f,u,u^{\ast}}$ defined in . Next, we integrate over the domain $(0,1)$ and integrate by parts to obtain, $$\label{eq:bpf6} \begin{aligned} \frac{d}{dt} \int_0^1 \hat{u}^2(x,t) dx &{\leqslant}2 H(u(0,t),u^{\ast}(0,t)) - 2 H(u(1,t),u^{\ast}(1,t)) \\ & + {\bf C} \int_0^1 \hat{u}^2(x,t) dx + \int_0^1 {\EuScript{R}}_{int}^2(x,t) dx, \\ &-2\nu\int_0^1 \hat{u}^2_x(x,t) dx + 2\nu \left(\hat{u}(1,t)\hat{u}_x(1,t) - \hat{u}(0,t)\hat{u}_x(0,t)\right), \end{aligned}$$ with the constant, ${\bf C} = 1 + 2 C_{f,u,u^{\ast}}$. 
Next, for any $\bar{T} {\leqslant}T$, we estimate the boundary terms starting with, $$\begin{aligned} \int\limits_0^{\bar{T}} \hat{u}(0,t)\hat{u}_x(0,t) dt &= \int\limits_0^{\bar{T}} {\EuScript{R}}_{sb,0}(t) \left(u^{\ast}_x(0,t) - u_x(0,t)\right) dt \\ &{\leqslant}\underbrace{\left(\|u_x\|_{C([0,1]\times[0,T])} + \|u^{\ast}_x\|_{C([0,1]\times[0,T])}\right)}_{{\bf C}_b}T^{\frac{1}{2}}\left(\int_0^T {\EuScript{R}}_{sb,0}^2(t) dt \right)^{\frac{1}{2}}.\end{aligned}$$ Analogously we can estimate, $$\begin{aligned} \int\limits_0^{\bar{T}} \hat{u}(1,t)\hat{u}_x(1,t) dt {\leqslant}{\bf C}_b T^{\frac{1}{2}}\left(\int_0^T {\EuScript{R}}_{sb,1}^2(t) dt \right)^{\frac{1}{2}}.\end{aligned}$$ We can also estimate from and that, $$\begin{aligned} H(u(0,t),u^{\ast}(0,t))&= Q(u^{\ast}(0,t)) - Q(u(0,t)) - u(0,t)(f(u^{\ast}(0,t)) - f(u(0,t))), \\ &= Q({\EuScript{R}}_{sb,0}(t)) - Q(0), \quad {\rm as}~ u(0,t) = 0, \\ &= Q^{\prime}(\gamma_0{\EuScript{R}}_{sb,0}(t)){\EuScript{R}}_{sb,0}(t), \quad {\rm for~some}~\gamma_0 \in (0,1), \\ &= \gamma_0f^{\prime}(\gamma_0u^{\ast}(0,t)) {\EuScript{R}}_{sb,0}^2(t), \quad {\rm by}~\eqref{eq:q}, \\ &{\leqslant}\bar{{\bf C}}_b {\EuScript{R}}_{sb,0}^2(t), \quad {\rm with}~ \bar{{\bf C}}_b = \bar{{\bf C}}_b\left(\|f^{\prime}\|_{\infty},\|u^{\ast}\|_{C^0([0,1] \times [0,T])}\right).\end{aligned}$$ Analogously, we can estimate, $$\begin{aligned} H(u(1,t),u^{\ast}(1,t)) {\leqslant}\bar{{\bf C}}_b {\EuScript{R}}_{sb,1}^2(t).\end{aligned}$$ For any $\bar{T} {\leqslant}T$, integrating over the time interval $[0,\bar{T}]$ and using the above inequalities on the boundary terms, together with the definition of the residual yields, $$\label{eq:bpf7} \begin{aligned} \int_0^1 \hat{u}^2(x,\bar{T}) dx &{\leqslant}{\EuScript{C}}+ {\bf C}\int_0^{\bar{T}} \int_0^1 \hat{u}^2(x,t) dx dt, \\ {\EuScript{C}}&= \int_0^1 {\EuScript{R}}_{tb}^2(x) dx + \int_0^T \int_0^1 {\EuScript{R}}_{int}^2(x,t) dx dt \\ &+ 2 \bar{{\bf C}}_b\left[\int_0^T {\EuScript{R}}_{sb,0}^2(t) dt + \int_0^T {\EuScript{R}}_{sb,1}^2(t) dt\right] + 2\nu {\bf C}_b T^{\frac{1}{2}}\left[\left(\int_0^T {\EuScript{R}}_{sb,0}^2(t) dt\right)^{\frac{1}{2}} +\left(\int_0^T {\EuScript{R}}_{sb,1}^2(t) dt\right)^{\frac{1}{2}}\right] . 
\end{aligned}$$ By applying the integral form of the Grönwall’s inequality to for any $\bar{T} {\leqslant}T$ and integrating again over $\bar{T}$, together with the definition of the generalization error , we obtain, $$\label{eq:bpf8} {\EuScript{E}}_G^2 {\leqslant}\left(T + {\bf C}T^2e^{{\bf C}T}\right){\EuScript{C}}.$$ Using the bounds - on the quadrature errors and the definition of ${\EuScript{C}}$ in , we obtain, $$\begin{aligned} {\EuScript{C}}&{\leqslant}\sum\limits_{n=1}^{N_{tb}} w^{tb}_n|{\EuScript{R}}_{tb}(x_n)|^2 + C^{tb}_{qaud}\left(\|{\EuScript{R}}_{tb}\|_{C^k}\right) N_{tb}^{-\alpha_{tb}} \\ &+ \sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{int}(x_n,t_n)|^2 + C^{int}_{qaud}\left(\|{\EuScript{R}}_{int}\|_{C^{k-2}}\right) N_{int}^{-\alpha_{int}}, \\ &+ 2\bar{{\bf C}}_b \left[\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,0}(t_n)|^2 + \sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,1}(t_n)|^2 + \left(C^{sb}_{qaud}\left(\|{\EuScript{R}}_{sb,0}\|_{C^k}\right)+ C^{sb}_{qaud}\left(\|{\EuScript{R}}_{sb,1}\|_{C^k}\right)\right) N_{sb}^{-\alpha_{sb}} \right] \\ &+ 2\nu{\bf C}_b T^{\frac{1}{2}}\left[\left(\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,0}(t_n)|^2\right)^{\frac{1}{2}} + \left(\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,1}(t_n)|^2 \right)^{\frac{1}{2}}+ \left(C^{sb}_{qaud}\left(\|{\EuScript{R}}_{sb,0}\|_{C^k}\right)+ C^{sb}_{qaud}\left(\|{\EuScript{R}}_{sb,1}\|_{C^k}\right)\right)^{\frac{1}{2}} N_{sb}^{-\frac{\alpha_{sb}}{2}} \right].\end{aligned}$$ From definition of training errors and and the above inequality, we obtain the desired estimate . The estimate is a concrete realization of the abstract estimate , with training error decomposed into 4 parts, the constants, associated with the PDE, are given by $C_{f,u,u^{\ast}},{\bf C}_b$ and the constants due to the quadrature errors are also clearly delineated. \[rem:burg\] A close inspection of the estimate reveals that at the very least, the classical solution $u$ of the PDE needs to be in $L^{\infty}((0,T);W^{1,\infty}((0,1)))$ for the rhs of to be bounded. This indeed holds as long as $\nu > 0$. However, it is well known (see [@GRbook] and references therein) that if $u^{\nu}$ is the solution of for viscosity $\nu$, then for some initial data, $$\label{eq:bbup} \|u^{\nu}\|_{L^{\infty}((0,T);W^{1,\infty}((0,1)))} \sim \frac{1}{\sqrt{\nu}}.$$ Thus, in the limit $\nu \rightarrow 0$, the constant $C_{f,u,u^{\ast}}$ can blow up (exponentially in time) and the bound no longer controls the generalization error. This is not unexpected as the whole strategy of this paper relies on pointwise realization of residuals. However, the zero-viscosity limit of , leads to a scalar conservation law with discontinuous solutions (shocks) and the residuals are measures that do not make sense pointwise. Thus, the estimate also points out the limitations of a PINN for approximating discontinuous solutions. 
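Since, as noted above, the training errors entering the bound are computable a posteriori from the residuals at the training points, the following sketch indicates how the interior residual for the Burgers' flux $f(u)=u^2/2$ could be evaluated with automatic differentiation; here `net`, `s_int` and `w_int` are illustrative placeholders for the trained network, the interior training points and their quadrature weights.

```python
import torch

def burgers_interior_residual(net, xt, nu):
    """R_int = u_t + (u^2/2)_x - nu * u_xx at collocation points xt = (x, t)."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, 0:1], du[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t + u * u_x - nu * u_xx   # (u^2/2)_x = u * u_x by the chain rule

# a posteriori interior training error, e.g. with weights w_int = 1/N_int for random points:
# E_T_int = torch.sqrt((w_int * burgers_interior_residual(net, s_int, nu) ** 2).sum())
```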
Numerical experiments --------------------- [.49]{} ![Burgers equation with discontinuous solution for different values of $\nu$[]{data-label="fig:burg1"}]({Images/B_Samples_1.png} "fig:"){width="1\linewidth"} [.49]{} ![Burgers equation with discontinuous solution for different values of $\nu$[]{data-label="fig:burg1"}]({Images/B_Samples_2} "fig:"){width="1\linewidth"} [.49]{} ![Burgers equation with discontinuous solution for different values of $\nu$[]{data-label="fig:burg1"}]({Images/B_Samples_3} "fig:"){width="1\linewidth"} [.49]{} ![Burgers equation with discontinuous solution for different values of $\nu$[]{data-label="fig:burg1"}]({Images/B_Samples_4} "fig:"){width="1\linewidth"} We consider the viscous scalar conservation law , but in the domain $D_T = [-1,1]\times[0,1]$, with initial conditions, $ \bar{u}(x) = -\sin(\pi x)$ and zero Dirichlet boundary conditions. We choose the flux function $f(u) = \frac{u^2}{2}$, resulting in the well-known *viscous Burgers’* equation. This problem is considered for $4$ different values of the viscosity parameter $\nu = \frac{c}{\pi}$, with $c=$ 0.01, 0.005, 0.001, 0.0, and with $N_{int}=8192$, $N_{tb}=256$, $N_{sb}=256$ points. All the training points are chosen as low-discrepancy Sobol sequences on the underlying domains. An ensemble training procedure, based on the hyperparameters presented in Table \[tab:1\], is performed and the best performing hyperparameters, i.e., those that led to the smallest training errors, are chosen and presented in Table \[tab:burg\]. In figure \[fig:burg1\], we present the *reference* solution field $u(\cdot,t)$, at different time snapshots, of the viscous Burgers’ equation computed with a simple upwind finite volume scheme and forward Euler time integration with $2\times10^{6}$ Cartesian grid points in space-time, and the predicted solution $u^{\ast}(\cdot,t)$ of the PINN, generated with algorithm \[alg:PINN\], corresponding to the best performing hyperparameters (see Table \[tab:burg\]), for different values of the viscosity coefficient. From this figure, we observe that for the viscosity coefficients corresponding to $c=0.01, 0.005$, the approximate solution predicted by the PINN approximates the underlying exact solution, which involves self-steepening of the initial sine wave into a steady sharp profile at the origin, very well. This is further reinforced by the very low (relative percentage) generalization errors of approximately $1\%$, presented in Table \[tab:burg\]. However, this efficient approximation is no longer the case for the inviscid problem, i.e., $\nu = 0$. As seen from figure \[fig:burg1\] (bottom right), the PINN fails to resolve the solution, which, in this case, consists of a steady shock at the origin. In fact, this failure to approximate is already seen with a viscosity coefficient of $\nu = \frac{0.001}{\pi}$. For this very low viscosity coefficient, we see that the relative generalization error is approximately $11 \%$. The generalization error rises to $23 \%$ for the inviscid Burgers’ equation. This increase in error appears consistent with the bound , combined with the blow-up estimate for the derivatives of the viscous Burgers’ equation. As the viscosity $\nu \rightarrow 0$, the bounds in the rhs of can increase exponentially, which appears to be the case here.
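The reference solution described above can be imitated in a few lines; the sketch below uses a Godunov-type upwind flux for $f(u)=u^2/2$, a centered discretization of the viscous term and forward Euler time stepping, with the grid size and CFL numbers chosen only for illustration rather than to match the $2\times10^{6}$-point grid behind the figures.

```python
import numpy as np

def burgers_reference(nu, nx=2000, T=1.0, x_lo=-1.0, x_hi=1.0):
    """Upwind (Godunov) finite-volume scheme with forward Euler for u_t + (u^2/2)_x = nu u_xx."""
    dx = (x_hi - x_lo) / nx
    x = x_lo + (np.arange(nx) + 0.5) * dx                  # cell centres
    u = -np.sin(np.pi * x)                                  # initial data
    t = 0.0
    while t < T:
        dt = 0.4 * dx / max(np.abs(u).max(), 1e-12)         # convective CFL restriction
        if nu > 0:
            dt = min(dt, 0.4 * dx ** 2 / (2.0 * nu))        # diffusive restriction
        dt = min(dt, T - t)
        ue = np.concatenate(([0.0], u, [0.0]))              # zero Dirichlet ghost cells
        ul, ur = ue[:-1], ue[1:]
        flux = np.maximum(0.5 * np.maximum(ul, 0.0) ** 2,   # Godunov flux for the convex flux u^2/2
                          0.5 * np.minimum(ur, 0.0) ** 2)
        diff = nu * (ue[2:] - 2.0 * ue[1:-1] + ue[:-2]) / dx ** 2
        u = u - dt / dx * (flux[1:] - flux[:-1]) + dt * diff
        t += dt
    return x, u
```

For the inviscid case $\nu=0$ the diffusive term simply drops out, and the update reduces to the standard Godunov scheme.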
[.49]{} ![Burgers equation with rarefaction wave for different values of $\nu$[]{data-label="fig:burg2"}]({Images/Rar_Samples001.png} "fig:"){width="1\linewidth"} [.49]{} ![Burgers equation with rarefaction wave for different values of $\nu$[]{data-label="fig:burg2"}]({Images/Rar_Samples0005} "fig:"){width="1\linewidth"} [.49]{} ![Burgers equation with rarefaction wave for different values of $\nu$[]{data-label="fig:burg2"}]({Images/Rar_Samples0001} "fig:"){width="1\linewidth"} [.49]{} ![Burgers equation with rarefaction wave for different values of $\nu$[]{data-label="fig:burg2"}]({Images/Rar_Samples} "fig:"){width="1\linewidth"} To further test the ability of the bound to explain the performance of PINNs for the viscous Burgers’ equation, we consider the following initial and boundary conditions, $$\bar{u}(x) = \begin{cases} 0, &\quad\text{if } x{\leqslant}0 \\ 1, &\quad\text{if } x> 0\\ \end{cases}\quad \forall ~x \in [-1,1],$$ $$u(t,-1) =0, ~ u(t,1)=1, ~ \forall ~t \in [0,0.5].$$ Given the discontinuity in the initial data, we train the PINNs with a larger number of boundary training samples $N_{tb}=512$ and $N_{sb}=512$, while leaving $N_{int}= 8192$ unchanged. As in the previous experiments, the training sets are Sobol sequences and an ensemble training is performed to configure the network architecture. The results are summarized in Table \[tab:rar\] and figure \[fig:burg2\]. In this case, the exact solution is a so-called rarefaction wave (see figure \[fig:burg2\] for the reference solution, computed in the manner analogous to the previous numerical experiment) and the gradient of the solution remains bounded, uniformly as the viscosity coefficient $\nu \rightarrow 0$. Hence, from the bound , we expect that PINNs will efficiently approximate the underlying solution for all values of the viscosity coefficient. This is indeed verified in the solution snapshots, presented in figure \[fig:burg2\], where we observe that the PINN approximates the reference solution quite well, for all values of the viscosity coefficient. This behavior is further verified in table \[tab:rar\], where we see that the generalization error remains low (less than $2\%$) for all the values of viscosity and, in fact, reduces slightly as $\nu \rightarrow 0$, completely validating the error estimate . Incompressible Euler equations {#sec:5} ============================== The underlying PDE ------------------ The motion of an inviscid, incompressible fluid is modeled by the incompressible Euler equations [@MBbook]. We consider the following form of these PDEs, $$\label{eq:ie} \begin{aligned} {{\bf u}}_t + \left({{\bf u}}\cdot \nabla\right){{\bf u}}+ \nabla p &= {\mathbf{f}}, \quad (x,t) \in D \times (0,T), \\ {\rm div}({{\bf u}}) &= 0, \quad (x,t) \in D \times (0,T), \\ {{\bf u}}\cdot{{\mathbf n}}&= 0, \quad (x,t) \in {\partial D}\times (0,T), \\ {{\bf u}}(x,0) &= \bar{{{\bf u}}}(x), \quad x \in D. \end{aligned}$$ Here, $D \subset {\mathbb{R}}^d$, for $d=2,3$, is an open, bounded, connected subset with smooth $C^1$ boundary ${\partial D}$, $D_T = D \times (0,T)$, ${{\bf u}}: D_T \mapsto {\mathbb{R}}^d$ is the velocity field, $p: D_T \mapsto {\mathbb{R}}$ is the pressure that acts as a Lagrange multiplier to enforce the divergence constraint and ${\mathbf{f}}\in C^1(D_T;{\mathbb{R}}^d)$ is a forcing term. We use the *no penetration* boundary conditions here, with ${{\mathbf n}}$ denoting the unit outward normal to ${\partial D}$.
Note that we have chosen to present this form of the incompressible Euler equations for simplicity of exposition. The analysis, presented below, can be readily but tediously extended to the following, - Other boundary conditions such as periodic boundary conditions on the torus ${\mathbb T}^d$. - The *Navier-Stokes equations*, where we add the *viscous term* $\nu \Delta {{\bf u}}$ to the first equation in , with either periodic boundary conditions or the so-called *no slip* boundary conditions i.e, ${{\bf u}}\equiv 0$, for all $x \in {\partial D}$ and for all $t \in (0,T]$. PINNs {#pinns-1} ----- We describe the algorithm \[alg:PINN\] for this PDE in the following steps, ### Training set {#training-set} We chose the training set ${\EuScript{S}}\subset D_T$ with ${\EuScript{S}}= {\EuScript{S}}_{int} \cup {\EuScript{S}}_{sb} \cup {\EuScript{S}}_{tb}$, with interior, spatial and temporal boundary training sets, chosen exactly as in section \[sec:tset\], either as quadrature points for a (composite) Gauss rule or as low-discrepancy sequences on the underlying domains. ### Residuals For the neural networks $(x,t) \mapsto \left({{\bf u}}_{\theta}(x,t),p_{\theta}(x,t)\right) \in C^k((0,T)\times D) \cap C([0,T]\times \bar{D})$, defined by , with a smooth activation function and $\theta \in \Theta$ as the set of tuning parameters, we define the residual ${\EuScript{R}}$ in algorithm \[alg:PINN\], consisting of the following parts, - *Velocity residual* given by, $$\label{eq:ires1} {\EuScript{R}}_{{{\bf u}},\theta}(x,t):= ({{\bf u}}_{\theta})_t + \left({{\bf u}}_{\theta} \cdot \nabla\right){{\bf u}}_{\theta} + \nabla p_{\theta} - {\mathbf{f}}, \quad (x,t) \in D \times (0,T),$$ - *Divergence residual* given by, $$\label{eq:ires2} {\EuScript{R}}_{div,\theta}(x,t):= {\rm div}({{\bf u}}_{\theta}(x,t)), \quad (x,t) \in D \times (0,T),$$ - *Spatial boundary Residual* given by, $$\label{eq:ires3} {\EuScript{R}}_{sb,\theta}(x,t):= {{\bf u}}_{\theta}(x,t)\cdot {{\mathbf n}}, \quad \forall x \in {\partial D}, ~ t \in (0,T].$$ - *Temporal boundary Residual* given by, $$\label{eq:ires4} {\EuScript{R}}_{tb,\theta}(x):= {{\bf u}}_{\theta}(x,0) - \bar{{{\bf u}}}(x), \quad \forall x \in D.$$ As the underlying neural networks have the required regularity, the residuals are well-defined. ### Loss function We consider the following loss function for training PINNs to approximate the incompressible Euler equation , $$\label{eq:ilf} J(\theta):= \sum\limits_{n=1}^{N_{tb}} w^{tb}_n|{\EuScript{R}}_{tb,\theta}(x_n)|^2 + \sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,\theta}(x_n,t_n)|^2 + \lambda \left( \sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{{{\bf u}},\theta}(x_n,t_n)|^2+\sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{div,\theta}(x_n,t_n)|^2 \right).$$ Here the residuals are defined by -. $w^{tb}_n$ are the $N_{tb}$ quadrature weights corresponding to the temporal boundary training points ${\EuScript{S}}_{tb}$, $w^{sb}_n$ are the $N_{sb}$ quadrature weights corresponding to the spatial boundary training points ${\EuScript{S}}_{sb}$ and $w^{int}_n$ are the $N_{int}$ quadrature weights corresponding to the interior training points ${\EuScript{S}}_{int}$. Furthermore, $\lambda$ is a hyperparameter for balancing the residuals, on account of the PDE and the initial and boundary data, respectively. Estimate on the generalization error. 
------------------------------------- We denote the PINN, obtained by the algorithm \[alg:PINN\], for approximating the incompressible Euler equations, as ${{\bf u}}^{\ast}= {{\bf u}}_{\theta^{\ast}}$, with $\theta^{\ast}$ being a (approximate,local) minimum of the loss function ,. We consider the following generalization error, $$\label{eq:iegen} {\EuScript{E}}_{G}:= \left(\int\limits_0^T \int\limits_D \|{{\bf u}}(x,t) - {{\bf u}}^{\ast}(x,t)\|^2 dx dt \right)^{\frac{1}{2}},$$ with $\|\cdot\|$ denoting the Euclidean norm in ${\mathbb{R}}^d$. Note that we only consider the error with respect to the velocity field ${{\bf u}}$ in . Although the pressure $p$ in is approximated by the neural network $p^{\ast} = p_{\theta^{\ast}}$, we recall that the pressure is a Lagrange multiplier, and not a primary variable in the incompressible Euler equations. Hence, we will not consider pressure errors here. As in section \[sec:2\], we will bound the generalization error in terms of the following *training errors*, $$\label{eq:ietrain} {\EuScript{E}}_T^2:= \underbrace{\sum\limits_{n=1}^{N_{tb}} w^{tb}_n|{\EuScript{R}}_{tb,\theta^{\ast}}(x_n)|^2}_{\left({\EuScript{E}}_T^{tb}\right)^2} + \underbrace{\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb,\theta^{\ast}}(x_n,t_n)|^2}_{\left({\EuScript{E}}_T^{sb}\right)^2} + \underbrace{\sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{{{\bf u}},\theta^{\ast}}(x_n,t_n)|^2}_{\left({\EuScript{E}}_T^{{{\bf u}}}\right)^2}+\lambda\underbrace{\sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{div,\theta^{\ast}}(x_n,t_n)|^2}_{\left({\EuScript{E}}_T^{d}\right)^2}.$$ As in the previous sections, the training errors can be readily computed *a posteriori* from the loss function , . We have the following bound on the generalization error in terms of the training error, \[thm:euler\] Let ${{\bf u}}\in C^1((0,T)\times D) \cap C([0,T]\times \bar{D})$ be the classical solution of the incompressible Euler equations . Let ${{\bf u}}^{\ast} = {{\bf u}}_{\theta^{\ast}}, p^{\ast} = p_{\theta^{\ast}}$ be the PINN generated by algorithm \[alg:PINN\], then the resulting generalization error is bounded as,$$\label{eq:iegenb} \begin{aligned} {\EuScript{E}}_G^2 &{\leqslant}\left(T+C_{\infty}T^2e^{C_{\infty}T}\right) \left[\left({\EuScript{E}}^{tb}_T\right)^2 + \left({\EuScript{E}}^{{{\bf u}}}_T\right)^2 + C_0T^{\frac{1}{2}}\left({\EuScript{E}}_T^{div} + {\EuScript{E}}_T^{sb}\right) \right] \\ &+ \left(T+C_{\infty}T^2e^{C_{\infty}T}\right) \left[C^{tb}_{qaud} N_{tb}^{-\alpha_{tb}} +C^{int,{{\bf u}}}_{qaud} N_{int}^{-\alpha_{int}} + \left(C^{int,div}_{qaud}\right)^{\frac{1}{2}} N_{int}^{-\frac{\alpha_{int}}{2}} + \left(C^{sb}_{qaud}\right)^{\frac{1}{2}} N_{sb}^{-\frac{\alpha_{sb}}{2}}\right]. 
\end{aligned}$$ Here, the training errors are defined in and the constants are given by, $$\label{eq:cons} \begin{aligned} C_0 &= C\left(\|{{\bf u}}\|_{C^0([0,T]\times \bar{D})},\|{{\bf u}}^{ \ast}\|_{C^0([0,T]\times \bar{D})}, \|p\|_{C^0([0,T]\times \bar{D})}, \|p^{\ast}\|_{C^0([0,T]\times \bar{D})}\right), \\ C_{\infty} &= 1 + 2C_d \|\nabla {{\bf u}}\|_{L^{\infty}(D_T)}, \end{aligned}$$ with $C_d$ only depending on dimension $d$ and $C^{tb}_{quad} = C^{tb}_{qaud}\left(\|{\EuScript{R}}_{tb,\theta^{\ast}}\|_{C^k}\right)$, $C^{int,{{\bf u}}}_{quad} = C^{int}_{qaud}\left(\|{\EuScript{R}}_{{{\bf u}},\theta^{\ast}}\|_{C^{k-1}}\right)$, $C^{int,div}_{quad} = C^{int}_{qaud}\left(\|{\EuScript{R}}_{div,\theta^{\ast}}\|_{C^{k-1}}\right)$ and $C^{sb}_{quad} = C^{sb}_{qaud}\left(\|{\EuScript{R}}_{sb,\theta^{\ast}}\|_{C^k}\right)$ are the constants associated with the quadrature errors -. We will drop explicit dependence of all quantities on the parameters $\theta^{\ast}$ for notational convenience. We denote the difference between the underlying solution ${{\bf u}}$ of and PINN ${{\bf u}}^{\ast}$ as $\hat{{{\bf u}}} = {{\bf u}}^{\ast} - {{\bf u}}$. Similarly $\hat{p} = p^{\ast}-p$. Using the PDE and the definitions of the residuals -, a straightforward calculation yields the following PDE for the $\hat{{{\bf u}}}$, $$\label{eq:iehat} \begin{aligned} \hat{{{\bf u}}}_t + \left(\hat{{{\bf u}}} \cdot \nabla\right)\hat{{{\bf u}}} + \left({{\bf u}}\cdot \nabla\right)\hat{{{\bf u}}} + \left(\hat{{{\bf u}}} \cdot \nabla\right){{\bf u}}+ \nabla \hat{p} &= {\EuScript{R}}_{{{\bf u}}}, \quad (x,t) \in D \times (0,T), \\ {\rm div}(\hat{{{\bf u}}}) &= {\EuScript{R}}_{div}, \quad (x,t) \in D \times (0,T), \\ \hat{{{\bf u}}}\cdot{{\mathbf n}}&= {\EuScript{R}}_{sb}, \quad (x,t) \in {\partial D}\times (0,T), \\ {{\bf u}}(x,0) &= {\EuScript{R}}_{tb}, \quad x \in D. \end{aligned}$$ We take a inner product of the first equation in with the vector $\hat{{{\bf u}}}$ and use the following vector identities, $$\begin{aligned} \hat{{{\bf u}}}\cdot \partial_t \hat{{{\bf u}}} &= \partial_t \left(\frac{\|\hat{{{\bf u}}}\|^2}{2}\right), \\ \hat{{{\bf u}}} \cdot \left(\left(\hat{{{\bf u}}} \cdot \nabla\right)\hat{{{\bf u}}}\right) &= \left(\hat{{{\bf u}}} \cdot \nabla\right)\left(\frac{\|\hat{{{\bf u}}}\|^2}{2}\right), \\ \hat{{{\bf u}}} \cdot \left(\left({{\bf u}}\cdot \nabla\right)\hat{{{\bf u}}}\right) &= \left({{\bf u}}\cdot \nabla\right)\left(\frac{\|\hat{{{\bf u}}}\|^2}{2}\right), \end{aligned}$$ yields the following identity, $$\begin{aligned} \partial_t \left(\frac{\|\hat{{{\bf u}}}\|^2}{2}\right) + \left(\hat{u} \cdot \nabla\right)\left(\frac{\|\hat{{{\bf u}}}\|^2}{2}\right) +\left({{\bf u}}\cdot \nabla\right)\left(\frac{\|\hat{{{\bf u}}}\|^2}{2}\right) + \hat{{{\bf u}}} \cdot \left(\left(\hat{{{\bf u}}} \cdot \nabla\right){{\bf u}}\right)+\left(\hat{{{\bf u}}} \cdot \nabla\right)\hat{p} = \hat{{{\bf u}}}\cdot{\EuScript{R}}_{{{\bf u}}}.\end{aligned}$$ Integrating the above identity over $D$ and integrating by parts, together with and yields, $$\label{eq:ipf1} \begin{aligned} \frac{d}{dt} \int_D \left(\frac{\|\hat{{{\bf u}}}\|^2}{2}\right) dx &= \int_{D} {\EuScript{R}}_{div} \left(\frac{\|\hat{{{\bf u}}}\|^2}{2}+ \hat{p}\right) dx - \int_{{\partial D}} {\EuScript{R}}_{sb}\left(\frac{\|\hat{{{\bf u}}}\|^2}{2}+ \hat{p}\right) ds(x) \\ &-\int_D \hat{{{\bf u}}} \cdot \left(\left(\hat{{{\bf u}}} \cdot \nabla\right){{\bf u}}\right) dx + \int_D\hat{{{\bf u}}}\cdot{\EuScript{R}}_{{{\bf u}}} dx. 
\end{aligned}$$ It is straightforward to obtain the following inequality, $$\begin{aligned} \int_D \hat{{{\bf u}}} \cdot \left(\left(\hat{{{\bf u}}} \cdot \nabla\right){{\bf u}}\right) dx {\leqslant}C_d \|\nabla {{\bf u}}\|_{\infty} \int_D \|\hat{{{\bf u}}}\|^2 dx,\end{aligned}$$ with the constant $C_d$ only depending on dimension and $\|{{\bf u}}\|_{\infty} = \|{{\bf u}}\|_{L^{\infty}(D_T)}$. Using the above estimate and estimating yields, $$\label{eq:ipf2} \frac{d}{dt} \int_D \|\hat{{{\bf u}}}\|^2 dx {\leqslant}C_0 \left[ \left(\int_D \left({\EuScript{R}}_{div}\right)^2 dx\right)^{\frac{1}{2}} + \left(\int_{{\partial D}} \left({\EuScript{R}}_{sb}\right)^2 ds(x)\right)^{\frac{1}{2}}\right] + C_{\infty} \int_D \|\hat{{{\bf u}}}\|^2 dx + \int_D {\EuScript{R}}^2_{{{\bf u}}} dx,$$ with constants given by . For any $\bar{T} {\leqslant}T$, we integrate over time and use some simple inequalities to obtain, $$\label{eq:ipf3} \begin{aligned} \int_D \|\hat{{{\bf u}}}(x,\bar{T})\|^2 dx &{\leqslant}{\EuScript{C}}+ C_{\infty} \int_0^{\bar{T}}\int_D \|\hat{{{\bf u}}}(x,t)\|^2 dx dt, \\ {\EuScript{C}}&= \int_D {\EuScript{R}}_{tb}^2 dx + \int_0^{T} \int_D {\EuScript{R}}^2_{{{\bf u}}} dx dt, \\ &+ C_0T^{\frac{1}{2}}\left[ \left(\int_0^T\int_D \left({\EuScript{R}}_{div}\right)^2 dxdt \right)^{\frac{1}{2}} + \left(\int_0^T\int_{{\partial D}} \left({\EuScript{R}}_{sb}\right)^2 ds(x)dt \right)^{\frac{1}{2}}\right] . \end{aligned}$$ Now by using the integral form of the Grönwall’s inequality in and integrating again over $[0,T]$ results in, $$\label{eq:ipf4} {\EuScript{E}}_G^2 {\leqslant}\left(T+C_{\infty}T^2e^{C_{\infty}T}\right){\EuScript{C}}.$$ Using the bounds - on the quadrature errors and the definition of ${\EuScript{C}}$ in , we obtain, $$\begin{aligned} {\EuScript{C}}&{\leqslant}\sum\limits_{n=1}^{N_{tb}} w^{tb}_n|{\EuScript{R}}_{tb}(x_n)|^2 + C^{tb}_{qaud}\left(\|{\EuScript{R}}_{tb}\|_{C^k}\right) N_{tb}^{-\alpha_{tb}} \\ &+ \sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{{{\bf u}}}(x_n,t_n)|^2 + C^{int}_{qaud}\left(\|{\EuScript{R}}_{{{\bf u}}}\|_{C^{k-1}}\right) N_{int}^{-\alpha_{int}}, \\ &+ C_0T^{\frac{1}{2}} \left[\left(\sum\limits_{n=1}^{N_{int}} w^{int}_n|{\EuScript{R}}_{div}(x_n,t_n)|^2\right)^{\frac{1}{2}} + \left(C^{int}_{qaud}\left(\|{\EuScript{R}}_{div}\|_{C^{k-1}}\right)\right)^{\frac{1}{2}} N_{int}^{-\frac{\alpha_{int}}{2}} \right] \\ &+ C_0T^{\frac{1}{2}} \left[\left(\sum\limits_{n=1}^{N_{sb}} w^{sb}_n|{\EuScript{R}}_{sb}(x_n,t_n)|^2\right)^{\frac{1}{2}} + \left(C^{sb}_{qaud}\left(\|{\EuScript{R}}_{sb}\|_{C^{k}}\right)\right)^{\frac{1}{2}} N_{sb}^{-\frac{\alpha_{sb}}{2}} \right]\end{aligned}$$ From definition of training errors and and the above inequality, we obtain the desired estimate . The bound explicitly requires the existence of a classical solution ${{\bf u}}$ to the incompressible Euler equations, with a minimum regularity of $\nabla {{\bf u}}\in L^{\infty}(D \times (0,T))$. Such solutions do exist as long as we consider the incompressible Euler equations in *two space dimensions* and with sufficiently smooth initial data [@MBbook]. However, in three space dimensions, even with smooth initial data, the existence of smooth solutions is a major open question. It is possible that the derivative blows up and the constant $C_{\infty}$ in is unbounded, leading to a loss of control on the generalization error. In general, complicated solutions of the Euler equations are characterized by strong vorticity, resulting in large values of the spatial derivative. 
The bound makes it clear that the generalization error with PINNs can be large for such problems. Numerical Experiments --------------------- We present experiments for the incompressible Euler equations in two space dimensions, i.e $d=2$ in . Moreover, we will use the *CELU* function given by, $$\label{eq:celu} CELU(x)=\max(0,x)+\min\big(0,\exp(x)-1\big),$$ as the activation function $\sigma$ in . The CELU function results in better approximation than the hyperbolic tangent, for the Euler equations. [.49]{} ![Exact and PINN solutions to the Taylor Vortex[]{data-label="fig:euler1"}]({Images/TV_Samples_w_ex_0.png} "fig:"){width="1\linewidth"} [.49]{} ![Exact and PINN solutions to the Taylor Vortex[]{data-label="fig:euler1"}]({Images/TV_Samples_w_0} "fig:"){width="1\linewidth"} [.49]{} ![Exact and PINN solutions to the Taylor Vortex[]{data-label="fig:euler1"}]({Images/TV_Samples_w_ex_1} "fig:"){width="1\linewidth"} [.49]{} ![Exact and PINN solutions to the Taylor Vortex[]{data-label="fig:euler1"}]({Images/TV_Samples_w_1} "fig:"){width="1\linewidth"} ### Taylor Vortex In the first numerical experiment, we consider the well-known Taylor Vortex, in a computational domain $D = [-8,8]^2$ with periodic boundary conditions and with the initial conditions, $$\label{eq:TV_ic} \begin{aligned} u_0(x,y) &= -y e^{\frac{1}{2}(1-x^2-y^2)} + a_x,& \quad& (x,y)\in[-8,8]^2,\\ v_0(x,y) &= xe^{\frac{1}{2}(1-x^2-y^2)} + a_y, & \quad& (x,y)\in[-8,8]^2, \end{aligned}$$ with $a_x=8$ and $a_y=0$. In this case, one can obtain the following exact solution, $$\label{eq:tv_exact} \begin{aligned} u(t, x,y) &= -(y-a_yt) e^{\frac{1}{2}\big[1-(x-a_xt)^2-(y-a_yt)^2\big]} + a_x,\\ v(t, x,y) &= (x-a_xt)e^{\frac{1}{2}\big[1-(x-a_xt)^2-(y-a_yt)^2\big]} + a_y. \end{aligned}$$ We will generate the training set with $N_{int}=8192$, $N_{tb} = N_{sb} = 256$ points, chosen as low-discrepancy Sobol sequences on the underlying domains. An ensemble training procedure is performed, as described in the previous section, and resulted in the hyperparameter configuration presented in Table \[tab:euler\]. To visualize the solution, we follow standard practice and compute the vorticity $\omega = {\rm curl} ({{\bf u}})$ and present the exact vorticity and the one obtained from the PINN, generated by algorithm \[alg:PINN\] in figure \[fig:euler1\]. We remark that the vorticity can be readily computed from the PINN ${{\bf u}}^{\ast}$ by automatic differentiation. We see from the figure, that the PINN, approximates the flow field very well, both initially as well as at later times, with small numerical errors. This good quality of approximation is further reinforced by the generalization error , computed from with $10^5$ uniformly distributed random points, and presented in Table \[tab:euler\]. We see that the generalization error for the best hyperparameter configuration is only $0.012\%$, indicating very high accuracy of the approximation for this test problem. ### Double shear Layer We consider the two-dimensional Euler equations in the computational domain $D = [0,2\pi]^2$ with periodic boundary conditions and consider initial data with the underlying vorticity, shown in figure \[fig:euler2\] (Top Left). This vorticity, corresponds to a velocity field that has been evolved with a standard second-order finite difference projection method, with the well-known double shear layer initial data [@BCG1], evolved till $T=1$. We are interested in determining if we can train a PINN to match the solution for later times. 
To this end, we acknowledge that the underlying solution is rather complicated (see figure \[fig:euler2\] Top row) for the corresponding reference vorticity, and consists of fast moving sharp vortices. Moreover, the vorticity is high, implying from the bound , that the generalization errors with PINNs can be high in this case. Hence, we consider training sets with larger number of points than the previous experiment, by setting $N_{int}=65536$ and $N_{tb}=N_{sb}=16384$. The ensemble training procedure resulted in hyperparameters presented in Table \[tab:euler\]. We present the approximate vorticity computed with the PINN, together with the exact vorticity, in figure \[fig:euler2\], at three different times. From the figure, we see that the vorticity is approximated by the PINN quite well. However, the sharp vortices are smeared out and this is particularly apparent at later times. This is not surprising as the underlying solution is much more complicated in this case. Moreover, we have trained the PINN to approximate the velocity field, rather than the vorticity, and the generalization error is still quite low at $3.8 \%$ (see Table \[tab:euler\]). [.3]{} ![Reference (Top Row) and PINN generated (Bottom Row) vorticities for the double shear layer problem at different times[]{data-label="fig:euler2"}]({Images/DS_Samples_w_ex_3} "fig:"){width="1\linewidth"} [.3]{} ![Reference (Top Row) and PINN generated (Bottom Row) vorticities for the double shear layer problem at different times[]{data-label="fig:euler2"}]({Images/DS_Samples_w_ex_5} "fig:"){width="1\linewidth"} [.3]{} ![Reference (Top Row) and PINN generated (Bottom Row) vorticities for the double shear layer problem at different times[]{data-label="fig:euler2"}]({Images/DS_Samples_w_ex_7} "fig:"){width="1\linewidth"} [.3]{} ![Reference (Top Row) and PINN generated (Bottom Row) vorticities for the double shear layer problem at different times[]{data-label="fig:euler2"}]({Images/DS_Samples_w_0} "fig:"){width="1\linewidth"} [.3]{} ![Reference (Top Row) and PINN generated (Bottom Row) vorticities for the double shear layer problem at different times[]{data-label="fig:euler2"}]({Images/DS_Samples_w_2} "fig:"){width="1\linewidth"} [.3]{} ![Reference (Top Row) and PINN generated (Bottom Row) vorticities for the double shear layer problem at different times[]{data-label="fig:euler2"}]({Images/DS_Samples_w_4} "fig:"){width="1\linewidth"} Code {#code .unnumbered} ==== The building and the training of PINNs, together with the ensemble training for the selection of the model hyperparameters, are performed with a collection of Python scripts, realized with the support of PyTorch <https://pytorch.org/>. The scripts can be downloaded from <https://github.com/mroberto166/Pinns>. Discussion {#sec:6} ========== Physics informed neural networks (PINNs), originally proposed in [@Lag1], have recently been extensively used to numerically approximate solutions of PDEs in different contexts [@KAR1; @KAR2; @KAR4] and references therein. The main aim of this paper was to explain this efficient approximation by PINNs rigorously. We do so by proving bounds on the underlying generalization error i.e, error of the neural network on unseen data. To this end, we introduce an abstract framework, where an abstract nonlinear PDE is approximated by a PINN, generated with the algorithm \[alg:PINN\]. The key point of this algorithm was to minimize the PDE residual , at training points chosen as the quadrature points, corresponding to an underlying quadrature rule . 
The resulting generalization error is bounded in the abstract error estimate , in terms of the training error, the number of training samples and constants that stem from a stability estimates for the underlying PDE. In particular for *well-trained networks* , the approximation by PINNs converges to the solution of the underlying PDE as the number of training samples is increased. Thus, we leverage the stability of nonlinear PDEs, together with accuracy of quadrature rules, in order to bound the generalization error for PINNs, providing a rigorous explanation for their robust performance. We illustrate our approach with three representative examples for PDEs, i.e, semilinear parabolic PDEs, viscous scalar conservation laws and the incompressible Euler equations of fluid dynamics. For each of these examples, the abstract framework was worked out and resulted in the bounds , , on the PINN generalization errors. All the bounds were of the form , in terms of the training and quadrature errors and with constants relying on stability (and regularity) estimates for *classical* solutions of the underlying PDE. We also presented numerical experiments to validate the proposed theory. The numerical results were consistent with the derived estimates on the generalization error. For the heat equation, the results showed that PINNs are able to approximate the solutions accurately, even for very high ($100$) dimensional problems. For the viscous scalar conservation laws, we observed very accurate approximations with PINNs, for all values of the viscosity, as long as the underlying solution was at least Lipschitz continuous. However, as expected from the estimate , the accuracy deteriorated for the inviscid problem, when shocks were formed. Results with the two-dimensional incompressible Euler equations were also consistent with the derived error estimate . With the exception of the very recent paper [@DAR1], this article is one of the first rigorous investigations on approximations of PDEs by PINNs. Given that [@DAR1] also considers this theme, it is instructive to compare the two articles and highlight differences. In [@DAR1], the authors focus on consistent approximation by PINNs by estimating the PINN loss in terms of the number of randomly chosen training samples and training error, corresponding to a *loss function*, regularized with Hölder norms of PDE residual, (Theorem 2.1 of [@DAR1]). Under assumptions of vanishing training error and uniform bounds on the residual in Hölder spaces, the authors prove convergence of the resulting PINN to the classical solution of the underlying PDE as the number of training samples increase. They illustrate their abstract framework on linear elliptic and parabolic PDEs. Although similar in spirit, there are major differences in our approach compared to that of [@DAR1]. First, we do not need any additional regularization terms and directly estimate the generalization error from the stability estimate and loss function . Convergence is proved to the underlying PDE solution as number of training samples increases (see lemma \[lem:1\]), under conditions on the training error (similar to assumption 3.3 (2) of [@DAR1]) and bounds on weights (similar to assumption 3.3 (3) of [@DAR1]). Second, given the abstract formalism of section \[sec:2\], in particular the error estimate , we can cover very general PDEs, including non-linear PDEs. In fact, any PDE with a stability estimate of the form is covered by our approach. 
We believe that this provides a unified explanation, in terms of stability estimates and quadrature rules, for the robust performance of PINNs for approximating a large number of linear and non-linear PDEs. Finally, our current work has certain advantages as well as limitations and forms the foundation for the following extensions, - We provided a very abstract framework for deriving error estimates for PINNs here and illustrated this approach for some representative linear and non-linear PDEs. Given the generality and universality of the proposed formalism, it is possible to extend the estimates for approximating a very wide class of PDEs that satisfy stability estimates of the form and have classical solutions. In addition to the examples here, such PDEs include all well-known linear PDEs such as linear elliptic PDEs, wave equations, Maxwell’s equations, linear elasticity, Stokes equations, Helmholtz equation etc as well as nonlinear PDEs such as Schrödinger equations, semilinear elliptic equations, dispersive equations such as KdV and many others. Extension of PINNs to these equations can be performed readily. - A major limitation of the estimate on the generalization error lies in the fact that the rhs in this estimate involves the training error . We are not able to estimate this error rigorously. However, this is standard in machine learning [@MLbook2], where the training error is computed *a posteriori* (see the numerical results for training errors in respective experiments). The training error stems from the solution of a non-convex optimization problem in very high dimensions and there are no robust estimates on this problem. Once such estimates become available, they can be readily incorporated in the rhs of . For the time being, the estimate should be interpreted as *as long as the PINN is trained well, it generalizes well*. - Although never explicitly assumed, stability estimates of the abstract form , at least for nonlinear PDEs, amounts to regularity assumptions on the underlying solutions. For instance, the subspace $Z$ in is often a space of functions with sufficiently high Sobolev (or Hölder) regularity. The issue of regularity of the underlying PDE solutions is brought to the fore in the numerical example (see figure \[fig:burg1\] and table \[tab:burg\]) for the viscous Burgers’ equation. Here, we clearly need the underling PDE solution to be Lipschitz continuous, for the PINN to appproximate it. Can PINNs approximate rough solutions of PDEs? The techniques introduced here may not suffice for this purpose and non-trivial extensions are needed. - Finally, we have focused on the forward problems for PDEs in this paper. One of the most attractive features of PINNs is their ability to solve Inverse problems [@KAR2; @KAR4; @KAR6], with the same computational costs and efficiency as the forward problem. We prove rigorous generalization error estimates for PINNs, approximating inverse problems for PDEs in the companion paper [@MM2]. Acknowledgements. {#acknowledgements. .unnumbered} ================= The research of SM and RM was partially supported by European Research Council Consolidator grant ERCCoG 770880: COMANFLO. [^1]: Seminar for Applied Mathematics (SAM), D-Math ETH Zürich, Rämistrasse 101, Zürich-8092, Switzerland [^2]: Seminar for Applied Mathematics (SAM), D-Math ETH Zürich, Rämistrasse 101, Zürich-8092, Switzerland.
--- abstract: 'We consider a matrix space based on the spin degree of freedom, describing both a Hilbert state space, and its corresponding symmetry operators. Under the requirement that the Lorentz symmetry be kept, at given dimension, scalar symmetries, and their representations are determined. Symmetries are flavor or gauge-like, with fixed chirality. After spin 0, 1/2, and 1 fields are obtained in this space, we construct associated interactive gauge-invariant renormalizable terms, showing their equivalence to a Lagrangian formulation, using as example the previously studied (5+1)-dimensional case, with many standard-model connections. At 7+1 dimensions, a pair of Higgs-like scalar Lagrangian is obtained naturally producing mass hierarchy within a fermion flavor doublet.' author: - 'J. Besprosvany and R. Romero' date: 'Instituto de Física, Universidad Nacional Autónoma de México, Apartado Postal 20-364, México 01000, D. F., México ' title: Representation of quantum field theory in an extended spin space and fermion mass hierarchy --- = 1.5ex plus 1pt PACS: 12.60.-i, 11.15.-q, 12.10.Dm, 11.30.Rd . Introduction ============ The current theory of elementary particles, the standard model (SM), is successful in describing their behavior, but it is phenomenological. The origin of the interaction groups, the particles’ spectrum and representations, and parameters has remained largely unexplained. A unified theory can aim to build physical objects from the most elementary ones. The generalization of features of the model into larger structures with a unifying principle has suggested connections among the observables. Thus, additional spatial dimensions in Kaluza-Klein theories are associated with gauge symmetries, and larger gauge groups in grand-unified theories (GUTs)$\cite{unification}$ put some restrictions on them. Particles and interactions obey Lorentz-scalar and local symmetries, associated to gauge groups. The fundamental representation of the Lorentz group is physically manifested in elementary-particle fermions, while the vector representation corresponds to interaction bosons. On the other hand, fermions occupy the scalar-group fundamental representation, and vector particles the adjoint. In addition, the description and quantification[^1] of particles and interactions have similar consistency requirements, as restrictions on the representations from unitarity. These notable similarities and connections between the existing particles’ discrete degrees of freedom point to a common origin, and hence, a simple composite description. Indeed, a shared vector space, was proposed[@Jaime; @JaimeB] that generalizes spin, and accommodates scalar and Lorentz degrees of freedom; at given dimension\[d\], this space constrains the symmetries and representations, and its generators in the dimensions beyond 3+1 are associated with scalar symmetries. While only a simple admixture of Lorentz and scalar groups is permitted by the Coleman-Mandula theorem[@Coleman], additional non-trivial information is obtained from the spin-space scheme, as chiral and vector characterizations emerge from the symmetries and particle representations. Similarly to the supersymmetry case[@wess], the dimension of the space constrains the particle spectrum; as we will explain, the interactions are also constrained. Within a Kaluza-Klein framework, this extension may be viewed as a consequence of the spatial components being frozen. 
Conceptually, the matrix construction stems from incremental direct products with $2\times 2$ matrices, suggesting the discrete Hilbert space used is built up from the most elementary degrees of freedom (e. g., q-bits or spin-1/2 particles.) Although standard SM extensions provide additional information on it, many puzzles remain unsolved. With its bottom-up approach, this model reduces the available groups and representations to fit particles and their quantum numbers, in contrast, e. g., to the representation choices available in GUTs, and to the multiplicity of compactification options that plagues strings. While this scheme was used before to derive information on coupling constants[@bespro9m1], SM representations [@bespro9m1; @Besprosub], and relations between electroweak boson masses[@Besprosub], a formal treatment to produce an interactive model was missing. In this paper, we construct step-by-step gauge- and Lorentz-invariant terms from fields within representations and symmetries that derive from the extended spin space, which translates into a Poincaré-invariant Lagrangian theory. In particular, we show formally the equivalence of a gauge-invariant field theory, written in such a space, and a standard formulation, thus extending and complementing previous work[@bespro9m1; @Besprosub; @BesproMosh]; each vertex type exhibits particular features. We also find that the scalar fermion-scalar term in (7+1)-d implies a hierarchy in the fermion masses. The paper is organized as follows. In Section 2, we review the spin-space extension, based on the Dirac equation, and its connection to a matrix space. In particular, we present its elements’ classification, using a Clifford algebra, under the demand that Lorentz symmetry be maintained; in such a space, spinors belong to the scalar-group fundamental representation, while vectors to the adjoint representation[@Jaime; @JaimeB; @bespro9m1; @Besprosub; @BesproMosh]. In Section 3, generalized fields and symmetries are expressed in this space (using as example the (5+1)-d case). In Section 4, these are used to construct a gauge-invariant interactive theory, showing that it can be formulated in terms of a standard Lagrangian; we deal with vector-scalar, fermion-vector and vector-vector vertices, using the obtained groups $\rm SU(2)_L \bigotimes U(1)_Y$ in (5+1)-d, and correct chirality. In Section 5, fermion-scalar vertices are obtained in 7+1 d. Higgs-like scalars emerge that lead, through the Higgs mechanism, to fermion masses in a flavor doublet; Yukawa couplings naturally generate a fermion-mass hierarchy. In Section 6, we summarize relevant points in the paper. Other investigations similarly rely on the spin degree of freedom in SM extensions[@spin; @Chisholm], in trying to understand its still unresolved questions. These use an algebraic spinor represented by a matrix, where the common feature of this type of model building is the use of the structure within an associated Clifford-algebra space. In four dimensions, a $4\times4$ matrix connects to the $(3+1)$-d Clifford algebra ${\mathcal C}_{4}$. Each column in the matrix is a left ideal of the algebra. This allows for operators acting from the right, and such transformations are usually associated with gauge groups. To take account of the SM particle multiplets and gauge groups, one introduces extra spacetime dimensions. 
Different choices are made for the nature of the left ideals, the spacetime dimensions, and symmetry transformations, which leads to different models with various degrees of applicability and phenomenological implications. In Refs. [@Bracic:2005ic; @Bregar2008] models based on Clifford objects in $13+1$ d purport to explain the origin of quark and lepton families. In Ref. [@Trayling:2001kd] an algebraic spinor of ${\mathcal C}_{7}$ is used to represent one family of quarks and leptons, with Poincaré and gauge transformations restricted to act from the left and right, respectively. Other types of models include gravity and are geometric in nature. Thus, the fundamental Clifford algebra relation, usually taken as a real algebra, is given in terms of an abstract vector basis $\{e_{\mu}\}$ as $$\begin{aligned} \label {geometrical} e_{\mu}e_{\nu}+e_{\nu}e_{\mu}=g_{\mu\nu} ,\end{aligned}$$ without reference to the gamma matrices. To cite some recent examples (in no manner an exhaustive list), in Refs. [@Lu2011] and [@Nesti2009] models include the SM gauge groups and gravity, the former based on ${\mathcal C}_{6}$, and the latter on ${\mathcal C}_{3+1}$, which assumes a column spinor within an algebraic matrix. Ref. [@Pavsic2010] also advances a model including gauge and gravity fields, motivated by strings and branes models, and set up in a $16$-d Clifford space. Gamma-matrix symmetry classification ==================================== The Dirac equation formulated over the matrix $\Psi$ (and corresponding conjugate equation) $$\begin{aligned} \label {JaimeqDi} \gamma_0( i \partial_\mu\gamma^\mu -M)\Psi ={ 0},\end{aligned}$$ may be used as framework for the classification of states and operators in an extended space,[^2] and study symmetry transformations. It also generates free-particle fermion and bosons on the extended space. These matrices generate an algebra, and may be also viewed in terms of their bra-ket components: $$\begin{aligned} \label {Jaimeqnext} \Gamma= \sum c_{a b}|a \rangle \langle b |,\end{aligned}$$ with $c_{a b}$ $c$-numbers. The dot product between the elements $\Gamma_a$, $\Gamma\textbf{}_b$ can be defined using the trace $$\begin{aligned} \label {dotprod} {\rm tr \ } \Gamma_a^\dagger \Gamma_b.\end{aligned}$$Assuming $\Gamma$ in Eq. \[Jaimeqnext\] to be unitary we obtain the condition[@Jaime; @JaimeB] on the $c_{a b}$ values $$\begin{aligned} \label {unitarity} \Gamma^\dagger\Gamma= \sum c^*_{b a} c_{b c} = \delta_{ac}.\end{aligned}$$ Implicit in the $| a\rangle \langle b |$ matrix construction is the appropriate transformation operators $U$ acting on field states $\Psi$; these can generically be characterized by the expression $$\begin{aligned} \label {transfoGen} \Psi\rightarrow U \Psi U^\dagger.\end{aligned}$$ We show next that a matrix $\Gamma$ can be associated to either $\Psi$ and $U$, the latter representing both Lorentz and scalar symmetries. We also show that a 4-dimensional Clifford matrix subalgebra is obtained, implying spinor up to bi-spinor elements, thus vectors and scalar fields, can be described. An operator $Op$ within this space characterizes a state $\Psi$ with the eigenvalue rule $$\begin{aligned} \label {OpAct} [Op,\Psi]=\lambda \Psi, \end{aligned}$$ consistent with the hole interpretation, and anticipating a second-quantization description. For example, an on-shell boson may be constructed by two fermion components with positive frequencies $\psi_1(x)$, $\bar\psi_2 (x)$ through $\psi_1(x)\bar\psi_2(x)$, following Eq. 
\[Jaimeqnext\], with $\bar\psi_2 (x)$ describing an antiparticle. Eq. \[JaimeqDi\], keeping $\mu=0,...,3$, may be assumed within the larger Clifford algebra[^3] ${\mathcal C}_{N}$, $\{ \gamma_\eta ,\gamma_\sigma \} =2 g_{\eta\sigma}$, $\eta,\sigma=0,...,N-1$, with $N$ the (assumed even) dimension; the structure of ${\mathcal C}_{N}$ is helpful in classifying the available symmetries $U$ and solutions $\Psi$, both represented by $2^{N/2}\times 2^{N/2}$ matrices. The 4-d Lorentz symmetry is maintained, and uses the generators $$\begin{aligned} \label {sigmamunu} \sigma_{\mu\nu}=\frac{i}{2}[\gamma_\mu ,\gamma_\nu ], \end{aligned}$$ where $\mu ,\nu =0,...,3.$ $U$ also contains $\gamma_a$, $a=4,...,N-1$, and their products as possible symmetry generators. The $N=4$ case was analyzed in Refs. [@Jaime; @JaimeB], $N=6$ in [@Jaime; @JaimeB; @Besprosub], and $N=10$ in [@bespro9m1]. Indeed, the latter elements are Lorentz scalars, for they commute with the Poincaré generators, which contain $\sigma_{\mu\nu}$; they are also symmetry operators of the massless Eq. \[JaimeqDi\], which is bilinear in $\gamma_\mu$, $\mu=0,...,3$, something that is not necessarily the case for mass terms (containing $\gamma_0$). In addition, their products with $\tilde\gamma_5=-i\gamma_0\gamma_1\gamma_2\gamma_3$ are Lorentz pseudoscalars. As $[\tilde\gamma_5,\gamma_a]=0$, we can classify the (unitary) symmetry algebra as ${\mathcal S}_{N-4}={\mathcal S}_{(N-4)R} \bigotimes {\mathcal S}_{(N-4)L}$, consisting of the projected right-handed ${\mathcal S}_{(N-4)R}= \frac{1}{2}(1+\tilde\gamma_5){\rm U}(2^{(N-4)/2})$ and left-handed ${\mathcal S}_{(N-4)L}= \frac{1}{2}(1-\tilde\gamma_5){\rm U}(2^{(N-4)/2})$ components, where ${\rm U}(M)$ is a representation of the $M$-unitary group in ${\mathcal C}_{N}$. Its reduced form $\tilde {\rm U}(2^{(N-4)/2})\subset {\mathcal C}_{N-4}$, with ${\mathcal C}_{N} = {\mathcal C}_{4}\bigotimes {\mathcal C}_{N-4}$, is the irreducible fundamental representation. The operator algebra was described in Refs. [@bespro9m1] and [@BesproMosh]. A state $\Psi$ is classified in accordance with the above symmetry generators that emerge from the Clifford algebra. For given dimension $N$, any matrix element representing a state is obtained by combinations of products of one or two $\gamma_\mu$, and elements of ${\mathcal S}_{N-4}$, which define, respectively, its Lorentz (as for 4-d) and scalar-group representation. There is a finite number of partitions of the matrix space for the states and symmetry operators, consistent with Lorentz symmetry. The variations of the symmetry algebra are defined by the projection operators ${\mathcal P}_P, \ {\mathcal P}_S\in {\mathcal S}_{N-4}$ with $[{\mathcal P}_P,{\mathcal P}_S]=0$; ${\mathcal P}_P$ acts on the Lorentz generator $$\begin{aligned} \label{Lorentgen} {\mathcal P}_P[\frac{1}{2}\sigma_{\mu\nu}+i(x_\mu\partial_\nu- x_\nu\partial_\mu)],\end{aligned}$$ and ${\mathcal P}_S$ on the symmetry operator space $$\begin{aligned} \label{Sprima}{\mathcal S}_{N-4}^\prime={\mathcal P}_S {\mathcal S}_{N-4},\end{aligned}$$ leading to projected scalar generators $I_a={\mathcal P}_S I_a$, so that they determine, respectively, the Poincaré generators and the scalar groups. *(Tables 1(a) and 1(b) appear here: schematic block arrangement of the symmetry operators and of the corresponding solution representations; see the description in the next paragraph.)* The application of these operators follows the operator rule in Eq. \[OpAct\], which assigns states to particular Lorentz and scalar group representations. 
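To make this classification concrete, it can help to fix one explicit matrix realization of ${\mathcal C}_{6}$. The choice below is an illustrative assumption of ours (any unitarily equivalent realization serves equally well): starting from the usual $4\times 4$ Dirac matrices $\gamma^{\mu}_{(4)}$, set $$\begin{aligned} \gamma^{\mu}=\gamma^{\mu}_{(4)}\otimes\sigma_{3}\;\;(\mu=0,\dots,3),\qquad \gamma^{5}=\mathbf{1}_{4}\otimes i\sigma_{1},\qquad \gamma^{6}=\mathbf{1}_{4}\otimes i\sigma_{2},\end{aligned}$$ which satisfies $\{\gamma_{\eta},\gamma_{\sigma}\}=2g_{\eta\sigma}$ with $g={\rm diag}(1,-1,-1,-1,-1,-1)$. In this realization $\tilde\gamma_{5}=-i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}=\left(-i\gamma^{0}_{(4)}\gamma^{1}_{(4)}\gamma^{2}_{(4)}\gamma^{3}_{(4)}\right)\otimes\mathbf{1}_{2}$, so the additional elements $\gamma^{5}$, $\gamma^{6}$ and their product act only on the second tensor factor and commute with $\tilde\gamma_{5}$, displaying explicitly the splitting ${\mathcal C}_{6}={\mathcal C}_{4}\bigotimes{\mathcal C}_{2}$ used above.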
For simplicity, we assume ${\mathcal P}_P ={\mathcal P}_S\neq 1$, as other possibilities are less plausible[@bespro9m1]. Thus, the Lorentz or scalar operators act trivially on one side of solutions of the form $\Psi={\mathcal P }_P\Psi ( 1-{\mathcal P }_P) $, since $(1-{\mathcal P }_P){\mathcal P }_P=0$, leading to spin-1/2 states or states belonging to the fundamental representation of the non-Abelian symmetry groups, respectively. On Table 1(a), we show schematically the organization of the symmetry operators, producing corresponding Lorentz and scalar generators. Table 1[(b)]{} also depicts the resulting solution representations, distributed according to their Lorentz classification: fermion, scalar, vector, and antisymmetric tensor. The matrices are classified according to the chiral projection operators $\frac{1}{2}(1\pm\tilde \gamma_5)$, leading to $N/2 \times N/2$ matrix blocks in ${\mathcal C}_{N}$. The space projected by ${\mathcal P}_P ={\mathcal P}_S\neq 1$ is also depicted. The chiral property of the fermion representations contrasts with the difficulty to reproduce it in traditional Kaluza-Klein extensions[@Witten]. In addition, when deriving a unitary subgroup SU$(M-4)$, for arbitrary $M$, departing from an extended Lorentz group requires O$(2M-5,1)\supset$ SU$(M-4)\bigotimes$ O(3,1), while in our scheme, the subgroup chain can be chosen as U$(M-4)\supset $ O(3,1) $\bigotimes$ SU$(M-4)_{R} \bigotimes $ SU$(M-4)_{L}$. This means lower dimensional spaces are sufficient to reproduce the SM groups, reducing the representation sizes, and eliminating spurious degrees of freedom; in addition, the right- and left-handed group separation is possible for all dimensions. While a grand-unified group limits the representations among which one must choose to put particles, in our case, the representations are determined. Indeed, the specific combinations also emerge, corresponding to spin-1/2-fundamental and vector-adjoint, Lorentz and scalar groups representations, respectively; graphically, vectors and scalar group elements occupy the same matrix spots (and similarly for fermions,) as seen in Tables 1(a) and 1(b). In the next Section, we generalize these fields. Fields and symmetries in matrix space ===================================== To construct interactive fields, we start with free fields within the $(5+1)$-d space[@Besprosub] case as example, for which we highlight predicted physical features. There, among few choices, ${\mathcal P}_P=L$, with $L=\dfrac{3}{4}-\dfrac{i}{4} (1+\tilde{\gamma}_{5} )\gamma^{5}\gamma^{6}-\dfrac{1}{4}\tilde{\gamma}_{5}$ is associated to the lepton number, and the resulting symmetry generators and particle spectrum fits the SM electroweak sector. Specifically, the projected symmetry space also includes the SU(2)$_L\bigotimes \rm U(1) _Y$ groups, with respective generators $I_i$ and hypercharge $Y$ $$\begin{aligned} \label {geneSU2Y} I_{1}&=&\dfrac{i}{4} (1-\tilde{\gamma}_{5} )\gamma^{5} \nonumber \\ I_{2}&=&-\dfrac{i}{4} (1-\tilde{\gamma}_{5} )\gamma^{6} \nonumber \\ I_{3}&=&-\dfrac{i}{4} (1-\tilde{\gamma}_{5} )\gamma^{5}\gamma^{6} \nonumber \\ Y&=&-1+\dfrac{i}{2} (1+\tilde{\gamma}_{5} )\gamma^{5}\gamma^{6}.\end{aligned}$$ We note that the $\rm SU(2)$ generators correctly contain the projection operator $\dfrac{1}{2} (1-\tilde{\gamma}_{5})$, confirming the interaction’s chiral nature, which also leads to chiral representations, a feature that results from nature of the matrix space under projector $L$ and the Lorentz group. Under Eq. 
\[OpAct\], the action of these operators on choices of free-particle states $\Psi$ is given on Table 2, together with their quantum numbers. The question of what fixes this extension’s dimension so as to derive the SM groups and representations applies similarly to GUTs, as there is also an infinite number of possible groups that contain the SM. The answer for both extensions hinges on the fact that the lowest dimensions already give relevant information, and on predictability, as, in our case, features such as the representations and the chiral SU(2) are derived.

Table 2: Free-particle electroweak multiplets in the (5+1)-d space and their quantum numbers. \[tab:table5p1\]

| Electroweak multiplet | State $\Psi$ | $I_{3}$ | $Y$ | $Q$ | $L$ | $\frac{i}{2}L\gamma^{1}\gamma^{2}$ | $L\tilde{\gamma}_{5}$ |
|---|---|---|---|---|---|---|---|
| Fermion doublet | $\frac{1}{8}(1-\tilde{\gamma}_{5})(\gamma^{0}+\gamma^{3})(\gamma^{5}-i\gamma^{6})$ | $1/2$ | $-1$ | $0$ | $1$ | $1/2$ | $-1$ |
| | $\frac{1}{8}(1-\tilde{\gamma}_{5})(\gamma^{0}+\gamma^{3})(1+i\gamma^{5}\gamma^{6})$ | $-1/2$ | $-1$ | $-1$ | $1$ | $1/2$ | $-1$ |
| Fermion singlet | $\frac{1}{8}(1+\tilde{\gamma}_{5})\gamma^{0}(\gamma^{0}+\gamma^{3})(\gamma^{5}-i\gamma^{6})$ | $0$ | $-2$ | $-1$ | $1$ | $1/2$ | $1$ |
| Scalar doublet | $\frac{1}{4\sqrt{2}}(1-\tilde{\gamma}_{5})\gamma^{0}(1-i\gamma^{5}\gamma^{6})$ | $1/2$ | $1$ | $1$ | $0$ | $0$ | $-2$ |
| | $\frac{1}{4\sqrt{2}}(1-\tilde{\gamma}_{5})\gamma^{0}(\gamma^{5}+i\gamma^{6})$ | $-1/2$ | $1$ | $0$ | $0$ | $0$ | $-2$ |
| Vector triplet | $\frac{1}{4}(1-\tilde{\gamma}_{5})\gamma^{0}(\gamma^{1}+i\gamma^{2})(\gamma^{5}-i\gamma^{6})$ | $1$ | $0$ | $1$ | $0$ | $1$ | $0$ |
| | $\frac{1}{2\sqrt{2}}(1-\tilde{\gamma}_{5})\gamma^{0}(\gamma^{1}+i\gamma^{2})\gamma^{5}\gamma^{6}$ | $0$ | $0$ | $0$ | $0$ | $1$ | $0$ |
| | $\frac{1}{4}(1-\tilde{\gamma}_{5})\gamma^{0}(\gamma^{1}+i\gamma^{2})(\gamma^{5}+i\gamma^{6})$ | $-1$ | $0$ | $-1$ | $0$ | $1$ | $0$ |

By identifying elements between the extended spin space and standard Lagrangian terms, Ref. [@Besprosub] set out rules of thumb to derive some gauge-invariant terms. For rigor’s sake, and to test the model’s reach, it is desirable to obtain such terms within the model’s algebra. Next, we translate the field information that emerges from the extended-spin space to derive an interactive gauge theory. First, we write fields in the extended-spin basis; similarly, the symmetry generators are written in a standard representation; finally, invariant terms are constructed, and shown to be equivalent to field-theory Lagrangian contributions. As derived in Section 2, and exemplified above, it is possible to write fundamental fields using as basis products of matrices composed of Lorentz and scalar group representations. 
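The algebra of Eq. \[geneSU2Y\] and individual entries of Table 2 can be checked independently with a few lines of numerical code. The sketch below is ours and purely illustrative; it uses the explicit tensor-product realization of ${\mathcal C}_{6}$ suggested above (the result is representation independent) and verifies the SU(2) commutation relations, $[Y,I_i]=0$, and the $I_3$ and $Y$ assignments of the fermion singlet via the eigenvalue rule of Eq. \[OpAct\].

```python
import numpy as np

# Pauli matrices and 4-d Dirac matrices (Dirac representation), metric (+,-,-,-).
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
I2, I4, I8 = np.eye(2), np.eye(4), np.eye(8)
g4 = [np.kron(s3, I2)] + [np.kron(1j * s2, s) for s in (s1, s2, s3)]  # gamma^0 .. gamma^3

# (5+1)-d realization: gamma^mu -> gamma^mu x s3, gamma^5 -> 1 x i*s1, gamma^6 -> 1 x i*s2.
G0, G1, G2, G3 = (np.kron(g, s3) for g in g4)
G5, G6 = np.kron(I4, 1j * s1), np.kron(I4, 1j * s2)
g5t = -1j * G0 @ G1 @ G2 @ G3                       # \tilde{gamma}_5

comm = lambda A, B: A @ B - B @ A

# Generators of Eq. [geneSU2Y].
I_1 = (1j / 4) * (I8 - g5t) @ G5
I_2 = -(1j / 4) * (I8 - g5t) @ G6
I_3 = -(1j / 4) * (I8 - g5t) @ G5 @ G6
Y = -I8 + (1j / 2) * (I8 + g5t) @ G5 @ G6

assert np.allclose(comm(I_1, I_2), 1j * I_3)        # su(2) algebra
assert np.allclose(comm(I_2, I_3), 1j * I_1)
assert np.allclose(comm(I_3, I_1), 1j * I_2)
assert all(np.allclose(comm(Y, Ii), 0) for Ii in (I_1, I_2, I_3))

# Fermion singlet of Table 2: eigenvalue rule [Op, Psi] = lambda * Psi.
Psi = (1 / 8) * (I8 + g5t) @ G0 @ (G0 + G3) @ (G5 - 1j * G6)
assert np.allclose(comm(I_3, Psi), 0 * Psi)         # I_3 eigenvalue 0
assert np.allclose(comm(Y, Psi), -2 * Psi)          # Y eigenvalue -2
print("all checks passed")
```

Analogous checks can be run for the remaining operators and states of the table; each state used here is precisely a product of a Lorentz factor and a scalar-group factor.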
Indeed, the commuting property of the respective degrees of freedom allows for states and operators to be written in the form $ {\mathcal C}_{4}{ \bigotimes }{\mathcal S}_{N-4}$; explicitly, $\Psi=M_1 M_2$, where $$\begin{aligned} \label {Separation} M_1 \in {\mathcal C}_{4} \ \ {\rm and} \ \ M_2 \in {\mathcal S}_{N-4} .\end{aligned}$$ An expression with elements of each set is possible through their passage to each side, using commutation or anticommutation rules. Fields’ construction -------------------- In the presence of interactions, free fields give way to more general expressions of fermion and boson fields, keeping their transformation properties: Vector field : $$\begin{aligned} \label {AmuExpandfinal} A_\mu^a (x) \gamma_0\gamma_\mu I_a,\end{aligned}$$ where $\gamma_0\gamma_\mu \in{\mathcal C}_{4}$, and $I_a \in {\mathcal S}_{N-4}^\prime$ is a generator of a given unitary group, and ${\mathcal S}_{N-4}^\prime$ is defined in Eq. \[Sprima\]. Scalar field : $$\begin{aligned} \label {phiExpandfinal} \phi^a (x) \gamma_0 M_a^S.\end{aligned}$$ <!-- --> Fermions : $$\begin{aligned} \label {Fermion} \psi^a_\alpha (x) L^\alpha P_F M_a^F ,\end{aligned}$$ where $M_a^S$,$M_a^F \in {\mathcal S}_{N-4}$ are, respectively, scalar and fermion components, and $L^\alpha$ represents a spin component; for example, $L^1=(\gamma_1+ i \gamma_2) $, $P_F$ is a projection operator of the type in Eq. \[Lorentgen\], such that $$\begin{aligned} \label {transfoyy} P_F \gamma_\mu= \gamma_\mu P_F^c,\end{aligned}$$ and we use the complement $P_F^c = 1- P_F$, so that a Lorentz transformation with $P_F \sigma_{\mu\nu}$, will describe fermions, as argued in Section 3. The simplest example for an operator satisfying such conditions is $P_F = (1-\tilde\gamma_5 )/2$ [@Jaime; @JaimeB], used by the fermion doublet on Table 2. By the argument after Eq. \[Lorentgen\], the fundamental-representation state is derived from the trivial right-hand action of the operator within the transformation rule in Eq. \[transfoyy\]. The matrix entitles spurious ket states contained in the Lorentz-scalar matrices in Eq. \[Separation\]. Antisymmetric tensor : It is also obtained, however, but as it leads to non-renormalizable interactions, it will hence be omitted. Symmetry Transformations ------------------------ We now describe different types of transformations that act as in Eq. \[transfoGen\]: ### Lorentz Transformation {#lorentz-transformation .unnumbered} $$\begin{aligned} \label {transfoLorentz} U=\exp (-\frac{i}{4} {\mathcal P}_P w^{\mu\nu}\sigma_{\mu\nu}),\end{aligned}$$ where $ \sigma_{\mu\nu}$ is given in Eq. \[sigmamunu\], $w^{\mu\nu}$ are parameters and $ {\mathcal P}_P $ is the scalar projector in Eq. \[Lorentgen\]. ### Gauge Transformation {#gauge-transformation .unnumbered} $$\begin{aligned} \label {transfoGauge} U=\exp [-i I_a \alpha_a(x)],\end{aligned}$$ where $I_a\in {\mathcal S}_{N-4}^\prime$, and $\alpha_a(x)$ are arbitrary functions. The unitary-group representations $\bar N\bigotimes N$, based on elements in the fundamental representation and its conjugate, denoted by $N$, $\bar N$, respectively, are implicit from the $| a\rangle \langle b |$ matrix construction in Eq. \[Jaimeqnext\]; these include the singlet, and the fundamental (expressed in $I_a$) representations, and similarly those obtained by $ N\bigotimes N$ (see, e.g., [@bespro9m1].) Lagrangian connection ===================== Historically, it is known that Maxwell’s equations can be formulated in terms of a Dirac basis[@Bargmann]. 
In our case, the fields within the extended-spin basis can be used to construct a standard-formulation Lagrangian.[^4] This amounts to using elements with a well-defined group structure to get Lorentz-scalar gauge-invariant combinations. Choosing scalar elements that result from the direct product in Eq. \[dotprod\], one obtains an interactive theory, as the same particle content is maintained. In this way, choices of Lagrangians are constrained by the same conditions as in quantum field theory, as renormalizability and quantization. We proceed by first constructing matrix elements containing the vector field, together with either fermion or bosons fields, and then converting them to expressions in terms of states’ associated bras or kets. Under Lorentz and gauge-group transformations of the extended spin space, invariant elements are obtained by taking the trace. The latter extracts the identity-matrix coefficient, leading to the usual Lagrangian components. The invariance under transformations in Eq. \[transfoGen\] can be verified independently, using the separation in Eq. \[Separation\] into Lorentz and scalar symmetries; the invariance will be shown for linear (vector-fermion) or bilinear (vector-scalar) objects, with input from Eqs. \[AmuExpandfinal\]-\[Fermion\]: Fermions -------- A gauge-invariant fermion-vector interaction term results, by adding to the fermion free-term Lagrangian (that implies the Dirac equation \[JaimeqDi\]) the vector-term contribution in Eq. \[AmuExpandfinal\] $$\begin{aligned} \label {Bilin} \frac{1}{N_f} {\rm tr}\Psi^\dagger\{ [ i\partial_\mu I_{den}+g A_\mu^a (x) I_a ] \gamma_0\gamma^\mu- M\gamma_0 \}\Psi P_f,\end{aligned}$$ where $\Psi$ is a field representing in this case spin-1/2 particles; spin-1 terms are treated below. $ I_a$ is the group generator in a given representation, $g$ is the coupling constant, $N_f$ contains the normalization (and similar terms below), and $I_{den}$ the identity scalar group operator in the same representation (which will be omitted hence). An operator $P_f$ is introduced to avoid cancelation of non-diagonal fermion elements. For example, $$\begin{aligned} \label {Pf} P_f=\frac{1}{\sqrt{2}}[(1+i) (I+\gamma^0 \gamma^2)+ \gamma^5 \gamma^6+\gamma^0 \gamma^2 \gamma^5 \gamma^6]\end{aligned}$$ as $[P_f,L]=[P_f,(1-\tilde\gamma_5)L]=0$, provides a non-trivial combination with the correct quantum numbers for the fermion pair $\Psi_a P_f\Psi^\dagger_b$ (with $\Psi_a, \Psi_b$ either doublet or singlet fermions, on Table 2), and maintains their normalization, spin, lepton and electroweak representation. As explained after Eq. \[Fermion\], Lorentz and scalar operators act non-trivially only from one side. Given the action of projection operators ${\mathcal P}_{S}$ ${\mathcal P}_{P}, $ the transformation in Eq. \[transfoGen\] becomes $$\begin{aligned} \label {transfoGenFermi} \Psi\rightarrow U \Psi.\end{aligned}$$ Eq. \[Bilin\] is invariant under the Lorentz transformation in Eq. \[transfoLorentz\], provided the vector field transforms as $$\begin{aligned} \label {transformsLor} A_\mu^a (x) I_a \rightarrow \Delta_\mu^{\ \ \nu} A_\nu^a (x) I_a,\end{aligned}$$ where we use the identity relating the spin representation of the Lorentz group in $$\begin{aligned} \label {LorentzIden} U \gamma^\mu U^{-1}=({\Delta^{-1}})^\mu_{\ \ \nu} \gamma^\nu ,\end{aligned}$$ and $\Delta^\mu_ {\ \ \nu} $ is a $4\times 4$ Lorentz transformation matrix transforming a coordinate as $x^\mu \rightarrow \Delta^\mu_{\ \ \nu} x^\nu$. 
The equation is also invariant under local transformation in Eq. \[transfoGauge\], under the condition the vector field transforms as $$\begin{aligned} \label {transforms} A_\mu^a (x) I_a \rightarrow U A_\mu^a (x) I_a U^\dagger -\frac{i}{g}(\partial_\mu U) U^\dagger ,\end{aligned}$$ The trace in Eq. \[Bilin\] can be expressed in terms of states, as we rely on the expansion in Eq. \[Jaimeqnext\] for fields $\Psi$. The fermion field in Eq. \[Fermion\], with matrix elements $\gamma\delta$, is expressed as $ [ \psi^a_\alpha (x) L^\alpha P_F M_a^F] _{\gamma\delta}=\sum_{\eta} (L^\alpha P_F)_{\gamma\eta} (M_a^F)_{\eta\delta} \langle\alpha a x |\Psi\rangle $, where $(L^\alpha P_F)_{\gamma\eta}=\langle\gamma |\alpha\rangle \sum_\beta d_{\alpha \beta} \langle \beta |\eta\rangle$, $(M_a^F)_{\eta\delta}=\langle\eta |a\rangle \sum_b f_{ab} \langle b |\delta\rangle$, with $d_{\alpha \beta}, f_{ab} $ c-numbers, and where the fermion field $ \psi^a_\alpha (x) =\langle\alpha a x |\Psi\rangle $ takes account of spin and scalar degrees of freedom. We choose commuting scalar and spin matrices as basis elements, as the fermion singlet on Table 2; we use the separability property[@CohenTanu] of the generators (for anticommuting matrices, as for the doublets, each bilinear term is separated), as the normalization condition, in Eq. \[unitarity\], cancels the ket-bra: for the spin component, $ L_\alpha P_F \tilde L_f ( L_\beta P_F)^\dagger = {( L_\alpha P_F)}_{\gamma\delta}(\tilde L_f)_{\delta\gamma}{(L_\beta P_F)} ^*_ {\epsilon\gamma}=\langle\gamma |\alpha\rangle\langle\beta |\epsilon\rangle $, and similar calculation involving the electroweak states (here $\tilde L_f$ is a reduced operator $ L_f$ acting on spin degrees of freedom.) This implies Eq. \[Bilin\] can be written $$\begin{aligned} \label {BilinNoTrace} { \psi^b_\beta}^ \dagger(x) \{ [ i\partial_\mu I_{bc}+g A_\mu^a (x) (I_a)_{bc} ]( \gamma_0\gamma^\mu )_{\beta\alpha} - M I_{bc}(\gamma_0)_{\beta\alpha} \} \psi^c_\alpha (x).\end{aligned}$$ Thus, Table 2 leads to the fermion electroweak SM Lagrangian contribution[@Glashow]-[@GlashowC], also derived heuristically in Refs. [@Besprosub] and [@BesproMosh] . $$\begin{aligned} \label {BilinNoTracElectroweak} {\bar {{\mbox{\boldmath$\Psi$\unboldmath}}_l} } [ i\partial_\mu + \frac{1}{2}g\tau^a W_\mu^a(x) - \frac{1}{2}g' B_\mu (x) ] \gamma^\mu {{\mbox{\boldmath$\Psi$\unboldmath}}_l}+{ \bar {\psi_r} } [ i\partial_\mu - g' B_\mu (x) ] \gamma^\mu { \psi_r},\end{aligned}$$ which contains a left-handed hypercharge $Y_l=-1$ SU(2) doublet ${\mbox{\boldmath$\Psi$\unboldmath}}_l$, and right-handed $Y_r=-2$ singlet $ \psi_r$, and the corresponding gauge-group vector bosons and coupling constants are, respectively, $B_\mu (x)$, $W_\mu^a(x)$, and $g$, $g'$. Spin-0 Boson ------------ A Lorentz-invariant interaction term between vector and scalar fields is constructed by applying twice the operator contained within the state $\Psi$ in Eq. \[Bilin\], removing the $\gamma_0$ matrix, following the Klein Gordon equation: $$\begin{aligned} \label {BilinNoTraceTR} {\rm tr}\ \frac{1}{N_B}\Psi^\dagger [ i \partial_\nu I_{den}+g A_\nu^b (x) I_b ] \gamma^\nu \gamma^\mu [ i\partial_\mu I_{den}+g A_\mu^a (x) I_a ] \Psi ,\end{aligned}$$ where the transformation in Eq. \[transfoGen\] is now used in the guise $\Psi\rightarrow U \Psi U^{-1}$, and the 4-d $\gamma_\mu$ are positioned in near pairs to maintain the generators $I_a$ relations (see also the vector term in Eq. 
\[vectortraceFmunu\];) this expression applies to the Lorentz transformation as in Eq. \[transfoLorentz\]. The final expression is obtained by applying the equality $\gamma_\mu \gamma_\nu =g_{\mu\nu}-i \sigma_{\mu\nu}$, as only the symmetric term $[ i\partial _\nu I_{den}+A_\nu^b (x) I_b ] [ i\partial_\mu I_{den}+A_\mu^a (x) I_a ] =\frac{1}{2} \{ i \partial _\nu I_{den}+A_\nu^b (x) I_b , i\partial_\mu I_{den}+A_\mu^a (x) I_a \}$ survives the renormalizability demand.[^5] A similar expansion as for the fermion field in Eq. \[Bilin\] can be performed; the two $\gamma_0$ matrices in the field terms of Eq. \[phiExpandfinal\], contained in $\Psi\Psi^\dagger$, lead to the identity matrix within the trace. The vector mass term resulting from the Higgs mechanism was related to mass operators within the spin-extended space, and used to connect it to the SM, in Ref. [@Besprosub]. Vector Boson ------------ We use invariant components for the vector field contained in Eq. \[AmuExpandfinal\] to construct its kinetic-energy term, and we extract the antisymmetric part $$\begin{aligned} \label {vectortraceFmunu} [ i\partial_\nu I_{den}+g A_\nu^b (x) I_b ] [ i\partial_\mu I_{den}+g A_\mu^a (x) I_a ] \frac{i}{2} [\gamma^\nu ,\gamma^\mu ]=F_{\mu\nu}^a I_a \frac{i}{2} [\gamma^\nu ,\gamma^\mu ],\end{aligned}$$ where by taking the antisymmetric tensor $[\gamma^\nu ,\gamma^\mu ]$ we extract $F_{\mu\nu}^a=\partial_\mu A^a_\nu-\partial_\nu A^a_\mu+g c^{a b d}A^b_\nu A^d_\mu $, with $ c^{a b d}$ the structure constants of the group, $[I_b ,I_d] =i c^{a b d} I_a$. We show a particular term that reproduces the kinetic vector contribution and eliminates non-renormalizable higher-derivative terms. A scalar contribution is constructed from the contraction of the two terms $$\begin{aligned} \label {vectortraceFmunuContract} \frac{1}{N_A}{\rm tr } F_{\mu\nu}^a I_a \frac{i}{2} [\gamma^\nu ,\gamma^\mu ] F_{\rho\sigma}^b I_b \frac{i}{2} [\gamma^\rho ,\gamma^\sigma ]. \end{aligned}$$ From the 4-d trace relation $$\begin{aligned} \label {traFMunu} { \rm tr }\,\gamma^\mu\gamma^\nu\gamma^\rho\gamma^\sigma=4\left(g ^{\mu \nu}g^ {\rho \sigma}-g ^{\mu\rho }g^ {\nu \sigma}+g ^{\mu \sigma}g^ {\nu \rho}\right) \end{aligned}$$ the trace is reduced to the anti-symmetrized combination, $-g ^{\mu\rho }g^ {\nu \sigma}+g ^{\mu \sigma}g^ {\nu \rho}$, up to a constant absorbed in $N_A$. We finally get the known expression for the kinetic term $-\frac{1}{4}F_{\mu\nu}^a F^{\mu\nu\ a} $. The expression in Eq. \[vectortraceFmunuContract\] may also be derived from the original corresponding standard Lagrangian[@Besprosub]. (7+1)-dimensional electroweak spinors: mass term and hierarchy =============================================================== We described fermion-vector and vector-scalar Lagrangian contributions, and in this Section we deal with fermion-scalar terms. An inherent aspect of the $(5+1)$-d space is the impossibility of defining fermion masses for both flavor-doublet components. The $(7+1)$-d space allows for charge $2/3$ and $-1/3$ terms, associated to quarks, as well as charge $-1$ and neutral lepton terms. We concentrate on quarks, although the results of this section can equally be applied to leptons. The baryon-number operator $B=\frac{1}{6}(1-i\gamma_{5}\gamma_{6})$ defines a spin-space partition obtained with the additional Clifford members $\gamma_{7}$, $\gamma_{8}$. 
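For orientation, the explicit realization suggested for ${\mathcal C}_{6}$ extends to ${\mathcal C}_{8}$ in the same illustrative fashion (again an assumption of ours, with the results independent of the choice): one may take $$\begin{aligned} \gamma^{a}\to\gamma^{a}_{(6)}\otimes\sigma_{3}\;\;(a=0,\dots,3,5,6),\qquad \gamma^{7}=\mathbf{1}_{8}\otimes i\sigma_{1},\qquad \gamma^{8}=\mathbf{1}_{8}\otimes i\sigma_{2},\end{aligned}$$ so that states and operators are now $16\times 16$ matrices. Note also that $(i\gamma_{5}\gamma_{6})^{2}=\gamma_{5}^{2}\gamma_{6}^{2}=1$, whence $(3B)^{2}=3B$; up to the factor $3$, the baryon-number operator $B$ is thus itself a projection operator, which is what defines the partition just mentioned.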
It allows for quark symmetry generators that include the hypercharge $Y=\frac{1}{6}\left(1-i\gamma_{5}\gamma_{6}\right)[1+i\frac{3}{2}(1+\tilde{\gamma}_{5})\gamma_{7}\gamma_{8}]$, the weak SU(2)$_L$ terms $$\begin{aligned} \label{isospinquarks} \begin{array}{c} {\displaystyle I_{1}=\frac{i}{8}(1-\tilde{\gamma}_{5})(1-i\gamma_{5}\gamma_{6})\gamma^{7}},\\ \\ {\displaystyle I_{2}=\frac{i}{8}(1-\tilde{\gamma}_{5})(1-i\gamma_{5}\gamma_{6})\gamma^{8}},\\ \\ {\displaystyle I_{3}=\frac{i}{8}(1-\tilde{\gamma}_{5})(1-i\gamma_{5}\gamma_{6})\gamma_{7}\gamma_{8}}, \end{array} \end{aligned}$$ flavor generators, and the Lorentz generators, with spin component $3B\sigma_{\mu\nu}$, projected by $B$, using Eq. \[sigmamunu\]. As required, $[Y,I_i]=[B,Y]=[B,I_i]=0$, and all quarks are assigned baryon number 1/3 ($-1/3$ for antiparticles). Examples of massless quark basis states, expressed as in Eq. \[Fermion\], are summarized on Table 3, for both u- and d-type quarks, with their quantum numbers. The spin component along $\hat {\bf z}$, $i\frac{3}{2}B\gamma^{1}\gamma^{2}$, is used. Only one polarization and two flavors are shown, as a more thorough treatment of the fermion flavor states will be given elsewhere[@Romero].

Table 3: (a) Massless left-handed quark weak-isospin doublet, and (b) right-handed singlets, with momentum along $\pm{\bf {\hat z}}$. \[tab:tablequarks\]

(a) Left-handed doublet (hypercharge $Y=1/3$): $${ {\mbox{\it\boldmath$Q$\unboldmath}}}_L^1=\left(\begin{array}{c} U_{L}^1 \\ D_{L}^1 \end{array}\right)=\left(\begin{array}{c} \frac{1}{16}\left(1-\tilde{\gamma}_{5}\right)\left(\gamma^{5}-i\gamma^{6}\right)\left(\gamma^{7}+i\gamma^{8}\right)\left(\gamma^{0}+\gamma^{3}\right)\\ \frac{1}{16}\left(1-\tilde{\gamma}_{5}\right)\left(\gamma^{5}-i\gamma^{6}\right)\left(1-i\gamma^{7}\gamma^{8}\right)\left(\gamma^{0}+\gamma^{3}\right) \end{array}\right)$$

| state | $I_{3}$ | $Q$ | $\frac{3i}{2}B\gamma^{1}\gamma^{2}$ |
|---|---|---|---|
| $U_{L}^1$ | $1/2$ | $2/3$ | $1/2$ |
| $D_{L}^1$ | $-1/2$ | $-1/3$ | $1/2$ |

(b) Right-handed singlets ($I_{3}=0$): $$U_R^1=\frac{1}{16}\left(1+\tilde{\gamma}_{5}\right)\left(\gamma^{5}-i\gamma^{6}\right)\left(\gamma^{7}+i\gamma^{8}\right)\gamma^{0}\left(\gamma^{0}+\gamma^{3}\right),\qquad D_R^1 =\frac{1}{16}\left(1+\tilde{\gamma}_{5}\right)\left(\gamma^{5}-i\gamma^{6}\right)\left(1-i\gamma^{7}\gamma^{8}\right)\gamma^{0}\left(\gamma^{0}+\gamma^{3}\right)$$

| state | $Y$ | $Q$ | $\frac{3i}{2}B\gamma^{1}\gamma^{2}$ |
|---|---|---|---|
| $U_{R}^1$ | $4/3$ | $2/3$ | $1/2$ |
| $D_{R}^1$ | $-2/3$ | $-1/3$ | $1/2$ |

On Table 4 we also present two scalar elements as in Eq. \[phiExpandfinal\], whose quantum numbers associate them to the Higgs doublet. These are unique within the (7+1)-d space[@Romero].

Table 4: Scalar Higgs-like pairs (baryon number $0$). \[tab:tableHiggs\]

$${ {\mbox{\boldmath$\phi$\unboldmath}}}_1=\left(\begin{array}{c} \phi_{1}^{+}\\ \phi_{1}^{0} \end{array}\right)=\left(\begin{array}{c} \frac{1}{8}\left(1-i\gamma_{5}\gamma_{6}\right)\left(\gamma^{7}+i\gamma^{8}\right)\gamma^{0}\\ \frac{1}{8}\left(1-i\gamma_{5}\gamma_{6}\right)\left(1+i\gamma_{7}\gamma_{8}\tilde{\gamma}_{5}\right)\gamma^{0} \end{array}\right),\qquad { {\mbox{\boldmath$\phi$\unboldmath}}}_2=\left(\begin{array}{c} \phi_{2}^{+} \\ \phi_{2}^{0} \end{array}\right)=\left(\begin{array}{c} \frac{1}{8}\left(1-i\gamma_{5}\gamma_{6}\right)\left(\gamma^{7}+i\gamma^{8}\right)\tilde{\gamma}_{5}\gamma^{0}\\ \frac{i}{8}\left(1-i\gamma_{5}\gamma_{6}\right)\left(1+i\gamma_{7}\gamma_{8}\tilde{\gamma}_{5}\right)\gamma^{7}\gamma^{8}\gamma^{0} \end{array}\right)$$

| component | $I_{3}$ | $Y$ | $Q$ | $\frac{3i}{2}B\gamma^{1}\gamma^{2}$ |
|---|---|---|---|---|
| $\phi_{1}^{+}$, $\phi_{2}^{+}$ | $1/2$ | $1$ | $1$ | $0$ |
| $\phi_{1}^{0}$, $\phi_{2}^{0}$ | $-1/2$ | $1$ | $0$ | $0$ |

The combination $a{ {\mbox{\boldmath$\phi$\unboldmath}}}_1+b{ {\mbox{\boldmath$\phi$\unboldmath}}}_2$ for arbitrary real $a$, $b$ is classified with the chiral projection operators $L_5=\frac{1}{2}(1-\tilde \gamma_5)$, $R_5=\frac{1}{2}(1+\tilde \gamma_5)$, giving $R_5({ {\mbox{\boldmath$\phi$\unboldmath}}}_1+{ {\mbox{\boldmath$\phi$\unboldmath}}}_2)L_5={ {\mbox{\boldmath$\phi$\unboldmath}}}_1+{ {\mbox{\boldmath$\phi$\unboldmath}}}_2$, $L_5({ {\mbox{\boldmath$\phi$\unboldmath}}}_1+ { {\mbox{\boldmath$\phi$\unboldmath}}}_2)R_5=0$, $L_5({ {\mbox{\boldmath$\phi$\unboldmath}}}_1- { {\mbox{\boldmath$\phi$\unboldmath}}}_2)R_5={ {\mbox{\boldmath$\phi$\unboldmath}}}_1-{ {\mbox{\boldmath$\phi$\unboldmath}}}_2$, $R_5({ {\mbox{\boldmath$\phi$\unboldmath}}}_1- { {\mbox{\boldmath$\phi$\unboldmath}}}_2)L_5=0$. 
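These chirality assignments follow directly from the entries of Table 4. As an explicit check (ours, written out for completeness), consider the charged components and use only that $\tilde\gamma_{5}$ commutes with $\gamma_{5},\dots,\gamma_{8}$ and anticommutes with $\gamma^{0}$, so that $\gamma^{0}L_{5}=R_{5}\gamma^{0}$: $$\begin{aligned} \phi_{1}^{+}+\phi_{2}^{+}&=\tfrac{1}{8}\left(1-i\gamma_{5}\gamma_{6}\right)\left(\gamma^{7}+i\gamma^{8}\right)\left(1+\tilde{\gamma}_{5}\right)\gamma^{0},\\ R_{5}\left(\phi_{1}^{+}+\phi_{2}^{+}\right)L_{5}&=\tfrac{1}{8}\left(1-i\gamma_{5}\gamma_{6}\right)\left(\gamma^{7}+i\gamma^{8}\right)R_{5}\left(1+\tilde{\gamma}_{5}\right)R_{5}\,\gamma^{0}=\phi_{1}^{+}+\phi_{2}^{+},\\ L_{5}\left(\phi_{1}^{+}+\phi_{2}^{+}\right)R_{5}&=\tfrac{1}{8}\left(1-i\gamma_{5}\gamma_{6}\right)\left(\gamma^{7}+i\gamma^{8}\right)L_{5}\left(1+\tilde{\gamma}_{5}\right)L_{5}\,\gamma^{0}=0,\end{aligned}$$ since $R_{5}(1+\tilde\gamma_{5})R_{5}=1+\tilde\gamma_{5}$ and $L_{5}(1+\tilde\gamma_{5})=0$. The difference $\phi_{1}^{+}-\phi_{2}^{+}=\tfrac{1}{8}(1-i\gamma_{5}\gamma_{6})(\gamma^{7}+i\gamma^{8})(1-\tilde{\gamma}_{5})\gamma^{0}$ is handled in the same way with $L_{5}$ and $R_{5}$ interchanged, and the neutral components work out analogously.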
This leads to the gauge-invariant Lagrangian $$\begin{aligned} \label {BilinHiggs} \frac{1}{N_f} {\rm tr}\{ [ m_U { \Psi_R^U}^\dagger(x) [ { {\mbox{\boldmath$\phi$\unboldmath}}}_1(x) +{ {\mbox{\boldmath$\phi$\unboldmath}}}_ 2 (x)]{ {\mbox{\boldmath$\Psi$\unboldmath}}}_L^Q(x) +m_D {{ {\mbox{\boldmath$\Psi$\unboldmath}}}_L^Q}^\dagger(x) [{ {\mbox{\boldmath$\phi$\unboldmath}}}_1 (x)-{ {\mbox{\boldmath$\phi$\unboldmath}}}_ 2 (x)] \Psi_R^D (x)] P_f \}+\{ cc \},\end{aligned}$$ in terms of the scalar fields ${ {\mbox{\boldmath$\phi$\unboldmath}}}_1(x)$=$\left(\begin{array}{c} \psi_1^{+}(x)\phi_{1}^{+} \\ \psi_1^{0}(x)\phi_{1}^{0} \end{array}\right)$ and ${ {\mbox{\boldmath$\phi$\unboldmath}}}_2(x)$=$\left(\begin{array}{c} \psi_2^{+}(x)\phi_{2}^{+}\\ \psi_2^{0}(x)\phi_{2}^{0} \end{array}\right)$, and quark fields ${ \Psi_R^U}(x)=\sum_\alpha \psi_{UR}^\alpha(x)U^\alpha_R$,  ${ \Psi_R^D}(x)=\sum_\alpha \psi_{DR}^\alpha (x)D^\alpha_R$,   and ${\mbox{\boldmath$\Psi$\unboldmath}}_L^Q(x)=\sum_\alpha \left(\begin{array}{lcr} \psi_{UL}^\alpha(x) U_{L}^\alpha \\ \psi_{DL}^\alpha(x) D_{L}^\alpha \\ \end{array}\right) $, where $P_f$ is a projection operator,$\alpha$ is a spin component, and, with hindsight, we assign the masses $$\begin{aligned} \label {massinter} m_U=(a+b)/2, \ \ m_D=(a-b)/2 . \end{aligned}$$ Eq. \[BilinHiggs\]’s configuration makes manifest the required gauge symmetries: SU(2)$_L$ for a field $\Psi(x)$ $$\begin{aligned} \label {genSU2YSymmetry} \Psi(x)\rightarrow e^{i \sum_c\alpha_c(x) I_c} \Psi(x) e^{-i \sum_d\alpha_d(x) I_d}\end{aligned}$$ leads to the non-trivial transformations $$\begin{aligned} \label {SU2YSymmetry} {\mbox{\boldmath$\phi$\unboldmath}}_1(x)-{\mbox{\boldmath$\phi$\unboldmath}}_2(x)\rightarrow e^{i \sum_c\alpha_c(x) I_c} [ {\mbox{\boldmath$\phi$\unboldmath}}_1(x)-{\mbox{\boldmath$\phi$\unboldmath}}_2 (x)] \\ {\mbox{\boldmath$\phi$\unboldmath}}_1(x)+{\mbox{\boldmath$\phi$\unboldmath}}_2(x)\rightarrow [ {\mbox{\boldmath$\phi$\unboldmath}}_1(x)+{\mbox{\boldmath$\phi$\unboldmath}}_2(x)] e^{-i \sum_d\alpha_d(x) I_d} \\ {\mbox{\boldmath$\Psi$\unboldmath}}_L^Q(x) \rightarrow e^{i \sum_c \alpha_c(x) I_c} {\mbox{\boldmath$\Psi$\unboldmath}}_L^Q(x),\end{aligned}$$ and for the U(1)$_Y$ transformation $$\begin{aligned} \label {generalU1} \Psi(x)\rightarrow e^{i \alpha_Y(x) Y} \Psi(x) e^{-i \alpha_Y(x) Y}\end{aligned}$$ implies $$\begin{aligned} \label { U1YSymmetry} {\mbox{\boldmath$\phi$\unboldmath}}_1(x)-{\mbox{\boldmath$\phi$\unboldmath}}_2(x)\rightarrow e^{i \alpha_Y(x) 1/3 } [ {\mbox{\boldmath$\phi$\unboldmath}}_1(x)-{\mbox{\boldmath$\phi$\unboldmath}}_2(x)]e^{i \alpha_Y(x) 2/3 } \\ {\mbox{\boldmath$\phi$\unboldmath}}_1(x)+{\mbox{\boldmath$\phi$\unboldmath}}_2(x)\rightarrow e^{ i \alpha_Y 4/3 } [ {\mbox{\boldmath$\phi$\unboldmath}}_1(x)+{\mbox{\boldmath$\phi$\unboldmath}}_2(x)]e^{-i \alpha_Y(x) 1/3 } \\ {\mbox{\boldmath$\Psi$\unboldmath}}_L^Q(x) \rightarrow e^{i \alpha_Y(x) 1/3 } {\mbox{\boldmath$\Psi$\unboldmath}}_L^Q(x) \\ \Psi_R^U(x) \rightarrow e^{i \alpha_Y(x) 4/3 } \Psi_R^U(x) \\ \Psi_R^D(x) \rightarrow e^{ -i \alpha_Y(x) 2/3 } \Psi_R^D(x).\end{aligned}$$ These relations imply scalar components are connected to the SM Higgs ${\bf H}$ through the assignments $$\begin{aligned} \label {correspH} { \bf H(x)} \sim {\mbox{\boldmath$\phi$\unboldmath}}_1(x)-{\mbox{\boldmath$\phi$\unboldmath}}_2(x)\\ \label {correspHo} \tilde{\bf H}^\dagger(x) \sim {\mbox{\boldmath$\phi$\unboldmath}}_1(x)+{\mbox{\boldmath$\phi$\unboldmath}}_2(x) ,\end{aligned}$$ where the conjugate representation 
corresponds to $\tilde{\bf H}(x)=i I_2 {\bf H}^*(x)$; a unitary transformation connects them to their conjugates, e. g. (see Table 4), $$\begin{aligned} \label {transforH} {\phi_{1}^{+}}^\dagger+{\phi_{2}^{+}}^\dagger=-2 I_2\gamma_2(\phi_{1}^{0}+\phi_{2}^{0})^*\gamma_2,\end{aligned}$$ and the Dirac representation for the $\gamma_\mu$ matrices fixes charge conjugation. After the Higgs mechanism[@higgsMech]-[@higgsMechC], only neutral fields survive, and the same basis as in Table 4 is used for the vacuum expectation value, leading to the mass Lagrangian $$\begin{aligned} \label {HiggsComponents} H_v= a \phi_1^0 + b \phi_2^0 + a {\phi_1^0}^\dagger + b {\phi_2^0}^\dagger . \end{aligned}$$ This term produces fermion eigenstates and masses from the Yukawa coupling parameters through the relations $$\begin{aligned} \nonumber H_v U_{M}^1=m_U U_{M}^1, \ \ H_v U_{M}^{c1} =-m_U U_{M}^{c1} , \\ \label{MassiveEigenvalues} H_v D_{M}^1 =m_D D_{M}^1 , \ \ H_v D_{M}^{c1} =-m_D D_{M}^{c1}, \end{aligned}$$ where $U_{M}^{c1}$, $ D_{M}^{c1}$ correspond to negative-energy solution states (and similarly for opposite spin components). These states are listed on Table 5 with their quantum numbers;[^6] only two flavors are shown.

Table 5: Massive quark eigenstates of $H_v$.

| massive quarks | $H_{v}$ | $Q$ | $\frac{3i}{2}B\gamma^{1}\gamma^{2}$ |
|---|---|---|---|
| $U_{M}^1 =\frac{1}{\sqrt{2}} ({U_L^1}+{U_R^1})$ | $m_U$ | $2/3$ | $1/2$ |
| $D_{M}^1=\frac{1}{\sqrt{2}} ({D_L^1}-{D_R^1})$ | $m_D$ | $-1/3$ | $1/2$ |
| $U_{M}^{c1} =\frac{1}{\sqrt{2}} (U_{L}^1-U_{R}^1)$ | $-m_U$ | $2/3$ | $1/2$ |
| $D_{M}^{c1}=\frac{1}{\sqrt{2}}({D_L^1}+{D_R^1})$ | $-m_D$ | $-1/3$ | $1/2$ |

The role played by $m_U$, $m_D$ in Eq. \[MassiveEigenvalues\] confirms their mass interpretation in Eq. \[massinter\]. In addition, the particular dependence on the $a$, $b$ parameters implies a flavor-doublet mass-hierarchy effect if they represent a comparably large scale, O($a$) $\simeq$ O($b$). This interpretation is supported by the connection among the Higgs components of Table 4, ${{\mbox{\boldmath$\phi$\unboldmath}}}_2= \gamma_5 {{\mbox{\boldmath$\phi$\unboldmath}}}_1$, since ${{\mbox{\boldmath$\phi$\unboldmath}}}_2$ can be generated from ${{\mbox{\boldmath$\phi$\unboldmath}}}_1$ by the transformation $$\begin{aligned} \label {UnitaryGamma5} {{\mbox{\boldmath$\phi$\unboldmath}}}_2=-i e^{i\beta \gamma_5} {{\mbox{\boldmath$\phi$\unboldmath}}}_1 e^{-i\beta \gamma_5} \end{aligned}$$ for $\beta=\pi/4$; it is further supported by a compositeness property, as one may construct the Higgs wave function from the fermions. This is shown in the relations $$\begin{aligned} \label {HiggsComponentsU} {\phi_1^0}^\dagger+{\phi_2^0}^\dagger=U_{L}^1 {U_{R}^1}^\dagger+U_{L}^2 {U_{R}^2}^\dagger\end{aligned}$$ $$\begin{aligned} \label {HiggsComponentsD} {\phi_1^0}^\dagger-{\phi_2^0}^\dagger=-D_{L }^1 {D_{R }^1}^\dagger-D_{L }^2 {D_{R }^2}^\dagger ,\end{aligned}$$ and the second spin component may be obtained by flipping the spin, for example, $D_{L }^2 =\frac{3 i }{2} B (\gamma_2 \gamma_3-i \gamma_3 \gamma_1) D_{L }^1$. Conclusions =========== This paper presented two related themes: one formal, dealing with translating a previously proposed SM extension to a Lagrangian formalism, and the other phenomenological, dealing with deriving a hierarchy effect from the model. It explained the steps aimed at the model’s formalization, providing a field-theory formulation; the final objective is to use its restrictions to obtain SM information. 
Conversely, a field theory can be formulated in this basis, which may provide insight into the symmetries and representations used. A matrix space is used in which both symmetry generators and fields are formulated. For given dimension, a chosen non-trivial projection operator ${\mathcal P}_P$ constrains the matrix space, determining the symmetry groups, and the arrangement of fermion and boson representations. In particular, spin-1/2, and 0 states are obtained in the fundamental representation of scalar groups and spin-1 states in the adjoint representation. After expressing fields within this basis, a gauge-invariant field theory is constructed, based on the Lorentz and obtained scalar symmetries. Features obtained from the (5+1)-d extension are formulated through a Lagrangian: the gauge symmetry $\rm SU(2)_L \bigotimes U(1)_Y $ and global lepton $\rm U_{Le}(1)$ groups with the vector bosons associated to SU(2)$_L$, acting only on the model’s predicted representations: left-handed fermions; a scalar doublet associated to a Higgs particle; leading to scalar vector and fermion vertices. Special features emerge in the Lagrangian construction, as the need of a projection operator and Dirac-matrix rules to maintain Lorentz invariance. Within the (7+1)-d case, we showed a pair of Higgs-like scalars induce hierarchy in the masses of flavor-doublet fermions, confirming the model’s predictive power. The paper’s SM extension satisfies basic requirement of correct symmetries, including Lorentz and gauge ones, description of SM particles, and field-theory formulation, in addition to its SM prediction provision (the latter two is what the paper deals with.) This supports the view that it is an extension worth considering. With the Poincaré and SM-gauge symmetric Lagrangian presentation of the model, renormalization and quantization conditions can be applied, leading to a quantum field theory formulation. A future goal is to apply this framework to supersymmetry. The latter has in common with the extended spin representations classified by a Clifford algebra with Lorentz indexing. This suggests a closer connection between these frameworks. As restrained matrix spaces provide information on fundamental interactions and physical-particle representations, it is worth investigating whether this information can be obtained within supersymmetry, with the ultimate goal of explaining the origin of interactions. [99]{} H. Georgi and S. L. Glashow, Phys. Rev. Lett. [**32**]{}, 438 (1974). J. Besprosvany, Int. J. Theor. Phys. [ **39**]{}, 2797 (2000). J. Besprosvany, Nuc. Phys. B (Proc. Suppl.) [**101**]{}, 323 (2001). S. Coleman and J. Mandula, Phys. Rev. [**159**]{}, 1251 (1967). J. Besprosvany, Phys. Lett. B [**578**]{} 181 (2004). J. Wess, and J. Bagger, [*Supersymmetry and Supergravity*]{}, (Princeton University Press, Princeton, 1992). J. Besprosvany, Phys. Lett. B [**578**]{} 181 (2004). J. Besprosvany, Int. J. Mod. Phys. A [**20**]{} 77 (2005). J. Besprosvany and R. Romero, [*AIP Conf. Proc.*]{} [**1323**]{} 16 (American Institute of Physics, Melville, New York, 2010). K. Shima, Phys. Lett. B [**501**]{} 237 (2001). J. R. S. Chisholm and R. S. Farwell, J. Phys A: Math. Gen. [**22**]{} 1059 (1989). A. Borstnik Bracic and N. S. Mankoc Borstnik, Phys. Rev. D [**74**]{} 073013 (2006). G. Bregar, M. Breskvar, D. Lukman, N. Mankoc Borstnik, New J.Phys. [**10** ]{} 093002 (2008). G. Trayling and W. B. Baylis, J. Phys A: Math. Gen. [**34**]{} 3309 (2001). W. Lu, Adv. Appl. 
Clifford Algebras [**21**]{} 145 (2011). F. Nesti, The European Physical Journal C [**59**]{} (3) 723 (2009). M. Pavsic, Advances in Applied Clfford Algebras [**20** ]{}(3-4) 781 (2010). E. Witten, Nucl. Phys. B [**186**]{} 412 (1981). V. Bargmann and E. P. Wigner, Proc. Nat. Acad. Sci. (USA) [**34**]{}, 211 (1948). C. Cohen-Tannoudji, B. Diu, F. Laloe, [*Quantum Mechanics*]{} Vol I, New York, Wiley; Paris, Hermann, (1977). S. L. Glashow, Nucl. Phys. [**22**]{}, [ 579]{} (1961). S. Weinberg, Phys. Rev. Lett. [**19**]{}, 1264 (1967). A. Salam, in W. Svartholm (Ed.), [*Elementary Particle Theory*]{}, (Almquist and Wiskell, Stockholm, 1968). R. Romero and J. Besprosvany, Electroweak quark model in extended spin space, in preparation. F. Englert and R. Brout, Phys. Rev. Lett. [**13**]{} 321 (1964). P. W. Higgs, Phys. Lett. [**12**]{} 132 (1964). H. E. Haber, G. L. Kane, T. Sterling, Nucl. Phys. B [**161**]{} 493 (1979). [**Acknowledgements**]{} The authors acknowledge support from DGAPA-UNAM, project IN115111. [^1]: Expressed in quantum numbers within quantum mechanics. [^2]: We assume throughout $\hbar=c=1$, and 4-d diagonal metric elements $g_{\mu\nu}=(1,-1,-1,-1).$ [^3]: Understood here also as a matrix space. [^4]: Alternatively, the fields’  Lagrangian describing them can be reinterpreted in terms of this basis. [^5]: Where we use the operator equality $AB=\frac{1}{2} [A,B]+\frac{1}{2} \{A,B\}$, and the antisymmetric term cancels through the trace on the (3+1)-d spinor indices [^6]: The fermion states shown can be interpreted as either massive quarks or massive leptons (charged particle and neutrino pairs), according to the choice of the $Y$ operator.
--- abstract: 'We give an explicit representation of central measures corresponding to finite sequences of complex . Such measures are intimately connected to central functions. This enables us to prove an explicit representation of the non-stochastic spectral measure of an arbitrary multivariate autoregressive stationary sequence in terms of the covariance sequence.' author: - Bernd Fritzsche - Bernd Kirstein - Conrad Mädler bibliography: - '151arxiv.bib' title: 'Rational $q\times q$  Functions and Central Non-negative Measures' --- Mathematical Subject Classification (2000) : Primary 30E05, 60G10, Secondary 42A70 Introduction ============ If $\kappa$ is a integer or if $\kappa=\infty$, then a sequence $\Cbk$ of complex is called if, for each integer $n$ with $n\leq\kappa$, the block matrix $\Tn\defeq\matauo{C_{j-k}}{j,k=0}{n}$ is . In the second half of the 1980’s, the first two authors intensively studied the structure of sequences of complex in connection with interpretations in the languages of stationary sequences, interpolation, orthogonal matrix polynomials etc. (see [@MR885621I_III_V; @MR1056068] and also [@MR1152328] for a systematic treatment of several aspects of the theory). In particular, it was shown in [@MR885621I_III_V Part I] (see also [@MR1152328]) that the structure of the elements of a sequence of complex is described in terms of matrix balls which are determined by all preceding elements. Amongst these sequences there is a particular subclass which plays an important role, namely the so-called class of central sequences of complex . These sequences are characterized by the fact that starting with some index all further elements of the sequences coincide with the center of the matrix ball in question. Central sequences possess several interesting extremal properties (see [@MR885621I_III_V Parts I–III]) and a remarkable recurrent structure (see [@MR1152328]). In view of the matrix version of a classical theorem due to Herglotz (see,  [@MR1152328]), the set of all sequences coincides with the set of all sequences of on the unit circle $\T\defeq\setaa{z\in\C}{\abs{z}=1}$ of $\C$. If $\Cbinf $ is a sequence of complex and if $\mu$ denotes the unique Borel measure on $\T$ with $\Cbinf $ as its sequence of Fourier coefficients then we will call $\mu$ the spectral measure of $\Cbinf $. In the special case of a central sequence of complex , , if for each integer $n$ the block matrix $\Tn\defeq\Cmsn$ is positive , in [@MR885621I_III_V Part III] (see also [@MR1152328]), we stated an explicit representation of its spectral measure. In particular, it turned out that in this special case its spectral measure is absolutely continuous with respect to the linear Lebesgue-Borel measure on the unit circle and that the corresponding Radon-Nikodym density can be expressed in terms of left or right orthogonal matrix polynomials. The starting point of this paper was the problem to determine the spectral measure of a central sequence of complex matrices. An important step on the way to the solution of this problem was gone in the paper [@MR2104258], where it was proved that the matrix-valued function associated with a central sequence of complex matrices is rational and, additionally, concrete representations as quotient of two matrix polynomials were derived. Thus, the original problem can be solved if we will be able to find an explicit expression for the of a rational matrix-valued function. This question will be answered in . 
As a first essential consequence of this result we determine the s of central matrix-valued functions (see ). Reformulating in terms of sequences, we get an explicit description of the spectral measure of central sequences of complex matrices (see ). In the final , we apply to the theory of multivariate stationary sequences. In particular, we will be able to express explicitly the non-stochastic spectral measure of a multivariate autoregressive stationary sequence by its covariance sequence (see ). On the of rational matrix-valued functions {#S1024} ========================================== In this section, we give an explicit representation of the of an arbitrary rational matrix-valued function. Let $\R$, , , and be the set of all real numbers, the set of all integers, the set of all non-negative integers, and the set of all positive integers, respectively. Throughout this paper, let $p,q\in\N$. If $\mathcal{X}$ is a non-empty set, then by we denote the set of all each entry of which belongs to $\mathcal{X}.$ The notation is short for $\mathcal{X}^{q\times 1}$. If $\mathcal{X}$ is a non-empty set and if $x_1, x_2,\dotsc, x_q\in\mathcal{X}$, then let $$\col (x_j)_{j=1}^q \defeq \begin{bmatrix} x_1\\ x_2 \\ \vdots \\ x_q \end{bmatrix}.$$ For every choice of $\alpha, \beta,\in\R\cup \{-\infty,+\infty\}$, let $\mn{\alpha}{\beta} \defeq \setaa{m\in\Z}{\alpha \le m\le \beta}$. We will use and for the unit matrix belonging to $\Cqq $ and the null matrix belonging to $\Cqp $, respectively. For each $A\in\Cqq$, let $\re A \defeq \frac{1}{2} (A +A^\ad)$ and $\im A \defeq \frac{1}{2\iu} (A-A^\ad)$ be the real part and the imaginary part of $A$, respectively. If $\kappa\in\NOinf $, then a sequence $\Cbk $ of complex is called (resp. ) if, for each $n\in\mn{0}{\kappa}$, the block matrix $$\begin{aligned} \Tu{n} \defeq\matauo{C_{j-k}}{j,k=0}{n}\end{aligned}$$ is (resp. ). Obviously, if $m\in\NO $, then $\Cb{m}$ is (resp. ) if the block matrix $\Tu{m} =\matauo{C_{j-k}}{j,k=0}{m}$ is (resp. ). Let $\Omega$ be a non-empty set and let $\gA$ be a $\sigma$-algebra on $\Omega$. A mapping $\mu$ whose domain is $\gA$ and whose values belong to the set of all complex is said to be a if it is countably additive, , if $\mu( \bigcup_{k=1}^\infty A_k) =\sum_{k=1}^\infty \mu (A_k)$ holds true for each sequence $(A_k)_{k=1}^\infty$ of pairwise disjoint sets which belong to $\gA$. The theory of integration with respect to measures goes back to Kats [@MR0080280] and Rosenberg [@MR0163346]. In particular, we will turn our attention to the set $\MggqT$ of all s on $(\T , \BsaT )$, where $\BsaT$ is the $\sigma$-algebra of all Borel subsets of the unit circle $\T\defeq\setaa{z\in\C}{\abs{z}=1}$ of $\C$. Non-negative measures belonging to $\MggqT $ are intimately connected to the class of all functions in the open unit disk $\D \defeq\setaa{z\in\C}{\abs{z}<1}$ of $\C$. A -valued function $\Phi \colon\D \to \Cqq $ which is holomorphic in $\D $ and which fulfills $\re \Phi (z)\in\Cggq $ for all $z\in\D$ is called . The matricial version of a famous theorem due to F. Riesz and G. Herglotz illustrates the mentioned interrelation: \[D2.2.2\] \[D2.2.2.a\] Let $\Phi\in\CqD $. Then there exists one and only one measure $\mu\in\MggqT$ such that $$\label{NGZ} \Phi (z) - \iu\im \Phi (0) = \int_\T \frac{\zeta + z}{\zeta - z} \mu (\dif\zeta)$$ for each $z\in \D $. 
For every choice of $z$ in $\D $, furthermore, $$\Phi (z) - \iu\im \Phi (0) = \Fcm{0} + 2\sum_{j=1}^\infty \Fcm{j} z^j$$ where $$\label{D2.2.9} \Fcm{j} \defeq \int_\T \zeta^{-j} \mu (\dif\zeta),$$ for each $j\in\Z$ are called the . \[D2.2.2.b\] Let $H$ be a complex and let $\mu\in\MggqT$. Then the function $\Phi \colon\D \to\Cqq $ defined by $$\Phi (z) \defeq \int_\T \frac{\zeta + z}{\zeta - z} \mu (\dif\zeta) + \iu H$$ belongs to $\CqD $ and fulfills $\im \Phi (0) = H$. A proof of is given, , in [@MR1152328, pp. 71/72]. If $\Phi \in\CqD $, then the unique measure $\mu\in\MggqT$ which fulfills for each $z\in\D $ is said to be the . Let be the Dirac measure on $(\T,\BsaT )$ with unit mass at $u\in\T$. \[E1223\] Let $u\in\T$ and $W\in\Cggq$. Then yields that the function $\Phi\colon\D\to\Cqq$ defined by $\Phi(z)\defeq\frac{u+z}{u-z}W$ belongs to $\CqD$ with $\mu\defeq\kron{u}W$. The are given by $\Fcm{j}=u^{-j}W$ for all $j\in\Z$ and the function $\Phi$ admits the representation $\Phi(z)=[1+2\sum_{j=1}^\infty(zu)^j]W$ for all $z\in\D$. Let and be the column space and the null space of a matrix $A$, respectively. Let $\Phi\in\CqD$ with $\mu$. For all $z\in\D$, $$\begin{aligned} \Ran{\Phi(z)-\iu\im\Phi(0)}&=\Ran{\mu(\T)}=\Ran{\re\Phi(z)} \sand{} \Nul{\Phi(z)-\iu\im\Phi(0)}&=\Nul{\mu(\T)}=\Nul{\re\Phi(z)}. \end{aligned}$$ Let $z\in\D$. Since $\re(\Phi(z)-\iu\im\Phi(0))=\re\Phi(z)\in\Cggq$, we obtain from  then $\ran{\re\Phi(z)}\subseteq\ran{\Phi(z)-\iu\im\Phi(0)}$ and $\nul{\Phi(z)-\iu\im\Phi(0)}\subseteq\nul{\re\Phi(z)}$. In view of , the application of  yields $\ran{\Phi(z)-\iu\im\Phi(0)}\subseteq\ran{\mu(\T)}$ and $\nul{\mu(\T)}\subseteq\nul{\Phi(z)-\iu\im\Phi(0)}$. From , we get $\re\Phi(z)=\int_\T(1-\abs{z}^2)/\abs{\zeta-z}^2\mu(\dif\zeta)$. Since $(1-\abs{z}^2)/\abs{\zeta-z}^2>0$ for all $\zeta\in\T$, the application of  yields $\ran{\re\Phi(z)}=\ran{\mu(\T)}$ and $\nul{\re\Phi(z)}=\nul{\mu(\T)}$, which completes the proof. Now we consider the s for a particular subclass of $\CqD$. In particular, we will see that in this case, the is absolutely continuous with respect to the $\lebc $ defined on $\BsaT$ and that the Radon-Nikodym density can be always chosen as a continuous function on $\T$. By a region of $\C$ we mean an open, connected, non-empty subset of $\C$. For all $z\in\C$ and all $r\in(0,+\infty)$, let $\diskaa{z}{r}\defeq\setaa{w\in\C}{\abs{w-z}<r}$. \[M1.2\] Let $\cD$ be a region of $\C$ such that $\cdisk{r}\subseteq\cD$ for some $r\in(1,+\infty)$ and let $F \colon\cD\to \Cqq $ be holomorphic in $\cD$ such that the restriction $\Phi$ of $F$ onto $\D$ belongs to $\CqD $. Then the $\mu$ of $\Phi$ admits the representation $$% \mu (B) = \frac{1}{2\pi} \int_B \re F(\zeta) \lebca{\dif\zeta},$$ for each $B\in\BsaT $. A proof of can be given by use of a matrix version of an integral formula due to H. A. Schwarz (see,  [@MR1152328 p. 71]). In particular, contains full information on the of that functions belonging to $\CqD$ which are restrictions onto $\D$ of rational matrix-valued functions without poles on $\T$. Our next goal is to determine the of functions belonging to $\CqD$ which are restrictions onto $\D$ of rational matrix-valued functions having poles on $\T$. First we are going to verify that in this case all poles on $\T$ have order one. Our strategy of proving this is based on the following fact: \[M1.1\] Let $\Phi \in\CqD $ with $\mu$. For each $u\in\T $, then $$\label{MB1} \mu\rk*{\{u\}} =\lim_{r\to 1 - 0} \frac{1-r}{2} \Phi (ru).$$ A proof of is given, , in [@MR1004239]. 
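To see the mass formula at work, consider once more the scalar situation of the example above (an illustrative computation, not needed for the sequel): let $u\in\T$, let $W\in[0,+\infty)$ and let $\Phi(z)=\frac{u+z}{u-z}W$, so that the associated measure is $\kron{u}W$. For every $r\in(0,1)$, $$\frac{1-r}{2}\,\Phi(ru) =\frac{1-r}{2}\cdot\frac{u+ru}{u-ru}\,W =\frac{1+r}{2}\,W \longrightarrow W =\mu(\{u\}) \qquad\text{as } r\to1-0,$$ whereas for every $v\in\T\setminus\{u\}$ the values $\Phi(rv)$ remain bounded as $r\to1-0$, so that the factor $\frac{1-r}{2}$ forces the corresponding limit to be $0=\mu(\{v\})$.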
As a direct consequence of we obtain: Let $\cD$ be a region of $\C$ such that $\cdisk{r}\subseteq\cD$ for some $r\in(1,+\infty)$ and let $F\colon\cD\to\Cqq$ be holomorphic such that the restriction $\Phi$ of $F$ onto $\D$ belongs to $\CqD$. Then the $\mu$ of $\Phi$ fulfills $\mu(\set{u})=\Oqq$ for all $u\in\T$. Let $\cD$ be a region of $\C$ such that $\cdisk{r}\subseteq\cD$ for some $r\in(1,+\infty)$ and let $F$ be a function meromorphic in $\cD$ such that the restriction $\Phi$ of $F$ onto $\D$ belongs to $\CqD$. Furthermore, let $u\in\T$ be a pole of $F$. Then $u$ is a simple pole of $F$ with $\Res(F,u)=-2u\mu(\set{u})$ and $$\label{P1309.B1} \lim_{r\to 1 - 0} \ek*{ (ru - u)F(ru)} =-2u\mu(\set{u}),$$ where is the residue of $F$ at $u$ and $\mu$ is the of $\Phi$. Because of , we have , which implies . Denote by $k$ the order of the pole $u$ of $F$. Then $k\in\N$ and $$\label{P1309.1} \lim_{z\to u}(z-u)^kF(z) =A \neq\Oqq.$$ In the case $k>1$, we infer from that $$% \lim_{r\to 1 - 0} \ek*{ (ru - u)^kF(ru)} =\ek*{\lim_{r\to 1 - 0}(ru - u)^{k-1}}\ek*{\lim_{r\to 1 - 0} \ek*{ (ru - u)F(ru)}} =\Oqq,$$ which contradicts . Thus $k=1$ and the application of completes the proof. Since every complex-valued function $f$ meromorphic in a region $\cD$ of $\C$ can be written as $f=g/h$ with holomorphic functions $g,h\colon\cD\to\C$, where $h$ does not vanish identically in $\cD$ (see, , ), we obtain: For every function $F$ meromorphic in a region $\cD$ of $\C$, there exist a holomorphic matrix-valued function $G\colon\cD\to\Cpq$ and a holomorphic function $h\colon\cD\to\C$ which does not vanish identically in $\cD$, such that $F=h^\inv G$. If $f$ is holomorphic at a point $z_0\in\C$, then, for each $m\in\NO $, we write $f^{(m)} (z_0)$ for the $m$th derivative of $f$ at $z_0$. \[L1344\] Let $F$ be a function meromorphic in a region $\cD$ of $\C$. In view of , let $G\colon\cD\to\Cpq$ and $h\colon\cD\to\C$ be holomorphic such that $h$ does not vanish identically in $\cD$ and that $F=h^\inv G$ holds true. Suppose that $w\in\cD$ is a zero of $h$ with multiplicity $m>0$. Then $w$ is a pole (including a removable singularity) of $F$, the order $k$ of the pole $w$ fulfills $0\leq k\leq m$, and $h^{(m)} (w) \ne 0$ holds true. For all $\ell\in\Z_{k, m}$, furthermore, $$\label{L1344.B} \lim_{z\to w} \ek*{ (z-w)^\ell F (z)} = \frac{m!}{(m-\ell)! h^{(m)} (w)} G^{(m-\ell)} (w).$$ Obviously $w$ is a pole (or a removable singularity) of $F$ and $k$ fulfills $0\leq k\leq m$. Since $h$ is holomorphic, there is an $r\in(0,+\infty)$ such that $K\defeq\diskaa{w}{r}$ is a subset of $\cD$ and $h (z) \ne 0$ for all $z\in K\setminus\set{w}$. Then $F$ is holomorphic in $K \setminus \{w\}$. Let $\ell\in\Z_{k,m}$. Then there is a holomorphic function $\Phi_\ell\colon K \to\Cpq $ such that $F (z) = (z-w)^{-\ell}\Phi_\ell(z)$ for all $z\in K \setminus \{w\}$. Consequently, $$\label{L1344.1} \lim_{z\to w} \ek*{ (z-w)^\ell F (z)} =\Phi_\ell(w).$$ Since $w$ is a zero of $h$ with multiplicity $m\ge \ell$, there exists a holomorphic function $\eta_\ell\colon\cD\to\C$ such that $h (z) = (z-w)^\ell \eta_\ell(z)$ holds true for all $z\in\cD$. Furthermore, we have $$h(z) =\sum_{j=m}^\infty\frac{h^{(j)} (w)}{j!} (z-w)^j$$ for all $z\in K$, where $h^{(m)} (w) \ne 0$. 
Thus, for all $z\in K$, we conclude $$\eta_\ell(z) = \sum_{j=m}^\infty \frac{h^{(j)} (w)}{j!} (z - w)^{j-\ell}.$$ Comparing the last equation with the Taylor series representation of $\eta_\ell$ centered at $w$, we obtain $\eta_\ell^{(s)} (w) = 0$ for all $s\in\Z_{0, m-\ell-1}$ and $$\frac{\eta_\ell^{(m-\ell) }(w)}{(m-\ell)!} = \frac{h^{(m)} (w)}{m!}.$$ Using the general Leibniz rule for differentiation of products, we get then $$(\eta_\ell\Phi_\ell)^{(m-\ell)} (w) = \sum_{s=0}^{m-\ell}\binom{m - \ell}{s}\ek*{\eta_\ell^{(s)} (w)}\ek*{\Phi_\ell^{(m-\ell-s)} (w)} = \frac{(m-\ell)!h^{(m)} (w)}{m!}\Phi_\ell(w),$$ which, in view of $h^{(m)} (w)\ne 0$, implies $$\label{L1344.2} \Phi_\ell(w) = \frac{m!}{(m-\ell)!h^{(m)} (w)} (\eta_\ell\Phi_\ell)^{(m-\ell)} (w).$$ Obviously, we have $$\eta_\ell(z)\Phi_\ell(z) =\eta_\ell(z) \ek*{ (z-w)^\ell F (z)} =h (z)F (z) =G(z)$$ for all $z\in K \setminus \{w\}$. Since $G$ is holomorphic, by continuity, this implies $(\eta_\ell\Phi_\ell) (z) = G (z)$ for all $z\in K $ and, hence $(\eta_\ell\Phi_\ell)^{(m-\ell)} (w) = G^{(m-\ell)} (w).$ Thus, from and we finally obtain . \[L1445\] Let $\cD$ be a region of $\C$ such that $\cdisk{r}\subseteq\cD$ for some $r\in(1,+\infty)$ and let $F$ be a function meromorphic in $\cD$ such that the restriction $\Phi$ of $F$ onto $\D$ belongs to $\CqD$. In view of , let $G\colon\cD\to\Cqq$ and $h\colon\cD\to\C$ be holomorphic such that $h$ does not vanish identically in $\cD$ and that $F=h^\inv G$ holds true. Let $u\in\T$ be a zero of $h$ with multiplicity $m>0$. Then: \[L1445.a\] $u$ is either a removable singularity or a simple pole of $F$. \[L1445.b\] $h^{(m)}(u)\neq0$ and $$\label{L1445.B1} \mu\rk*{\set{u}} =\frac{-m}{2uh^{(m)} (u)}G^{(m - 1)} (u),$$ where $\mu$ is the of $\Phi$. \[L1445.c\] If there is no $z\in\cD$ with $G(z)=\Oqq$ and $h(z)=0$, then $u$ is a pole of $F$. \[L1445.d\] $u$ is a removable singularity of $F$ if and only if $G^{(m - 1)} (u)=\Oqq$ or, equivalently, $\mu(\set{u})=\Oqq$. Obviously $h^{(m)}(u)\neq0$ and $u$ is either a removable singularity or a pole of $F $, which then is simple according to , , the order of the pole $u$ of $F$ is either $0$ or $1$. Thus, we can choose $\ell=1$ in and obtain $$\label{L1445.1} \lim_{r\to 1-0}\ek*{(ru - u) F(r u)} = \frac{m}{h^{(m)} (u)}G^{(m-1)} (u).$$ yields . Comparing and , we get . The rest is plain. Now we will extend the statement of for the case of rational matrix-valued functions. For this reason we will first need some notation. For each $A\in\Cqq $, let $\det A$ be the determinant of $A$ and let $A^\adj$ be the classical adjoint of $A$ or classical adjugate (see, , Horn/Johnson [@MR832183 p. 20]), so that $AA^\adj = (\det A) \Iq $ and $A^\adj A = (\det A) \Iq $. If $Q$ is a polynomial, then $Q^\adj \colon \C\to\Cqq$ defined by $Q^\adj (z) \defeq [Q(z)]^\adj$ is obviously a matrix polynomial as well. \[P1637\] Let $P$ and $Q$ be complex polynomials such that $\det Q$ does not vanish identically and the restriction $\Phi$ of $PQ^\inv$ onto $\D$ belongs to $\CqD $. Let $u\in\T$ be a zero of $\det Q$ with multiplicity $m>0$. Then $u$ is either a removable singularity or a simple pole of $PQ^\inv$. Furthermore, $(\det Q)^{(m)}(u)\neq0$ and $$\mu\rk*{\set{u}} =\frac{-m}{2u (\det Q)^{(m)} (u)} (PQ^\adj)^{(m - 1)} (u),$$ where $\mu$ is the of $\Phi$. The functions $G\defeq PQ^\adj$ and $h\defeq\det Q$ are holomorphic in $\C$ such that $h$ does not vanish identically, and $F\defeq PQ^\inv$ is meromorphic in $\C$ and admits the representation $F=h^\inv G$.
Hence, the application of completes the proof. \[P1643\] Let $Q$ and $R$ be complex polynomials such that $\det Q$ does not vanish identically and the restriction $\Phi$ of $Q^\inv R$ onto $\D$ belongs to $\CqD $. Let $u\in\T$ be a zero of $\det Q$ with multiplicity $m>0$. Then $u$ is either a removable singularity or a simple pole of $Q^\inv R$. Furthermore, $(\det Q)^{(m)}(u)\neq0$ and $$\mu\rk*{\set{u}} =\frac{-m}{2u (\det Q)^{(m)} (u)} (Q^\adj R)^{(m - 1)} (u),$$ where $\mu$ is the of $\Phi$. Apply to $\rk{Q^\inv R}^\tra$. As usual, if $\cM$ is a finite subset of $\Cpq $, then the notation should be understood as $\Opq $ in the case that $\cM$ is empty. In the following, we continue to use the notations $\lebc $ and $\kron{u}$ to designate the linear Lebesgue measure on $(\T,\BsaT)$ and the Dirac measure on $(\T,\BsaT)$ with unit mass at $u\in\T$, respectively. Now we are able to derive the main result of this section. \[T1648\] Let $r\in(1,+\infty)$, let $\cD$ be a region of $\C$ such that $\cdisk{r}\subseteq\cD$, and let $F$ be a function meromorphic in $\cD$ such that the restriction $\Phi$ of $F$ onto $\D$ belongs to $\CqD$. In view of , let $G\colon\cD\to\Cqq$ and $h\colon\cD\to\C$ be holomorphic functions such that $h$ does not vanish identically in $\cD$ and that $F=h^\inv G$ holds true. Then $\cN \defeq \setaa{u\in\T}{h(u) = 0}$ is a finite subset of $\T $ and the following statements hold true: \[T1648.a\] For all $u\in\cN$, the inequality $h^{(m_u)} (u)\ne 0$ holds true, where $m_u$ is the multiplicity of $u$ as zero of $h$, and the matrix $$W_u \defeq\frac{-m_u}{2uh^{(m_u)} (u)}G^{(m_u- 1)} (u)$$ is well defined and , and coincides with $\mu(\set{u})$, where $\mu$ is the of $\Phi$. \[T1648.b\] Let $\Delta \colon\cD\setminus\cN\to\Cqq$ be defined by $$\label{T1648.D1} \Delta (z) \defeq \sum_{u\in\cN} \frac{u+z}{u-z} W_u.$$ Then $\Theta \defeq F-\Delta$ is a function meromorphic in $\cD$ which is holomorphic in $\cdisk{r_0}$ for some $r_0\in(1,r)$ and the restrictions of $\Theta$ and $\Delta$ onto $\D$ both belong to $\CqD $. \[T1648.c\] The $\mu$ of $\Phi$ admits for all $B\in\BsaT $ the representation $$\label{T1648.B1} \mu (B) = \frac{1}{2\pi} \int_B \re \Theta (\zeta) \lebca{\dif\zeta} + \sum_{u\in\cN} W_u \kron{u} (B).$$ Since $h$ is a holomorphic function in $\cD$ which does not vanish identically in $\cD$ and since $\T$ is a bounded subset of the interior of $\cD$, the set $\cN$ is finite. This follows from . Obviously, $\Theta$ is meromorphic in $\cD$. According to , each $u\in\cN$ is either a removable singularity or a simple pole of $F $ and $\mu (\set{u}) = W_u$ holds true. yields then $$\label{T1648.1} \lim_{z\to u}\ek*{(z-u) F (z)} = -2u W_u$$ for each $u\in\cN$. Obviously, $\Theta$ is holomorphic at all points $z\in\T \setminus\cN$. Let us now assume that $u$ belongs to $\cN$. Then $h(u) = 0$ and there is a positive real number $r_u$ such that $K\defeq\diskaa{u}{r_u}$ is a subset of $\cD$ and $h (z)\ne 0$ for all $z\in K \setminus \set{u}$. In particular, the restriction $\theta$ of $\Theta$ onto $K \setminus \set{u}$ is holomorphic and $$\label{T1648.2} (z-u)\theta (z) = (z-u) F (z) + (u+z) W_u - (z-u) \sum_{\zeta\in\cN \setminus \set{u}} \frac{\zeta + z}{\zeta - z} W_\zeta$$ is fulfilled for each $z\in K \setminus \set{u}$.
Consequently, and provide us $$\begin{split} \Oqq &= -2u W_u + (u+u) W_u - (u-u) \sum_{\zeta\in\cN \setminus \set{u}} \frac{\zeta + z}{\zeta - z} W_\zeta\\ &= \lim_{z\to u}\ek*{(z-u) F (z)}+ \rk{u +\lim_{z\to u} z} W_u- \ek*{\rk{\lim_{z\to u} z} -u } \sum_{\zeta\in\cN \setminus \set{u}} \frac{\zeta + z}{\zeta - z} W_u\\ &= \lim_{z\to u} \ek*{ (z-u) F (z) + (u+z) W_u - (z-u) \sum_{\zeta\in\cN \setminus \set{u}} \frac{\zeta + z}{\zeta - z} W_\zeta } = \lim_{z\to u}\ek*{(z-u) \theta (z)}. \end{split}$$ In view of Riemann’s theorem on removable singularities, this implies that $u$ is a removable singularity for $\theta $. In particular, $\Theta$ is holomorphic at $u$. Thus, $\Theta$ is holomorphic at each $\zeta\in\T $. Taking into account $\D \cap\cN = \emptyset$, we see then that $\Theta$ is holomorphic at each point $z\in\D \cup\T $. Since $\Theta$ is meromorphic in $\cD$ and $\cdisk{r}$ is bounded, $\Theta$ has only a finite number of poles in $\cdisk{r}\setminus(\D\cup\T)$. Thus, there is an $r_0\in(1,r)$ such that $\Theta$ is holomorphic in $\cdisk{r_0}$. In particular, the restriction $\Psi$ of $\Theta$ onto $\D$ is holomorphic. Because of $\D \cap\cN =\emptyset$, we get $$\label{T1648.3} \Theta (z) = F (z) -\Delta (z) =\Phi (z) - \sum_{u\in\cN} \frac{u + z}{u - z} W_u$$ for each $z\in\D $. Because of $\mu (\set{u}) = W_u$ for each $u\in\cN$, we conclude that $$\label{T1648.4} \rho \defeq \mu - \sum_{u\in\cN} W_u\kron{u}$$ fulfills $\rho (\BsaT ) \subseteq \Cggq $ and, hence, that $\rho$ belongs to $\MggqT$. Since $\mu$ is the of $\Phi$, we have for each $z\in\D $. Thus, we obtain from then $$\begin{split} \Theta (z) &= \int_\T \frac{\zeta + z}{\zeta - z} \mu (\dif\zeta) + \iu\im \Phi (0) - \sum_{u\in\cN} \rk*{ \int_\T \frac{\zeta + z}{\zeta - z} \kron{u} (\dif\zeta)} W_u \\ &= \int_\T \frac{\zeta + z}{\zeta - z} \rho (\dif\zeta) + \iu\im \Phi (0) \end{split}$$ for every choice of $z$ in $\D $. Consequently, from we see that $\Psi$ belongs to $\CqD $ and that $\rho$ is the of $\Psi$. Since the matrix $W_u$ is for all $u\in\cN$, yields in view of furthermore, that the restriction of $\Delta$ onto $\D$ belongs to $\CqD$ as well. Applying shows then that $ \rho (B) =\frac{1}{2\pi} \int_B \re \Theta (\zeta) \lebca{\dif\zeta} $ holds true for each $B\in\BsaT $. Thus, from , for each $B\in\BsaT $, we get . A closer look at and its proof shows that the $\rho$ and $\sum_{u\in\cN} W_u\kron{u}$ of $\Psi$ and the restriction of $\Delta$ onto $\D$, respectively, are exactly the absolutely continuous and singular part in the Lebesgue decomposition of the of $\Phi$ with respect to $\lebc $. In particular, the singular part is a discrete measure which is concentrated on a finite number of points from $\T$ and there is no nontrivial singular continuous part. The absolutely continuous part with respect to $\lebc $ possesses a continuous Radon-Nikodym density with respect to $\lebc $. \[M3.2\] Let $P$ and $Q$ be polynomials such that $\det Q$ does not vanish identically and that the restriction $\Phi$ of $PQ^\inv $ onto $\D$ belongs to $\CqD $. Then $\cN \defeq \setaa{u\in\T}{\det Q (u) = 0}$ is a finite subset of $\T $ and the following statements hold true: For all $u\in\cN$, the inequality $(\det Q)^{(m_u)} (u)\ne 0$ holds true, where $m_u$ is the multiplicity of $u$ as zero of $\det Q$, and $$W_u \defeq\frac{-m_u}{2u(\det Q)^{(m_u)} (u)} (PQ^\adj)^{(m_u - 1)} (u)$$ is a well-defined and matrix which coincides with $\mu(\set{u})$, where $\mu$ is the of $\Phi$. 
Let $\Delta \colon\cD\setminus\cN\to\Cqq$ be defined by . Then $\Theta \defeq PQ^\inv-\Delta$ is a rational function which is holomorphic in $\cdisk{r}$ for some $r\in(1,+\infty)$ and the restrictions of $\Theta$ and $\Delta$ onto $\D$ both belong to $\CqD $. The $\mu$ of $\Phi$ admits the representation for all $B\in\BsaT $. is an immediate consequence of if one chooses $\cD=\C$, $h=\det Q$ and $G=PQ^\adj$. On the truncated matricial trigonometric moment problem {#S1029} ======================================================= A matricial version of a theorem due to G. Herglotz shows in particular that if $\mu$ belongs to $\MggqT$, then it is uniquely determined by the sequence $(\Fcm{j})_{j=-\infty}^\infty$ of its Fourier coefficients given by . To recall this theorem in a version which is convenient for our further considerations, let us modify the notion of non-negativity. Obviously, if $\kappa\in\NOinf $ and if $\Cbk $ is a sequence, then $C_{-j} = C_j^\ad$ for each $j\in\mn{-\kappa}{\kappa}$. Thus, if $\kappa\in\NOinf $, then a sequence $\Cska $ is called (resp. ) if $\Cbk $ is (resp. ), where $C_{-j} \defeq C_j^\ad$ for each $j\in\mn{0}{\kappa}$. \[D2.2.1’\] Let $\Csinf $ be a sequence of complex . Then there exists a $\mu\in\MggqT$ such that $\Fcm{j} = C_j$ for each $j\in\NO $ if and only if the sequence $\Csinf $ is . In this case, the measure $\mu$ is unique. In view of the fact that $\Fcm{-j} = (\Fcm{j})^\ad$ holds true for each $\mu \in\MggqT$ and each $j\in\Z$, a proof of is given, , in [@MR1152328, pp. 70/71]. In the context of the truncated trigonometric moment problem, only a finite sequence of Fourier coefficients is prescribed: : : Let $n\in\NO $ and let $\Csn $ be a sequence of complex . Describe the set of all $\mu\in\MggqT$ which fulfill $\Fcm{j} = C_j$ for each $j\in\Z_{0,n}$. The answer to the question of solvability of is as follows: \[D3.4.2\] Let $n\in\NO $ and let $\Csn $ be a sequence of complex . Then $\MggqTcn $ is non-empty if and only if the sequence $\Csn $ is . Ando [@MR0290157] gave a proof of with the aid of the Naimark Dilation Theorem. An alternate proof stated in [@MR1152328, p. 123] is connected to below, which gives an answer to the following matrix extension problem: : : Let $n\in\NO $ and let $\Csn $ be a sequence of complex . Describe the set of all complex $C_{n+1}$ for which the sequence $\Cs{n+1} $ is . The description of $\Tcn $, we will recall here, is given by using the notion of a matrix ball: For arbitrary choice of $M\in\Cpq$ , $A\in\Cpp $, and $B\in \Cqq$, the set of all $X\in \Cpq $ which admit a representation $X=M+AKB$ with some contractive complex $K$ is said to be the matrix ball with center $M$, left semi-radius $A$, and right semi-radius $B$. A detailed theory of (more general) operator balls was worked out by Yu. L. Smul$'$jan [@MR1073857] (see also [@MR1152328 Section 1.5] for the matrix case). To give a parametrization of $\Tcn $ with the aid of matrix balls, we introduce some further notations. For each $A\in\Cpq$, let $A^\mpi$ be the Moore-Penrose inverse of $A$. By definition, $A^\mpi$ is the unique matrix from $\Cqp$ which satisfies the four equations $$\begin{aligned} AA^\mpi A&=A,& A^\mpi AA^\mpi&=A^\mpi,& (AA^\mpi)^\ad&=AA^\mpi,& &\text{and}& (A^\mpi A)^\ad&=A^\mpi A. \end{aligned}$$ Let $\kappa\in\NOinf $ and let $\Cska $ be a sequence of complex . For every $j\in\mn{0}{\kappa}$, let $C_{-j} \defeq C_j^\ad$. 
Furthermore, for each $n\in \mn{0}{\kappa}$, let $$\begin{aligned} \label{NGL} \Tu{n}&\defeq\ek{C_{j-k}}_{j,k=0}^n,& \Yu{n}&\defeq \col (C_j)_{j=1}^n,& &\text{and}& \Zu{n}&\defeq\ek{C_n, C_{n-1},\dotsc, C_1}.\end{aligned}$$ Let $$\begin{aligned} \label{MLR1} \Mu{1}&\defeq \Oqq,& \Lu{1}&\defeq C_0,&\tand{}\Ru{1}&\defeq C_0. \end{aligned}$$ If $\kappa \ge 1$, then, for each $n\in\mn{1}{\kappa}$, let $$\begin{aligned} \label{MLR} \Mu{n+1}&\defeq \Zu{n}\Tu{n-1}^\dagger \Yu{n},&\Lu{n+1}&\defeq C_0 - \Zu{n} \Tu{n-1}^\dagger \Zu{n}^\ad,&\tand{}\Ru{n+1}&\defeq C_0 - \Yu{n}^\ad \Tu{n-1}^\dagger \Yu{n}.\end{aligned}$$ In order to formulate an answer to , we observe, that, if $\Cska $ is , then, for each $n\in\mn{0}{\kappa}$, the matrices $\Lu{n+1}$ and $\Ru{n+1}$ are both (see, , [@MR1152328, p. 122]). \[D3.4.1\] Let $n\in\NO $ and let $\Csn $ be a sequence of complex . Then $\Tcn \ne \emptyset$ if and only if the sequence $\Csn $ is . In this case, $\Tcn =\gK (\Mu{n+1};\sqrt{\Lu{n+1}}, \sqrt{\Ru{n+1}}).$ A proof of is given in [@MR885621I_III_V Part I, ], (see also  [@MR1152328, pp. 122/123]). Observe that the parameters $\Mu{n+1}$, $\Lu{n+1}$, and $\Ru{n+1}$ of the matrix ball stated in admit a stochastic interpretation (see [@MR885621I_III_V Part I]). Let $n\in\N$ and let $\mu\in\MggqTcn$, where $\Csn $ is a sequence of complex . If $\rank\Tu{n}\leq n$, then there exists a subset $\cN$ of $\T$ with at most $nq$ elements such that $\mu(\T\setminus\cN)=\Oqq$. Let $\mu=\matauo{\mu_{jk}}{j,k=1}{q}$ and denote by $\stb{1},\stb{2},\dotsc,\stb{q}$ the canonical basis of $\Cq$. We consider an arbitrary $\ell\in\mn{1}{q}$. Then $\Tu{n}^{(\ell)}\defeq\matauo{\Fc{\mu_{\ell\ell}}{j-k}}{j,k=0}{n}$ admits the representation $$\Tu{n}^{(\ell)} =\ek*{\diag_{n+1}(\stb{\ell})}^\ad\Tu{n}\ek*{\diag_{n+1}(\stb{\ell})}$$ with the block diagonal matrix $\diag_{n+1}(\stb{\ell})\in\Coo{(n+1)q}{(n+1)}$ with diagonal blocks $\stb{\ell}$. Consequently, $$\rank\Tu{n}^{(\ell)} \leq\rank\Tu{n} \leq n.$$ Hence, there exists a vector $v^{(\ell)}\in\Co{n+1}\setminus\set{\Ouu{(n+1)}{1}}$ and $\Tu{n}^{(\ell)}v^{(\ell)}=\Ouu{(n+1)}{1}$. With $v^{(\ell)}=\col\seq{v^{(\ell)}_j}{j}{0}{n}$, then $$0 =\rk{v^{(\ell)}}^\ad\Tu{n}^{(\ell)}v^{(\ell)} =\int_\T\abs*{\sum_{j=0}^nv^{(\ell)}_j\zeta^j}^2\mu_{\ell\ell}(\dif \zeta)$$ follows. Since $\ell\in\mn{1}{q}$ was arbitrarily chosen, we obtain $\tr\mu(\T\setminus\cN)=\OM$, where $\cN$ consists of all modulus $1$ roots of the polynomial $\prod_{\ell=1}^q\sum_{j=0}^nv^{(\ell)}_j\zeta^j$, which is of degree at most $nq$. Thus, by observing that $\mu$ is absolutely continuous with respect to $\tr\mu$, the proof is complete. Central measures {#S1031} ================ In this section, we study so-called central measures. Let $\kappa\in\Ninf$ and let $\Cska$ be a sequence of complex . If $k\in\mn{1}{\kappa}$ is such that $C_j=\Mu{j}$ for all $j\in\mn{k}{\kappa}$, where $\Mu{j}$ is given by and , then $\Cska$ is called . If in the case $\kappa\geq2$ the sequence $\Cska$ is additionally not , then $\Cska$ is called . If there exists a number $\ell\in\mn{1}{\kappa}$ such that $\Cska$ is , then $\Cska$ is simply called . Let $n\in\NO$ and let $\Csn$ be a sequence of complex . Let the sequence $(C_j)_{j=n+1}^\infty$ be recursively defined by $C_j\defeq\Mu{j}$, where $\Mu{j}$ is given by . Then $\Csinf$ is called the . \[NR1\] Let $n\in\NO $ and let $\Csn $ be a sequence of complex . According to , then the is as well. 
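The following elementary scalar computation (an illustration for the case $q=1$, $n=1$, added here for orientation) may help to visualise the matrix ball and the central continuation. Let $C_0\defeq1$ and $C_1\defeq c$ with $\lvert c\rvert<1$, so that $\Tu{1}$ is positive definite. Then $\Tu{0}=1$ and $\Yu{1}=\Zu{1}=c$ and, consequently, $$\Mu{2}=c\cdot1\cdot c=c^2 \qquad\text{and}\qquad \Lu{2}=\Ru{2}=1-\lvert c\rvert^2,$$ so that the admissible continuations $C_2$ form exactly the closed disk $\setaa{X\in\C}{\lvert X-c^2\rvert\le1-\lvert c\rvert^2}$. Choosing at every step the centre of the corresponding matrix ball yields the sequence $C_j=c^j$, $j\in\NO$; for instance, a direct computation gives $\Zu{2}\Tu{1}^\dagger\Yu{2}=c^3$.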
Observe that the elements of central sequences fulfill special recursion formulas (see [@MR885621I_III_V Part V, , p. 303] or [@MR1152328, p. 124]). Furthermore, if $n\in\NO $ and if $\Csn $ is a sequence of complex , then the is (see [@MR1152328]). A measure $\mu$ belonging to $\MggqT$ is said to be if $(\Fcm{j})_{j=0}^\infty$ is . If $k\in\N$ is such that $(\Fcm{j})_{j=0}^\infty$ is of (minimal) order $k$, then $\mu$ is called . \[ZM\] Let $n\in\NO $, let $\Csn $ be a sequence of complex and let $\Csinf $ be the . According to , there is a unique measure $\mu$ belonging to $\MggqT$ such that its Fourier coefficients fulfill $\Fcm{j} = C_j$ for each $j\in\NO $. This $\mu$ is called the . Let $n\in\N$ and let $\Csn $ be a sequence of complex . Suppose $\rank\Tu{n}=\rank\Tu{n-1}$. Then there exists a finite subset $\cN$ of $\T$ such that the central measure $\muc$ corresponding to $\Csn $ fulfills $\mu(\T\setminus\cN)=\Oqq$. We have $\muc\in\MggqTcinf$ where $\Csinf $ is the central sequence corresponding to $\Csn $. According to , we get $\Lu{\ell+1}=\OM$ for all $\ell\in\minf{n}$. In view of , then $\rank\Tu{\ell}=\rank\Tu{n-1}$ follows for all $\ell\in\minf{n}$. In particular, $\rank\Tu{nq}=\rank\Tu{n-1}\leq nq$. Since $\muc$ belongs to $\MggqTc{nq}$, the application of completes the proof. If $n\in\N$ and if $\Csn $ is a sequence of complex , then the is the unique measure in $\MggqTcn $ with maximal entropy (see [@MR885621I_III_V Part II, ]). \[R6-11R\] Let $\Csinf $ be a sequence which is a . Then it is readily checked that $C_k = \Oqq$ for each $k\in\N$ and that the measure $\mu$ corresponding to $\Cs{0}$ admits the representation $\mu = \frac{1}{2\pi} C_0\lebc $, where $\lebc $ is the linear Lebesgue measure defined on $\BsaT $. Now we describe the . \[FK16-N\] Let $n\in\NO $ and let $\Csn $ be a sequence of complex . Let $\Tu{n}^\inv =\matauo{\tau_{jk}^{[n]}}{j,k=0}{n} $ be the representation of $\Tu{n}^\inv $, and let the matrix polynomials $A_n \colon \C\to\Cqq $ and $B_n \colon\C\to\Cqq $ be given by $$\begin{aligned} \label{FK16-N.V1} A_n (z)&\defeq \sum_{j=0}^n \tau_{j0}^{[n]} z^j& \tand{} B_n (z)&\defeq \sum_{j=0}^n \tau_{n,n-j}^{[n]} z^j.\end{aligned}$$ Then $\det A_n (z) \ne 0 $ and $\det B_n (z)\ne 0$ hold true for each $z\in\D \cup\T $ and the central measure $\mu$ for $\Csn $ admits the representations $$\label{NN1} \mu (B) =\frac{1}{2\pi} \int_B [A_n (\zeta)]^{-\ast} A_n (0) [A_n (\zeta)]^\inv \lebca{\dif\zeta}$$ and $$\label{NN2} \mu (B) =\frac{1}{2\pi} \int_B [B_n (\zeta)]^\inv B_n (0) [B_n (\zeta)]^{-\ast} \lebca{\dif\zeta}$$ for each $B\in\BsaT $, where $\lebc $ is the linear Lebesgue measure defined on $\BsaT $. The fact that $\det A_n(z)\neq0$ or $\det B_n(z)\neq0$ for $z\in\D\cup\T$ can be proved in various ways (see  Ellis/Gohberg  or Delsarte/Genin/Kamp , and , where the connection to the truncated matricial trigonometric moment problem is used. The representations and are proved in [@MR885621I_III_V Part III, , , pp. 332/333]. The measure given via was studied in a different framework by Delsarte/Genin/Kamp . These authors considered a measure $\mu\in\MggqT$ with sequence $(\Fcm{j})_{j=0}^\infty$ of Fourier coefficients. Then it was shown in  that, for each $n\in\NO$, the measure constructed via from the sequence $(\Fcm{j})_{j=0}^n$ is a solution of the truncated trigonometric moment problem associated with the sequence $(\Fcm{j})_{j=0}^n$. The main topic of  is to study left and right orthonormal systems of polynomials associated with the measure $\mu$. 
It is shown in  that these polynomials are intimately connected with the polynomials $A_n$ and $B_n$ which were defined in . Let $P$ be a complex polynomial of degree $n$ such that $P(0)$ is and $\det P(z)\neq0$ for all $z\in\D\cup\T$. Let $g\colon\T\to\Cqq$ be defined by $g(\zeta)\defeq\ek{P(\zeta)}^\invad\ek{P(0)}\ek{P(\zeta)}^\inv$. Then $\mu\colon\BsaT\to\Cqq$ defined by $ \mu(B) \defeq\frac{1}{2\pi}\int_Bg(\zeta)\lebca{\dif\zeta} $ belongs to $\MggqT$ and is central of order $n+1$. Obviously, $\mu$ belongs to $\MggqT$. Let $\Cbinf$ be the . According to [@MR1200154], then $\Tu{n}$ is ,  the sequence $\Csn$ is , and $P$ coincides with the matrix polynomial $A_n$ given in . In view of , thus $\mu$ is the . In particular, $\Csinf$ is the and therefore $\Csinf$ is . Hence, $\mu$ is . Using [@MR1200154] instead of [@MR1200154], one can analogously prove the following dual result: Let $Q$ be a complex polynomial of degree $n$ such that $Q(0)$ is and $\det Q(z)\neq0$ for all $z\in\D\cup\T$. Let $h\colon\T\to\Cqq$ be defined by $h(\zeta)\defeq\ek{Q(\zeta)}^\inv\ek{Q(0)}\ek{Q(\zeta)}^\invad$. Then $\mu\colon\BsaT\to\Cqq$ defined by $ \mu(B) \defeq\frac{1}{2\pi}\int_Bh(\zeta)\lebca{\dif\zeta} $ belongs to $\MggqT$ and is central of order $n+1$. Let $n\in\NO$ and let $\mu\in\MggqT$ be central of order $n+1$ with Fourier coefficients $\Cbinf$ such that the sequence $\Csn$ is . Then the matrix polynomials $A_n$ and $B_n$ given by fulfill $\det A_n(z)\neq0$ and $\det B_n(z)\neq0$ for all $z\in\D\cup\T$, and $\mu$ admits the representations and for all $B\in\BsaT$. Since $\mu$ is , the sequence $\Csinf$ is . In particular, $\Csinf$ is the . Hence, $\mu$ is the . The application of completes the proof. In the general situation of an arbitrarily given sequence $\Csn $ of complex , the can also be represented in a closed form. To do this, we will use the results on matrix-valued functions defined on the open unit disk $\D $ which were obtained in . Central matrix-valued functions {#S1019} =============================== In this section, we recall an explicit representation of the of an arbitrary central matrix-valued function. \[NCC\] Let $\Csinf $ be a sequence of complex and let $\Gsinf $ be given by $$\begin{aligned} \label{GC} \Gamma_0&\defeq C_0& \tand{} \Gamma_j&\defeq 2C_j \end{aligned}$$ for each $j\in\N$. Furthermore, let $\mu\in\MggqT$. In view of $\Gamma_0^\ad = \Gamma_0$, show then that $\mu$ belongs to $\MggqTcinf$ if and only if $\mu$ is the of the function $\Phi \colon\D \to\Cqq $ defined by $$\label{DC} \Phi (z) \defeq \int_\T \frac{\zeta + z}{\zeta - z} \mu (\dif\zeta).$$ The well-studied matricial version of the classical interpolation problem consists of the following: : : Let $\kappa\in\NOinf $ and let $\Gska $ be a sequence of complex . Describe the set $\CqDGka $ of all $\Phi \in\CqD $ such that $\frac{1}{j!} \Phi^{(j)} (0) = \Gamma_j$ holds true for each $j\in\mn{0}{\kappa}$. In order to formulate a criterion for the solvability of , we recall the notion of a sequence. If $\kappa \in\NOinf $, then a sequence $\Gska $ is called a if, for each $n\in\mn{0}{\kappa}$, the matrix $\re \Sn $ is , where $\Sn $ is given by $$\label{N11} \Sn \defeq \begin{bmatrix} \Gamma_0 & 0 & \hdots & 0 & 0\\ \Gamma_1 & \Gamma_0 & \hdots & 0 & 0\\ \vdots & \vdots & &\vdots & \vdots\\ \Gamma_{n-1} &\Gamma_{n-2} & \hdots & \Gamma_0 & 0\\ \Gamma_{n} &\Gamma_{n-1} & \hdots & \Gamma_1 & \Gamma_0\\ \end{bmatrix}.$$ \[NLB\] Let $\kappa\in\NOinf $ and let $\Gska $ be a sequence of complex . 
Then $\CqDGka \ne\emptyset $ if and only if $\Gska $ is a . In the case $\kappa =\infty$, is a consequence of . In the case $\kappa\in\NO $, a proof of can be found, , in [@MR885621I_III_V Part I, ]. \[C1343\] Let $\Gsinf$ be a sequence of complex . Then $\Phi\colon\D\to\Cqq$ defined by $$\label{Tsr} \Phi(z) =\sum_{j=0}^\infty z^j\Gamma_j$$ belongs to $\CqD$ if and only if $\Gsinf$ is a . Apply . If $\kappa \in\NOinf $ and a sequence $\Gska $ of complex are given, then it is readily checked that $\Gska $ is a if and only if the sequence $\Cska $ defined by $$\begin{aligned} \label{CG} C_0&\defeq \re \Gamma_0& &\text{and}& C_j&\defeq \frac{1}{2}\Gamma_j\end{aligned}$$ for each $j\in \mn{1}{\kappa}$ is . Let $\kappa\in\Ninf$, let $\Gska$ be a sequence of complex , and let the sequence $\Cska$ be given by for all $j\in\mn{0}{\kappa}$. If $k\in\mn{1}{\kappa}$ is such that $\Cska$ is of (minimal) order $k$, then $\Gska$ is called . If there exists a number $\ell\in\mn{1}{\kappa}$ such that $\Gska$ is , then $\Gska$ is simply called . Let $n\in\NO$, let $\Gsn$ be a sequence of complex , and let the sequence $\Csn$ be given by for all $j\in\mn{0}{n}$. Let the sequence $(\Gamma_j)_{j=n+1}^\infty$ be given by $\Gamma_j\defeq2C_j$, where $\Csinf$ is the . Then $\Gsinf$ is called the . \[R1532\] Let $n\in\NO $ and let $\Gsn $ be a . According to , then the is a . Let $\Phi\in\CqD$ with Taylor series representation . If $k\in\N$ is such that $\Gsinf$ is of (minimal) order $k$, then $\Phi$ is called . If there exists a number $\ell\in\N$ such that $\Phi$ is , then $\Phi$ is simply called . \[R1526\] Let $n\in\NO$, let $\Gsn$ be a , and let $\Gsinf $ be the . According to and , then $\Phi\colon\D\to\Cqq$ given by belongs to $\CqD$. This function $\Phi$ is called the . \[NC1\] Let $n\in\NO $ and let $\Csn $ be a sequence of complex . Further, let $\mu\in\MggqT$. From one can see then that $\mu$ is the if and only if $\Phi \colon\D \to\Cqq $ defined by is the given by for each $j\in\mn{0}{n}$. Let $n\in\NO $ and let $\Gsn $ be a sequence of complex such that $\cC_q [\D , \Gsn ] \ne \emptyset$. Then indicate that $$\left\{ \frac{1}{(n+1)!}\Phi^{(n+1)} (0)\colon \Phi\in\cC_q [\D , \Gsn ]\right\} = \gK \left(2\Mu{n+1}; \sqrt{2\Lu{n+1}}, \sqrt{2\Ru{n+1}}\right ),$$ where $\Csn $ is given by for all $j\in\mn{0}{n}$ (see also [@MR885621I_III_V Part I, ]). \[R9\] In the case $n=0$, , if only one complex $\Gamma_0$ with $\re \Gamma_0\in \Cggq $ is given, the is the constant function (defined on $\D $) with value $\Gamma_0$ (see [@MR2104258]). The first and second authors showed in  that in the general case the is a rational matrix-valued function and constructed explicit right and left quotient representations with the aid of concrete polynomials. To recall these formulas, we introduce several matrix polynomials which we use if $\kappa\in \NOinf $ and a sequence $\Cska $ of complex are given. For all $m\in\NO $ let the matrix polynomial $e_m$ be defined by $$% e_m (z) \defeq\ek{z^0 \Iq , z^1 \Iq , z^2 \Iq ,\dotsc, z^m \Iq }.$$ Let $\Gamma_0 \defeq \re C_0$. For each $j\in\mn{1}{\kappa}$, we set $\Gamma_j \defeq 2C_j$ and $C_{-j} \defeq C_j^\ad$. For each $n\in\mn{0}{\kappa}$, let the matrices $\Tu{n}$, $\Yu{n}$ and $\Sn $ be defined by and . 
Furthermore, for each $n\in\mn{0}{\kappa} $, let the matrix polynomials $\an $ and $\bn $ be given by $$\begin{aligned} \label{N11-1N} \an (z)&\defeq \Gamma_0 + z e_{n-1} (z) \Su{n-1}^\ad \Tu{n-1}^\dagger \Yu{n}& \tand{} \bn (z)&\defeq \Iq - z e_{n-1} (z) \Tu{n-1}^\dagger \Yu{n}.\end{aligned}$$ Now we see that central functions admit the following explicit quotient representations expressed by the given data: \[FK1.2\] Let $n\in\N$, let $\Gsn $ be a , and let $\Phi$ be the . Then the matrix polynomials $\an $ and $\bn $ given by fulfill $\det \bn (z)\neq0$ and $\Phi (z) = \an (z) [\bn (z)]^\inv $ for all $z\in\D $. Observe that further quotient representations of $\Phi$ are given in [@MR2104258 and ]. Obviously, the set $$% \Nn \defeq \setaa{ v\in\T}{\det \bna{v} = 0}$$is finite. For each $v\in\Nn $, let $m_v$ be the multiplicity of $v$ as a zero of $\det \bn $. Then $(\det \bn )^{(m_v)} (v) \ne 0$ for each $v\in\Nn $, so that, for each $v\in\Nn $, the matrix $$\label{S1} \Xnv \defeq \frac{-m_v}{2v (\det \bn)^{(m_v)} (v)} (\an \bn ^\adj)^{(m_v - 1)} (v)$$ and the matrix-valued functions $\Dn\colon\C\setminus \Nn \to \Cqq $ given by $$% \Dna{z} \defeq \sum_{v\in \Nn } \frac{v+z}{v-z} \Xnv$$and $$\label{S3} \Lan \defeq \an \bn ^\inv - \Dn$$ are well defined. shows that the function $\Phi$ corresponding to a $\Gs{n}$ is a rational matrix-valued function. Thus, combining yields an explicit expression for the of $\Phi$. \[C1\] Let $n\in\N$ and let $\Gsn $ be a . Then the $\mu$ of the admits the representation $$\label{C1.B1} \mu(B) = \frac{1}{2\pi} \int_B \re \Lana{\zeta} \lebca{\dif\zeta} + \sum_{v\in\Nn}\Xnv\kron{v} (B)$$ for all $B\in\BsaT $, where $\Lan$ is given via and where $\lebc$ is the linear Lebesgue measure defined on $\BsaT$. Use . Now we reformulate in the language of central measures. \[C2\] Let $n\in\N$ and let $\Csn $ be a sequence of complex . Then the measure $\mu$ for $\Csn $ admits the representation for all $B\in\BsaT $. In view of $\re C_0 = C_0$, the assertion follows immediately from and . The following examples show in particular that central measures need neither be continuous with respect to the Lebesgue measure nor be discrete measures. \[E1131\] The sequence $\Csinf $ given by $C_0\defeq1$ and $C_j\defeq0$ for all $j\in\N$ is obviously . Since $\Mu{1}=0=C_1$ and $ M_{k+1} =Z_kT_{k-1}^\dagger Y_k =\Ouu{1}{k}\cdot T_{k-1}^\dagger\cdot\Ouu{k}{1} =0 =C_{k+1} $ for all $k\in\N$, it is the and it is . It is readily seen that $\frac{1}{2\pi}\lebc $ is the and that $\Phi \colon\D\to\C$ defined by $\Phi (z)=1$ is the , where $\Gamma_0\defeq1$. \[E1533\] The sequence $\Csinf $ given by $C_j\defeq1$ is obviously . Since $C_1\neq0=\Mu{1}$ and $ M_{k+1} =Z_kT_{k-1}^\dagger Y_k =\mathbf{1}_k^*\rk{k^{-2}\mathbf{1}_k\mathbf{1}_k^*}\mathbf{1}_k =1 =C_{k+1} $ for all $k\in\N$, where $\mathbf{1}_k\defeq\col(1)_{j=1}^k$, it is the and it is . It is readily seen that $\kron{1}$ is the and that $\Phi\colon\D\to\C$ defined by $\Phi(z)=(1+z)/(1-z)$ is the , where $\Gamma_0\defeq1$ and $\Gamma_1\defeq2$. \[R1432\] Let $\kappa\in\NOinf$ and let $\Cska$ and $(D_j)_{j=0}^\kappa$ be sequences of complex and complex , respectively. Then the sequence $\diag[C_j,D_j]_{j=0}^\kappa$ is . \[R1420\] Let $\kappa\in\Ninf$ and $k,\ell\in\mn{1}{\kappa}$. Let $\Cska$ be a sequence of complex and let $(D_j)_{j=0}^\kappa$ be a sequences of complex . Then the sequence $\diag[C_j,D_j]_{j=0}^\kappa$ is . 
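Complementing the two scalar examples above, one may interpolate between them as follows (an illustrative computation with $q=1$ and $n=1$). Let $C_0\defeq1$ and $C_1\defeq c$ with $0<\lvert c\rvert<1$. Then the above formulas give $b_1(z)=1-cz$ and $a_1(z)=1+cz$, so that the central function is $\Phi(z)=\frac{1+cz}{1-cz}$. Since $\det b_1$ has no zero on $\T$, the singular part in the above representation vanishes and the central measure is given by $$\mu(B) =\frac{1}{2\pi}\int_B\re\frac{1+c\zeta}{1-c\zeta}\,\lebca{\dif\zeta} =\frac{1}{2\pi}\int_B\frac{1-\lvert c\rvert^2}{\lvert1-c\zeta\rvert^2}\,\lebca{\dif\zeta}, \qquad B\in\BsaT.$$ For $c=0$ this is again the measure $\frac{1}{2\pi}\lebc$, while, letting $c\to1$, the density concentrates at $\zeta=1$, in accordance with the example in which the central measure is $\kron{1}$.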
\[E1554\] In view of , one can easily see from that the sequence $\Csinf $ given by $C_0\defeq\Iu{2}$ and $C_j\defeq\bigl[\begin{smallmatrix}0&0\\0&1\end{smallmatrix}\bigr]$ for all $j\in\N$ is , and, thus, it coincides with the . It is readily seen that $\bigl[\begin{smallmatrix}\frac{1}{2\pi}\lebc &0\\0&\kron{1}\end{smallmatrix}\bigr]$ is the and that $\Phi \colon\D\to\Coo{2}{2}$ defined by $\Phi (z)=\bigl[\begin{smallmatrix}1&0\\0&(1+z)/(1-z)\end{smallmatrix}\bigr]$ is the , where $\Gamma_0\defeq\Iu{2}$ and $\Gamma_1\defeq\bigl[\begin{smallmatrix}0&0\\0&2\end{smallmatrix}\bigr]$. \[R1438\] Let $\kappa\in\NOinf$, let $\Cska$ be a sequence of complex and let $U$ be a unitary . Then, formula  below shows that the sequence $(U^\ad C_jU)_{j=0}^\kappa$ is . \[E1159\] Let the sequence $\Csinf $ be given by $C_0\defeq\Iu{2}$ and $C_j\defeq\frac{1}{4}\smat{1&\sqrt{3}\\\sqrt{3}&3}$ for all $j\in\N$. With the unitary matrix $U\defeq\frac{1}{2}\smat{\sqrt{3}&-1\\1&\sqrt{3}}$ we have $C_0=U^\ad\Iu{2}U$ and $C_j=U^\ad\tmat{0&0\\0&1}U$ for all $j\in\N$. In view of , one can then easily see from and that the sequence $\Csinf $ is , , and thus it coincides with the . Furthermore, $\frac{1}{4}\smat{\frac{3}{2\pi}\lebc +\kron{1}&-\sqrt{3}(\frac{1}{2\pi}\lebc -\kron{1})\\-\sqrt{3}(\frac{1}{2\pi}\lebc -\kron{1})&\frac{1}{2\pi}\lebc +3\kron{1}}$ is the and $\Phi\colon\D\to\Coo{2}{2}$ defined by $\Phi(z)=\frac{1}{4}\smat{3+\frac{1+z}{1-z}&-\sqrt{3}(1-\frac{1+z}{1-z})\\-\sqrt{3}(1-\frac{1+z}{1-z})&1 +3\frac{1+z}{1-z}}$ is the , where $\Gamma_0\defeq\Iu{2}$ and $\Gamma_1\defeq\frac{1}{2}\smat{1&\sqrt{3}\\\sqrt{3}&3}$. The non-stochastic spectral measure of an autoregressive stationary sequence {#S1038} ============================================================================ Let $\cH$ be a complex Hilbert space with inner product $\langle.,.\rangle $. For every choice of $g = \col (g^{(j)})_{j=1}^q$ and $h = \col (h^{(j)})_{j=1}^q$ in $\cH^q$, the $(g,h)$ of the ordered pair $[g,h]$ is defined by $(g,h) =\matauo{\langle g^{(j)}, h^{(k)}}{j,k=1}{q}$. A sequence $(g_m)_{m=-\infty}^\infty$ of vectors belonging to $\cH^q$ is said to be stationary (in $\cH^q$), if, for every choice of $m$ and $n$ in $\Z$, the Gramian $(g_m, g_n)$ only depends on the difference $m-n$: $(g_m, g_n) = (g_{m-n}, g_0)$. It is well known that the covariance sequence $(C_m)_{m=-\infty}^\infty$, of an arbitrary stationary sequence $(g_m)_{m=-\infty}^\infty$, given by $C_m \defeq (g_m, g_0)$ for each $m\in\Z$, is , , that, for each $m\in\NO $, the block matrix $\Tu{m} \defeq\matauo{C_{j-k}}{j,k=0}{m}$ is . According to a matricial version of a famous theorem due to G. Herglotz (see above), there exists one and only one $\mu$ defined on the set $\BsaT $ of all Borel subsets of the unit circle $\T \defeq \setaa{ \zeta\in\C}{\abs{\zeta} = 1}$ of the complex plane $\C$ such that, for each $j\in\Z$, the $j$-th of $\mu$ coincides with the matrix $C_j$. Then $\mu$ is called the non-stochastic spectral measure of $(g_j)_{j=-\infty}^\infty$. A stationary sequence $(g_j)_{j=-\infty}^\infty$ is said to be autoregressive if there is a positive integer $n$ such that the orthogonal projection $\hat{g}_n$ of $g_0$ onto the matrix linear subspace generated by $(g_{-j})_{j=1}^n$ coincides with the orthogonal projection $\hat{g}$ of $g_0$ onto the closed matrix linear subspace generated by $(g_{-j})_{j=1}^\infty$: $\hat{g}_n = \hat{g}$. 
If $\hat{g}\ne 0$, then the smallest positive integer $n$ with $\hat{g}_n = \hat{g}$ is called the order of the autoregressive stationary sequence $(g_j)_{j=-\infty}^\infty$. If $\hat{g} = 0$, then $(g_j)_{j=-\infty}^\infty$ is said to be autoregressive of order $0$. Now we are going to give an explicit representation of the non-stochastic spectral measure of an arbitrary autoregressive stationary sequence in $\cH^q$, where we study the general case without any regularity conditions. This representation is expressed in terms of the covariance sequence of the stationary sequence. As already mentioned above, the covariance sequence $\Cbinf $ of an arbitrary stationary sequence $(g_j)_{j=-\infty}^\infty$ in $\cH^q$ is . Observe that, conversely, if the complex Hilbert space $\cH$ is infinite-dimensional and if an arbitrary sequence $\Cbinf $ of complex is given, then a matricial version of a famous result due to A. N. Kolmogorov [@MR0009098] shows that there exists a stationary sequence $(g_j)_{j=-\infty}^\infty$ in $\cH^q$ with covariance sequence $\Cbinf $ (see also [@MR1080924]). The interrelation between autoregressive stationary sequences and central measures is expressed by the following theorem: \[FK9\] Let $n\in\NO $ and let $(g_j)_{j=-\infty}^\infty$ be a stationary sequence (in $\cH^q$) with covariance sequence $\Cbinf $ and non-stochastic spectral measure $\mu$. Then the following statements are equivalent: 1. $(g_j)_{j=-\infty}^\infty$ is autoregressive of order $n$. 2. $\Csinf $ is . 3. $\mu$ is . Now we are able to formulate the announced representation. \[C\] Let $(g_j)_ {j=-\infty}^\infty$ be a stationary sequence in $\cH^q$ with covariance sequence $\Cbinf $ and let $n\in\N$. Suppose that $(g_j)_{j=-\infty}^\infty$ is autoregressive of order $n$. Then $\Lan$ given by is holomorphic at each point $u\in\T $ and the non-stochastic spectral measure $\mu$ of $(g_j)_{j=-\infty}^\infty$ admits the representation for all $B\in \BsaT $, where $\lebc $ is the linear Lebesgue measure defined on $\BsaT $, the matrix $\Xnv $ is given by , and $\kron{v}$ is the Dirac measure defined on $\BsaT $ with unit mass at $v$. According to , the sequence $\Csinf $ is and $\mu$ is . From the definition of the non-stochastic spectral measure of $(g_j)_{j=-\infty}^\infty$ we know then that $\mu$ is the . Consequently, the application of completes the proof. \[ZR\] Let $(g_j)_{j=-\infty}^\infty$ be a stationary sequence in $\cH^q$ which is autoregressive of order $0$. Then the non-stochastic spectral measure $\mu$ of $(g_j)_{j=-\infty}^\infty$ is given by $\mu = \frac{1}{2\pi} (g_0, g_0)\lebc $ (see and ). Some facts from matrix theory {#A1346} ============================= Let $A\in\Cpq$. Further, let $V\in\Coo{m}{p}$ and $U\in\Coo{q}{n}$ satisfy the equations $V^\ad V=\Ip$ and $UU^\ad=\Iq$, respectively. Then $(VAU)^\mpi=U^\ad A^\mpi V^\ad$. Let $\kappa\in\NOinf$ and let $\Cska$ be a sequence from $\Cqq$. Let $U\in\Cqq$ be unitary and let $C_{j,U}\defeq U^\ad C_jU$ for $j\in\mn{0}{\kappa}$. For $j\in\mn{0}{\kappa}$ let $C_{-j}\defeq C_j^\ad$ and $C_{-j,U}\defeq C_{j,U}^\ad$. Let $n\in\mn{0}{\kappa}$. Let $\Tn\defeq\matauo{C_{j-k}}{j,k=0}{n}$ and $\Tu{n,U}\defeq\matauo{C_{j-k,U}}{j,k=0}{n}$. Then $$\label{L1348.0} \Tu{n,U} =\ek*{\diag_{n+1}\rk{U}}^\ad\Tn\ek*{\diag_{n+1}\rk{U}}$$ and $$\label{L1348.1} \Tu{n,U}^\mpi =\ek*{\diag_{n+1}\rk{U}}^\ad\Tn^\mpi\ek*{\diag_{n+1}\rk{U}}.$$ Let $n\in\mn{0}{\kappa}$. Let $\Yu{n}$ and $\Zu{n}$ be given by .
Furthermore let $\Yu{n,U}$ and $\Zu{n,U}$ be defined by $\Yu{n,U}\defeq \col (C_{j,U})_{j=1}^n$ and $\Zu{n,U}\defeq\mat{C_{n,U},\dotsc, C_{1,U}}$. Let $\Mu{1}$, $\Lu{1}$, and $\Ru{1}$ be given by , let $\Mu{1,U}\defeq\Oqq$, $\Lu{1,U}\defeq C_{0,U}$, and let $\Ru{1,U}\defeq C_{0,U}$. If $\kappa\geq1$, then, for each $n\in\mn{1}{\kappa}$, let $\Mu{n+1}$, $\Lu{n+1}$, and $\Ru{n+1}$ be given via , let $$\begin{aligned} \Mu{n+1,U}&\defeq \Zu{n,U}\Tu{n-1,U}^\dagger \Yu{n,U},& \Lu{n+1,U}&\defeq C_{0,U}- \Zu{n,U} \Tu{n-1,U}^\dagger \Zu{n,U}^\ad \end{aligned}$$ and $$\Ru{n+1,U} \defeq C_{0,U}- \Yu{n,U}^\ad \Tu{n-1,U}^\dagger \Yu{n,U}.$$ For each $n\in\mn{0}{\kappa}$ then $$\begin{aligned} \Mu{n+1,U}&=U^\ad\Mu{n+1}U,& \Lu{n+1,U}&=U^\ad\Lu{n+1}U,& &\text{and}& \Ru{n+1,U}&=U^\ad\Ru{n+1}U. \end{aligned}$$ If $k\in\mn{2}{\kappa}$ and if $\Cska$ be , then $\seq{C_{j,U}}{j}{1}{\kappa}$ is . If $k\in\mn{2}{\kappa}$ and if $\Cska$ be , then $\seq{C_{j,U}}{j}{1}{\kappa}$ is . Equation  is obvious. Since $U$ is unitary, the matrix $\diag_{n+1}(U)$ is unitary as well. Thus, in view of , formula  is an immediate consequence of . is proved. Obviously, $ \Mu{1,U} =\Oqq =U^\ad\Mu{1}U$. Now, let $n\in\mn{1}{\kappa}$. Then, using  and $\ek{\diag_n\rk{U}}\ek{\diag_n\rk{U}}^\ad=\Iu{nq}$, we get $$\begin{split} \Mu{n+1,U} &=\Zu{n,U}\Tu{n-1,U}^\dagger \Yu{n,U}\\ &=\mat{U^\ad C_nU,\dotsc,U^\ad C_1U}\ek*{\diag_n\rk{U}}^\ad\Tu{n-1}^\mpi\ek*{\diag_n\rk{U}}\ek*{\col\seq{U^\ad C_jU}{j}{1}{n}}\\ &=U^\ad \mat{C_n,\dotsc,C_1}\Tu{n-1}^\mpi\ek*{\col\seq{C_j}{j}{1}{n}}U =U^\ad\Zu{n}\Tu{n-1}^\mpi\Yu{n}U =U^\ad\Mu{n+1}U. \end{split}$$ Analogously, the remaining assertions of  can be shown. The assertions stated in  and  are an immediate consequence of . Universität Leipzig\ Fakultät für Mathematik und Informatik\ PF 10 09 20\ D-04009 Leipzig ` [email protected] [email protected] [email protected] `
--- abstract: 'We consider uniformly random bipartite planar maps with a given boundary-length and $n$ inner face with given degrees and we study its asymptotic behaviour as $n \to \infty$. We prove that, suitably rescaled, such maps converge in distribution in the Gromov–Hausdorff–Prokhorov topology towards the celebrated Brownian map, and more generally a Brownian disk, if and only if there is no inner face with a macroscopic degree, or, if the boundary-length is too big, the maps degenerate and converge to the Brownian CRT. This criterion applies to random $2q_n$-angulations with $n$ faces which always converge to the Brownian map. It also allows us to deal with size-conditioned $\alpha$-stable critical Boltzmann maps: we recover tightness of these maps for a parameter $\alpha \in (1,2)$ and convergence to the Brownian map for $\alpha = 2$, but we also give a new result for $\alpha=1$ in which case the maps fall into the condensation phase and they converge to the Brownian CRT.' author: - 'Cyril <span style="font-variant:small-caps;">Marzouk</span> [^1]' title: Brownian limits of planar maps with a prescribed degree sequence --- ![A Brownian CRT with Brownian labels describing the Brownian map: labels are indicated by colours (red for the highest values and blue for the lowest).[]{data-label="fig:serpent_brownien"}](CRT_10000_couleurs){width=".8\linewidth"} Introduction ============ This paper deals with ‘continuum’ limits of random planar maps as their size tends to infinity and their edge-length tends to zero appropriately. Groundbreaking results of such convergences have been obtained by Le Gall [@Le_Gall:Uniqueness_and_universality_of_the_Brownian_map] and Miermont [@Miermont:The_Brownian_map_is_the_scaling_limit_of_uniform_random_plane_quadrangulations] who proved the convergence of several models of maps towards a universal limit called the *Brownian map* [@Marckert-Mokkadem:Limit_of_normalized_random_quadrangulations_the_Brownian_map]. Our aim is to extend these results and subsequent ones to a large class of distributions which we next define, which are essentially configuration models on (dual of) planar maps studied previously in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence] in a restricted case and more recently in the companion paper [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence] to which we shall refer at several occasions. Main results ------------ For every integer $n {\geqslant}1$, we are given an integer $\varrho_n {\geqslant}1$ and a sequence $(d_n(k))_{k {\geqslant}1}$ of non-negative integers such that $\sum_{k {\geqslant}1} d_n(k) = n$ and we let ${{\mathbf{M}}_{d_n}^{\varrho_n}}$ denote the set of all those rooted bipartite maps with boundary-length $2 \varrho_n$ (the face to the right of the root-edge is the external face) and $n$ inner faces, amongst which exactly $d_n(k)$ have degree $2k$ for every $k {\geqslant}1$; see Figure \[fig:exemple\_carte\] for an example. A key quantity in this work is $$\sigma_n^2 \coloneqq \sum_{k {\geqslant}1} k (k-1) d_n(k),$$ which is a sort of global variance term. We sample ${M_{d_n}^{\varrho_n}}$ uniformly at random in ${{\mathbf{M}}_{d_n}^{\varrho_n}}$ and consider its asymptotic behaviour as $n \to \infty$. 
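As a concrete illustration of this quantity, the degree sequence of Figure \[fig:exemple\_carte\] below, namely $d_n(1)=1$, $d_n(2)=2$, $d_n(3)=1$, $d_n(4)=1$ and $d_n(k)=0$ for $k {\geqslant}5$ (so $n=5$), gives $$\sigma_n^2 = 1\cdot0\cdot1+2\cdot1\cdot2+3\cdot2\cdot1+4\cdot3\cdot1 = 22.$$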
A particular, well-studied case of such random maps are so-called random *quadrangulations* and more generally *$2q$-angulations*, with any $q {\geqslant}2$ fixed, where all faces have degree $2q$, which corresponds to our model in which $d_n(q) = n$ so $\sigma_n^2 = q (q-1) n$. Another well-studied model is that of *Boltzmann planar maps* introduced by Marckert & Miermont [@Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps], which can be seen as a mixture of our model in which the face-degrees $d_n$ are random and then one samples ${M_{d_n}^{\varrho_n}}$ conditionally given $d_n$. ![An element of ${{\mathbf{M}}_{d_n}^{\varrho_n}}$ with $n = 5$ inner faces, $\varrho_n = 4$ half boundary-length and $(d_n)_{n {\geqslant}1} = (1, 2, 1, 1, 0, 0, \dots)$ face-degree sequence; faces are labelled by their degree.[]{data-label="fig:exemple_carte"}](Exemple_carte){height="8\baselineskip"} In the companion paper [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Theorem 1] we proved that if we endow the vertex-set $V({M_{d_n}^{\varrho_n}})$ of the map ${M_{d_n}^{\varrho_n}}$ with the graph distance ${d_{\mathrm{gr}}}$ and the uniform probability measure ${p_{\mathrm{unif}}}$, then the sequence $$\left(V({M_{d_n}^{\varrho_n}}), (\sigma_n + \varrho_n)^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right)_{n {\geqslant}1}$$ is always tight in the Gromov–Hausdorff–Prokhorov topology, whatever $(\varrho_n)_{n {\geqslant}1}$ and $(d_n)_{n {\geqslant}1}$, thus extending results due to Le Gall [@Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps] for $2q$-angulations for any $q {\geqslant}2$ fixed, and to Bettinelli [@Bettinelli:Scaling_limit_of_random_planar_quadrangulations_with_a_boundary] for quadrangulations with a boundary. In the case of $2q$-angulations without a boundary, the problem of uniqueness of the subsequential limits was solved simultaneously by Le Gall [@Le_Gall:Uniqueness_and_universality_of_the_Brownian_map] and Miermont [@Miermont:The_Brownian_map_is_the_scaling_limit_of_uniform_random_plane_quadrangulations] who proved that, suitably rescaled, they converge in distribution towards the same limit called the *Brownian map*, named after Marckert & Mokkadem [@Marckert-Mokkadem:Limit_of_normalized_random_quadrangulations_the_Brownian_map], which is a compact metric measured space ${\mathscr{M}}^0 = ({\mathscr{M}}^0,{\mathscr{D}}^0,{\mathscr{p}}^0)$ almost surely homeomorphic to the sphere [@Le_Gall-Paulin:Scaling_limits_of_bipartite_planar_maps_are_homeomorphic_to_the_2_sphere; @Miermont:On_the_sphericity_of_scaling_limits_of_random_planar_quadrangulations] and with Hausdorff dimension $4$ [@Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps]. In the case of $2q$-angulations with a boundary, uniqueness was solved by Bettinelli & Miermont [@Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks]: when the boundary-length behaves like $\varrho_n \sim \varrho n^{1/2}$ with $\varrho \in (0, \infty)$ fixed, they converge to a (unit area) *Brownian disk* with perimeter $\varrho$, denoted by ${\mathscr{M}}^\varrho = ({\mathscr{M}}^\varrho,{\mathscr{D}}^\varrho, {\mathscr{p}}^\varrho)$; the latter now has the topology of a disk, with Hausdorff dimension $4$, and its boundary has Hausdorff dimension $2$ [@Bettinelli:Scaling_limit_of_random_planar_quadrangulations_with_a_boundary]. We prove that these Brownian limits appear when there is no macroscopic (face-)degree. 
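For orientation, note that for quadrangulations with a bounded perimeter one has $d_n(2)=n$, so $$\sigma_n=\sqrt{2\cdot1\cdot n}=\sqrt{2n} \qquad\text{and}\qquad (\sigma_n+\varrho_n)^{-1/2}\asymp n^{-1/4},$$ which is consistent with the classical $n^{-1/4}$ scaling appearing in the results just cited.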
We let $$\Delta_n = \max\{k {\geqslant}1 : d_n(k) \ne 0\}$$ be the largest half-degree of a face in ${{\mathbf{M}}_{d_n}^{\varrho_n}}$. \[thm:convergence\_carte\_disque\] Assume that $\lim_{n \to \infty} \sigma_n = \infty$ and that $\lim_{n \to \infty} \sigma_n^{-1} \varrho_n = \varrho$ for some $\varrho \in [0,\infty)$. Then the convergence in distribution $$\left(V({M_{d_n}^{\varrho_n}}), \left(\frac{3}{2 \sigma_n}\right)^{1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{M}}^\varrho,{\mathscr{D}}^\varrho, {\mathscr{p}}^\varrho)$$ holds in the sense of Gromov–Hausdorff–Prokhorov if and only if $$\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0.$$ The ‘only if’ part of the statement is simple to prove: we shall see that such an inner face with a large degree has a diameter of order $\sigma_n^{1/2}$, which either creates a pinch-point or a hole, so the space cannot converge in distribution towards a limit which has the topology of the sphere or the disk. The behaviour drastically changes if $\varrho_n$ is much larger than $\sigma_n$; as shown by Bettinelli [@Bettinelli:Scaling_limit_of_random_planar_quadrangulations_with_a_boundary Theorem 5] for quadrangulations, in this case, the boundary takes over the rest of the map and we obtain at the limit ${\mathscr{T}}_{X^0} = ({\mathscr{T}}_{X^0}, {\mathscr{d}}_{X^0}, {\mathscr{p}}_{X^0})$ the Brownian Continuum Random Tree of Aldous [@Aldous:The_continuum_random_tree_3] encoded by the standard Brownian excursion $X^0$. Precisely, the boundary of the map converges in the Gromov–Hausdorff sense to ${\mathscr{T}}_{X^0}$ and the rest of the map is small and disappears at the limit. \[thm:convergence\_cartes\_CRT\] Suppose that $\lim_{n \to \infty} \sigma_n^{-1} \varrho_n = \infty$. Then the convergence in distribution $$\left(V({M_{d_n}^{\varrho_n}}), (2\varrho_n)^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{T}}_{X^0}, {\mathscr{d}}_{X^0}, {\mathscr{p}}_{X^0})$$ holds in the sense of Gromov–Hausdorff–Prokhorov, where $X^0$ is the standard Brownian excursion. Theorems \[thm:convergence\_carte\_disque\] and \[thm:convergence\_cartes\_CRT\] apply directly to models of random maps in which all faces have the same degree; we deduce the following corollary, which extends the aforementioned results of Le Gall [@Le_Gall:Uniqueness_and_universality_of_the_Brownian_map Theorem 1] and Bettinelli & Miermont [@Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks Corollary 6] when the sequence $(q_n)_{n {\geqslant}1}$ is constant. \[cor:convergence\_d\_ang\] Let $\varrho \in [0,\infty]$; let $(q_n)_{n {\geqslant}1} \in \{2, 3, \dots\}^{{\mathbf{N}}}$ and $(\varrho_n)_{n {\geqslant}1} \in {{\mathbf{N}}}^{{\mathbf{N}}}$ be any sequences such that $\lim_{n \to \infty} (q_n (q_n-1) n)^{-1/2} \varrho_n = \varrho$. For every $n {\geqslant}1$, let $M^{\varrho_n}_{n, q_n}$ be a uniformly chosen random $2q_n$-angulation with $n$ inner faces and with perimeter $2\varrho_n$. 1. Suppose that $\varrho < \infty$, then the convergence in distribution $$\left(V(M^{\varrho_n}_{n, q_n}), \left(\frac{9}{4 q_n (q_n-1) n}\right)^{1/4} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{M}}^\varrho,{\mathscr{D}}^\varrho, {\mathscr{p}}^\varrho)$$ holds in the sense of Gromov–Hausdorff–Prokhorov. 2. 
Suppose that $\varrho = \infty$, then the convergence in distribution $$\left(V(M^{\varrho_n}_{n, q_n}), (2\varrho_n)^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{T}}_{X^0}, {\mathscr{d}}_{X^0}, {\mathscr{p}}_{X^0})$$ holds in the sense of Gromov–Hausdorff–Prokhorov. As alluded to in the abstract, Theorems \[thm:convergence\_carte\_disque\] and \[thm:convergence\_cartes\_CRT\] may be applied to size-conditioned *$\alpha$-stable Boltzmann maps* studied in [@Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces; @Marzouk:On_scaling_limits_of_planar_maps_with_stable_face_degrees], where $\sigma_n$ here is replaced by $B_n$ in [@Marzouk:On_scaling_limits_of_planar_maps_with_stable_face_degrees], which is of order $n^{1/\alpha}$ up to a slowly varying function. When the index is $\alpha=2$, the largest face-degree is small compared to $B_n$ and the map rescaled by a factor of order $B_n^{1/2}$ converges to the Brownian map thanks to Theorem \[thm:convergence\_carte\_disque\]. For $\alpha \in (1,2)$, there are (many!) faces with degree of order $B_n$, but none larger than that, so we may still deduce the existence of non-trivial subsequential limits from [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Theorem 1]. The genuinely new result concerns the case $\alpha=1$: relying on the recent work by Kortchemski & Richier [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees], we prove that there is a unique giant face and this falls into the framework of Theorem \[thm:convergence\_cartes\_CRT\]. We refer to Section \[sec:BGW\_Boltzmann\] for precise statements. Strategy of the proof and further discussion -------------------------------------------- In order to study random maps, we rely as usual on a bijection with labelled forests: the coding proposed by Janson & Stefánsson [@Janson-Stefansson:Scaling_limits_of_random_planar_maps_with_a_unique_large_face] relates our model of maps to random forests with a prescribed (out-)degree sequence studied previously in [@Broutin-Marckert:Asymptotics_of_trees_with_a_prescribed_degree_sequence_and_applications; @Lei:Scaling_limit_of_random_forests_with_prescribed_degree_sequences], on which we add random labels as in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence; @Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence]. Theorems \[thm:convergence\_carte\_disque\] and \[thm:convergence\_cartes\_CRT\] follow easily from the convergence of the function which encodes the labels, stated in Theorem \[thm:convergence\_etiquettes\]. This label process was shown to always be tight in [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence] so it only remains to consider its finite-dimensional marginals. A first step is to prove the convergence of the forest without the labels, in the weak sense of the subforests spanned by finitely many random vertices, see Proposition \[prop:marginales\_hauteur\], which is proved by comparing the so-called height process and [Ł]{}ukasiewicz path of the forest. We want to stress that only this weak convergence of the forest is needed, and not a strong convergence in the sense of the Gromov–Hausdorff–Prokhorov topology; as a matter of fact, while the label process is always tight, the height and contour processes of the forest need not be! 
This work leaves open several questions; let us present two of them. First, we only consider maps which are planar and with a unique boundary. In a more general setting, Bettinelli [@Bettinelli:Geodesics_in_Brownian_surfaces_Brownian_maps] has shown that for any $\kappa,g {\geqslant}1$ fixed, uniformly random bipartite quadrangulations with genus $g$, with $\kappa$ boundary components, and with $n$ internal faces admit subsequential limits in the sense of the Gromov–Hausdorff topology, once rescaled by a factor $n^{-1/4}$. We understand that Bettinelli & Miermont [@Bettinelli-Miermont:Compact_Brownian_surfaces_II_The_general_case] are currently proving that these subsequential limits agree, so these quadrangulations converge in distribution towards the ‘Brownian $g$-torus with $\kappa$ boundary components’. We believe that our present work may then be extended using more involved bijections with labelled forests. A second problem is the question of the asymptotic behaviour of the planar maps in the case of large degrees, of order $\sigma_n$. Under suitable assumptions, the [Ł]{}ukasiewicz path should converge towards the excursion of a process with exchangeable increments which makes no negative jump. Aldous, Miermont & Pitman [@Aldous-Miermont-Pitman:The_exploration_process_of_inhomogeneous_continuum_random_trees_and_an_extension_of_Jeulin_s_local_time_identity] constructed the analogue of the height process of the associated ‘Inhomogeneous Continuum Random Tree’, which is a family of random excursions with continuous paths which extends the Brownian excursion. One can then try to define ‘Inhomogeneous Continuum Random Maps’ by adding random labels on such trees in a similar way as in [@Le_Gall-Miermont:Scaling_limits_of_random_planar_maps_with_large_faces] and to prove convergence of the discrete maps towards these objects (after extraction of a subsequence). In this regard, the tightness result from [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence] is a first step towards such a convergence. The proof would have to be different from the present one though since, as opposed to the Brownian regime, the [Ł]{}ukasiewicz path and the height process are different processes at the limit. Organisation of the paper ------------------------- In Section \[sec:preliminaires\], we first recall the definition of labelled plane forests and their encoding by paths and we briefly discuss the bijection with planar maps. Then in Section \[sec:convergence\_forets\] we first prove the convergence of the [Ł]{}ukasiewicz path of our random forests and then that of the reduced forests, spanned by finitely many random vertices. In Section \[sec:arbres\_etiquetes\] we consider the labels on this forest and we state and prove our key result on the convergence of the label process (Theorem \[thm:convergence\_etiquettes\]), from which we deduce our main results in Section \[sec:convergence\_cartes\]. Finally in Section \[sec:BGW\_Boltzmann\] we describe the model of stable Boltzmann planar maps and we state and prove our results on these models by relating them to our general setup. Planar maps as labelled trees {#sec:preliminaires} ============================= As alluded to in the introduction, we study planar maps via a bijection with *labelled* forests; let us first recall formally the definition of the latter and set the notation we shall need. 
Labelled plane trees and their encoding {#sec:def_arbres} --------------------------------------- We view (plane) trees as words: let ${{\mathbf{N}}}= \{1, 2, \dots\}$ and set ${{\mathbf{N}}}^0 = \{\varnothing\}$, then a *tree* is a non-empty subset $T \subset \bigcup_{n {\geqslant}0} {{\mathbf{N}}}^n$ such that: $\varnothing \in T$, it is called the *root* of $T$, and for every $x = (x(1), \dots, x(n)) \in T$, we have $pr(x) \coloneqq (x(1), \dots, x(n-1)) \in T$ and there exists an integer $k_x {\geqslant}0$ such that $xi \coloneqq (x(1), \dots, x(n), i) \in T$ if and only if $1 {\leqslant}i {\leqslant}k_x$. We interpret the vertices of a tree $T$ as individuals in a population: For every $x = (x(1), \dots, x(n)) \in T$, the vertex $pr(x)$ is its *parent*, $k_x$ is the number of *offsprings* of $x$ (if $k_x = 0$, then $x$ is called a *leaf*, otherwise, $x$ is called an *internal vertex*), and $|x| = n$ is its *generation*; finally, we let $\chi_x \in \{1, \dots, k_{pr(x)}\}$ be the only index such that $x=pr(x) \chi_x$, which is the relative position of $x$ amongst its siblings. We shall denote by $\llbracket x , y \rrbracket$ the unique geodesic path between $x$ and $y$ and we shall always list the vertices of a tree as $\varnothing = x_0 < x_1 < \dots < x_{\upsilon}$ in the lexicographical order. Fix a tree $T$ with $\upsilon+1$ vertices; its *[Ł]{}ukasiewicz path* $W = (W(j) ; 0 {\leqslant}j {\leqslant}\upsilon + 1)$ and its *height process* $H = (H(j); 0 {\leqslant}j {\leqslant}\upsilon)$ are defined by $W(0) = 0$ and for every $0 {\leqslant}j {\leqslant}\upsilon$, $$W(j+1) = W(j) + k_{x_j}-1 \qquad\text{and}\qquad H(j) = |x_j|.$$ One easily checks that $W(\upsilon+1) = -1$ and $W(j) {\geqslant}0$ if $j {\leqslant}\upsilon$. A (plane) *forest* is a finite ordered list of plane trees, which we may view as a single tree by linking all the roots to an extra root-vertex; to be consistent, we also perform this operation when the forest originally consists of a single tree. Then we may define the [Ł]{}ukasiewicz path and the height process of a forest as the paths describing the corresponding tree. In this case, the first jump of the [Ł]{}ukasiewicz path is given by the number of trees minus one, say, $\varrho-1$, and then the path terminates by hitting $-1$ for the first time. This first possibly large jump is uncomfortable and we prefer instead to cancel it, so the [Ł]{}ukasiewicz path starts at $0$, makes a first $0$ step, and terminates by hitting $-\varrho$ for the first time. We refer to Figure \[fig:foret\_etiquetee\] for an illustration. Let ${{\mathbf{Z}}}_{{\geqslant}-1} = \{-1, 0, 1, 2, \dots\}$ and for every $k {\geqslant}1$, let us consider the following set of *discrete bridges* $$\label{eq:def_pont} \mathscr{B}_k^{{\geqslant}-1} = \left\{(b_1, \dots, b_k): b_1, b_2-b_1, \dots, b_k-b_{k-1} \in {{\mathbf{Z}}}_{{\geqslant}-1} \text{ and } b_k=0\right\}.$$ Then a *labelling* of a plane tree $T$ is a function $\ell$ from the vertices of $T$ to ${{\mathbf{Z}}}$ such that the root of $T$ has label $\ell(\varnothing) = 0$ and for every vertex $x$ with $k_x {\geqslant}1$ offsprings, the sequence of increments $(\ell(x1)-\ell(x), \dots, \ell(xk_x)-\ell(x))$ belongs to $\mathscr{B}_{k_x}^{{\geqslant}-1}$. We then encode the labels into the *label process* given by $$L(j) = \ell(x_j);$$ the labelled tree is encoded by the pair $(H, L)$. We extend the definition to forests by considering the associated tree, see Figure \[fig:foret\_etiquetee\]. 
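To make this encoding concrete, here is a minimal Python sketch (ours, not from the paper; the function name is hypothetical) which computes the [Ł]{}ukasiewicz path $W$ and the height process $H$ of a single tree from the offspring numbers of its vertices listed in lexicographical order; the label process is then simply $L(j) = \ell(x_j)$ read in the same order.

```python
# A small sketch (illustration only): compute the Lukasiewicz path W and the height
# process H of a plane tree from the offspring numbers k_{x_0}, ..., k_{x_upsilon}
# of its vertices listed in lexicographical order.
def lukasiewicz_and_height(offspring):
    W = [0]
    for k in offspring:
        W.append(W[-1] + k - 1)        # W(j+1) = W(j) + k_{x_j} - 1
    H, stack = [], []                  # stack[i] = unvisited children of the depth-i ancestor
    for k in offspring:
        H.append(len(stack))           # H(j) = |x_j|, the generation of the j-th vertex
        if stack:
            stack[-1] -= 1             # x_j consumes one child slot of its parent
        if k > 0:
            stack.append(k)            # next visited vertices are the k children of x_j
        while stack and stack[-1] == 0:
            stack.pop()                # close ancestors whose offspring are all visited
    return W, H

# Example: a root with two children, the second of which has one child;
# offspring numbers in lexicographical order: 2, 0, 1, 0.
W, H = lukasiewicz_and_height([2, 0, 1, 0])
print(W)   # [0, 1, 0, 0, -1] -- terminates at -1, as it should
print(H)   # [0, 1, 1, 2]
```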
[Figure \[fig:foret\_etiquetee\]: a labelled plane forest together with the three paths which encode it, namely its [Ł]{}ukasiewicz path, its height process and its label process.]

Without further notice, throughout this work, every [Ł]{}ukasiewicz path shall be viewed as a step function, jumping at integer times, whereas the height and label processes shall be viewed as continuous functions after interpolating linearly between integer times. The key bijection {#sec:bijection} ----------------- Recall the notation from the introduction; for every integer $n$, let us consider $\varrho_n \in {{\mathbf{N}}}$ and $d_n = (d_n(k))_{k {\geqslant}1} \in {{\mathbf{Z}}}_+^{{{\mathbf{N}}}}$ and let ${\mathbf{T}}_{d_n}^{\varrho_n}$ be the set of all those plane forests with $\varrho_n$ trees and $n$ internal vertices, amongst which $d_n(k)$ have $k$ offsprings for every $k {\geqslant}1$. 
Such a forest has $$\begin{array}{rcll} \epsilon_n &\coloneqq& \sum_{k {\geqslant}1} k d_n(k)& \quad\text{edges}, \\ d_n(0) &\coloneqq& \varrho_n + \sum_{k {\geqslant}1} (k-1) d_n(k)& \quad\text{leaves}, \\ \upsilon_n &\coloneqq& \sum_{k {\geqslant}0} d_n(k)& \quad\text{vertices}. \end{array}$$ For a single tree, we simply write ${\mathbf{T}}_{d_n}$ for ${\mathbf{T}}_{d_n}^{1}$. We let ${{\mathbf{LT}}_{d_n}^{\varrho_n}}$ denote the set of forests in ${{\mathbf{T}}_{d_n}^{\varrho_n}}$ equipped with a labelling as in the preceding subsection. Let ${{\mathbf{PM}}_{d_n}^{\varrho_n}}$ be the set of *pointed maps* $(m_n, \star)$ where $m_n$ is a map in ${{\mathbf{M}}_{d_n}^{\varrho_n}}$ and $\star$ is a distinguished vertex of $m_n$. If $(m_n, \star)$ is a pointed map, then the tip of the root-edge is either farther (by one) from $\star$ than its origin, in which case the map is said to be *positive* by Marckert & Miermont [@Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps], or it is closer (again by one), in which case the map is said to be *negative*. Let us immediately note that every map in ${{\mathbf{M}}_{d_n}^{\varrho_n}}$ has $\varrho_n + \sum_{k {\geqslant}1} k d_n(k) = \upsilon_n$ edges in total and so $\upsilon_n - n + 1 = d_n(0) + 1$ vertices by Euler’s formula. Therefore a uniformly random map in ${{\mathbf{M}}_{d_n}^{\varrho_n}}$ in which we further distinguish a vertex $\star$ uniformly at random has the uniform distribution in ${{\mathbf{PM}}_{d_n}^{\varrho_n}}$. Moreover, half of the $2 \varrho_n$ edges on the boundary are ‘positively oriented’ and half of them are ‘negatively oriented’, so if a pointed map is positive, we may re-root it to get a negative one. Therefore it is equivalent to work with random negative maps in ${{\mathbf{PM}}_{d_n}^{\varrho_n}}$ instead of maps in ${{\mathbf{M}}_{d_n}^{\varrho_n}}$. Combining the bijections due to Bouttier, Di Francesco, & Guitter [@Bouttier-Di_Francesco-Guitter:Planar_maps_as_labeled_mobiles] and to Janson & Stefánsson [@Janson-Stefansson:Scaling_limits_of_random_planar_maps_with_a_unique_large_face], the set ${{\mathbf{LT}}_{d_n}^{\varrho_n}}$ is in one-to-one correspondence with the set of negative maps in ${{\mathbf{PM}}_{d_n}^{\varrho_n}}$. Let us refer to these papers as well as to [@Marzouk:On_scaling_limits_of_planar_maps_with_stable_face_degrees] for a direct construction of the bijection (see also Figure \[fig:bijection\_arbre\_carte\]), and let us only recall here the properties we shall need. 1. The leaves of the forest are in one-to-one correspondence with the vertices different from the distinguished one in the map, and the label of a leaf minus the infimum over all labels, plus one, equals the graph distance between the corresponding vertex of the map and the distinguished vertex. 2. The internal vertices of the forest are in one-to-one correspondence with the inner faces of the map, and the out-degree of the vertex is half the degree of the face. 3. The root-face of the map corresponds to the extra root-vertex of the forest, and the out-degree of the latter is half the boundary-length of the map. The third property only holds for negative maps, which is the reason why we restricted ourselves to this case. Convergence of random forests {#sec:convergence_forets} ============================= In this section, we discuss limits of the forest ${T_{d_n}^{\varrho_n}}$ sampled uniformly at random in ${{\mathbf{T}}_{d_n}^{\varrho_n}}$. 
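Before turning to limits, the following sketch (ours, purely illustrative and not part of the paper) shows one concrete way to sample the [Ł]{}ukasiewicz path of such a uniform random forest, namely as a cyclic shift, at a well-chosen time, of a uniform bridge with the prescribed jumps; this is exactly the construction recalled in the next subsection, to which we refer for the precise statement.

```python
import random

# Illustration only (function name is ours): sample the Lukasiewicz path of a
# uniform forest in T_{d}^{rho} as the cyclic shift of a uniform bridge with the
# prescribed jumps; d is a dict {k: d(k)} for k >= 1 and rho the number of trees.
def sample_lukasiewicz_path(d, rho):
    # jumps: k - 1 for each of the d(k) internal vertices with k offsprings,
    # and -1 for each of the d(0) = rho + sum_k (k-1) d(k) leaves
    jumps = [k - 1 for k, m in d.items() for _ in range(m)]
    jumps += [-1] * (rho + sum((k - 1) * m for k, m in d.items()))
    random.shuffle(jumps)                      # uniform bridge from 0 to -rho
    B = [0]
    for x in jumps:
        B.append(B[-1] + x)
    upsilon = len(jumps)                       # number of vertices of the forest
    p = random.randrange(rho)                  # uniform in {0, ..., rho - 1}
    m = min(B[1:])                             # inf over 1 <= j <= upsilon
    i = next(j for j in range(1, upsilon + 1) if B[j] == m + p)
    def B_periodic(j):                         # extend B by B(j + upsilon) = B(j) - rho
        q, r = divmod(j, upsilon)
        return B[r] - q * rho
    return [B_periodic(i + k) - B[i] for k in range(upsilon + 1)]

# Degree sequence of Figure [fig:exemple_carte], with rho = 4 trees.
W = sample_lukasiewicz_path({1: 1, 2: 2, 3: 1, 4: 1}, rho=4)
assert W[0] == 0 and W[-1] == -4               # the path terminates at -rho
```

One can check on such simulations that the sampled path stays above $-\varrho_n$ before its final step and first hits $-\varrho_n$ at the very last time, as the [Ł]{}ukasiewicz path of a forest with $\varrho_n$ trees should.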
Let us first consider its [Ł]{}ukasiewicz path ${W_{d_n}^{\varrho_n}}$, which is a rather simple object. Convergence of the [Ł]{}ukasiewicz path {#sec:Luka} --------------------------------------- For a vertex $x$ of a tree $T$, let us denote by ${\mathsf{L}}(x)$ and ${\mathsf{R}}(x)$ respectively the number of those vertices $y$ whose parent is a strict ancestor of $x$ and which lie strictly to the left, respectively to the right, of the ancestral line $\llbracket \varnothing, x\llbracket$; then we put ${\mathsf{LR}}(x) = {\mathsf{L}}(x) + {\mathsf{R}}(x)$. One easily checks that, in a tree, if $x$ is the $i$-th vertex in lexicographical order, then we have ${\mathsf{R}}(x) = W(i)$. In the case of a forest, we define ${\mathsf{L}}(x)$ and ${\mathsf{R}}(x)$ (and so ${\mathsf{LR}}(x)$) as the same quantities *in the tree* containing $x$, i.e. we really do want to view the forest as a forest and not as a tree; then we have more generally $$\label{eq:def_X_discret} {\mathsf{R}}(x) = W(i) - \min_{0 {\leqslant}j {\leqslant}i} W(j).$$ It is well-known that the [Ł]{}ukasiewicz path ${W_{d_n}^{\varrho_n}}$ of our random forest ${T_{d_n}^{\varrho_n}}$ can be constructed as a cyclic shift of a bridge, also called the discrete Vervaat transform, see e.g. Pitman [@Pitman:Combinatorial_stochastic_processes Chapter 6]. More precisely, let ${B_{d_n}^{\varrho_n}}= ({B_{d_n}^{\varrho_n}}(i))_{0 {\leqslant}i {\leqslant}\upsilon_n}$ be a discrete path sampled uniformly at random amongst all those started from ${B_{d_n}^{\varrho_n}}(0) = 0$ and which make exactly $d_n(k)$ jumps with value $k-1$ for every $k {\geqslant}0$, so ${B_{d_n}^{\varrho_n}}$ is a bridge from $0$ to ${B_{d_n}^{\varrho_n}}(\upsilon_n) = \sum_{k {\geqslant}0} (k - 1) d_n(k) = - \varrho_n$. Independently, sample $p_n$ uniformly at random in $\{0, \dots, \varrho_n-1\}$ and set $$i_n = \inf\left\{i \in\{1, \dots, \upsilon_n\} : {B_{d_n}^{\varrho_n}}(i) = p_n + \inf_{1 {\leqslant}j {\leqslant}\upsilon_n} {B_{d_n}^{\varrho_n}}(j)\right\}.$$ Then ${W_{d_n}^{\varrho_n}}$ has the law of $({B_{d_n}^{\varrho_n}}(i_n+k) - {B_{d_n}^{\varrho_n}}(i_n))_{0 {\leqslant}k {\leqslant}\upsilon_n}$, where the sum of indices is understood modulo $\upsilon_n$. Moreover, one can check that $i_n$ has the uniform distribution on $\{1, \dots, \upsilon_n\}$ and is independent of the cyclically shifted path. For $\varrho \in [0,\infty)$, let us denote by $B^\varrho = (B^\varrho_t)_{t \in [0,1]}$ the standard Brownian bridge from $0$ to $-\varrho$. Analogously one can construct $F^\varrho$ the *first-passage Brownian bridge* from $0$ to $-\varrho$ (which reduces to the standard Brownian excursion when $\varrho = 0$) by cyclically shifting $B^\varrho$, see [@Bertoin-Chaumont-Pitman:Path_transformations_of_first_passage_bridges Theorem 7]. The starting point of this work is the following result, which extends [@Broutin-Marckert:Asymptotics_of_trees_with_a_prescribed_degree_sequence_and_applications Lemma 7] and [@Lei:Scaling_limit_of_random_forests_with_prescribed_degree_sequences Theorem 1.6]. \[prop:convergence\_Luka\] Assume that $\lim_{n \to \infty} \sigma_n = \infty$ and that there exists $\varrho \in [0, \infty]$ such that $\lim_{n \to \infty} \sigma_n^{-1} \varrho_n = \varrho$. 1. Suppose that $\varrho \in [0, \infty)$. 
Then from every sequence of integers, one can extract a subsequence along which the sequence of càdlàg processes $(\sigma_n^{-1} {B_{d_n}^{\varrho_n}}(\lfloor \upsilon_n t \rfloor))_{t \in [0,1]}$ converges in distribution in the Skorokhod’s $J_1$ topology. 2. Furthermore this sequence converges in distribution towards $B^\varrho$ if and only if $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$. In this case, the sequence of processes $(\sigma_n^{-1} {W_{d_n}^{\varrho_n}}(\lfloor \upsilon_n t \rfloor))_{t \in [0,1]}$ converges in distribution towards $F^\varrho$. 3. Suppose that $\varrho = \infty$, then both processes $(\varrho_n^{-1} {B_{d_n}^{\varrho_n}}(\lfloor \upsilon_n t \rfloor))_{t \in [0,1]}$ and $(\varrho_n^{-1} {W_{d_n}^{\varrho_n}}(\lfloor \upsilon_n t \rfloor))_{t \in [0,1]}$ converge in probability towards $t \mapsto -t$. Note that, by , by continuity, when $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$ the process $(\sigma_n^{-1} {\mathsf{R}}(x_{\lfloor \upsilon_n t\rfloor}))_{t \in [0,1]}$ converges in distribution towards $(F^\varrho_t - \inf_{s \in [0,t]} F^\varrho_s)_{t \in [0,1]}$. Let us first suppose that $\varrho < \infty$. For every $1 {\leqslant}i {\leqslant}\upsilon_n$, let $b_i = {B_{d_n}^{\varrho_n}}(i) - {B_{d_n}^{\varrho_n}}(i-1)$. Then $\sum_{i = 1}^{\upsilon_n} b_i = \sum_{k {\geqslant}0} (k - 1) d_n(k) = -\varrho_n$ and $$\sum_{i = 1}^{\upsilon_n} b_i^2 = \sum_{k {\geqslant}0} (k - 1)^2 d_n(k) = \sum_{k {\geqslant}0} k (k - 1) d_n(k) - \sum_{k {\geqslant}0} (k - 1) d_n(k) = \sigma_n^2 + \varrho_n.$$ Therefore $\sum_{i = 1}^{\upsilon_n} (b_i + \upsilon_n^{-1} \varrho_n) = 0$ and $\sum_{i = 1}^{\upsilon_n} (b_i + \upsilon_n^{-1} \varrho_n)^2 = \sigma_n^2 + \varrho_n - \upsilon_n^{-1} \varrho_n^2 = \sigma_n^2 (1 + o(1))$. Then the sequence given by $$x_i = \frac{b_i + \upsilon_n^{-1} \varrho_n}{(\sigma_n^2 + \varrho_n - \upsilon_n^{-1} \varrho_n^2)^{1/2}}, \qquad 1 {\leqslant}i {\leqslant}\upsilon_n,$$ is called a ‘normalised urn’ by Aldous [@Aldous:Saint_Flour Chapter 20]. For every $t \in [0,1]$, let us define $$X_n(t) = \sum_{i {\leqslant}\lfloor \upsilon_n t \rfloor} x_i = \frac{{B_{d_n}^{\varrho_n}}(\lfloor \upsilon_n t \rfloor)}{\sigma_n (1 + o(1))} + \frac{\varrho_n \lfloor \upsilon_n t \rfloor}{\upsilon_n \sigma_n (1 + o(1))}.$$ Then by [@Aldous:Saint_Flour Proposition 20.3], the sequence $(X_n)_{n {\geqslant}1}$ is always tight in the Skorokhod’s $J_1$ topology. Let us extract a subsequence along which it converges to some process, say, $X$. The possible limits have been entirely characterised: according to Kallenberg [@Kallenberg:Foundations_of_modern_probability Theorem 14.21], $X$ takes the form $(B^0_t + J_t)_{t \in [0,1]}$ where $J_t$ is a ‘jump term’, which has only non-negative jumps in our case. Therefore, along this subsequence, we have the convergence $$\left(\sigma_n^{-1} {B_{d_n}^{\varrho_n}}(\lfloor \upsilon_n t \rfloor)\right)_{t \in [0,1]} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}X^\varrho = (-\varrho t + B^0_t + J_t)_{t \in [0,1]}.$$ Furthermore, the jump part $J_t$ is null if and only if the discrete rescaled process has no jump at the limit, i.e. $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$, in which case the limit is $(-\varrho t + B^0_t)_{t \in [0,1]}$ which has the same law as $B^\varrho$. 
Since ${W_{d_n}^{\varrho_n}}$ and $F^\varrho$ are obtained by cyclically shifting ${B_{d_n}^{\varrho_n}}$ and $B^\varrho$ respectively, this implies further the convergence in distribution of $(\sigma_n^{-1} {W_{d_n}^{\varrho_n}}(\lfloor \upsilon_n t \rfloor))_{t \in [0,1]}$ towards $F^\varrho$ from the continuity of this operation, see Lei [@Lei:Scaling_limit_of_random_forests_with_prescribed_degree_sequences Section 4]. Let us finally suppose that $\varrho_n \gg \sigma_n$. Then similarly, for every $t \in [0,1]$, let us set $$Y_n(t) = \frac{\sigma_n}{\varrho_n} X_n(t) = \frac{{B_{d_n}^{\varrho_n}}(\lfloor \upsilon_n t \rfloor)}{\varrho_n (1 + o(1))} + \frac{\varrho_n \lfloor \upsilon_n t \rfloor}{\upsilon_n \varrho_n (1 + o(1))}.$$ Since the sequence $(X_n)_{n {\geqslant}1}$ is tight, the sequence $(Y_n)_{n {\geqslant}1}$ converges in probability to the null process, whence $\varrho_n^{-1} {B_{d_n}^{\varrho_n}}(\lfloor \upsilon_n \cdot \rfloor)$ converges in probability towards $t \mapsto -t$. The convergence of $\varrho_n^{-1} {W_{d_n}^{\varrho_n}}(\lfloor \upsilon_n \cdot \rfloor)$ follows again by cyclic shift. Convergence of reduced forests {#sec:marginales_hauteur} ------------------------------ Let us fix a continuous function $g : [0,1] \to [0, \infty)$ such that $g(0) = g(1) = 0$; then for every $0 {\leqslant}s {\leqslant}t {\leqslant}1$, set $${\mathscr{d}}_g(s,t) = {\mathscr{d}}_g(t,s) = g(s) + g(t) - 2 \min_{r \in [s,t]} g(r).$$ One easily checks that ${\mathscr{d}}_g$ is a pseudo-metric on $[0,1]$; consider then the quotient space ${\mathscr{T}}_g = [0,1] / \{{\mathscr{d}}_g = 0\}$ and let $\pi_g$ be the canonical projection $[0,1] \to {\mathscr{T}}_g$; then ${\mathscr{d}}_g$ induces a metric on ${\mathscr{T}}_g$ that we still denote by ${\mathscr{d}}_g$, and the Lebesgue measure on $[0,1]$ induces a measure ${\mathscr{p}}_g$ on ${\mathscr{T}}_g$. The space ${\mathscr{T}}_g = ({\mathscr{T}}_g, {\mathscr{d}}_g, {\mathscr{p}}_g)$ is a so-called compact measured real-tree, naturally rooted at $\pi_g(0)$. When $g = 2 X^0$, the space ${\mathscr{T}}_{2 X^0}$ is the celebrated Brownian Continuum Random Tree of Aldous [@Aldous:The_continuum_random_tree_3]. More generally, for $\varrho \in (0,\infty)$, let $X^\varrho_t = F^\varrho_t - \inf_{s {\leqslant}t} F^\varrho_s$ for every $t \in [0,1]$, where we recall that $F^\varrho$ denotes the first-passage Brownian bridge from $0$ to $-\varrho$. Then the space ${\mathscr{T}}_{2X^\varrho}$ describes a forest of Brownian trees, glued at their root; note that the point of view differs from [@Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks] in which the forest is coded by the process $F^\varrho$, so it is viewed as a collection of Brownian trees glued along an interval. The next proposition shows that, under our assumption of no macroscopic degree, the forest ${T_{d_n}^{\varrho_n}}$ spanned by i.i.d. uniform random vertices, i.e. the smallest connected subset of ${T_{d_n}^{\varrho_n}}$ (viewed as a tree) containing these vertices, converges towards the analogue for ${\mathscr{T}}_{2X^\varrho}$ and i.i.d. points sampled from its mass measure ${\mathscr{p}}_{2X^\varrho}$. This notion of convergence, although weaker than the Gromov–Hausdorff–Prokhorov convergence, is equivalent to the Gromov–Prokhorov convergence. \[prop:marginales\_hauteur\] Assume that $\lim_{n \to \infty} \sigma_n^{-1} \varrho_n = \varrho$ for some $\varrho \in [0,\infty)$ and that $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$. Let $U_1, \dots, U_q$ be i.i.d. 
uniform random variables in $[0,1]$ independent of the rest and denote by $0 = U_{(0)} < U_{(1)} < \dots < U_{(q)} < U_{(q+1)} = 1$ the order statistics of $U_1, \dots, U_q$. Then the convergence in distribution $$\frac{\sigma_n}{2 \epsilon_n} \left({H_{d_n}^{\varrho_n}}(\upsilon_n U_{(i)}), \inf_{U_{(i)} {\leqslant}t {\leqslant}U_{(i+1)}} {H_{d_n}^{\varrho_n}}(\upsilon_n t)\right)_{1 {\leqslant}i {\leqslant}q} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}\left(X^\varrho_{U_{(i)}}, \inf_{U_{(i)} {\leqslant}t {\leqslant}U_{(i+1)}} X^\varrho_t\right)_{1 {\leqslant}i {\leqslant}q}$$ holds jointly with the convergence in distribution of $\sigma_n^{-1} {W_{d_n}^{\varrho_n}}(\upsilon_n \cdot)$ towards $F^\varrho$ in Proposition \[prop:convergence\_Luka\], where $X^\varrho$ is independent of $(U_1, \dots, U_q)$. Note that $\sigma_n^2 {\leqslant}\Delta_n \epsilon_n$ so the assumption that $\Delta_n = o(\sigma_n)$, which implies that $\sigma_n$ converges to $\infty$, also implies that the scaling factor $\epsilon_n^{-1} \sigma_n$ converges to $0$. Also, $\upsilon_n = \epsilon_n + \varrho_n = \epsilon_n (1+o(1))$. The proof is inspired by the work of Broutin & Marckert [@Broutin-Marckert:Asymptotics_of_trees_with_a_prescribed_degree_sequence_and_applications] which itself finds its root in the work of Marckert & Mokkadem [@Marckert-Mokkadem:The_depth_first_processes_of_Galton_Watson_trees_converge_to_the_same_Brownian_excursion]; the basic idea is to compare the process ${H_{d_n}^{\varrho_n}}$ which describes the height of the vertices with ${W_{d_n}^{\varrho_n}}$ which counts the number of vertices branching-off to the right of their ancestral lines. As opposed to these works, these two processes have different scaling here so one has to be more careful. Let us first show the convergence $$\frac{\sigma_n}{2 \epsilon_n} \left({H_{d_n}^{\varrho_n}}(\upsilon_n U_{(i)})\right)_{0 {\leqslant}i {\leqslant}q} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}\left(X^\varrho_{U_{(i)}}\right)_{0 {\leqslant}i {\leqslant}q}.$$ It will be sufficient to consider only one uniformly random time $U$; let $x_n$ be the $\lceil \upsilon_n U\rceil$-th vertex of ${T_{d_n}^{\varrho_n}}$ in lexicographical order, so it has the uniform distribution in ${T_{d_n}^{\varrho_n}}$. Then ${H_{d_n}^{\varrho_n}}(\lceil \upsilon_n U\rceil) = |x_n|$ denotes its generation in this forest, whereas ${\mathsf{R}}(x_n)$, as defined in \[eq:def\_X\_discret\], is the number of individuals branching-off strictly to the right of its ancestral line in the corresponding tree of the forest. According to Proposition \[prop:convergence\_Luka\] and \[eq:def\_X\_discret\], $\sigma_n^{-1} {\mathsf{R}}(x_n)$ converges in distribution towards $X^\varrho_U$; we claim that $$\label{eq:ecart_longueur_LR} \left|\frac{1}{\sigma_n} {\mathsf{R}}(x_n) - \frac{\sigma_n}{2 \epsilon_n} |x_n| \right| {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}0.$$ Recall the notation ${\mathsf{LR}}(x_n)$ for the total number of individuals branching off the ancestral line of $x_n$; by symmetry, it has the law of the sum of two copies of ${\mathsf{R}}(x_n)$. Fix $\varepsilon, \eta > 0$. Let $K > 0$ be such that $${{{\mathbf{P}}}\left({\mathsf{LR}}(x_n) {\leqslant}K \sigma_n \text{ and } |x_n| {\leqslant}K \epsilon_n / \sigma_n\right)} {\geqslant}1-\eta$$ for every $n$ large enough. This is ensured by [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Proposition 2 and 3]. 
Then we may write $${{{\mathbf{P}}}\left(\left|\frac{1}{\sigma_n}{\mathsf{R}}(x_n) - \frac{\sigma_n}{2 \epsilon_n}|x_n|\right| > \varepsilon\right)} {\leqslant}\eta + {{{\mathbf{P}}}\left(\left|{\mathsf{R}}(x_n) - \frac{\sigma_n^2}{2 \epsilon_n}|x_n|\right| > \varepsilon \sigma_n \text{ and } {\mathsf{LR}}(x_n) {\leqslant}K \sigma_n \text{ and } |x_n| {\leqslant}K \epsilon_n / \sigma_n\right)}.$$ We next appeal to the *spinal decomposition* obtained in [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Lemma 3]: for every $1 {\leqslant}i {\leqslant}\epsilon_n$, we let $\xi_{d_n}(i)$ denote the label of the $i$-th ball in successive picks without replacement in an urn with initial configuration of $k d_n(k)$ balls labelled $k$ for every $k {\geqslant}1$. Further, conditional on $(\xi_{d_n}(i))_{1 {\leqslant}i {\leqslant}\epsilon_n}$, let us sample independent random variables $(\chi_{d_n}(i))_{1 {\leqslant}i {\leqslant}\epsilon_n}$ such that each $\chi_{d_n}(i)$ is uniformly distributed in $\{1, \dots, \xi_{d_n}(i)\}$. Consider the event that $|x_n| = h$ and that for all $0 {\leqslant}i < h$, the ancestor of $x_n$ at generation $i$ has $k_i$ offsprings and its $j_i$-th is the ancestor of $x_n$ at generation $i+1$; then as shown in [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Lemma 3] its probability is bounded by $$\frac{\varrho_n + \sum_{1 {\leqslant}i {\leqslant}h} (k_i - 1)}{\upsilon_n} \cdot {{{\mathbf{P}}}\left(\bigcap_{i {\leqslant}h} \left\{(\xi_{d_n}(i), \chi_{d_n}(i)) = (k_i, j_i)\right\}\right)}.$$ By decomposing according to the height of $x_n$ and taking the worst case, we then obtain the bound $${{{\mathbf{P}}}\left(\left|\frac{1}{\sigma_n}{\mathsf{R}}(x_n) - \frac{\sigma_n}{2 \epsilon_n}|x_n|\right| > \varepsilon\right)} {\leqslant}\eta + K \frac{\epsilon_n}{\upsilon_n} \frac{\varrho_n + K \sigma_n}{\sigma_n} \sup_{h {\leqslant}K \epsilon_n/\sigma_n} {{{\mathbf{P}}}\left(\left|\sum_{i {\leqslant}h} \left(\xi_{n, d_n}(i) - \chi_{n, d_n}(i)\right) - \frac{\sigma_n^2}{2 \epsilon_n} h\right| > \varepsilon \sigma_n\right)}.$$ From our assumption $\frac{\epsilon_n}{\upsilon_n} \frac{\varrho_n + K \sigma_n}{\sigma_n}$ converges to $\varrho + K$. From the triangle inequality, the last probability is bounded above by $${{{\mathbf{P}}}\left(\left|\sum_{i {\leqslant}h} \left(\left(\xi_{n, d_n}(i) - \chi_{n, d_n}(i)\right) - \frac{\xi_{n, d_n}(i) - 1}{2}\right)\right| > \frac{\varepsilon \sigma_n}{2}\right)} + {{{\mathbf{P}}}\left(\left|\sum_{i {\leqslant}h} \frac{\xi_{n, d_n}(i) - 1}{2} - \frac{\sigma_n^2}{2 \epsilon_n} h\right| > \frac{\varepsilon \sigma_n}{2}\right)}.$$ Observe that the $\xi_{d_n}(i)$’s are identically distributed, with $$\label{eq:moyenne_biais_par_la_taille} {{{\mathbf{E}}}\left[\xi_{d_n}(i) - 1\right]} = \sum_{k {\geqslant}1} (k-1) \frac{k d_n(k)}{\epsilon_n} = \frac{\sigma_n^2}{\epsilon_n} \qquad\text{and so}\qquad {\mathrm{Var}}\left(\xi_{d_n}(i)-1\right) {\leqslant}\Delta_n \frac{\sigma_n^2}{\epsilon_n}.$$ Furthermore, these random variables are obtained by successive picks without replacement in an urn, and therefore are negatively correlated, in particular, the variance of their sum is bounded by the sum of their variances, see e.g. [@Aldous:Saint_Flour Proposition 20.6]. 
The Markov inequality then yields for every $h {\leqslant}K \epsilon_n / \sigma_n$ $${{{\mathbf{P}}}\left(\left|\sum_{i {\leqslant}h} \frac{\xi_{n, d_n}(i) - 1}{2} - \frac{\sigma_n^2}{2 \epsilon_n} h\right| > \frac{\varepsilon \sigma_n}{2}\right)} {\leqslant}\frac{h}{\varepsilon^2 \sigma_n^2} {\mathrm{Var}}\left(\xi_{n, d_n}(1)-1\right) {\leqslant}\frac{K \Delta_n}{\varepsilon^2 \sigma_n},$$ which converges to $0$. Moreover, conditional on the $\xi_{n, d_n}(i)$’s, the random variables $\xi_{n, d_n}(i) - \chi_{n, d_n}(i)$ are independent and uniformly distributed on $\{0, \dots, \xi_{n, d_n}(i)-1\}$, with mean $(\xi_{n, d_n}(i) - 1)/2$ and variance $(\xi_{n, d_n}(i)^2 - 1)/12$. Similarly, the Markov inequality applied conditional on the $\xi_{n, d_n}(i)$’s yields for every $h {\leqslant}K \epsilon_n / \sigma_n$: $$\begin{aligned} {{{\mathbf{P}}}\left(\left|\sum_{i {\leqslant}h} \left(\left(\xi_{n, d_n}(i) - \chi_{n, d_n}(i)\right) - \frac{\xi_{n, d_n}(i) - 1}{2}\right)\right| > \frac{\varepsilon \sigma_n}{2}\right)} &{\leqslant}\frac{4}{\varepsilon^2 \sigma_n^2} \cdot {{{\mathbf{E}}}\left[\sum_{i {\leqslant}h} \frac{\xi_{n, d_n}(i)^2 - 1}{12}\right]} \\ &{\leqslant}\frac{\Delta_n+1}{3 \varepsilon^2 \sigma_n^2} \cdot h \cdot {{{\mathbf{E}}}\left[\xi_{n, d_n}(1) - 1\right]} \\ &{\leqslant}K \frac{\Delta_n+1}{3 \varepsilon^2 \sigma_n},\end{aligned}$$ which also converges to $0$. This completes the proof of , which, combined with Proposition \[prop:convergence\_Luka\] and , yields the convergence $$\frac{\sigma_n}{2 \epsilon_n} \left({H_{d_n}^{\varrho_n}}(\upsilon_n U_{(i)})\right)_{0 {\leqslant}i {\leqslant}q} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}\left(X^\varrho_{U_{(i)}}\right)_{0 {\leqslant}i {\leqslant}q}$$ for any $q {\geqslant}1$ fixed. In order to obtain the full statement of the proposition, we need to prove the following: Assume that the forest ${T_{d_n}^{\varrho_n}}$ reduced to the ancestors of $q$ i.d.d. vertices has $q$ leaves (this occurs with high probability since the height of such a vertex is at most order $\epsilon_n / \sigma_n = o(\epsilon_n)$ as we have seen) and a random number, say, $b \in \{1, \dots, q-1\}$ of branch-points; then remove these $b$ branch-points from the reduced forest to obtain a collection of $q+b$ single branches. Then we claim that uniformly for $1 {\leqslant}i {\leqslant}q+b$, the length of the $i$-th branch multiplied by $\frac{\sigma_n}{2 \epsilon_n}$ is close to $\sigma_n^{-1}$ times the number of vertices branching-off strictly to the right of this path in the original forest. Since this number is encoded by the [Ł]{}ukasiewicz path, the proposition then follows from Proposition \[prop:convergence\_Luka\]. Such a comparison follows similarly as in the case $q=1$ above from the spinal decomposition of [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Lemma 3]: now we have to consider not only the length $h$ of a single branch, but those $h_1, \dots, h_{q+b}$ of all the branches, which is compensated by the factor $(\sigma_n / \upsilon_n)^{q+b}$ in this reference, the other terms before the probability are bounded. We leave the details to the reader. Convergence of labelled forests {#sec:arbres_etiquetes} =============================== We discuss in this section the convergence of the label process ${L_{d_n}^{\varrho_n}}$ of a labelled forest $({T_{d_n}^{\varrho_n}}, \ell)$ sampled uniformly at random in ${{\mathbf{LT}}_{d_n}^{\varrho_n}}$. 
The law of the latter can be constructed as follows: first sample ${T_{d_n}^{\varrho_n}}$ uniformly at random in ${{\mathbf{T}}_{d_n}^{\varrho_n}}$, and then, conditional on it, label its extra root $0$ and independently for every branch-point with, say, $k {\geqslant}1$ offsprings, sample the label increments between these offsprings and the branch-point uniformly at random in $\mathscr{B}_k^{{\geqslant}-1}$ as defined in . Such a random bridge has the same law as a random walk $S$ with step distribution ${{\mathbf{P}}}(S_1 = i) = 2^{-(i+2)}$ for all $i {\geqslant}-1$, conditioned to satisfy $S_k = 0$. In [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Theorem 2], we proved the tightness of this process in great generality: \[thm:tension\_etiquettes\] Fix any sequence of boundary-lengths $(\varrho_n)_{n {\geqslant}1}$ and any degree sequence $(d_n)_{n {\geqslant}1}$ such that $\Delta_n {\geqslant}2$ for every $n {\geqslant}1$ and $\limsup_{n \to \infty} \epsilon_n^{-1} d_n(1) < 1$. Then the sequence of label processes $$\left((\sigma_n + \varrho_n)^{-1/2} {L_{d_n}^{\varrho_n}}(\upsilon_n t)\right)_{t \in [0,1]}$$ is tight in the space $\mathscr{C}([0,1], {{\mathbf{R}}})$. As explained in [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence] the assumption $\limsup_{n \to \infty} \epsilon_n^{-1} d_n(1) < 1$ ensures that the time-rescaling is correct; faces of degree $2$ anyway do not play any role in the geometry of the graph induced by a map, so we could even assume that $d_n(1) = 0$. We shall need to deal with the root-vertices of ${T_{d_n}^{\varrho_n}}$ separately, let us denote by $({b_{d_n}^{\varrho_n}}(k))_{1 {\leqslant}k {\leqslant}\varrho_n}$ the labels of the roots of ${T_{d_n}^{\varrho_n}}$, and set ${b_{d_n}^{\varrho_n}}(0) = {b_{d_n}^{\varrho_n}}(\varrho_n) = 0$. Recall that ${W_{d_n}^{\varrho_n}}$ is the [Ł]{}ukasiewicz path of ${T_{d_n}^{\varrho_n}}$, for every $0 {\leqslant}k {\leqslant}\upsilon_n$, let us set ${{\underline{W}\vphantom{W}}_{d_n}^{\varrho_n}}(k) = \min_{0 {\leqslant}i {\leqslant}k} {W_{d_n}^{\varrho_n}}(i)$ and $$\label{eq:decomposition_labels} {L_{d_n}^{\varrho_n}}(k) = {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(k) + {b_{d_n}^{\varrho_n}}(1-{{\underline{W}\vphantom{W}}_{d_n}^{\varrho_n}}(k)).$$ Then ${b_{d_n}^{\varrho_n}}(1-{{\underline{W}\vphantom{W}}_{d_n}^{\varrho_n}}(k))$ gives the value of the label of the root-vertex of the tree containing the $k$-th vertex, so ${{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(k)$ gives the value of the label of the $k$-th vertex minus the value of the label of the root-vertex of its tree; in other words, ${{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}$ is the concatenation of the label process of each tree taken individually, so where all labels have been shifted so that each root has label $0$. The point is that, conditionally on ${T_{d_n}^{\varrho_n}}$, the two processes ${{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(\cdot)$ and ${b_{d_n}^{\varrho_n}}(1-{{\underline{W}\vphantom{W}}_{d_n}^{\varrho_n}}(\cdot))$ are independent. Brownian labelled trees {#sec:serpent_brownien} ----------------------- The Brownian map and the Brownian disks are described similarly by ‘continuum labelled trees’ which we next recall, following Bettinelli & Miermont [@Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks]. 
Fix $\varrho \in [0, \infty)$ and recall that $F^\varrho$ denotes the standard Brownian first passage bridge from $0$ to $-\varrho$; for every $t \in [0,1]$, we set ${\underline{F}\vphantom{F}}_t = \inf_{s {\leqslant}t} F^\varrho_s$ and $X^\varrho_t = F^\varrho_t - {\underline{F}\vphantom{F}}_t$; this process encodes a Brownian forest ${\mathscr{T}}_{X^\varrho}$ as explained in Section \[sec:convergence\_forets\]. For every $y \in [0,\varrho]$, let us set $\tau_y = \inf\{t \in [0,1] : F^\varrho_t = -y\}$. We construct next another process $Z^\varrho = (Z^\varrho_t)_{t \in [0,1]}$ on the same probability space as follows. First, conditional on $X^\varrho$, let ${\widetilde{Z}\vphantom{Z}}^\varrho$ be a centred Gaussian process with covariance $${{{\mathbf{E}}}\left[{\widetilde{Z}\vphantom{Z}}^\varrho_s {\widetilde{Z}\vphantom{Z}}^\varrho_t \;\middle|\; X^\varrho\right]} = \min_{r \in [s,t]} X^\varrho_r \qquad\text{for every}\qquad 0 {\leqslant}s {\leqslant}t {\leqslant}1.$$ It is known, see e.g. Bettinelli [@Bettinelli-Scaling_limits_for_random_quadrangulations_of_positive_genus], that ${\widetilde{Z}\vphantom{Z}}^\varrho$ admits a continuous version and, without further notice, we shall work throughout this paper with this version. In the case $\varrho = 0$, we simply set $Z^0 = {\widetilde{Z}\vphantom{Z}}^0$. If $\varrho > 0$, independently of ${\widetilde{Z}\vphantom{Z}}^\varrho$, let ${\mathbf{b}}^\varrho$ be a standard Brownian bridge from $0$ to $0$ of duration $\varrho$, which has the law of $\varrho^{1/2} {\mathbf{b}}(\varrho \cdot)$ where ${\mathbf{b}}$ is a standard Brownian bridge on the time interval $[0,1]$, and set $$Z^\varrho_t = {\widetilde{Z}\vphantom{Z}}^\varrho_t + \sqrt{3} \cdot {\mathbf{b}}^\varrho_{- {\underline{F}\vphantom{F}}^\varrho_t} \qquad\text{for every}\qquad 0 {\leqslant}t {\leqslant}1.$$ This construction is the continuum analog of the decomposition of the process ${L_{d_n}^{\varrho_n}}$ in . Observe that, almost surely, $Z^\varrho_s = Z^\varrho_t$ whenever ${\mathscr{d}}_{X^\varrho}(s, t) = 0$ so $Z^\varrho$ can be seen as a process indexed by ${\mathscr{T}}_{X^\varrho}$ by setting $Z^\varrho_x = Z^\varrho_t$ if $x = \pi_{X^\varrho}(t)$. We interpret $Z^\varrho_{x}$ as the label of an element $x \in {\mathscr{T}}_{X^\varrho}$; the pair $({\mathscr{T}}_{X^\varrho}, (Z^\varrho_x; x \in {\mathscr{T}}_{X^\varrho}))$ is a continuum analog of labelled plane forest. For $\varrho = 0$, the process $Z^0$ is interpreted as a Brownian motion on the Brownian tree ${\mathscr{T}}_{X^0}$ started from $0$ on the root. For $\varrho > 0$, the space ${\mathscr{T}}_{X^\varrho}$ is interpreted as a collection indexed by $[0,\varrho]$ of Brownian trees glued at their root; for every $y \in [0,\varrho]$, it holds that $- {\underline{F}\vphantom{F}}^\varrho_{\tau_y} = y$ so each point $\pi_{X^\varrho}(\tau_y)$ ‘at the first generation’ receives label $\sqrt{3} \cdot {\mathbf{b}}_{y}$ and then the labels on each of the trees $\pi_{X^\varrho}((\tau_{y-}, \tau_y])$ evolve like independent Brownian motions. The Brownian map and the Brownian disks are constructed from these processes, see Section \[sec:carte\_brownienne\] below. The main result of this section is the following. \[thm:convergence\_etiquettes\] Suppose that $\limsup_{n \to \infty} \epsilon_n^{-1} d_n(1) < 1$ and that $\lim_{n \to \infty} \sigma_n^{-1} \varrho_n = \varrho$ for some $\varrho \in [0,\infty]$. 1. 
\[thm:convergence\_etiquettes\_carte\_disque\] If $\varrho < \infty$, then the sequence of processes $(\sigma_n^{-1/2} {L_{d_n}^{\varrho_n}}(\upsilon_n \cdot))_{n {\geqslant}1}$ is tight in $\mathscr{C}([0,1],{{\mathbf{R}}})$. If furthermore $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$, then the convergence in distribution $$\left(\left(\frac{3}{2\sigma_n}\right)^{1/2} {L_{d_n}^{\varrho_n}}(\upsilon_n t) ; t \in [0,1]\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(Z^\varrho_t; t \in [0,1])$$ holds in $\mathscr{C}([0,1],{{\mathbf{R}}})$. 2. \[thm:convergence\_etiquettes\_arbre\] If $\varrho = \infty$, then the convergence in distribution $$\left((2\varrho_n)^{-1/2} \left({b_{d_n}^{\varrho_n}}(\varrho_n t), {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(\upsilon_n t)\right); t \in [0,1]\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(({\mathbf{b}}_t, 0); t \in [0,1])$$ holds in $\mathscr{C}([0,1],{{\mathbf{R}}}^2)$. Let us note that the assumption that $\epsilon_n^{-1} d_n(1)$ is bounded away from $1$, which was already used in Theorem \[thm:tension\_etiquettes\] implies that $\epsilon_n^{-1} \sigma_n^2$ is bounded away from $0$ so in particular $\lim_{n \to \infty} \sigma_n = \infty$. Let us immediately prove the second part of the theorem. Let us point out that the convergence of ${b_{d_n}^{\varrho_n}}$ and the tightness of $(\varrho_n^{-1/2} {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}})_{n {\geqslant}1}$ only require that $\lim_{n\to \infty} \varrho_n = \infty$, and do not necessitate that $\varrho_n \gg \sigma_n$. First, the convergence of the labels of the roots is very easy: By construction, the process ${b_{d_n}^{\varrho_n}}$ has the law of a random walk bridge of length $\varrho_n$, whose step distribution is centred and has variance $2$, so the claim follows from a conditional version of Donsker’s invariance principle, see e.g. [@Bettinelli-Scaling_limits_for_random_quadrangulations_of_positive_genus Lemma 10] for a detailed proof. Next, tightness of the second component follows easily as well. Indeed, by Theorem \[thm:tension\_etiquettes\] the sequence $(\varrho_n^{-1/2} {L_{d_n}^{\varrho_n}})_{n {\geqslant}1}$ is tight so it is equivalent to prove that $(\varrho_n^{-1/2} ({L_{d_n}^{\varrho_n}}- {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}))_{n {\geqslant}1}$ is tight. For every $1 {\leqslant}k {\leqslant}\upsilon_n$, the value ${L_{d_n}^{\varrho_n}}(k) - {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(k)$ is the label of the root of the tree in ${T_{d_n}^{\varrho_n}}$ containing the $k$-th vertex in lexicographical order, so the claim follows from tightness of ${b_{d_n}^{\varrho_n}}$ since ${L_{d_n}^{\varrho_n}}- {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}$ is a ‘time dilation’ of the latter. It only remains to prove that $(\varrho_n^{-1/2} {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(\upsilon_n \cdot))_{n {\geqslant}1}$ converges in probability to the null process. Here we shall view ${T_{d_n}^{\varrho_n}}$ as a forest. Let us sample $i_n$ uniformly at random in $\{1, \dots, \upsilon_n\}$ and independently of the rest, and let us prove that $\varrho_n^{-1/2} {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(i_n)$ converges in probability to $0$; this shows that the processes $(\varrho_n^{-1/2} {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(\upsilon_n \cdot))_{n {\geqslant}1}$ cannot admit a subsequential limit which has a non-zero value (otherwise, by continuity, it would be bounded away from zero on a whole interval). 
The main idea of the proof of Theorem \[thm:tension\_etiquettes\] is to apply Kolmogorov’s criterion, and to control the moments of the labels in terms of the [Ł]{}ukasiewicz path. Recall that ${{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}$ is centred; we consider its second moment. As detailed in the proof of Proposition 7 in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence], which proves tightness of ${L_{d_n}^{\varrho_n}}$ in a particular case, if $x_n$ is the random vertex associated with $i_n$, we have for some constant $C > 0$ $${{{\mathbf{E}}}\left[|{L_{d_n}^{\varrho_n}}(i_n)|^2 \;\middle|\; {T_{d_n}^{\varrho_n}}\right]} {\leqslant}C \cdot {\mathsf{R}}(\mathopen{\llbracket} x_n, \varnothing \mathclose{\llbracket}) = C \cdot |{W_{d_n}^{\varrho_n}}(i_n) + \varrho_n|,$$ where ${\mathsf{R}}(\mathopen{\llbracket} x_n, \varnothing \mathclose{\llbracket})$ denotes the number of vertices visited strictly after $i_n$ whose parent is an ancestor of $x_n$ in ${T_{d_n}^{\varrho_n}}$ *viewed as a tree*. If one now views ${T_{d_n}^{\varrho_n}}$ as a forest and shifts all the labels of the tree which contains the vertex $x_n$ so the root receives label $0$, then one gets $${{{\mathbf{E}}}\left[|{{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(i_n)|^2 \;\middle|\; {T_{d_n}^{\varrho_n}}\right]} {\leqslant}C \cdot {\mathsf{R}}(x_n) = C \cdot |{W_{d_n}^{\varrho_n}}(i_n) - {{\underline{W}\vphantom{W}}_{d_n}^{\varrho_n}}(i_n)|,$$ where now ${\mathsf{R}}(x_n)$ is the quantity in the tree of the forest ${T_{d_n}^{\varrho_n}}$ containing $x_n$ as in Section \[sec:preliminaires\], i.e. the only difference with the former display is that we do not count the root-vertices of ${T_{d_n}^{\varrho_n}}$ anymore. By [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Proposition 2] the random variable $\sigma_n^{-1} {\mathsf{R}}(x_n)$ admits uniform exponential tail bounds, so in particular the mean of ${\mathsf{R}}(x_n)$ is bounded by some constant times $\sigma_n = o(\varrho_n)$, which completes the proof. The key to prove the first part of the theorem is to characterise the subsequential limits of ${{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}$ when $\varrho_n = O(\sigma_n)$ and $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$. \[prop:marginales\_labels\] Suppose that $\lim_{n \to \infty} \sigma_n^{-1} \varrho_n = \varrho$ for some $\varrho \in [0,\infty)$ and $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$. For every $n,q {\geqslant}1$, sample $({T_{d_n}^{\varrho_n}}, \ell)$ uniformly at random in ${{\mathbf{LT}}_{d_n}^{\varrho_n}}$ and, independently, sample $U_1, \dots, U_q$ uniformly at random in $[0,1]$. Sample ${\widetilde{Z}\vphantom{Z}}^\varrho$ independently of $(U_1, \dots, U_q)$; then we have $$\label{eq:cv_label_unif_multi} \left(\frac{3}{2\sigma_n}\right)^{1/2} \left({{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(\upsilon_n U_1), \dots, {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(\upsilon_n U_q)\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}\left({\widetilde{Z}\vphantom{Z}}^\varrho_{U_1}, \dots, {\widetilde{Z}\vphantom{Z}}^\varrho_{U_q}\right)$$ jointly with the convergence of ${W_{d_n}^{\varrho_n}}$ and ${H_{d_n}^{\varrho_n}}$ in Propositions \[prop:convergence\_Luka\] and \[prop:marginales\_hauteur\]. Let us defer the proof to the next subsection and instead let us prove the first part of Theorem \[thm:convergence\_etiquettes\]. 
Combining Proposition \[prop:convergence\_Luka\], Proposition \[prop:marginales\_labels\], and the beginning of the proof of Theorem \[thm:convergence\_etiquettes\]\[thm:convergence\_etiquettes\_arbre\], we obtain that when $\lim_{n \to \infty} \sigma_n^{-1} \varrho_n = \varrho \in [0, \infty)$ and $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$, the convergence $$\left(\frac{1}{\sigma_n} {W_{d_n}^{\varrho_n}}(\upsilon_n t), \left(\frac{3}{2\sigma_n}\right)^{1/2} {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(\upsilon_n t)\right)_{t \in [0,1]} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(F^\varrho_t, {\widetilde{Z}\vphantom{Z}}^\varrho_t)_{t \in [0,1]}$$ holds in $\mathscr{C}([0,1],{{\mathbf{R}}}^2)$. Combined with the convergence of ${b_{d_n}^{\varrho_n}}$, we obtain when $\varrho > 0$, $$\left(\left(\frac{3}{2\sigma_n}\right)^{1/2} {b_{d_n}^{\varrho_n}}(1 - {{\underline{W}\vphantom{W}}_{d_n}^{\varrho_n}}(\upsilon_n t))\right)_{t \in [0,1]} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}\left((3 \varrho)^{1/2} {\mathbf{b}}_{- \varrho^{-1} {\underline{F}\vphantom{F}}^\varrho_t}\right)_{t \in [0,1]} {\enskip\mathop{=}^{(d)}\enskip}\left(3^{1/2} {\mathbf{b}}^\varrho_{- {\underline{F}\vphantom{F}}^\varrho_t}\right)_{t \in [0,1]},$$ in $\mathscr{C}([0,1],{{\mathbf{R}}})$, and the limit is instead the null process if $\varrho = 0$. Recall the decomposition  of ${L_{d_n}^{\varrho_n}}$; we deduce from the conditional independence of ${b_{d_n}^{\varrho_n}}$ and ${{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}$, that $$\left(\left(\frac{3}{2\sigma_n}\right)^{1/2} {L_{d_n}^{\varrho_n}}(\upsilon_n t)\right)_{t \in [0,1]} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\widetilde{Z}\vphantom{Z}}^\varrho_t + 3^{1/2} {\mathbf{b}}^\varrho_{- {\underline{F}\vphantom{F}}^\varrho_t} {{{\mathbf{1}}}_{\{\varrho > 0\}}})_{t \in [0,1]},$$ and the right-hand side is the definition of the process $Z^\varrho$. Marginals of the label process {#sec:marginales_labels} ------------------------------ Let us now prove Proposition \[prop:marginales\_labels\]. We start with the case $q=1$ and comment then on the general case. Let $U$ have the uniform distribution on $[0,1]$ and let $x_n$ be the uniformly random vertex visited at the time $\lceil \upsilon_n U \rceil$ in lexicographical order, with label $\ell(x_n) = {L_{d_n}^{\varrho_n}}(\lceil \upsilon_n U \rceil)$ and height $|x_n| = {H_{d_n}^{\varrho_n}}(\lceil \upsilon_n U \rceil)$. Observe that $$\left(\frac{3}{2\sigma_n}\right)^{1/2} \ell(x_n) = \left(\frac{\sigma_n}{2 \epsilon_n} |x_n|\right)^{1/2} \left(\frac{3 \epsilon_n}{\sigma_n^2 |x_n|}\right)^{1/2} \ell(x_n).$$ Recall from Proposition \[prop:marginales\_hauteur\] that $\frac{\sigma_n}{2 \epsilon_n} |x_n|$ converges in distribution towards $X^\varrho_U$, it is therefore equivalent to show that, jointly with this convergence, we have $$\label{eq:cv_label_unif} \left(\frac{3 \epsilon_n}{\sigma_n^2 |x_n|}\right)^{1/2} \ell(x_n) \quad\mathop{\Longrightarrow}_{n \to \infty}\quad \mathscr{N}(0, 1),$$ where $\mathscr{N}(0, 1)$ denotes the standard Gaussian distribution and ‘$\Rightarrow$’ is a slight abuse of notation to refer to the weak convergence of the law of the random variable. 
For $K > 0$, let us consider the event $$E_n(K) = \left\{\frac{\sigma_n}{\epsilon_n} |x_n| \in [K^{-1}, K] \text{ and } {\mathsf{LR}}(x_n) {\leqslant}K \sigma_n\right\}.$$ By [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Proposition 2 and 3] as well as Proposition \[prop:marginales\_hauteur\] for the lower bound on $|x_n|$, it holds that $\lim_{K \to \infty} \liminf_{n \to \infty} {{{\mathbf{P}}}\left(E_n(K)\right)} = 1$ and we shall implicitly work on this event. In order to reduce the notation, let us set $$s(y) = \left(\frac{3 \epsilon_n}{\sigma_n^2 y}\right)^{1/2}.$$ Fix $z \in {{\mathbf{R}}}$; we thus aim at showing that $$\lim_{K \to \infty} \limsup_{n \to \infty} \left|{{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z s(|x_n|) \ell(x_n)} {{\mathbf{1}}}_{E_n(K)}\right]} - {\mathrm{e}}^{- z^2 / 2}\right| = 0.$$ The idea is to decompose $\ell(x_n)$ as the sum of label increments between two consecutive ancestors. Indeed, for every $1 {\leqslant}p {\leqslant}|x_n|$, let $k_p(x_n)$ denote the number of offspring of the ancestor of $x_n$ at height $p-1$ and let $j_p(x_n)$ be the index of the offspring of this ancestor which is also an ancestor of $x_n$. For every $k {\geqslant}1$, let us denote by $(B(k,1), \dots, B(k,k))$ a uniformly random bridge in $\mathscr{B}_k^{{\geqslant}-1}$. Then, conditionally on ${T_{d_n}^{\varrho_n}}$ and $x_n$, its label $\ell(x_n)$ has the law of $\sum_{1 {\leqslant}p {\leqslant}|x_n|} B_p(k_p(x_n), j_p(x_n))$, where the $B_p$’s are independent; note however that they are not identically distributed, and that the independence is only conditional on ${T_{d_n}^{\varrho_n}}$ and $x_n$ so we cannot directly apply Lyapunov’s version of the Central Limit Theorem, but we may adapt its proof. To further lighten the notation, let us write $B_p(x_n)$ for $B_p(k_p(x_n), j_p(x_n))$ and $\sigma^2_p(x_n)$ for its variance, which is measurable with respect to ${T_{d_n}^{\varrho_n}}, x_n$. Let us bound the preceding display, at $n$ and $K$ fixed, by $$\label{eq:separation_variance_marginales_labels} \begin{split} &\left|{{{\mathbf{E}}}\left[{\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sum_{p = 1}^{|x_n|} \sigma^2_p(x_n)} {{\mathbf{1}}}_{E_n(K)}\right]} - {\mathrm{e}}^{- z^2 / 2}\right| \\ &+ \left|{{{\mathbf{E}}}\left[{\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sum_{p = 1}^{|x_n|} \sigma^2_p(x_n)} {{\mathbf{1}}}_{E_n(K)}\right]} - {{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z s(|x_n|) \sum_{p = 1}^{|x_n|} B_p(x_n)} {{\mathbf{1}}}_{E_n(K)}\right]}\right|. \end{split}$$ We shall prove separately at the end that there exists a constant $C>0$ such that $$\label{eq:borne_a_part_marginales_labels} \begin{split} &\left|{{{\mathbf{E}}}\left[{\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sum_{p = 1}^{|x_n|} \sigma^2_p(x_n)} {{\mathbf{1}}}_{E_n(K)}\right]} - {{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z s(|x_n|) \sum_{p = 1}^{|x_n|} B_p(x_n)} {{\mathbf{1}}}_{E_n(K)}\right]}\right| \\ &{\leqslant}C \frac{|z|^3}{2} {{{\mathbf{E}}}\left[\Delta_n^{1/2} s(|x_n|)^3 \sum_{p = 1}^{|x_n|} (k_p(x_n)-1) \cdot {{\mathbf{1}}}_{E_n(K)}\right]}. \end{split}$$ Let us assume that this holds and let us show that this last expectation converges to $0$. 
As in the proof of Proposition \[prop:marginales\_hauteur\], we rely on the spinal decomposition of [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Lemma 3]: recall the notation from this proof: $(\xi_{n, d_n}(i))_{1 {\leqslant}i {\leqslant}\epsilon_n}$ denotes the successive picks without replacement in an urn with initial configuration of $k d_n(k)$ balls labelled $k$ for every $k {\geqslant}1$, and conditional on $(\xi_{n, d_n}(i))_{1 {\leqslant}i {\leqslant}\epsilon_n}$, the random variables $(\chi_{n, d_n}(i))_{1 {\leqslant}i {\leqslant}\epsilon_n}$ are independent and each $\chi_{n, d_n}(i)$ is uniformly distributed in $\{1, \dots, \xi_{n, d_n}(i)\}$. Let $(\xi, \chi)$ have the law of $(\xi_{n, d_n}(1), \chi_{n, d_n}(1))$, so ${{\mathbf{E}}}[\xi - 1] = \sum_{k {\geqslant}1} (k - 1) k d_n(k) / \epsilon_n = \sigma_n^2 / \epsilon_n$. As we have seen just before the proof of Proposition \[prop:marginales\_hauteur\], we have under our assumption $\upsilon_n \sim \epsilon_n$. Then we have by [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Lemma 3] $$\begin{aligned} &{{{\mathbf{E}}}\left[\Delta_n^{1/2} s(|x_n|)^3 \sum_{p = 1}^{|x_n|} (k_p(x_n) - 1) \cdot {{\mathbf{1}}}_{E_n(K)}\right]} \\ &{\leqslant}\frac{\varrho_n + K \sigma_n}{\upsilon_n} \sum_{h = K^{-1} \epsilon_n / \sigma_n}^{K \epsilon_n / \sigma_n} {{{\mathbf{E}}}\left[\Delta_n^{1/2} \left(\frac{3 \epsilon_n}{\sigma_n^2 h}\right)^{3/2} \sum_{p = 1}^h (\xi_{n, d_n}(p) - 1)\right]} \\ &{\leqslant}K (\varrho + K + o(1)) \sup_{K^{-1} \epsilon_n / \sigma_n {\leqslant}h {\leqslant}K \epsilon_n / \sigma_n} \Delta_n^{1/2} \left(\frac{3 \epsilon_n}{\sigma_n^2 h}\right)^{3/2} h \cdot {{{\mathbf{E}}}\left[\xi-1\right]} \\ &{\leqslant}(3 K)^{3/2} (\varrho + K + o(1)) \left(\frac{\Delta_n}{\sigma_n}\right)^{1/2},\end{aligned}$$ which indeed converges to $0$. It remains to prove that the first term in \[eq:separation\_variance\_marginales\_labels\] converges to $0$. Let us prove the convergence in probability on the event $E_n(K)$ $$s(|x_n|)^2 \sum_{p = 1}^{|x_n|} \sigma^2_p(x_n) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}1.$$ Let $\sigma^2(k,j)$ denote the variance of $B(k,j)$, which is known explicitly, see e.g. Marckert & Miermont [@Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps page 1664[^2]]: we have $$\sigma^2(k,j) = \frac{2j(k-j)}{k+1}, \qquad\text{so}\qquad \sum_{j=1}^k \sigma^2(k,j) = \frac{k(k-1)}{3}.$$ Fix $\delta > 0$, then by [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Lemma 3] again, $$\begin{aligned} &{{{\mathbf{P}}}\left(\left\{\left|s(|x_n|)^2 \sum_{p = 1}^{|x_n|} \sigma^2_p(x_n) - 1\right| > \delta\right\} \cap E_n(K)\right)} \\ &{\leqslant}K (\varrho + K + o(1)) \sup_{K^{-1} \epsilon_n / \sigma_n {\leqslant}h {\leqslant}K \epsilon_n / \sigma_n} {{{\mathbf{P}}}\left(\left|\frac{3 \epsilon_n}{\sigma_n^2 h} \sum_{p = 1}^h \sigma^2(\xi_{n, d_n}(p), \chi_{n, d_n}(p)) - 1\right| > \delta\right)}.\end{aligned}$$ We calculate the first two moments: let $(\xi, \xi', \chi, \chi')$ have the law of $(\xi_{n, d_n}(1), \xi_{n, d_n}(2), \chi_{n, d_n}(1), \chi_{n, d_n}(2))$. 
Since $\chi$ has the uniform distribution on $\{1, \dots, \xi\}$, we have $${{{\mathbf{E}}}\left[\sum_{p = 1}^h \sigma^2(\xi_{n, d_n}(p), \chi_{n, d_n}(p))\right]} = h \cdot {{{\mathbf{E}}}\left[\sigma^2(\xi, \chi)\right]} = h \cdot {{{\mathbf{E}}}\left[\xi^{-1} \sum_{j=1}^\xi \sigma^2(\xi,j)\right]} = h \cdot {{{\mathbf{E}}}\left[\frac{\xi-1}{3}\right]} = \frac{\sigma_n^2 h}{3 \epsilon_n}.$$ It only remains to prove that the variance is small; we have similarly $${{{\mathbf{E}}}\left[\left(\frac{3 \epsilon_n}{\sigma_n^2 h} \sum_{p = 1}^h \sigma^2(\xi_{n, d_n}(p), \chi_{n, d_n}(p))\right)^2\right]} = \left(\frac{3 \epsilon_n}{\sigma_n^2 h}\right)^2 \left(h {{{\mathbf{E}}}\left[\sigma^2(\xi, \chi)^2\right]} + h(h-1) {{{\mathbf{E}}}\left[\sigma^2(\xi, \chi) \sigma^2(\xi', \chi')\right]}\right).$$ Since the $\chi$’s are independent conditional on the $\xi$’s, we have $${{{\mathbf{E}}}\left[\sigma^2(\xi, \chi) \sigma^2(\xi', \chi')\right]} = {{{\mathbf{E}}}\left[\frac{(\xi-1)(\xi'-1)}{9}\right]} = {{{\mathbf{E}}}\left[\frac{\xi-1}{3}\right]}^2 (1+o(1)) = \left(\frac{\sigma_n^2}{3 \epsilon_n}\right)^2 (1+o(1)),$$ where we used that two samples without replacement decorrelate as the number of possible picks tends to infinity. We conclude that the random variable $\frac{3 \epsilon_n}{\sigma_n^2 h} \sum_{p = 1}^h \sigma^2(\xi_{n, d_n}(p), \chi_{n, d_n}(p))$ has mean $1$ and, since $\sigma^2(\xi, \chi) {\leqslant}\xi {\leqslant}\Delta_n$, it has variance $$\begin{aligned} &\left(\frac{3 \epsilon_n}{\sigma_n^2 h}\right)^2 \left(h {{{\mathbf{E}}}\left[\sigma^2(\xi, \chi)^2\right]} + h(h-1) {{{\mathbf{E}}}\left[\sigma^2(\xi, \chi) \sigma^2(\xi', \chi')\right]}\right) - 1 \\ &{\leqslant}\left(\frac{3 \epsilon_n}{\sigma_n^2 h}\right)^2 \left(h \Delta_n \frac{\sigma_n^2}{3 \epsilon_n} + h(h-1) \left(\frac{\sigma_n^2}{3 \epsilon_n}\right)^2 (1+o(1))\right) - 1 \\ &{\leqslant}\frac{3 \epsilon_n}{\sigma_n^2 h} \Delta_n + o(1),\end{aligned}$$ which converges to $0$ uniformly in $h \in [K^{-1} \epsilon_n / \sigma_n, K \epsilon_n / \sigma_n]$ since we assume that $\Delta_n = o(\sigma_n)$. This proves that $s(|x_n|)^2 \sum_{p = 1}^{|x_n|} \sigma^2_p(x_n)$ converges in probability to $1$ on the event $E_n(K)$; Lebesgue’s Theorem then yields $$\left|{{{\mathbf{E}}}\left[{\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sum_{p = 1}^{|x_n|} \sigma^2(k_p(x_n), j_p(x_n))} {{\mathbf{1}}}_{E_n(K)}\right]} - {\mathrm{e}}^{- z^2 / 2}\right| {\enskip\mathop{\longrightarrow}^{}_{n \to \infty}\enskip}0.$$ Provided that the bound \[eq:borne\_a\_part\_marginales\_labels\] holds, this shows that both parts of \[eq:separation\_variance\_marginales\_labels\] tend to zero, which completes the proof. It remains to prove the bound \[eq:borne\_a\_part\_marginales\_labels\]; we shall need the following easy moment bound. Recall that $B(k, \cdot)$ is a uniformly random bridge in $\mathscr{B}_k^{{\geqslant}-1}$. \[lem:moment\_pont\_unif\] There exists a constant $C>0$ such that for any $k > j {\geqslant}1$, we have ${{\mathbf{E}}}[|B(k,j)|^4] {\leqslant}C j^2$. Fix $k {\geqslant}1$ and observe that the bridge $B(k,\cdot)$ is invariant under space-time reversal: for any $j {\leqslant}k$, the random variables $B(k,j)$ and $- B(k, k-j)$ have the same law, so it is sufficient to prove the claim for $j {\leqslant}\lceil k/2\rceil$. 
This enables us to use an absolute continuity result: there exists $K > 0$ such that for any $k {\geqslant}1$ and any set $A \subset {{\mathbf{Z}}}^{\lceil k/2\rceil}$, it holds that $$\label{eq:absolue_continuite_pont} {{{\mathbf{P}}}\left((B(k,1), \dots, B(k,\lceil k/2\rceil)) \in A\right)} {\leqslant}K \cdot {{{\mathbf{P}}}\left((S_1, \dots, S_{\lceil k/2\rceil}) \in A\right)}.$$ This follows from the Markov property applied to $S$ at time $\lceil k/2\rceil$ and the local limit theorem; see e.g. the proof of Lemma 4 in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence] for details. We conclude that for any $1 {\leqslant}j {\leqslant}\lceil k/2\rceil$, we have ${{\mathbf{E}}}[|B(k,j)|^4] {\leqslant}K \cdot {{\mathbf{E}}}[|S_j|^4]$. One easily calculates, for every $t > 0$, the Laplace transform ${{\mathbf{E}}}[{\mathrm{e}}^{t S_1}] = \sum_{i {\geqslant}-1} {\mathrm{e}}^{t i} 2^{-(i+2)} = (2{\mathrm{e}}^t - {\mathrm{e}}^{2t})^{-1}$, from which we derive ${{\mathbf{E}}}[S_j^4] = 12j^2 + 26j$, which completes the proof. We may now finish the proof of Proposition \[prop:marginales\_labels\] by proving the bound \[eq:borne\_a\_part\_marginales\_labels\]. One easily shows by induction that for any complex numbers in the closed unit disk $(\alpha_p)_{1 {\leqslant}p {\leqslant}q}$ and $(\beta_p)_{1 {\leqslant}p {\leqslant}q}$, it holds that $|\prod_{1 {\leqslant}p {\leqslant}q} \alpha_p - \prod_{1 {\leqslant}p {\leqslant}q} \beta_p| {\leqslant}\sum_{1 {\leqslant}p {\leqslant}q} |\alpha_p - \beta_p|$. By conditioning with respect to ${T_{d_n}^{\varrho_n}}$ and $x_n$ (note that the event $E_n(K)$ is measurable with respect to this pair) and appealing to Jensen’s inequality, the term we consider is bounded by $$\begin{aligned} &{{{\mathbf{E}}}\left[\left|{\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sum_{p = 1}^{|x_n|} \sigma^2_p(x_n)} - {{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z s(|x_n|) \sum_{p = 1}^{|x_n|} B_p(x_n)} \;\middle|\; {T_{d_n}^{\varrho_n}}, x_n\right]}\right| {{\mathbf{1}}}_{E_n(K)}\right]} \\ &= {{{\mathbf{E}}}\left[\left|\prod_{p = 1}^{|x_n|} {\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sigma^2_p(x_n)} - \prod_{p = 1}^{|x_n|} {{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z s(|x_n|) B_p(x_n)} \;\middle|\; {T_{d_n}^{\varrho_n}}, x_n\right]}\right| {{\mathbf{1}}}_{E_n(K)}\right]} \\ &{\leqslant}{{{\mathbf{E}}}\left[\sum_{p = 1}^{|x_n|} \left|{\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sigma^2_p(x_n)} - {{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z s(|x_n|) B_p(x_n)} \;\middle|\; {T_{d_n}^{\varrho_n}}, x_n\right]}\right| {{\mathbf{1}}}_{E_n(K)}\right]}.\end{aligned}$$ We next use the fact that for any random variable, $X$ say, which is centred and has a finite third moment, it holds that $$\left|{{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z X}\right]} - \left(1 - \frac{z^2}{2} {{{\mathbf{E}}}\left[X^2\right]}\right)\right| {\leqslant}\frac{|z|^3}{6} {{{\mathbf{E}}}\left[|X|^3\right]}.$$ Indeed, this bound applied to $B_p(x_n)$ reads $$\left|{{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z s(|x_n|) B_p(x_n)} \;\middle|\; {T_{d_n}^{\varrho_n}}, x_n\right]} - \left(1 - \frac{z^2}{2} s(|x_n|)^2 \sigma^2_p(x_n)\right)\right| {\leqslant}\frac{|z|^3}{6} s(|x_n|)^3 {{{\mathbf{E}}}\left[\left|B_p(x_n)\right|^3 \;\middle|\; {T_{d_n}^{\varrho_n}}, x_n\right]},$$ and the same bound applied to a standard Gaussian random variable $G$, so ${{\mathbf{E}}}[|G|^3] = \sqrt{8/\pi} {\leqslant}2$, yields $$\begin{aligned} \left|{\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sigma^2_p(x_n)} - \left(1 - \frac{z^2}{2} 
s(|x_n|)^2 \sigma^2_p(x_n)\right)\right| &{\leqslant}\frac{|z|^3}{6} \sqrt{\frac{8}{\pi}} \left(s(|x_n|)^2 \sigma^2_p(x_n)\right)^{3/2} \\ &{\leqslant}\frac{|z|^3}{3} s(|x_n|)^3 {{{\mathbf{E}}}\left[\left|B_p(x_n)\right|^3 \;\middle|\; {T_{d_n}^{\varrho_n}}, x_n\right]},\end{aligned}$$ where the last bound follows from Jensen’s inequality. According to Lemma \[lem:moment\_pont\_unif\] and Jensen’s inequality again, there exists a constant $C>0$ such that for any $k > j {\geqslant}1$, we have ${{\mathbf{E}}}[|B(k,j)|^3] {\leqslant}C j^{3/2} {\leqslant}C (k-1)^{3/2}$. Since $k_p(x_n) {\leqslant}\Delta_n$, we obtain $$\left|{{{\mathbf{E}}}\left[{\mathrm{e}}^{{\mathrm{i}}z s(|x_n|) B_p(x_n)} \;\middle|\; {T_{d_n}^{\varrho_n}}, x_n\right]} - {\mathrm{e}}^{- \frac{z^2}{2} s(|x_n|)^2 \sigma^2_p(x_n)}\right| {\leqslant}C \frac{|z|^3}{2} s(|x_n|)^3 \Delta_n^{1/2} \left(k_p(x_n) - 1\right),$$ and \[eq:borne\_a\_part\_marginales\_labels\] follows by summing over $p$. We next sketch the proof of Proposition \[prop:marginales\_labels\] for $q {\geqslant}2$. We shall need the following result, which claims that when there is no macroscopic degree, there is no ‘explosion’ of labels, in the sense that the maximal difference along an edge is small. \[lem:deplacement\_max\_autour\_site\] Assume that $\limsup_{n \to \infty} \sigma_n^{-1} \varrho_n < \infty$ and $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$. For every $n {\geqslant}1$, sample $({T_{d_n}^{\varrho_n}}, \ell)$ uniformly at random in ${{\mathbf{LT}}_{d_n}^{\varrho_n}}$. Then we have the convergence in probability $$\sigma_n^{-1/2} \max_{x \in {T_{d_n}^{\varrho_n}}} \left|\max_{1 {\leqslant}j {\leqslant}k_x} \ell(xj) - \min_{1 {\leqslant}j {\leqslant}k_x} \ell(xj)\right| {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}0.$$ We refer to the proof of Proposition 2 in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence]; one just replaces $N_\mathbf{n}$ there by $\sigma_n^2$. Let us in fact only discuss the case $q=2$ to ease notation; the general case is exactly the same. Let $x_n$ and $y_n$ be independent uniform random vertices of ${T_{d_n}^{\varrho_n}}$ and let us consider the three branches consisting of their common ancestors, the ancestors of $x_n$ only, and those of $y_n$ only. According to Proposition \[prop:marginales\_hauteur\], their lengths jointly converge in distribution when rescaled by a factor $\frac{\sigma_n}{2 \epsilon_n}$. By Lemma \[lem:deplacement\_max\_autour\_site\], the label increment between the last common ancestor of $x_n$ and $y_n$ and its offspring is small compared to our scaling, and the total label increments on the three branches are independent; it only remains to prove that the law of these increments, multiplied by $(\frac{3 \epsilon_n}{\sigma_n^2})^{1/2}$ and divided by the square-root of their length, converges to the standard Gaussian law, as in \[eq:cv\_label\_unif\]. This can be obtained in the very same way as in the case $q=1$, appealing to [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Lemma 3] to compare the content of each branch with the sequence $(\xi_{n, d_n}(p), \chi_{n, d_n}(p))_p$. As in the proof of Proposition \[prop:marginales\_hauteur\], the fact that we now have three branches is compensated by the factor $(\sigma_n / \upsilon_n)^3$ in [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Lemma 3]; we leave the details to the reader. 
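The explicit variance $\sigma^2(k,j) = \frac{2j(k-j)}{k+1}$ and the identity $\sum_{j=1}^k \sigma^2(k,j) = \frac{k(k-1)}{3}$ used in the proof above are easy to check by brute force for small $k$. The following Python sketch is not part of the argument; it enumerates all bridges of $\mathscr{B}_k^{{\geqslant}-1}$ (understood here as the sequences $(b_1, \dots, b_k)$ with $b_0 = b_k = 0$ and increments at least $-1$) and verifies the two formulas exactly with rational arithmetic.

```python
from itertools import product
from fractions import Fraction

def bridges(k):
    # All (b_1, ..., b_k) with b_0 = b_k = 0 and increments >= -1;
    # a single increment can be at most k - 1 since the others are >= -1.
    paths = []
    for steps in product(range(-1, k), repeat=k):
        if sum(steps) != 0:
            continue
        b, path = 0, []
        for s in steps:
            b += s
            path.append(b)
        paths.append(path)
    return paths

for k in range(2, 7):
    paths = bridges(k)
    m = len(paths)
    total = Fraction(0)
    for j in range(1, k + 1):
        mean = Fraction(sum(p[j - 1] for p in paths), m)
        var = Fraction(sum(p[j - 1] ** 2 for p in paths), m) - mean ** 2
        assert var == Fraction(2 * j * (k - j), k + 1)   # sigma^2(k, j)
        total += var
    assert total == Fraction(k * (k - 1), 3)             # sum over j
    print(f"k = {k}: checked {m} bridges")
```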
Convergence of random maps {#sec:convergence_cartes} ========================== We are now ready to prove the convergence of random maps stated in Theorem \[thm:convergence\_carte\_disque\] and \[thm:convergence\_cartes\_CRT\], relying on Theorem \[thm:convergence\_etiquettes\] about the label process of the associated labelled forest. The argument finds its root in the work of Le Gall [@Le_Gall:Uniqueness_and_universality_of_the_Brownian_map] and has already been adapted in many contexts [@Le_Gall:Uniqueness_and_universality_of_the_Brownian_map; @Abraham:Rescaled_bipartite_planar_maps_converge_to_the_Brownian_map; @Bettinelli-Jacob-Miermont:The_scaling_limit_of_uniform_random_plane_maps_via_the_Ambjorn_Budd_bijection; @Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks; @Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence; @Marzouk:On_scaling_limits_of_planar_maps_with_stable_face_degrees; @Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence], so we shall be very brief and refer to the preceding references for details. Tightness of planar maps {#sec:tension_cartes} ------------------------ Let us first briefly define the Gromov–Hausdorff–Prokhorov distance following Miermont [@Miermont:Tessellations_of_random_maps_of_arbitrary_genus Proposition 6], which makes the set of measure-preserving isometry classes of compact metric spaces equipped with a Borel probability measure separable and complete. Let $(X, d_X, p_X)$ and $(Y, d_Y, p_Y)$ be two such spaces; a *correspondence* between them is a subset $R \subset X \times Y$ such that for every $x \in X$, there exists $y \in Y$ such that $(x,y) \in R$ and vice-versa. The *distortion* of $R$ is defined as $$\mathrm{dis}(R) = \sup\left\{\left|d_X(x,x') - d_Y(y,y')\right| ; (x,y), (x', y') \in R\right\}.$$ Then the Gromov–Hausdorff–Prokhorov distance between these spaces is the infimum of all those $\varepsilon > 0$ such that there exists a coupling $\nu$ between $p_X$ and $p_Y$ and a compact correspondence $R$ between $X$ and $Y$ such that $$\nu(R) {\geqslant}1-\varepsilon \qquad\text{and}\qquad \mathrm{dis}(R) {\leqslant}2 \varepsilon.$$ As explained in Section \[sec:bijection\], in order to rely on the bijection with labelled forests, instead of a uniformly random map ${M_{d_n}^{\varrho_n}}$ in ${{\mathbf{M}}_{d_n}^{\varrho_n}}$, we shall work with a uniformly random negative pointed map $({M_{d_n}^{\varrho_n}}, \star)$ in ${{\mathbf{PM}}_{d_n}^{\varrho_n}}$. We shall consider the space $V({M_{d_n}^{\varrho_n}})\setminus\{\star\}$ of vertices of ${M_{d_n}^{\varrho_n}}$ different from $\star$, equipped with their graph distance *in ${M_{d_n}^{\varrho_n}}$* and the uniform probability measure, then note that the Gromov–Hausdorff–Prokhorov distance between $V({M_{d_n}^{\varrho_n}})$ and $V({M_{d_n}^{\varrho_n}})\setminus\{\star\}$ is bounded by one, so it suffices to prove our claims for $V({M_{d_n}^{\varrho_n}})\setminus\{\star\}$. We want to rely on Theorem \[thm:convergence\_etiquettes\] on the label process of the associated labelled forest $({T_{d_n}^{\varrho_n}}, \ell)$. This theorem requires the ratio $d_n(1) / \epsilon_n$ to be bounded away from $1$, but, as we already noticed, we may always, if needed, discard the faces of degree $2$ by gluing their two edges together. 
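Returning briefly to the definition above, the notions of correspondence and distortion can be made concrete for finite metric spaces. The following Python sketch is not taken from the references; it evaluates the distortion of a given correspondence between two toy spaces encoded by distance matrices (both the spaces and the correspondence $R$ are our own illustrative choices). Minimising the distortion over correspondences gives the Gromov–Hausdorff part of the distance; the Prokhorov part additionally requires a coupling $\nu$ with $\nu(R) {\geqslant}1 - \varepsilon$, which this sketch does not attempt.

```python
from itertools import product

def distortion(R, dX, dY):
    # Distortion of a correspondence R between two finite metric spaces,
    # given as a list of index pairs and two distance matrices.
    return max(abs(dX[x][xp] - dY[y][yp]) for (x, y), (xp, yp) in product(R, R))

# Toy spaces: X is a path on three vertices, Y a single edge, with graph distances.
dX = [[0, 1, 2],
      [1, 0, 1],
      [2, 1, 0]]
dY = [[0, 1],
      [1, 0]]

# A correspondence must contain, for every x in X, some pair (x, y), and vice versa.
R = [(0, 0), (1, 0), (2, 1)]
print(distortion(R, dX, dY))  # prints 1, attained e.g. by |d_X(0,2) - d_Y(0,1)|
```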
For every vertex $x \in {T_{d_n}^{\varrho_n}}$, let $\varphi(x)$ denote the vertex of ${M_{d_n}^{\varrho_n}}$ different from $\star$ which corresponds in the bijection from Section \[sec:bijection\] to the leaf at the extremity of the right-most ancestral line starting from $x$ in ${T_{d_n}^{\varrho_n}}$. Let us list the vertices of ${T_{d_n}^{\varrho_n}}$ as $x_0 < x_1 < \dots < x_{\upsilon_n}$ in lexicographical order and for every $i,j \in \{0, \dots, \upsilon_n\}$, we set $$d_n(i,j) = {d_{\mathrm{gr}}}(\varphi(x_i), \varphi(x_j)),$$ where ${d_{\mathrm{gr}}}$ is the graph distance in ${M_{d_n}^{\varrho_n}}$. We then extend $d_n$ to a continuous function on $[0, \upsilon_n]^2$ by ‘bilinear interpolation’ on each square of the form $[i,i+1] \times [j,j+1]$ as in [@Le_Gall:Uniqueness_and_universality_of_the_Brownian_map Section 2.5]. For $0 {\leqslant}s {\leqslant}t {\leqslant}1$, let us set $$d_{(n)}(s, t) = (\sigma_n + \varrho_n)^{-1/2} d_n(\upsilon_n s, \upsilon_n t) \qquad\text{and}\qquad L_{(n)}(t) = (\sigma_n + \varrho_n)^{-1/2} {L_{d_n}^{\varrho_n}}(\upsilon_n t).$$ For a continuous function $g : [0, 1] \to {{\mathbf{R}}}$, let us set for every $0 {\leqslant}s {\leqslant}t {\leqslant}1$, $$D_g(s,t) = D_g(t,s) = g(s) + g(t) - 2 \max\left\{\min_{r \in [s, t]} g(r); \min_{r \in [0, s] \cup [t, 1]} g(r)\right\}.$$ Following Le Gall [@Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps], we argued in [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Section 4.3] that if the rescaled processes $L_{(n)}$ converge in distribution to some limit process, say $L$, then from every increasing sequence of integers, one can extract a subsequence along which we have $$\label{eq:convergence_distances_sous_suite} \left(L_{(n)}(t), D_{L_{(n)}}(s, t), d_{(n)}(s, t)\right)_{s,t \in [0,1]} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(L_t, D_L(s,t), d_\infty(s,t))_{s,t \in [0,1]},$$ where $d_\infty$ depends a priori on the subsequence. Further, deterministically, this convergence implies $$\left(V({M_{d_n}^{\varrho_n}})\setminus\{\star\}, (\sigma_n + \varrho_n)^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(M_\infty,d_\infty, p_\infty)$$ for the Gromov–Hausdorff–Prokhorov distance, where $M_\infty = [0,1] / \{d_\infty = 0\}$ is the quotient set induced by the pseudo-distance $d_\infty$, equipped with the distance induced by $d_\infty$, and $p_\infty$ is the push-forward of the Lebesgue measure on $[0,1]$ by the canonical projection. Therefore, in order to prove Theorem \[thm:convergence\_carte\_disque\] and \[thm:convergence\_cartes\_CRT\], it only remains to identify the limit $d_\infty$ in \[eq:convergence\_distances\_sous\_suite\] as the distance process of the Brownian map/disk/tree. Convergence towards a Brownian disk {#sec:carte_brownienne} ----------------------------------- Fix $\varrho \in [0, \infty)$ and let us recall the distance ${\mathscr{D}}^\varrho$ of the Brownian disk, following Le Gall [@Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps] and Bettinelli & Miermont [@Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks] to which we refer for details. 
Recall the head of the Brownian snake $Z^\varrho$; first, we view $D_{Z^\varrho}$ as a function on the forest ${\mathscr{T}}_{X^\varrho}$ by setting $$D_{Z^\varrho}(x,y) = \inf\left\{D_{Z^\varrho}(s,t) ; s,t \in [0,1], x=\pi_{X^\varrho}(s) \text{ and } y=\pi_{X^\varrho}(t)\right\},$$ for every $x, y \in {\mathscr{T}}_{X^\varrho}$, where we recall the notation $\pi_{X^\varrho}$ for the canonical projection. Then we put $${\mathscr{D}}^\varrho(x,y) = \inf\left\{\sum_{i=1}^k D_{Z^\varrho}(a_{i-1}, a_i) ; k {\geqslant}1, (x=a_0, a_1, \dots, a_{k-1}, a_k=y) \in {\mathscr{T}}_{X^\varrho}\right\}.$$ The function ${\mathscr{D}}^\varrho$ is a pseudo-distance on ${\mathscr{T}}_{X^\varrho}$ which can be seen as a pseudo-distance on $[0,1]$ by setting ${\mathscr{D}}^\varrho(s,t) = {\mathscr{D}}^\varrho(\pi_{X^\varrho}(s),\pi_{X^\varrho}(t))$ for every $s,t \in [0,1]$. Assume that $\lim_{n \to \infty} \sigma_n^{-1} \Delta_n = 0$. Let us set for all $t \in [0,1]$, $$L_{[n]}(t) = \left(\frac{3}{2\sigma_n}\right)^{1/2} {L_{d_n}^{\varrho_n}}(\upsilon_n t).$$ Then we deduce from Theorem \[thm:convergence\_etiquettes\] and the preceding discussion that from every increasing sequence of integers, one can extract a subsequence along which we have $$\label{eq:convergence_distances_sous_suite_disque} \left(L_{[n]}(t), D_{L_{[n]}}(s, t), d_{[n]}(s, t)\right)_{s,t \in [0,1]} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(Z^\varrho_t, D_{Z^\varrho}(s,t), d_\infty(s,t))_{s,t \in [0,1]},$$ where $d_\infty$ depends a priori on the subsequence. It remains to prove that $d_\infty = {\mathscr{D}}^\varrho$ almost surely. Our argument is adapted from the work of Bettinelli & Miermont [@Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks Lemma 32]. It relies on a ‘re-rooting’ trick: it actually suffices to prove that if $U, V$ are i.i.d. uniform random variables on $[0,1]$ and independent of everything else, then $$\label{eq:identite_distances_carte_brownienne_repointee} d_\infty(U,V) {\enskip\mathop{=}^{(d)}\enskip}{\mathscr{D}}^\varrho(U,V).$$ The key point is that, according to Le Gall [@Le_Gall:Uniqueness_and_universality_of_the_Brownian_map Corollary 7.3] for $\varrho = 0$ and Bettinelli & Miermont [@Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks Lemma 17 & Theorem 20] for $\varrho > 0$, the right-hand side of \[eq:identite\_distances\_carte\_brownienne\_repointee\] is distributed as $Z^\varrho_V - \min Z^\varrho$. Recall that $d_\infty$ is the limit of $d_n$ which describes the distances in the map between the vertices $(\varphi(x_i))_{0 {\leqslant}i {\leqslant}\upsilon_n}$; some of these vertices may appear more often than others in this sequence, so if one samples two uniform random indices $i$ and $j$, they do not correspond to two uniform random vertices of the map. 
Nonetheless, this effect disappears in the limit: as discussed in [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Section 4.3], if we let $\lambda(i) \in \{1, \dots, \upsilon_n\}$ denote the index such that $x_{\lambda(i)}$ is the $i$-th leaf of ${T_{d_n}^{\varrho_n}}$ for every $1 {\leqslant}i {\leqslant}d_n(0)$, then we have $$\label{eq:approximation_sites_aretes_carte} \left(\upsilon_n^{-1} \lambda(\lceil d_n(0) t\rceil) ; t \in [0,1]\right) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}(t ; t \in [0,1]).$$ Now $X = \varphi(x_{\lambda(\lceil d_n(0) U\rceil)})$ and $Y = \varphi(x_{\lambda(\lceil d_n(0) V\rceil)})$ are uniformly random vertices of $V({M_{d_n}^{\varrho_n}}) \setminus \{\star\}$; they can therefore be coupled with two independent uniform random vertices $X'$ and $Y'$ of ${M_{d_n}^{\varrho_n}}$ in such a way that the conditional probability given ${M_{d_n}^{\varrho_n}}$ that $(X,Y) \ne (X', Y')$ converges to $0$; we implicitly assume in the sequel that $(X,Y) = (X', Y')$. Since $\star$ is also a uniform random vertex of ${M_{d_n}^{\varrho_n}}$, we obtain that $${d_{\mathrm{gr}}}(X,Y) {\enskip\mathop{=}^{(d)}\enskip}{d_{\mathrm{gr}}}(\star, Y).$$ By definition we have ${d_{\mathrm{gr}}}(X,Y) = d_n(\lambda(\lceil d_n(0) U\rceil), \lambda(\lceil d_n(0) V\rceil))$ and by construction of the labels on ${T_{d_n}^{\varrho_n}}$, we have $${d_{\mathrm{gr}}}(\star, Y) = {L_{d_n}^{\varrho_n}}(\lambda(\lceil d_n(0) V\rceil)) - \min_{0 {\leqslant}j {\leqslant}\upsilon_n} {L_{d_n}^{\varrho_n}}(j) + 1.$$ Letting $n \to \infty$ along the same subsequence as in \[eq:convergence\_distances\_sous\_suite\_disque\] and appealing to \[eq:approximation\_sites\_aretes\_carte\], we obtain \[eq:identite\_distances\_carte\_brownienne\_repointee\]. We end this proof by arguing that our assumption on the largest degree is necessary. Indeed, extracting a subsequence if necessary, let us assume that $\sigma_n^{-1} \varrho_n$ and $\sigma_n^{-1} \Delta_n$ converge respectively to $\varrho {\geqslant}0$ and $\delta > 0$ and that $(V({M_{d_n}^{\varrho_n}}), \sigma_n^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}})$ converges in distribution to some random space $(M, D, p)$, and let us prove that the latter does not have the topology of ${\mathscr{M}}^\varrho$, which is that of the sphere if $\varrho = 0$ [@Le_Gall-Paulin:Scaling_limits_of_bipartite_planar_maps_are_homeomorphic_to_the_2_sphere; @Miermont:On_the_sphericity_of_scaling_limits_of_random_planar_quadrangulations] or the disk if $\varrho > 0$ [@Bettinelli:Scaling_limit_of_random_planar_quadrangulations_with_a_boundary]. Let us label all the vertices by their graph distance to the distinguished vertex $\star$. Let $\Phi_n$ be an inner face with degree $2\Delta_n \sim 2 \delta \sigma_n$; then the labels of its vertices read in clockwise order form a (shifted) bridge which, when rescaled by a factor of order $\Delta_n^{-1/2} = \Theta(\sigma_n^{-1/2})$, converges towards a Brownian bridge. Let $x_n^-, x_n^+$ be two vertices of $\Phi_n$ such that their respective labels are the minimum and the maximum over all labels on $\Phi_n$. Then, when $\varepsilon > 0$ is small, with high probability we have that $\ell(x_n^+) - \ell(x_n^-) > 6 \varepsilon \sigma_n^{1/2}$. Our argument is depicted in Figure \[fig:grande\_face\]; let us describe it. 
Let us read the vertices on the face $\Phi_n$ from $x_n^-$ to $x_n^+$ in clockwise order, there is a vertex which is the last one with label smaller than $\ell(x_n^-) + \varepsilon \sigma_n^{1/2}$ and another one which is the first one with label larger than $\ell(x_n^+) - \varepsilon \sigma_n^{1/2}$; let us call ‘blue vertices’ all the vertices visited between these two. Let us similarly call ‘green vertices’ the vertices defined similarly when going from $x_n^-$ to $x_n^+$ in counter-clockwise order. A vertex may be simultaneously green and blue, but in this case, and more generally if there exists a pair of blue and green vertices at graph distance $o(\sigma_n^{1/2})$ in the whole map, then this creates at the limit a pinch-point separating the map into two parts each with diameter larger than $\varepsilon \sigma_n^{1/2}$, which yields our claim. ![A portion of the pointed map represented as a surface, seen from the distinguished vertex. If there is no pinch-point on the grey face, then the light blue and light green regions are disjoint, so the red simple path separates the map into two macroscopic parts.[]{data-label="fig:grande_face"}](grande_face){height="8\baselineskip"} We assume henceforth that the distance between green and blue vertices is larger than $4\eta \sigma_n^{1/2}$ for some $\eta > 0$; the light blue and light green regions in Figure \[fig:grande\_face\] represent the hull of the set of vertices at distance smaller than $\eta \sigma_n^{1/2}$ from the blue and green vertices respectively. Consider the simple red path obtained by taking the boundary of the light blue region: its extremities are at macroscopic distance and it separates two parts of the map with macroscopic diameter. A sphere cannot be separated by a simple curve with distinct extremities so this yields our claim when $\varrho = 0$. This is possible on a disk, if the path touches the boundary twice. Therefore, in order to avoid a contradiction, the red path must contain (at least) two points at macroscopic distance from each other, and both at microscopic distance from the boundary of the map. If none of these points is at microscopic distance from an extremity of the path, then the preceding argument still applies, so both extremities, which belong to $\Phi_n$ must lie at microscopic distance from the boundary, but then this creates pinch-points at the limit. Convergence to the Brownian tree {#sec:convergence_carte_arbre} -------------------------------- In this section, we finally prove Theorem \[thm:convergence\_cartes\_CRT\] relying on Theorem \[thm:convergence\_etiquettes\]\[thm:convergence\_etiquettes\_arbre\]. The idea is that the greatest distance in the map to the boundary is small compared to the scaling so only the boundary remains at the limit, and furthermore this boundary, whose distances are related to a discrete bridge, converges to the Brownian tree, which is encoded by the Brownian bridge. Let us denote by ${\mathbf{b}}$ and $X^0$ respectively the standard Brownian bridge and Brownian excursion. Recall the construction of the Brownian tree $({\mathscr{T}}_{X^0}, {\mathscr{d}}_{X^0}, {\mathscr{p}}_{X^0})$ from Section \[sec:marginales\_hauteur\]. Let us define a random pseudo-distance $D_{{\mathbf{b}}}$ as in Section \[sec:tension\_cartes\]. 
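Before carrying out the comparison, the function $D_g$ from Section \[sec:tension\_cartes\], which we use below with $g = {\mathbf{b}}$, can be made concrete on a discretised path. The short Python sketch that follows is only an illustration; the random stand-in for the bridge is a hypothetical placeholder, not one of the processes of the theorem.

```python
import numpy as np

def D(g, i, j):
    # Discrete analogue of D_g(s, t) = g(s) + g(t)
    #   - 2 * max( min of g over [s, t], min of g over [0, s] union [t, 1] )
    # for a path sampled as g[0], ..., g[N].
    i, j = min(i, j), max(i, j)
    inside = g[i:j + 1].min()
    outside = min(g[:i + 1].min(), g[j:].min())
    return g[i] + g[j] - 2 * max(inside, outside)

# A hypothetical discrete stand-in for the bridge b.
rng = np.random.default_rng(0)
steps = rng.choice([-1.0, 1.0], size=1000)
g = np.concatenate([[0.0], np.cumsum(steps - steps.mean())])  # recentred, ends near 0

print(D(g, 250, 750))  # a nonnegative value, since D_g(s, t) >= |g(s) - g(t)|
print(D(g, 400, 400))  # 0: D_g vanishes on the diagonal
```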
Let further $({\mathscr{T}}_{{\mathbf{b}}}, D_{{\mathbf{b}}}, p_{{\mathbf{b}}})$ be the space constructed as $M_\infty$ in Section \[sec:tension\_cartes\] where $d_\infty$ is replaced by $D_{{\mathbf{b}}}$, so ${\mathscr{T}}_{{\mathbf{b}}}$ is obtained by taking the quotient of $[0,1]$ by a certain equivalence relation. Comparing the definition of $D_{{\mathbf{b}}}$ and ${\mathscr{d}}_{X^0}$, since ${\mathbf{b}}$ and $X^0$ are related by the Vervaat’s transform, it is easy to prove that the metric spaces $({\mathscr{T}}_{X^0}, {\mathscr{d}}_{X^0}, {\mathscr{p}}_{X^0})$ and $({\mathscr{T}}_{{\mathbf{b}}}, D_{{\mathbf{b}}}, p_{{\mathbf{b}}})$ are isometric so it is equivalent to prove the convergence in distribution $$(V({M_{d_n}^{\varrho_n}})\setminus\{\star\}, (2\varrho_n)^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{T}}_{{\mathbf{b}}}, D_{{\mathbf{b}}}, p_{{\mathbf{b}}})$$ in the sense of Gromov–Hausdorff–Prokhorov. The discussion in Section \[sec:tension\_cartes\] shows that it suffices to prove the convergence in distribution for the uniform topology $$\label{eq:cv_distance_carte_arbre} \left((2\varrho_n)^{-1/2} d_n(\upsilon_n s, \upsilon_n t)\right)_{s,t \in [0,1]} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(D_{{\mathbf{b}}}(s,t))_{s,t \in [0,1]}.$$ Here we may adapt the argument of the proof of Theorem 5 in [@Bettinelli:Scaling_limit_of_random_planar_quadrangulations_with_a_boundary] to which we refer for details. First, for $i < j$, let $[i, j]$ denote the set of integers from $i$ to $j$, and let $[j,i]$ denote the set $[j, \upsilon_n] \cup [1, i]$. Recall that we construct our map from a labelled forest, using a Schaeffer-type bijection; following the chain of edges drawn starting from two points of the forest to the next one with smaller label until they merge, one obtains the following upper bound on distances: $$\label{eq:borne_sup_cactus} d_n(i,j) {\leqslant}{L_{d_n}^{\varrho_n}}(i) + {L_{d_n}^{\varrho_n}}(j) + 2 - 2 \max\left\{\min_{k \in [i, j]} {L_{d_n}^{\varrho_n}}(k); \min_{k \in [j, i]} {L_{d_n}^{\varrho_n}}(k)\right\},$$ see Le Gall [@Le_Gall:The_topological_structure_of_scaling_limits_of_large_planar_maps Lemma 3.1] for a detailed proof in a different context. We have a very similar lower bound, see the proof of Corollary 4.4 in [@Curien-Le_Gall-Miermont:The_Brownian_cactus_I_Scaling_limits_of_discrete_cactuses]: $$\label{eq:borne_inf_cactus} d_n(i,j) {\geqslant}{L_{d_n}^{\varrho_n}}(i) + {L_{d_n}^{\varrho_n}}(j) - 2 \max\left\{\min_{k \in \llbracket i, j\rrbracket} {L_{d_n}^{\varrho_n}}(k); \min_{k \in \llbracket j, i\rrbracket} {L_{d_n}^{\varrho_n}}(k)\right\},$$ where the intervals are defined as follows. First, let us modify the forest ${T_{d_n}^{\varrho_n}}$ by placing the roots on a cycle instead of joining them to an extra root-vertex. Then $\mathopen{\llbracket}i, j\mathclose{\rrbracket}$ denotes the set of those indices $k$ such that $x_k$ lies in the geodesic path between $x_i$ and $x_j$ in this graph; in other words, $k {\geqslant}1$ belongs to $\mathopen{\llbracket}i, j\mathclose{\rrbracket}$ if either $x_k$ is an ancestor of $x_i$ or of $x_j$ (and it is an ancestor of both if and only if it is their last one), or if it is the root of a tree which lies between $x_i$ and $x_j$ in the original forest. 
We define $\mathopen{\llbracket}j , i\mathclose{\rrbracket}$ similarly as the set of those indices $k {\geqslant}1$ such that either $x_k$ is an ancestor of $x_i$ or of $x_j$, or it is the root of a tree which *does not* lie between $x_i$ and $x_j$. Let us suppose that $\lim_{n \to \infty} \sigma_n^{-1} \varrho_n = \infty$; it follows from Proposition \[prop:convergence\_Luka\] and Theorem \[thm:convergence\_etiquettes\] that $$(2\varrho_n)^{-1/2} \left({b_{d_n}^{\varrho_n}}(1 - {{\underline{W}\vphantom{W}}_{d_n}^{\varrho_n}}(\upsilon_n t)), {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(\upsilon_n t)\right)_{t \in [0,1]} {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathbf{b}}_t, 0)_{t \in [0,1]}$$ in $\mathscr{C}([0,1],{{\mathbf{R}}})$. Recall that we decompose the labels as ${L_{d_n}^{\varrho_n}}(k) = {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}(k) + {b_{d_n}^{\varrho_n}}(1-{{\underline{W}\vphantom{W}}_{d_n}^{\varrho_n}}(k))$ for each $k$. Then the right-hand side of \[eq:borne\_sup\_cactus\] with $i=\upsilon_n s$ and $j = \upsilon_n t$ converges in distribution to $D_{{\mathbf{b}}}(s,t)$ once rescaled by a factor $(2\varrho_n)^{-1/2}$. The same holds for the right-hand side of \[eq:borne\_inf\_cactus\]; indeed, the root-vertices visited in $[i,j]$ and in $\mathopen{\llbracket}i, j\mathclose{\rrbracket}$ are the same, so the only difference lies in the non-root vertices, but their labels differ from that of the root of their tree by at most $\max {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}- \min {{\widetilde{L}\vphantom{L}}_{d_n}^{\varrho_n}}= o(\varrho_n^{1/2})$ in probability. This proves the convergence \[eq:cv\_distance\_carte\_arbre\] and hence our claim. Stable Boltzmann maps {#sec:BGW_Boltzmann} ===================== In this last section, we consider stable Boltzmann random maps, relying on our main results by showing that the random degree distribution of the faces of such maps satisfies the different assumptions. In the first subsection, we present the precise setup and the assumptions we shall make on such laws, by relying on the corresponding [Ł]{}ukasiewicz path. Throughout this section, we shall divide by real numbers which depend on an integer $n$, and consider conditional probabilities with respect to events which depend on $n$; we shall therefore always implicitly restrict ourselves to those values of $n$ for which such quantities are well-defined, and statements such as ‘as $n \to \infty$’ should be understood along the appropriate sequence of integers. On stable [Ł]{}ukasiewicz paths ------------------------------- Throughout this section, we work with a sequence $(X_i)_{i {\geqslant}1}$ of i.i.d. random variables with common distribution, say $\nu$, supported by (a subset of) ${{\mathbf{Z}}}_{{\geqslant}-1}$ with $\nu(-1) \ne 0$, with finite first moment and ${{\mathbf{E}}}[X_1] = 0$. For every integer $n {\geqslant}1$, we let $S_n = X_1 + \dots + X_n$, and also set $S_0 = 0$. For all integers $n {\geqslant}1$ and $k {\geqslant}-1$, let $K_k(n) = \#\{1 {\leqslant}i {\leqslant}n : X_i = k\}$ be the number of jumps of size $k$ up to time $n$; for a subset $A \subset {{\mathbf{Z}}}_{{\geqslant}-1}$, set then $K_A(n) = \sum_{k \in A} K_k(n)$. Finally, for $\varrho {\geqslant}1$, let $\zeta(S, \varrho) = \inf\{i {\geqslant}1 : S_i = - \varrho\}$, and simply write $\zeta(S)$ for $\zeta(S,1)$. We recall that a measurable function $l : [0, \infty) \to [0, \infty)$ is said to be *slowly varying* (at infinity) if for every $c > 0$, we have $\lim_{x \to \infty} l(cx)/l(x) = 1$. 
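As a concrete illustration of this notation, the following Python sketch simulates the walk $S$ for the centred law $\nu(i) = 2^{-(i+2)}$ on ${{\mathbf{Z}}}_{{\geqslant}-1}$ that already appeared in the proof of Lemma \[lem:moment\_pont\_unif\], and evaluates $\zeta(S, \varrho)$, $K_{-1}(n)$ and $K_A(n)$; the sample size, the value of $\varrho$ and the set $A$ are arbitrary choices made only for this example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Steps with law nu(i) = 2^{-(i+2)} on {-1, 0, 1, ...}: a Geometric(1/2) variable
# on {1, 2, ...} shifted by -2; this law is centred and has nu(-1) = 1/2.
n = 1_000_000
X = rng.geometric(0.5, size=n) - 2
S = np.concatenate([[0], np.cumsum(X)])          # S_0, S_1, ..., S_n

rho = 5
hits = np.flatnonzero(S[1:] == -rho)             # times i >= 1 with S_i = -rho
zeta = int(hits[0]) + 1 if hits.size else None   # zeta(S, rho), if reached by time n

K_minus_one = int((X == -1).sum())               # K_{-1}(n)
K_A = int(np.isin(X, [0, 1, 2]).sum())           # K_A(n) for A = {0, 1, 2}
print(zeta, K_minus_one / n, K_A / n)            # K_{-1}(n)/n should be close to 1/2
```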
For $\alpha \in [1,2]$, we say that $\nu$ satisfies $\mathrm{(H_\alpha)}$ if the tail distribution can be written as $${{{\mathbf{P}}}\left(X {\geqslant}n\right)} = n^{-\alpha} L_1(n),$$ where $L_1$ is slowly varying. We also include the case where $X$ has finite variance in $\mathrm{(H_2)}$. Finally, when $\alpha = 1$, we say that $\nu$ satisfies $\mathrm{(H_1^{loc})}$ when the mass function can be written as $${{{\mathbf{P}}}\left(X = n\right)} = n^{-2} L_1(n),$$ where $L_1$ is slowly varying. The assumption $\mathrm{(H_\alpha)}$ corresponds to the *domain of attraction* of a stable law with index $\alpha$, and $\mathrm{(H_1^{loc})}$ is more restrictive than $\mathrm{(H_1)}$. When $\alpha \in (1,2]$, it is well-known that there exists a sequence $(a_n)_{n {\geqslant}1}$ such that $(n^{-1/\alpha} a_n)_{n {\geqslant}1}$ is slowly varying, so in particular $$a_{cn} {\enskip\mathop{\thicksim}^{}_{n \to \infty}\enskip}c^{1/\alpha} a_n \qquad\text{for every}\qquad c > 0,$$ and $a_n^{-1} (X_1 + \dots + X_n)$ converges in distribution to $\mathscr{X}^{(\alpha)}$ with Laplace transform given by ${{\mathbf{E}}}[\exp(- t \mathscr{X}^{(\alpha)})] = \exp(t^\alpha)$ for $t > 0$. Note that $\mathscr{X}^{(2)}$ has the Gaussian distribution with variance $2$; as a matter of fact, if $X$ has finite variance $\sigma^2$, then we may take $a_n = (n \sigma^2 / 2)^{1/2}$. Moreover, there exists another slowly varying function $L$ such that for every $n {\geqslant}1$, we have ${\mathrm{Var}}(X {{{\mathbf{1}}}_{\{X {\leqslant}n\}}}) = n^{2-\alpha} L(n)$. This function is related to $L_1$ by $$\label{eq:ratio_fonctions_var_lente} \lim_{n \to \infty} \frac{L_1(n)}{L(n)} = \lim_{n \to \infty} \frac{n^2 {{\mathbf{P}}}(X {\geqslant}n)}{{\mathrm{Var}}(X {{{\mathbf{1}}}_{\{X {\leqslant}n\}}})} = \frac{2-\alpha}{\alpha},$$ see Feller [@Feller:An_introduction_to_probability_theory_and_its_applications_Volume_2 Chapter XVII, Equation 5.16]. Finally, according to [@Kortchemski:Sub_exponential_tail_bounds_for_conditioned_stable_Bienayme_Galton_Watson_trees Equation 7], we have $$\label{eq:constante_Levy} \lim_{n \to \infty} \frac{n L(a_n)}{a_n^\alpha} = \frac{1}{(2-\alpha) \Gamma(-\alpha)},$$ where, by continuity, the limit is interpreted as equal to $2$ if $\alpha=2$. In the case $\alpha=1$, when $\mathrm{(H_1)}$ is the domain of attraction of a Cauchy distribution, in addition to the sequence $(a_n)_{n {\geqslant}1}$, there exists another sequence $(b_n)_{n {\geqslant}1}$ such that both $(n^{-1} a_n)_{n {\geqslant}1}$ and $(n^{-1} b_n)_{n {\geqslant}1}$ are slowly varying, with $b_n \to -\infty$ and $b_n / a_n \to -\infty$, and now $a_n^{-1} (X_1 + \dots + X_n - b_n)$ converges in distribution to $\mathscr{X}^{(1)}$ with Laplace transform given by ${{\mathbf{E}}}[\exp(- t \mathscr{X}^{(1)})] = \exp(t \ln t)$ for $t > 0$. An example to have in mind when dealing with this rather unusual regime is given by Kortchemski & Richier [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees]: take $\nu(n) \sim \frac{c}{n^2 \ln(n)^2}$, then $a_n \sim \frac{c n}{\ln(n)^2}$ and $b_n \sim - \frac{c n}{\ln(n)}$. 
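The ratio in \[eq:ratio\_fonctions\_var\_lente\] can also be explored numerically. The rough Python sketch below uses a toy integer law with an exact Pareto tail ${{\mathbf{P}}}(X {\geqslant}n) = n^{-\alpha}$, so that $L_1$ is constant; this law is our own choice for illustration only and is not one of the laws considered in the sequel.

```python
import numpy as np

alpha = 1.5
N = 10 ** 6
k = np.arange(1, N + 1, dtype=float)
tail = k ** (-alpha)                     # P(X >= k) for k = 1, ..., N
pmf = tail - np.append(tail[1:], 0.0)    # P(X = k), with the far tail lumped at k = N

for n in (10 ** 3, 10 ** 4, 10 ** 5):
    m1 = np.sum(k[:n] * pmf[:n])         # E[X 1_{X <= n}]
    m2 = np.sum(k[:n] ** 2 * pmf[:n])    # E[X^2 1_{X <= n}]
    ratio = n ** 2 * n ** (-alpha) / (m2 - m1 ** 2)
    print(n, round(ratio, 3), (2 - alpha) / alpha)  # the ratio slowly approaches 1/3
```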
By [@Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves Equation 50] the following holds: suppose that $\nu$ satisfies $\mathrm{(H_\alpha)}$ with $\alpha \in (1, 2]$, fix $A \subset {{\mathbf{Z}}}_{{\geqslant}-1}$ such that $\nu(A) > 0$, and, if $\nu$ has infinite variance, suppose that either $A$ or ${{\mathbf{Z}}}_{{\geqslant}-1} \setminus A$ is finite; finally assume that $\limsup_{n \to \infty} a_n^{-1} \varrho_n < \infty$. Then we have $$\label{eq:LLT_GW} \lim_{n \to \infty} \left|n \cdot {{{\mathbf{P}}}\left(K_A(\zeta(S, \varrho_n)) = n\right)} - \frac{\nu(A)^{1/\alpha} \varrho_n}{a_n} \cdot p_1\left(- \frac{\nu(A)^{1/\alpha} \varrho_n}{a_n}\right)\right| = 0,$$ where $p_1$ is the density of $\mathscr{X}^{(\alpha)}$. Let us briefly comment on the assumption that either $A$ or ${{\mathbf{Z}}}_{{\geqslant}-1} \setminus A$ is finite and refer to [@Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves] for details: in order to obtain this local limit theorem, one separates the jumps of the path according to whether they belong to $A$ or to its complement. If ${{\mathbf{Z}}}_{{\geqslant}-1} \setminus A$ is finite, then the jumps in $A$ belong to the domain of attraction of a stable law with index $\alpha$, whilst the other jumps have bounded support, so in particular finite variance, and we may use a local limit theorem for each, and vice versa when $A$ is finite; when both $A$ and ${{\mathbf{Z}}}_{{\geqslant}-1} \setminus A$ are infinite, the tail behaviour of these laws is not clear from that of $\nu$. We shall keep this assumption throughout this section. In the Cauchy regime $\alpha = 1$, such a local limit theorem is not known for general sets $A$, so we shall restrict ourselves to the case $A = {{\mathbf{Z}}}_{{\geqslant}-1}$. Then $K_{{{\mathbf{Z}}}_{{\geqslant}-1}}(k) = k$, so what we consider in \[eq:LLT\_GW\] is simply the first hitting time of $-\varrho_n$. After cyclic shift, the probability that the latter equals $n$ is given by $n^{-1} \varrho_n {{\mathbf{P}}}(S_n = - \varrho_n)$. When $\nu$ satisfies $\mathrm{(H_1^{loc})}$, we read from the recent work of Berger [@Berger:Notes_on_random_walks_in_the_Cauchy_domain_of_attraction Theorem 2.4] that, when $\varrho_n = O(a_n) = o(|b_n|)$, we have $$\label{eq:LLT_GW_Cauchy_loc} {{{\mathbf{P}}}\left(\zeta(S, \varrho_n) = n\right)} {\enskip\mathop{\thicksim}^{}_{n \to \infty}\enskip}\varrho_n \cdot \frac{L_1(|b_n| - \varrho_n)}{(|b_n| - \varrho_n)^2},$$ see [@Berger:Notes_on_random_walks_in_the_Cauchy_domain_of_attraction Equation 2.10] with $p = \alpha = 1$ and $x = |b_n| - \varrho_n$. When $\nu$ only satisfies $\mathrm{(H_1)}$, Kortchemski & Richier [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees Proposition 12] proved that, in the case $\varrho_n = 1$, $$\label{eq:LLT_GW_Cauchy_gen} {{{\mathbf{P}}}\left(\zeta(S) {\geqslant}n\right)} {\enskip\mathop{\thicksim}^{}_{n \to \infty}\enskip}\Lambda(n) \cdot \frac{L_1(|b_n|)}{|b_n|},$$ where $\Lambda$ is some other slowly varying function, unimportant here. We may now describe our model of random paths; these assumptions and notation shall be used throughout this section. 
Fix $\nu$ which satisfies either $\mathrm{(H_\alpha)}$ for some $\alpha \in [1,2]$, or $\mathrm{(H_1^{loc})}$, and let us denote by $\mathrm{D}(\nu)$ the set of those subsets $A \subset {{\mathbf{Z}}}_{{\geqslant}-1}$ with $\nu(A) > 0$ which satisfy the following properties: - If $\nu$ has finite variance, so in particular it satisfies $\mathrm{(H_2)}$, then $A$ can be any subset of ${{\mathbf{Z}}}_{{\geqslant}-1}$; - If $\nu$ satisfies $\mathrm{(H_\alpha)}$ for some $\alpha \in (1,2]$ and has infinite variance, then either $A$ or ${{\mathbf{Z}}}_{{\geqslant}-1} \setminus A$ must be finite; - If $\nu$ satisfies either $\mathrm{(H_1)}$ or $\mathrm{(H_1^{loc})}$, then $A = {{\mathbf{Z}}}_{{\geqslant}-1}$. Fix $n {\geqslant}1$ and $\varrho_n {\geqslant}1$ and let us denote by $W^{\varrho_n}$ the path $S$ stopped when first hitting $-\varrho_n$; if $\varrho_n = 1$, we simply write $W$. Suppose that $\nu$ satisfies either $\mathrm{(H_\alpha)}$ for some $\alpha \in [1,2]$, or $\mathrm{(H_1^{loc})}$, and let $A \in \mathrm{D}(\nu)$. We consider the following random paths: - If $\nu$ satisfies $\mathrm{(H_\alpha)}$ for some $\alpha \in (1,2]$, then we let $W^{\varrho_n}_{n,A}$ be the path $W^{\varrho_n}$ conditioned to have made $n$ jumps with value in $A$. - If $\nu$ satisfies $\mathrm{(H_1^{loc})}$, we define $W^{\varrho_n}_n$ similarly; since $A = {{\mathbf{Z}}}_{{\geqslant}-1}$ it reduces to conditioning the walk $S$ to first hit $-\varrho_n$ at time $n$ and stopping it there. - If $\nu$ satisfies $\mathrm{(H_1)}$, then again $A = {{\mathbf{Z}}}_{{\geqslant}-1}$ but now $\varrho_n = 1$; we let $W_{{\geqslant}n}$ be the walk $S$ conditioned to stay non-negative at least up to time $n$ and stopped when first hitting $-1$. In the first two cases, when $A = {{\mathbf{Z}}}_{{\geqslant}-1}$, we recalled in Section \[sec:Luka\] how to construct such a path $W^{\varrho_n}_{n}$ by cyclically shifting a bridge, i.e. the walk $S$ up to time $n$, conditioned to satisfy $S_n = -\varrho_n$. This construction can be generalised to any subset $A$, by cyclically shifting the walk $S$ up to its $n$-th jump with value in $A$, and conditioned to be at $-\varrho_n$ at this moment; see Kortchemski [@Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves Lemma 6.4] for a detailed proof in the case $\varrho_n = 1$ and $A = \{-1\}$; it extends here: replacing ‘$=-1$’ by ‘$\in A$’ does not change anything, and when $\varrho_n {\geqslant}2$, there are $\varrho_n$ cyclically shifted bridges which are first-passage bridges, but this factor $\varrho_n$ cancels since the cycle lemma is used twice. On the empirical jump distribution of a conditioned path -------------------------------------------------------- We study in this subsection the random paths $W^{\varrho_n}_{n,A}$ and $W_{{\geqslant}n}$ described above. Let $\zeta(W^{\varrho_n})$ denote the number of steps of the path $W^{\varrho_n}$, and for every subset $B \subset {{\mathbf{Z}}}_{{\geqslant}-1}$, let $$J_B(W^{\varrho_n}) = \#\{1 {\leqslant}i {\leqslant}\zeta(W^{\varrho_n}) : W^{\varrho_n}(i) - W^{\varrho_n}(i-1) \in B\},$$ and define similar quantities for $W^{\varrho_n}_{n,A}$ and $W_{{\geqslant}n}$. Our first result is a law of large numbers. \[lem:scaling\_GW\] Assume that $\nu$ satisfies $\mathrm{(H_\alpha)}$ for some $\alpha \in (1, 2]$ or $\mathrm{(H_1^{loc})}$; fix $A \in \mathrm{D}(\nu)$ and let $B$ be any subset of ${{\mathbf{Z}}}_{{\geqslant}-1}$; finally assume that $\limsup_{n \to \infty} a_n^{-1} \varrho_n < \infty$. 
Then we have $$n^{-1} J_B(W^{\varrho_n}_{n,A}) {\enskip\mathop{\longrightarrow}^{a.s.}_{n \to \infty}\enskip}\frac{\nu(B)}{\nu(A)}.$$ If $\nu$ satisfies $\mathrm{(H_1)}$, then $$\zeta(W_{{\geqslant}n})^{-1} J_B(W_{{\geqslant}n}) {\enskip\mathop{\longrightarrow}^{a.s.}_{n \to \infty}\enskip}\nu(B).$$ In particular, when $\alpha > 1$, recalling that $n^{-1/\alpha} a_n$ is slowly varying, we have for $B = {{\mathbf{Z}}}_{{\geqslant}-1}$ $$n^{-1} \zeta(W^{\varrho_n}_{n,A}) {\enskip\mathop{\longrightarrow}^{a.s.}_{n \to \infty}\enskip}\nu(A)^{-1} \qquad\text{and so}\qquad a_n^{-1} a_{\zeta(W^{\varrho_n}_{n,A})} {\enskip\mathop{\longrightarrow}^{a.s.}_{n \to \infty}\enskip}\nu(A)^{-1/\alpha}.$$ When $\nu$ satisfies $\mathrm{(H_1^{loc})}$, we simply have $\zeta(W^{\varrho_n}_n) = n$; finally, when $\nu$ satisfies $\mathrm{(H_1)}$, we obtain, appealing to [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees Theorem 30], that $$|b_n|^{-1} |b_{\zeta(W^{\varrho_n}_n)}| {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}I,$$ where $I$ has the law ${{\mathbf{P}}}(I {\geqslant}x) = x^{-1}$ for all $x {\geqslant}1$. Therefore the natural scaling factor for our conditioned paths, which should involve their total length $\zeta$, may be replaced by a factor depending only on $n$. The more general statement, for an arbitrary set $B$, shall be used later when dealing with random maps. Let us first focus on the more familiar case $\alpha \in (1,2]$. Fix $\delta > 0$ small; we claim that there exist $c, C > 0$ such that for every $n$, we have $$\label{eq:concentration_feuilles} {{{\mathbf{P}}}\left(\left|\frac{n}{J_B(W^{\varrho_n}_{n,A})}- \frac{\nu(A)}{\nu(B)}\right| > \frac{\delta}{n^{1/4}}\right)} {\leqslant}C {\mathrm{e}}^{- c n^{1/2}}.$$ Since the right-hand side is summable, this shows that $n / J_B(W^{\varrho_n}_{n,A})$ converges to $\nu(A) / \nu(B)$ almost surely. Let us write this probability as $$\frac{1}{{{\mathbf{P}}}(J_A(W^{\varrho_n}) = n)} \cdot {{{\mathbf{P}}}\left(\left|\frac{J_A(W^{\varrho_n})}{J_B(W^{\varrho_n})}- \frac{\nu(A)}{\nu(B)}\right| > \frac{\delta}{n^{1/4}} \text{ and } J_A(W^{\varrho_n}) = n\right)}.$$ Recall the local limit theorem in \[eq:LLT\_GW\]; it is known that $p_1$ is continuous and positive, so in particular bounded away from $0$ and $\infty$ on any compact interval. Hence $\varrho_n^{-1} n a_n {{\mathbf{P}}}(J_A(W^{\varrho_n}) = n)$ is bounded away from $0$ and $\infty$, since $\varrho_n = O(a_n)$. Moreover, $\varrho_n {\geqslant}1$ and $a_n = O(n^{3/2})$ and finally $\zeta(W^{\varrho_n}) {\geqslant}J_A(W^{\varrho_n})$, so there exists $K > 0$ such that $${{{\mathbf{P}}}\left(\left|\frac{n}{J_B(W^{\varrho_n}_{n,A})}- \frac{\nu(A)}{\nu(B)}\right| > \frac{\delta}{n^{1/4}}\right)} {\leqslant}K n^{5/2} {{{\mathbf{P}}}\left(\left|\frac{J_A(W^{\varrho_n})}{J_B(W^{\varrho_n})}- \frac{\nu(A)}{\nu(B)}\right| > \frac{\delta}{n^{1/4}} \text{ and } \zeta(W^{\varrho_n}) {\geqslant}n\right)},$$ and it remains to bound the probability on the right. Straightforward calculations show that if $n$ is large enough (so e.g. 
$\delta n^{-1/4} < \nu(B) / 2$), then we have the inclusion of events $$\left\{\left|\frac{J_A(W^{\varrho_n})}{\zeta(W^{\varrho_n})} - \nu(A)\right| {\leqslant}\frac{\delta}{n^{1/4}}\right\} \cap \left\{\left|\frac{J_B(W^{\varrho_n})}{\zeta(W^{\varrho_n})} - \nu(B)\right| {\leqslant}\frac{\delta}{n^{1/4}}\right\} \subset \left\{\left|\frac{J_A(W^{\varrho_n})}{J_B(W^{\varrho_n})} - \frac{\nu(A)}{\nu(B)}\right| {\leqslant}\frac{\delta'}{n^{1/4}}\right\},$$ for some explicit $\delta'$ which depends on $\delta$, $\nu(A)$ and $\nu(B)$. We may write $$\begin{aligned} {{{\mathbf{P}}}\left(\left|\frac{J_A(W^{\varrho_n})}{\zeta(W^{\varrho_n})}- \nu(A)\right| > \frac{\delta}{n^{1/4}} \text{ and } \zeta(W^{\varrho_n}) {\geqslant}n\right)} &{\leqslant}\sum_{N {\geqslant}n} {{{\mathbf{P}}}\left(\left|\frac{J_A(W^{\varrho_n})}{N}- \nu(A)\right| > \frac{\delta}{N^{1/4}} \text{ and } \zeta(W^{\varrho_n}) = N\right)} \\ &{\leqslant}\sum_{N {\geqslant}n} {{{\mathbf{P}}}\left(\left|\frac{\#\{1 {\leqslant}i {\leqslant}N : X_i \in A\}}{N}- \nu(A)\right| > \frac{\delta}{N^{1/4}}\right)} \\ &{\leqslant}\sum_{N {\geqslant}n} 2 \exp(- 2 \delta^2 N^{1/2}),\end{aligned}$$ where the last bound follows from the standard Chernoff bound. The last sum is bounded by some constant times $n^{1/2} \exp(- 2 \delta^2 n^{1/2})$; the same holds with the set $B$, so we obtain after a union bound, $${{{\mathbf{P}}}\left(\left|\frac{n}{J_B(W^{\varrho_n}_{n,A})}- \frac{\nu(A)}{\nu(B)}\right| > \frac{\delta}{n^{1/4}}\right)} {\leqslant}K' n^3 {\mathrm{e}}^{- 2 \delta^2 n^{1/2}},$$ for some $K' > 0$, and the proof in the case $\alpha \in (1,2]$ is complete. The argument is very similar in the case $\alpha=1$, appealing to \[eq:LLT\_GW\_Cauchy\_loc\] and \[eq:LLT\_GW\_Cauchy\_gen\]; we leave the details to the reader. In the next theorem, we prove that the empirical jump distribution of our conditioned paths fits in our general framework. For any path $P = (P_i)_{i {\geqslant}0}$, we let $\Delta(P) = \max_{i {\geqslant}1} (P_i - P_{i-1})$ be the largest jump, and we let $\Delta'(P)$ be the second largest jump. \[thm:GW\] Assume that $\nu$ satisfies $\mathrm{(H_\alpha)}$ for some $\alpha \in [1, 2]$ or $\mathrm{(H_1^{loc})}$ and fix $A \in \mathrm{D}(\nu)$. Let $(\varrho_n)_{n {\geqslant}1}$ be such that $\lim_{n \to \infty} a_n^{-1} \varrho_n = \varrho \nu(A)^{-1/\alpha}$ for some $\varrho \in [0, \infty)$. 1. \[thm:GW\_Gaussien\] If $\nu$ satisfies $\mathrm{(H_2)}$, then $$a_n^{-2} \sum_{k {\geqslant}1} k (k+1) J_k(W^{\varrho_n}_{n,A}) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}\frac{2}{\nu(A)} \qquad\text{and}\qquad a_n^{-1} \Delta(W^{\varrho_n}_{n,A}) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}0.$$ 2. \[thm:GW\_stable\] If $\nu$ satisfies $\mathrm{(H_\alpha)}$ with $1 < \alpha < 2$, then $a_n^{-2} \sum_{k {\geqslant}1} k (k+1) J_k(W^{\varrho_n}_{n,A})$ converges in distribution towards a random variable $Y_\alpha^\varrho$ whose law does not depend on $\nu$. 3. \[thm:GW\_Cauchy\_loc\] If $\nu$ satisfies $\mathrm{(H_1^{loc})}$, recall that $A = {{\mathbf{Z}}}_{{\geqslant}-1}$, then $$|b_n|^{-1} \left(\Delta(W^{\varrho_n}_n), \Delta'(W^{\varrho_n}_n)\right) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}(1,0) \qquad\text{and}\qquad |b_n|^{-2} \sum_{k {\leqslant}\Delta'(W^{\varrho_n}_n)} k (k+1) J_k(W^{\varrho_n}_n) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}0.$$ 4. 
\[thm:GW\_Cauchy\_gen\] Similarly, if $\nu$ satisfies $\mathrm{(H_1)}$ and $\varrho_n = 1$, let $I$ be distributed as $P(I {\geqslant}x) = x^{-1}$ for every $x {\geqslant}1$, then $$|b_n|^{-1} \left(\Delta(W_{{\geqslant}n}), \Delta'(W_{{\geqslant}n})\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(I,0) \qquad\text{and}\qquad |b_n|^{-2} \sum_{k {\leqslant}\Delta'(W_{{\geqslant}n})} k (k+1) J_k(W_{{\geqslant}n}) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}0.$$ In the case $\varrho_n = 1$, when moreover $\nu$ has finite variance, Theorem \[thm:GW\]\[thm:GW\_Gaussien\] was first obtained by Broutin & Marckert [@Broutin-Marckert:Asymptotics_of_trees_with_a_prescribed_degree_sequence_and_applications] for $A = {{\mathbf{Z}}}_{{\geqslant}-1}$ and generalised to any $A$ in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence]. The last two statements when $\alpha = 1$ shall follow easily from [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees]. A key idea to prove the first two statements when $\alpha > 1$, as in [@Broutin-Marckert:Asymptotics_of_trees_with_a_prescribed_degree_sequence_and_applications; @Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence], is first to observe that the claims are invariant under cyclic shift, so we may consider a random walk bridge instead of a first-passage bridge, and then to compare the law of these bridges with that of the unconditioned random walk $S$ for which the claims are easy to prove. Precisely, when $\nu$ satisfies $\mathrm{(H_\alpha)}$ for some $\alpha \in (1, 2]$ and $A \in \mathrm{D}(\nu)$, let us set for every real $t {\geqslant}1$ $$J^-_{A, t}(S) = \inf\left\{k {\geqslant}1 : J_A((S_i)_{i {\leqslant}k}) = \lfloor t\rfloor\right\}$$ the instant at which the walk $S$ makes its $\lfloor t\rfloor$-th step in $A$. Then, as we discussed in the preceding subsection, the path $W^{\varrho_n}_{n,A}$ has the law of the Vervaat’s transform of the path $S$ up to time $J^-_{A, n}(S)$ conditioned to be at $-\varrho_n$ at this time. Further, similarly to , there exists a constant $C > 0$ such that for every $n {\geqslant}1$ and every event $\mathscr{E}_{A, n/2}(S)$ which is measurable with respect to the $J^-_{A, n/2}(S)$ first steps of the path $S$, we have that $$\label{eq:absolue_continuite_GW} \limsup_{n \to \infty} {{{\mathbf{P}}}\left(\mathscr{E}_{A, n/2}(S) \;\middle|\; S_{J^-_{A, n}(S)} = -\varrho_n\right)} {\leqslant}C \limsup_{n \to \infty} {{{\mathbf{P}}}\left(\mathscr{E}_{A, n/2}(S)\right)}.$$ See e.g. Kortchemski [@Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves], Lemma 6.10 and 6.11 and Equation 44 there. We may now prove Theorem \[thm:GW\]. Let us start with the Gaussian regime $\alpha=2$. According to the preceding discussion, it suffices to prove the convergences $$\label{eq:degres_GW_Gaussien} a_n^{-2} \sum_{k {\geqslant}1} k (k+1) J_k\left((S_i)_{i {\leqslant}J^-_{A, n}(S)})\right) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}\frac{2}{\nu(A)} \qquad\text{and}\qquad a_n^{-1} \Delta\left((S_i)_{i {\leqslant}J^-_{A, n}(S)})\right) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}0,$$ under the conditional probability ${{\mathbf{P}}}(\, \cdot \mid S_{J^-_{A, n}(S)} = -\varrho_n)$. 
Moreover, if we cut this bridge at time $J^-_{A, n/2}(S)$, then the time and space reversal of the second part has the same law as the first one. Therefore it suffices to prove that  holds when $n$ is replaced by $n/2$. By , it suffices finally to prove that  holds for the unconditioned random walk. Recall the two slowly varying functions $L$ and $L_1$ given by $L_1(x) = x^2 {{\mathbf{P}}}(X {\geqslant}x)$ and $L(x) = {\mathrm{Var}}(X {{{\mathbf{1}}}_{\{|X| {\leqslant}x\}}})$ for every $x > 0$; recall from  and  that $L_1 / L$ converges to $0$ and $n a_n^{-2} L(a_n)$ converges to $2$. Then for every $\varepsilon > 0$, it holds that $${{{\mathbf{P}}}\left(a_n^{-1} \max_{1 {\leqslant}i {\leqslant}n} X_i {\geqslant}\varepsilon\right)} {\leqslant}n {{{\mathbf{P}}}\left(X {\geqslant}\varepsilon a_n\right)} = n (\varepsilon a_n)^{-2} L_1(\varepsilon a_n) {\enskip\mathop{\longrightarrow}^{}_{n \to \infty}\enskip}0,$$ where we used the fact that $L_1$ is slowly varying, so $L_1(\varepsilon a_n) \sim L_1(a_n) = o(L(a_n))$. Concerning the first convergence, we aim at showing that $a_n^{-2} \sum_{1 {\leqslant}i {\leqslant}n} X_i (X_i+1)$ converges in probability to $2$, which is equivalent to the fact that $a_n^{-2} \sum_{1 {\leqslant}i {\leqslant}n} X_i^2$ converges in probability to $2$ since $n^{-1} \sum_{1 {\leqslant}i {\leqslant}n} X_i$ converges in probability to $0$ by the law of large numbers, and $n = O(a_n^2)$. Let us fix $\varepsilon > 0$, then we have that $${{{\mathbf{E}}}\left[a_n^{-2} \sum_{1 {\leqslant}i {\leqslant}n} X_i^2 {{{\mathbf{1}}}_{\{|X_i| {\leqslant}\varepsilon a_n\}}}\right]} = n a_n^{-2} {{{\mathbf{E}}}\left[X^2 {{{\mathbf{1}}}_{\{|X| {\leqslant}\varepsilon a_n\}}}\right]} = n a_n^{-2} L(a_n) (1+o(1)) {\enskip\mathop{\longrightarrow}^{}_{n \to \infty}\enskip}2,$$ and, similarly, $${\mathrm{Var}}\left(a_n^{-2} \sum_{1 {\leqslant}i {\leqslant}n} X_i^2 {{{\mathbf{1}}}_{\{|X_i| {\leqslant}\varepsilon a_n\}}}\right) = n a_n^{-4} {\mathrm{Var}}\left(X^2 {{{\mathbf{1}}}_{\{|X| {\leqslant}\varepsilon a_n\}}}\right) {\leqslant}\varepsilon^2 n a_n^{-2} {{{\mathbf{E}}}\left[X^2 {{{\mathbf{1}}}_{\{|X| {\leqslant}\varepsilon a_n\}}}\right]} {\enskip\mathop{\longrightarrow}^{}_{n \to \infty}\enskip}2 \varepsilon^2.$$ We have shown that with high probability, we have $|X_i| {\leqslant}\varepsilon a_n$ for every $i {\leqslant}n$, so we conclude that indeed $a_n^{-2} \sum_{1 {\leqslant}i {\leqslant}n} X_i^2$ converges in probability to $2$. We have thus shown that $$a_n^{-2} \sum_{k {\geqslant}1} k (k+1) J_k((S_i)_{i {\leqslant}n}) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}2 \qquad\text{and}\qquad a_n^{-1} \Delta((S_i)_{i {\leqslant}n})) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}0,$$ for the unconditioned random walk, and so, since $a_{cn} \sim c^{1/2} a_n$, $$a_n^{-2} \sum_{k {\geqslant}1} k (k+1) J_k((S_i)_{i {\leqslant}n / \nu(A)}) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}\frac{2}{\nu(A)} \qquad\text{and}\qquad a_n^{-1} \Delta((S_i)_{i {\leqslant}n / \nu(A)})) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}0.$$ Since $n^{-1} J^-_{A, n}(S)$ converges almost surely to $\nu(A)^{-1}$ by Lemma \[lem:scaling\_GW\], we obtain  for the unconditioned random walk and the proof is complete. We next consider the regime $1 < \alpha < 2$. 
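Before turning to that regime, let us make explicit how the last two displays are combined; set $T_n(\varepsilon) := a_n^{-2} \sum_{1 {\leqslant}i {\leqslant}n} X_i^2 {{{\mathbf{1}}}_{\{|X_i| {\leqslant}\varepsilon a_n\}}}$ (this notation is local to the present remark). Since ${{\mathbf{E}}}[T_n(\varepsilon)] \to 2$ and $\limsup_{n \to \infty} {\mathrm{Var}}(T_n(\varepsilon)) {\leqslant}2 \varepsilon^2$, Chebyshev's inequality yields, for every fixed $\eta > 0$, $$\limsup_{n \to \infty} {{\mathbf{P}}}\left(\left|T_n(\varepsilon) - 2\right| > \eta\right) {\leqslant}\frac{8 \varepsilon^2}{\eta^2},$$ and $T_n(\varepsilon)$ coincides with $a_n^{-2} \sum_{1 {\leqslant}i {\leqslant}n} X_i^2$ on the event $\{\max_{1 {\leqslant}i {\leqslant}n} |X_i| {\leqslant}\varepsilon a_n\}$, whose probability tends to one by the previous display (the steps being bounded below by $-1$). Letting $\varepsilon \to 0$ gives the claimed convergence in probability towards $2$.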
The claim for the unconditioned random walk $S$ is easy: let $\mathscr{X}$ be the $\alpha$-stable Lévy process whose law at time $1$ is $\mathscr{X}^{(\alpha)}$, then by a classical result on random walks, the convergence in distribution $a_n^{-1} S_n$ towards $\mathscr{X}^{(\alpha)}$ is equivalent to that of $(a_n^{-1} S_{\lfloor n t\rfloor})_{t {\geqslant}0}$ towards $(\mathscr{X}_t)_{t {\geqslant}0}$ in the Skorokhod’s $J_1$ topology. Let $\Delta \mathscr{X}_s = \mathscr{X}_s - \mathscr{X}_{s-} {\geqslant}0$, then the sum $\mathscr{S}_t = \sum_{s {\leqslant}t} (\Delta \mathscr{X}_s)^2$ is well-defined; in fact it is well known that the process $(\mathscr{S}_t)_{t {\geqslant}0}$ is an $\alpha/2$-stable subordinator, whose law can be easily derived from that of $\mathscr{X}$.[^3] Furthermore it is easily seen to be the limit in distribution of $(a_n^{-2} \sum_{i {\leqslant}\lfloor n t\rfloor} X_i (X_i + 1))_{t {\geqslant}0}$; note that no centring is needed here since $\alpha/2 < 1$. A simple consequence of the fact that $\mathscr{S}$ is a pure jump process is that the non-increasing rearrangement of the vector $(a_n^{-2} X_i (X_i + 1))_{1 {\leqslant}i {\leqslant}n}$ converges in distribution in the $\ell^1$ topology towards the decreasing rearrangement of the non-zero jumps of $(\mathscr{S}_t)_{t \in [0,1]}$. Let us return to our random bridges; one may adapt the arguments from [@Kortchemski:Invariance_principles_for_Galton_Watson_trees_conditioned_on_the_number_of_leaves] when $\varrho_n = 1$ to obtain the convergence in distribution of $(\nu(A)^{1/\alpha} a_n^{-1} \sum_{1 {\leqslant}i {\leqslant}\lfloor J^-_{A, n}(S) t \rfloor} X_i)_{t \in [0,1]}$ under ${{\mathbf{P}}}(\, \cdot \mid S_{J^-_{A, n}(S)} = -\varrho_n)$ towards the bridge $(\mathscr{X}^\varrho_t)_{t \in [0,1]}$ which is informally the process $(\mathscr{X}_t)_{t \in [0,1]}$ conditioned to be at $-\varrho$ at time $1$. This implies the convergence of the $N$ largest values amongst $(X_i (X_i + 1))_{1 {\leqslant}i {\leqslant}n}$ towards the $N$ largest values amongst $((\Delta \mathscr{X}^\varrho_t)^2)_{0 {\leqslant}t {\leqslant}1}$ for every $N$; by  and the preceding paragraph, if $N$ is chosen large enough, the sum of all the other jumps is small so we obtain under the law ${{\mathbf{P}}}(\, \cdot \mid S_{J^-_{A, n}(S)} = -\varrho_n)$, $$\nu(A)^{2/\alpha} a_n^{-2} \sum_{1 {\leqslant}i {\leqslant}n} X_i (X_i + 1) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}\sum_{t \in [0,1]} (\Delta \mathscr{X}^\varrho_t)^2.$$ We conclude as in the preceding proof from the space-time reversal invariance. We finally consider the Cauchy regime. Let us start with the local conditioning. Recall that here we assume that $A = {{\mathbf{Z}}}_{{\geqslant}-1}$, and $W^{\varrho_n}_n$ is simply the walk $S$ conditioned to first hit $-\varrho_n$ at time $n$. In the case $\varrho_n = 1$, Kortchemski & Richier [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees Theorem 3] found the joint limit of the $N$ largest jumps of $W^1_n$ for any $N$ fixed: the largest one is equivalent to $|b_n|$, whereas the others are of order $a_n = o(|b_n|)$ which implies our first claim. This can be generalised to any $\varrho_n = O(a_n) = o(b_n)$; indeed, the key is Proposition 20 there which still applies when one takes the $X^{(n)}_i$’s to be the jumps of the path $S$ (denoted by $W$ there!) 
conditioned to $S_n = - \varrho_n$ instead of $S_n = -1$: the only feature of the case $\varrho_n = 1$ which is used is that ${{\mathbf{P}}}(S_n = -1) \sim n\varrho_n\cdot {{\mathbf{P}}}(X = |b_n|)$, which is also valid for ${{\mathbf{P}}}(S_n = -\varrho_n)$ as soon as $\varrho_n = O(a_n) = o(b_n)$ by . Hence the following holds as $n \to \infty$: let $V_n = \inf\{i {\leqslant}n : X_i = \max_{j {\leqslant}n} X_j\}$ be the first time at which the path $S$ up to time $n$ makes its largest jump, then the law under ${{\mathbf{P}}}(\, \cdot \mid S_n = -\varrho_n)$ of the vector $(X_1, \dots, X_{V_n-1}, X_{V_n+1}, \dots, X_n)$ is close in total variation to that of $n-1$ i.i.d. copies of $X$. From this, the proof of Theorem 21 from [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees] can be extended to show that if we first run the walk $S$ up to time $n-1$ and then send it to $-\varrho_n$, and then we construct $Z^{\varrho_n}_n$ as the Vervaat transform of this path $(S_0, \dots, S_{n-1}, -\varrho_n)$, then $$\label{eq:one_big_jump_Cauchy_loc} d_{TV}\left((W^{\varrho_n}_n(i))_{0 {\leqslant}i {\leqslant}n}, (Z^{\varrho_n}_n(i))_{0 {\leqslant}i {\leqslant}n}\right) {\enskip\mathop{\longrightarrow}^{}_{n \to \infty}\enskip}0.$$ Finally, from these results, the proof of Theorem 3 in [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees] remains unchanged and our first claim follows. Concerning the second claim, it is equivalent to showing that $|b_n|^{-2} \sum_{i {\leqslant}n} X_i^2 {{{\mathbf{1}}}_{\{i \ne V_n\}}}$ converges to $0$ under ${{\mathbf{P}}}(\, \cdot \mid S_n = - \varrho_n)$. As in the case $1 < \alpha < 2$, we have that $a_n^{-2}(X_1^2 + \dots + X_n^2)$ converges in distribution towards some $1/2$-stable random variable. Our claim then follows by  since $a_n = o(|b_n|)$. It only remains to consider the tail conditioning. Recall that here $A = {{\mathbf{Z}}}_{{\geqslant}- 1}$ and $\varrho_n = 1$. Compared to the previous proof, we do not need any adaptation here and the first claim now directly follows from [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees Theorem 6]. The second claim is more subtle. First, we have a similar approximation to  given by [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees Theorem 27]; what replaces $Z^{\varrho_n}_n$ there is the process $\vec{Z}^{(n)}$ defined as follows: first $I_n$ is the last weak ladder time of $(S_i)_{i {\leqslant}n}$, then conditional on $\{I_n = j\}$, the path $\vec{Z}^{(n)}$ consists of three independent parts: 1. First, the path $(\vec{Z}^{(n)}_i)_{i < j}$ has the law of $(S_i)_{i < j}$ conditioned to satisfy $\min_{i {\leqslant}j} S_i {\geqslant}0$. 2. Then we make a big jump $\vec{Z}^{(n)}_j - \vec{Z}^{(n)}_{j-1}$, sampled from ${{\mathbf{P}}}(\, \cdot \mid X {\geqslant}|b_n|)$. 3. Then the path $(\vec{Z}^{(n)}_{j+i} - \vec{Z}^{(n)}_j)_{i {\geqslant}0}$ continues as the unconditioned random walk $S$. The big jump will be excluded in our sum and we have seen previously that the sum of $N$ copies of $X^2$ is of order $a_N^2$ as $N \to \infty$.
Therefore, if we consider only the jumps of $W_{{\geqslant}n}$ after its big jump, then there are less than $\zeta(W_{{\geqslant}n})$ of them, and $|b_n|^{-1} \zeta(W_{{\geqslant}n})$ converges in distribution to $I$ as $n \to \infty$ by [@Kortchemski-Richier:Condensation_in_critical_Cauchy_Bienayme_Galton_Watson_trees Proposition 3.1], so the sum of the square of these jumps is at most of order $(a_{|b_n|})^2$. We recall that $a_n$ is defined by $n {{\mathbf{P}}}(X {\geqslant}a_n) \to 1$; since ${{\mathbf{P}}}(X {\geqslant}x) = x^{-1} L_1(x)$ and that ${{\mathbf{E}}}[|X|] < \infty$, then $L_1(x) \to 0$, therefore $a_n = o(n)$, from which we deduce that the sum of the square of the jumps of the last part is at most of order $(a_{|b_n|})^2 = o(|b_n|^2)$. Finally, let us consider the jumps of the first part, up to time $I_n-1$. As in the preceding proof, the sum of the square of the first $I_n-1$ jumps of $\vec{Z}^{(n)}$ grows like $a_{I_n}^2 = o(|b_n|^2)$, and precisely this path converges after scaling to the meander given by a $1/2$-stable process conditioned to be non-negative at least up to time $1$. Boltzmann planar maps {#sec:Boltzmann} --------------------- Let us fix a sequence of non-negative real numbers ${\mathbf{q}}= (q_i ; i {\geqslant}0)$ which, in order to avoid trivialities, satisfies $q_i > 0$ for at least one $i {\geqslant}2$. For every $\varrho {\geqslant}1$, let ${\mathbf{M}}^{\varrho}$ be the set of all finite rooted bipartite maps $M$ with boundary-length $2\varrho$, and let ${\mathbf{PM}}^{\varrho}$ be the set of all rooted and pointed bipartite maps $(M, \star)$ with such a constraint. Recall that ${\mathbf{M}}^1$ and ${\mathbf{PM}}^1$ can be seen as the set of those maps and pointed maps without boundary (by gluing the two boundary edges); we then drop the exponent $1$. We define a measure $w^{\varrho}$ on ${\mathbf{M}}^{\varrho}$ by setting $$w^{\varrho}(M) = \prod_{f \text{ inner face}} q_{\mathrm{deg}(f)/2}, \qquad M \in {\mathbf{M}}^{\varrho},$$ where $\mathrm{deg}(f)$ is the degree of the face $f$. We set $W^{\varrho} = w^{\varrho}({\mathbf{M}}^{\varrho})$. We define similarly a measure $w^{\varrho}_\star$ on ${\mathbf{PM}}^{\varrho}$ by $w^{\varrho}_\star((M, \star)) = w^{\varrho}(M)$ for every $(M, \star) \in {\mathbf{PM}}^{\varrho}$ and we put $W^{\varrho}_\star = w^{\varrho}_\star({\mathbf{PM}}^{\varrho})$. We say that ${\mathbf{q}}$ is *admissible* when $W_\star = W_\star^1$ is finite; this seems stronger than requiring $W^1$ to be finite, but it is not, see [@Bernardi-Curien-Miermont:A_Boltzmann_approach_to_percolation_on_random_triangulations]; moreover, this implies that $W^\varrho_\star$ (and so $W^\varrho$) is finite for any $\varrho {\geqslant}1$, see e.g. [@Budd:The_peeling_process_of_infinite_Boltzmann_planar_maps] for details. If ${\mathbf{q}}$ is admissible, we set $${{\mathbf{P}}}^\varrho(\cdot) = \frac{1}{W^\varrho} w^\varrho(\cdot) \qquad\text{and}\qquad {{\mathbf{P}}}^{\varrho, \star}(\cdot) = \frac{1}{W^\varrho_\star} w^\varrho_\star(\cdot).$$ For every integers $n, \varrho {\geqslant}1$, let ${\mathbf{M}}^\varrho_{E=n}$, ${\mathbf{M}}^\varrho_{V=n}$ and ${\mathbf{M}}^\varrho_{F=n}$ be the subsets of ${\mathbf{M}}$ of those maps with respectively $n$ edges, $n+1$ vertices, and $n$ inner faces. 
More generally, for every $A \subset {{\mathbf{N}}}$, let ${\mathbf{M}}_{F,A=n}$ be the subset of ${\mathbf{M}}$ of those maps with $n$ inner faces whose degree belongs to $2A$ (and possibly other faces, but with a degree in $2{{\mathbf{N}}}\setminus 2A$). For every $S \in \{E, V, F\} \cup \bigcup_{A \subset {{\mathbf{N}}}} \{F,A\}$ and every $n {\geqslant}2$, we define $${{\mathbf{P}}}^\varrho_{S=n}(M) = {{\mathbf{P}}}^\varrho(M \mid M \in {\mathbf{M}}^\varrho_{S=n}), \qquad M \in {\mathbf{M}}^\varrho_{S=n},$$ the law of a rooted Boltzmann map with boundary-length $2 \varrho$ conditioned to have ‘size’ $n$. We also let ${{\mathbf{P}}}_{E {\geqslant}n}$ be the law of a rooted Boltzmann map without boundary conditioned to have at least $n$ edges. We define similarly the laws ${{\mathbf{P}}}^{\varrho, \star}_{S=n}$ on such pointed maps ${\mathbf{PM}}^\varrho_{S=n}$ and ${{\mathbf{P}}}^\star_{E {\geqslant}n}$ on $\bigcup_{k {\geqslant}n} {\mathbf{PM}}_{E=k}$. When ${\mathbf{q}}$ is admissible, the sequence $$\label{eq:loi_GW_carte_Boltzmann} \mu_{\mathbf{q}}(k) = (W_\star)^{k-1} \binom{2k-1}{k-1} q_k, \qquad k {\geqslant}0,$$ where $\mu_{\mathbf{q}}(0)$ is understood as $(W_\star)^{-1}$, defines a probability measure with mean smaller than or equal to one, and we say that ${\mathbf{q}}$ is *critical* when $\mu_{\mathbf{q}}$ has mean exactly one. Sample a random labelled forest $(T^\varrho, \ell)$ as follows: first $T^\varrho$ has the law of a Bienaymé–Galton–Watson forest with $\varrho$ trees and offspring distribution $\mu_{\mathbf{q}}$, and then, conditional on $T^\varrho$, the random labelling $\ell$ is obtained by sampling independent uniformly random bridges in  at every branch-point with $k$ offspring in $T^\varrho$. Let us first construct a *negative* pointed map from this forest as discussed in Section \[sec:bijection\], then let us re-root it at one of the $2\varrho$ edges along the boundary (which keep the external face to the right) chosen uniformly at random; then this new pointed map has the law ${{\mathbf{P}}}^{\varrho, \star}$. This was shown in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence] in the case $\varrho = 1$, adapting the arguments from [@Marckert-Miermont:Invariance_principles_for_random_bipartite_planar_maps] which rely on the coding of [@Bouttier-Di_Francesco-Guitter:Planar_maps_as_labeled_mobiles], and the generalisation to $\varrho {\geqslant}2$ can be obtained similarly, with a straightforward analog of [@Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks Lemma 10] for the re-rooting along the boundary. Moreover, the pointed map has the law ${{\mathbf{P}}}^{\varrho, \star}_{S=n}$ when the forest has the law of such a Bienaymé–Galton–Watson forest conditioned to have $n$ vertices with out-degree in the set $B_S \subset {{\mathbf{Z}}}_+$ given by $$B_E = {{\mathbf{Z}}}_+, \qquad B_V = \{0\}, \qquad B_F = {{\mathbf{N}}}\qquad\text{and}\qquad B_{F,A} = A.$$ We may therefore rely on the preceding sections to obtain information about Boltzmann maps. We let $\nu_{\mathbf{q}}(k) = \mu_{\mathbf{q}}(k+1)$ for every $k {\geqslant}-1$, which is centred if and only if ${\mathbf{q}}$ is critical. \[thm:Boltzmann\] Assume that ${\mathbf{q}}$ is an admissible and critical sequence such that the law $\nu_{\mathbf{q}}$ satisfies $\mathrm{(H_\alpha)}$ for some $\alpha \in [1, 2]$ or $\mathrm{(H_1^{loc})}$. 1. \[thm:Boltzmann\_stable\] Suppose that $\nu_{\mathbf{q}}$ satisfies $\mathrm{(H_\alpha)}$ with $\alpha \in (1, 2]$.
Fix $S \in \{E, V, F\} \cup \bigcup_{A \subset {{\mathbf{N}}}} \{F,A\}$; if $\nu_{\mathbf{q}}$ has infinite variance and $S = \{F,A\}$, then assume that either $A$ or ${{\mathbf{Z}}}_+ \setminus A$ is finite. Assume that $\limsup_{n \to \infty} a_n^{-1} \varrho_n < \infty$ and for every $n {\geqslant}1$, let $M^{\varrho_n}_{S=n}$ have the law ${{\mathbf{P}}}^{\varrho_n}_{S=n}$. Then from every increasing sequence of integers, one can extract a subsequence along which the spaces $$\left(V(M^{\varrho_n}_{S=n}), a_n^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right)_{n {\geqslant}1}$$ converge in distribution in the sense of Gromov–Hausdorff–Prokhorov. 2. \[thm:Boltzmann\_Gaussien\] Suppose furthermore that $\alpha = 2$; let $c_{{\mathbf{q}}, S} = (\mu_{\mathbf{q}}(B_S) / 2)^{1/2}$ and suppose that there exists $\varrho \in [0, \infty)$ such that $\lim_{n \to \infty} a_n^{-1} \varrho_n = \varrho / c_{{\mathbf{q}}, S}$; then the convergence in distribution $$\left(V(M^{\varrho_n}_{S=n}), \left(\frac{3 c_{{\mathbf{q}}, S}}{2 a_n}\right)^{1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{M}}^\varrho, {\mathscr{D}}^\varrho, {\mathscr{p}}^\varrho)$$ holds in the sense of Gromov–Hausdorff–Prokhorov, where ${\mathscr{M}}^0$ is the Brownian map and ${\mathscr{M}}^\varrho$ is the Brownian disk with perimeter $\varrho$ if the latter is non-zero. 3. \[thm:Boltzmann\_Cauchy\_loc\] Suppose that $\nu_{\mathbf{q}}$ satisfies $\mathrm{(H_1^{loc})}$. Assume that $\limsup_{n \to \infty} a_n^{-1} \varrho_n < \infty$ and for every $n {\geqslant}1$, let $M^{\varrho_n}_{E=n}$ have the law ${{\mathbf{P}}}^{\varrho_n}_{E=n}$. Then the convergence in distribution $$\left(V(M^{\varrho_n}_{E=n}), |2b_n|^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{T}}_{X^0}, {\mathscr{d}}_{X^0}, {\mathscr{p}}_{X^0})$$ holds in the sense of Gromov–Hausdorff–Prokhorov, where $X^0$ is the standard Brownian excursion. 4. \[thm:Boltzmann\_Cauchy\_gen\] Suppose that $\nu_{\mathbf{q}}$ satisfies $\mathrm{(H_1)}$. For every $n {\geqslant}1$, let $M_{E {\geqslant}n}$ have the law ${{\mathbf{P}}}^1_{E {\geqslant}n}$. Then the convergence in distribution $$\left(V(M_{E {\geqslant}n}), |2b_n|^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{T}}_{I^{1/2} X^0}, {\mathscr{d}}_{I^{1/2} X^0}, {\mathscr{p}}_{I^{1/2} X^0})$$ holds in the sense of Gromov–Hausdorff–Prokhorov, where $X^0$ is the standard Brownian excursion, and $I$ is independently sampled from the law ${{\mathbf{P}}}(I {\geqslant}x) = x^{-1}$ for all $x {\geqslant}1$. Finally, the same results hold when the maps are obtained by forgetting the distinguished vertex in the pointed versions of the laws. Let us first discuss the last statement since we shall in fact prove the theorem for pointed maps, thanks to the representation with labelled forests. Let $\phi : {\mathbf{PM}}\to {\mathbf{M}}$ be the projection $\phi((M, \star)) = M$ which ‘forgets the distinguished vertex’. We stress that, except in the case $S = V$, for which there is no bias, the laws ${{\mathbf{P}}}^\varrho_{S=n}$ and $\phi_\ast {{\mathbf{P}}}^{\varrho, \star}_{S=n}$ on ${\mathbf{M}}^{\varrho}$ differ for fixed $n$.
Nonetheless, this bias disappears at the limit as shown in several works [@Abraham:Rescaled_bipartite_planar_maps_converge_to_the_Brownian_map; @Bettinelli-Jacob-Miermont:The_scaling_limit_of_uniform_random_plane_maps_via_the_Ambjorn_Budd_bijection; @Bettinelli-Miermont:Compact_Brownian_surfaces_I_Brownian_disks; @Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence; @Marzouk:On_scaling_limits_of_planar_maps_with_stable_face_degrees]. \[prop:biais\_cartes\_Boltzmann\_pointees\] Let $\|\cdot\|_{TV}$ denotes the total variation norm; if ${\mathbf{q}}$, $S$, and $\varrho_n$ are as in Theorem \[thm:Boltzmann\], then $$\left\|{{\mathbf{P}}}^\varrho_{S=n} - \phi_* {{\mathbf{P}}}^{\varrho, \star}_{S=n}\right\|_{TV} {\enskip\mathop{\longrightarrow}^{}_{n \to \infty}\enskip}0.$$ The same holds for the laws conditioned to $E{\geqslant}n$ in the last case of Theorem \[thm:Boltzmann\]. This result generalises Proposition 12 in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence] whose proof extends readily here: for notational convenience, suppose that we are in one of the first three cases of the theorem (the last case is similar). Let $\Lambda(T^{\varrho_n}_{n,B_S})$ denote the number of leaves of the forest $T^{\varrho_n}_{n,B_S}$, which equals the number of vertices minus one in the associated map. Then the above total variation distance is bounded above by $${{{\mathbf{E}}}\left[\left|\frac{1}{\Lambda(T^{\varrho_n}_{n,B_S})-1} \frac{1}{{{\mathbf{E}}}[\frac{1}{\Lambda(T^{\varrho_n}_{n,B_S})-1}]} - 1\right|\right]}.$$ see e.g. the proof of Proposition 12 in [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence]. It thus suffices to prove that this expectation tends to $0$, which is the content of [@Marzouk:Scaling_limits_of_random_bipartite_planar_maps_with_a_prescribed_degree_sequence Lemma 8]. Again, the proof of this lemma is fairly general once we have the exponential concentration of the proportion of leaves we obtained in the proof of Lemma \[lem:scaling\_GW\]: take the sets $A$ and $B$ in  to be respectively $A$ and $\{-1\}$ here. We may now easily prove Theorem \[thm:Boltzmann\] for pointed maps. The statement \[thm:Boltzmann\_stable\] for $\alpha \in (1, 2)$ follows easily from [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Theorem 1] and Theorem \[thm:GW\]\[thm:GW\_stable\]. Indeed, according to the latter, the factor $\sigma_n^{1/2}$ in [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Theorem 1] for the random degree sequence of $M^{\varrho_n}_{S=n}$ is of order $a_n^{1/2}$; since furthermore conditional on these degrees the map has the uniform distribution on the sets of all possible maps, we may then apply [@Marzouk:On_the_growth_of_random_planar_maps_with_a_prescribed_degree_sequence Theorem 1] conditional on the degrees, with $a_n^{1/2}$ instead of $\sigma_n^{1/2}$, and then average with respect to the degrees. The statement \[thm:Boltzmann\_Gaussien\] for $\alpha = 2$ follows similarly from Theorem \[thm:convergence\_carte\_disque\] and Theorem \[thm:GW\]\[thm:GW\_Gaussien\]; here the scaling constant $(3 / (2 \sigma_n))^{1/2}$ is given by $((9 \mu_{\mathbf{q}}(B_S))/(8 a_n^2))^{1/4}$. 
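For the reader's convenience, let us check this identification: Theorem \[thm:GW\]\[thm:GW\_Gaussien\], applied to the coding walk of the forest (for which $\nu_{\mathbf{q}}(A) = \mu_{\mathbf{q}}(B_S)$), gives $\sigma_n^2 \sim 2 a_n^2 / \mu_{\mathbf{q}}(B_S)$ in probability for the degree sequence of $M^{\varrho_n}_{S=n}$, whence $$\left(\frac{3}{2 \sigma_n}\right)^{1/2} \sim \left(\frac{3}{2 a_n} \left(\frac{\mu_{\mathbf{q}}(B_S)}{2}\right)^{1/2}\right)^{1/2} = \left(\frac{3 c_{{\mathbf{q}}, S}}{2 a_n}\right)^{1/2} = \left(\frac{9 \mu_{\mathbf{q}}(B_S)}{8 a_n^2}\right)^{1/4}.$$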
The statements \[thm:Boltzmann\_Cauchy\_loc\] and \[thm:Boltzmann\_Cauchy\_gen\] finally follow from Theorem \[thm:convergence\_cartes\_CRT\] and Theorem \[thm:GW\]\[thm:GW\_Cauchy\_loc\] and \[thm:GW\]\[thm:GW\_Cauchy\_gen\] respectively. Sub-critical maps ----------------- Let us end this paper with a similar condensation result for *sub-critical* maps, that is, when the weight sequence ${\mathbf{q}}$ is chosen so that the law $\mu_{\mathbf{q}}$ has mean smaller than one. In order to directly use the results available in the literature, we shall only work with maps without boundary, so $\varrho_n = 1$. The first convergence below, for pointed maps, is the main result of Janson & Stef[á]{}nsson [@Janson-Stefansson:Scaling_limits_of_random_planar_maps_with_a_unique_large_face]. \[thm:Boltzmann\_sous\_crit\] Assume that ${\mathbf{q}}$ is an admissible sequence such that the law $\mu_{\mathbf{q}}$ has mean $m_\mu = \sum_{k {\geqslant}0} k \mu_{\mathbf{q}}(k) < 1$. 1. Suppose that there exists a slowly varying function $L$ and an index $\beta > 1$ such that we have $$\mu_{\mathbf{q}}(k) = k^{-(1+\beta)} L(k),\qquad k {\geqslant}1.$$ For every $n {\geqslant}1$, let $M^{\varrho_n}_{E=n}$ have the law ${{\mathbf{P}}}_{E=n}$. Then the convergence in distribution $$\left(V(M_{E=n}), (2 n)^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{T}}_{\gamma^{1/2} X^0}, {\mathscr{d}}_{\gamma^{1/2} X^0}, {\mathscr{p}}_{\gamma^{1/2} X^0})$$ holds in the sense of Gromov–Hausdorff–Prokhorov, where $X^0$ is the standard Brownian excursion and $\gamma = 1 - m_\mu$. 2. Under the weaker assumption that there exists a slowly varying function $L$ and an index $\beta > 1$ such that $$\mu_{\mathbf{q}}([k, \infty)) = k^{-\beta} L(k),\qquad k {\geqslant}1,$$ if now $M_{E {\geqslant}n}$ has the law ${{\mathbf{P}}}_{E {\geqslant}n}$, then the convergence in distribution $$\left(V(M_{E {\geqslant}n}), (2 n)^{-1/2} {d_{\mathrm{gr}}}, {p_{\mathrm{unif}}}\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}({\mathscr{T}}_{I^{1/2} X^0}, {\mathscr{d}}_{I^{1/2} X^0}, {\mathscr{p}}_{I^{1/2} X^0})$$ holds in the sense of Gromov–Hausdorff–Prokhorov, where $X^0$ is the standard Brownian excursion, and $I$ is independently sampled from the law ${{\mathbf{P}}}(I {\geqslant}x) = (\frac{1-m_\mu}{x})^{\beta}$ for all $x {\geqslant}1-m_\mu$. Finally, the same results hold when the maps are obtained by forgetting the distinguished vertex in the pointed versions of the laws. The proof goes exactly as that of Theorem \[thm:Boltzmann\] in the case $\alpha = 1$. First, the analog of Theorem \[thm:GW\]\[thm:GW\_Cauchy\_loc\] is provided by Kortchemski [@Kortchemski-Limit_theorems_for_conditioned_non_generic_Galton_Watson_trees]: according to Theorem 1 there, with the notation of Theorem \[thm:GW\], we have in the first case that $$n^{-1} \left(\Delta(W_n), \Delta'(W_n)\right) {\enskip\mathop{\longrightarrow}^{{{\mathbf{P}}}}_{n \to \infty}\enskip}(1-m_\mu,0).$$ This implies that $n^{-2} \sum_{k {\leqslant}\Delta'(W_n)} k (k+1) J_k(W_n) {\leqslant}n^{-1} (1+\Delta'(W_n))^2$ converges in probability to $0$. 
The analog of Theorem \[thm:GW\]\[thm:GW\_Cauchy\_gen\] in the second case is provided by Kortchemski & Richier [@Kortchemski-Richier:The_boundary_of_random_planar_maps_via_looptrees]: the convergence in distribution $$n^{-1} \left(\Delta(W_n), \Delta'(W_n)\right) {\enskip\mathop{\longrightarrow}^{(d)}_{n \to \infty}\enskip}(I,0)$$ follows from Proposition 9 there, and again this implies that $n^{-2} \sum_{k {\leqslant}\Delta'(W_n)} k (k+1) J_k(W_n)$ converges in probability to $0$. Theorem \[thm:convergence\_cartes\_CRT\] allows us to conclude in the case of pointed maps. In order to consider un-pointed maps, we need an analogous result to Proposition \[prop:biais\_cartes\_Boltzmann\_pointees\], which holds as soon as the proportion of leaves in the tree coded by $W_n$ or $W_{{\geqslant}n}$ satisfies an exponential concentration as in . The proof is easily adapted: we only need the analog of Equation  and  with $\varrho_n=1$ on the hitting time of $-1$; the former, in the case of the tail conditioning, is given by [@Kortchemski-Richier:The_boundary_of_random_planar_maps_via_looptrees Equation 9] and references therein, and the latter, in the case of the local conditioning, is given by [@Kortchemski-Limit_theorems_for_conditioned_non_generic_Galton_Watson_trees Equation 14]. This completes the proof. [CLGM13]{} C[é]{}line Abraham. Rescaled bipartite planar maps converge to the [B]{}rownian map. , 52(2):575–595, 2016. David J. Aldous. Exchangeability and related topics. In [*École d’été de probabilités de [S]{}aint-[F]{}lour, [XIII]{}—1983*]{}, volume 1117 of [*Lecture Notes in Math.*]{}, pages 1–198. Springer, Berlin, 1985. David Aldous. The continuum random tree. [III]{}. , 21(1):248–289, 1993. David Aldous, Gr[é]{}gory Miermont, and Jim Pitman. The exploration process of inhomogeneous continuum random trees, and an extension of [J]{}eulin’s local time identity. , 129(2):182–218, 2004. Olivier Bernardi, Nicolas Curien, and Gr[é]{}gory Miermont. . , pages 1–43, 2019. Jean Bertoin, Loïc Chaumont, and Jim Pitman. Path transformations of first passage bridges. , 8:155–166, 2003. J[é]{}r[é]{}mie Bouttier, Philippe Di Francesco, and Emmanuel Guitter. Planar maps as labeled mobiles. , 11(1):Research Paper 69, 27, 2004. Quentin Berger. . , 2018. Jérémie Bettinelli. Scaling limits for random quadrangulations of positive genus. , 15:no. 52, 1594–1644, 2010. Jérémie Bettinelli. Scaling limit of random planar quadrangulations with a boundary. , 51(2):432–477, 2015. Jérémie Bettinelli. Geodesics in [B]{}rownian surfaces ([B]{}rownian maps). , 52(2):612–646, 2016. J[é]{}r[é]{}mie Bettinelli, Emmanuel Jacob, and Gr[é]{}gory Miermont. The scaling limit of uniform random plane maps, [*via*]{} the [A]{}mbjørn-[B]{}udd bijection. , 19:no. 74, 16, 2014. Jérémie Bettinelli and Grégory Miermont. Compact [B]{}rownian surfaces [II]{}: [T]{}he general case. *In preparation*. Nicolas Broutin and Jean-Fran[ç]{}ois Marckert. Asymptotics of trees with a prescribed degree sequence and applications. , 44(3):290–316, 2014. Jérémie Bettinelli and Grégory Miermont. Compact [B]{}rownian surfaces [I]{}: [B]{}rownian disks. , 167(3-4):555–614, 2017. Timothy Budd. The peeling process of infinite [B]{}oltzmann planar maps. , 23(1):Paper 1.28, 37, 2016. Nicolas Curien, Jean-François Le Gall, and Grégory Miermont. The [B]{}rownian cactus [I]{}. [S]{}caling limits of discrete cactuses. , 49(2):340–373, 2013. William Feller. Second edition. John Wiley & Sons, Inc., New York-London-Sydney, 1971. 
Svante Janson and Sigurur [Ö]{}rn Stef[á]{}nsson. Scaling limits of random planar maps with a unique large face. , 43(3):1045–1081, 2015. Olav Kallenberg. . Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002. Igor Kortchemski. Invariance principles for [Galton–Watson]{} trees conditioned on the number of leaves. , 122(9):3126–3172, 2012. Igor Kortchemski. Limit theorems for conditioned non-generic [G]{}alton-[W]{}atson trees. , 51(2):489–511, 2015. Igor Kortchemski. Sub-exponential tail bounds for conditioned stable [B]{}ienaymé-[G]{}alton-[W]{}atson trees. , 168(1-2):1–40, 2017. Igor Kortchemski and Lo[ï]{}c Richier. The boundary of random planar maps via looptrees. , 2018. Igor Kortchemski and Lo[ï]{}c Richier. Condensation in critical [C]{}auchy [Bienaym[é]{}–Galton–Watson]{} trees. , 29(3):1837–1877, 2019. Tao Lei. Scaling limit of random forests with prescribed degree sequences. , 2017. Jean-Fran[ç]{}ois Le Gall. The topological structure of scaling limits of large planar maps. , 169(3):621–670, 2007. Jean-Fran[ç]{}ois Le Gall. Uniqueness and universality of the [B]{}rownian map. , 41(4):2880–2960, 2013. Jean-Fran[ç]{}ois Le Gall and Gr[é]{}gory Miermont. Scaling limits of random planar maps with large faces. , 39(1):1–69, 2011. Jean-Fran[ç]{}ois Le Gall and Fr[é]{}d[é]{}ric Paulin. Scaling limits of bipartite planar maps are homeomorphic to the 2-sphere. , 18(3):893–918, 2008. Cyril Marzouk. On scaling limits of planar maps with stable face-degrees. , 15:1089–1122, 2018. Cyril Marzouk. . , 53(3):448–503, 2018. Cyril Marzouk. On the growth of random planar maps with a prescribed degree sequence. , 2019. Gr[é]{}gory Miermont. On the sphericity of scaling limits of random planar quadrangulations. , 13:248–257, 2008. Gr[é]{}gory Miermont. Tessellations of random maps of arbitrary genus. , 42(5):725–781, 2009. Gr[é]{}gory Miermont. The [B]{}rownian map is the scaling limit of uniform random plane quadrangulations. , 210(2):319–401, 2013. Jean-Fran[ç]{}ois Marckert and Abdelkader Mokkadem. The depth first processes of [Galton–Watson]{} trees converge to the same [B]{}rownian excursion. , 31(3):1655–1678, 2003. Jean-Fran[ç]{}ois Marckert and Abdelkader Mokkadem. Limit of normalized quadrangulations: [T]{}he [B]{}rownian map. , 34(6):2144–2202, 2006. Jean-Fran[ç]{}ois Marckert and Gr[é]{}gory Miermont. Invariance principles for random bipartite planar maps. , 35(5):1642–1705, 2007. Jim Pitman. , volume 1875 of [*Lecture Notes in Mathematics*]{}. Springer-Verlag, Berlin, 2006. Lectures from the 32nd Summer School on Probability Theory held in Saint-Flour, July 7–24, 2002, With a foreword by Jean Picard. [^1]: CNRS, IRIF UMR 8243, Université Paris-Diderot, France.[`[email protected]`](mailto:[email protected]) This work was supported first by a public grant as part of the Fondation Mathématique Jacques Hadamard and then the European Research Council, grant `ERC-2016-STG 716083` (CombiTop). [^2]: Note that they consider uniform random bridges in $\mathscr{B}_{k+1}^{{\geqslant}-1}$! [^3]: Its Lévy measure is the image of $\frac{\alpha (\alpha - 1)}{\Gamma(2-\alpha)} r^{-\alpha - 1} {{{\mathbf{1}}}_{\{r > 0\}}} {\mathrm{d}}r$, that of $\mathscr{X}$, by the square function, which reads $\frac{\alpha (\alpha - 1)}{2 \Gamma(2-\alpha)} r^{-1 - \alpha/2} {\mathrm{d}}r$, and then ${{\mathbf{E}}}[\exp(- \lambda \mathscr{S}_1)] = \exp(\frac{(\alpha - 1) \Gamma(1 - \alpha / 2)}{\Gamma(2-\alpha)} \lambda^{\alpha / 2})$ for every $\lambda > 0$.
--- address: - '$^{\dagger}$Centre de Physique Théorique, Ecole Polytechnique, 91128 Palaiseau Cedex, France. Laboratoire Propre du CNRS UPR A.0014' - '$^{\ddagger}$ENSLAPP [^1], Chemin de Bellevue BP 110, 74941 Annecy-le-Vieux Cedex, France.' author: - 'B. Abdesselam$^{\dagger}$, D. Arnaudon $^{\ddagger}$ and A. Chakrabarti$^{\dagger,}$ [^2]' title: 'NON–MINIMAL $q$–DEFORMATIONS AND ORTHOGONAL SYMMETRIES: ${\cal U}_{\displaystyle{q}}$(SO(5)) EXAMPLE' --- Symmetry is one theme of this workshop. Suppose one $q$-deforms some classical symmetry (unitary, orthogonal,$\cdots$) intending to explore the possibilities of applications of the symmetry thus generalized. Given such a goal, one should go further than the formally $q$-deformed Hopf algebra. One should construct explicitly the representations: irreducible ones to start with, but also, so far as possible, non-decomposable ones for $q$ a root of unity. By explicit construction I mean a complete set of suitably parametrized basis states spanning the space of the representation in question and the matrix elements of the generators acting on these state vectors. The invariant parameters and the variable indices labelling the states will each (some very directly while others less so) have their specific significance in the description of the phenomenon studied. The matrix elements will measure the response of the states to the constraints of the symmetry (the action of the generators). They will also yield the values of the crucial invariants. Unless all these elements are obtained, a central problem remains unsolved. One is not fully equipped to explore possible applications. As one proceeds with this program, one encounters, among others, the following fact. The $q$-deformed unitary algebras are relatively docile while the corresponding orthogonal ones are surprisingly refractory. To give this statement more precise content, let me introduce at this point some definitions and terminology. Let us start with a classical quantity $x$, typically a factor in some classical matrix element. 0.25cm [*Minimal $q$-deformation:*]{} $$\begin{array}{lll} q=1 & &\;\;\;\;\;\;\;\;\;\;\;\;q\not = 1 \\ x & \rightarrow &\;\;\;\;\;\;\;\;\;\;\;\;[x]_{p}\equiv {q^{px}-q^{-px} \over q^{p}-q^{-p}} \\ \end{array}$$ One may further refine this by defining the deformation to be strictly minimal only for $p=1$ (when the subscript $p=1$ will be omitted) and to be pseudo-minimal for $p\not = 1$. 0.25cm [*Non-minimal $q$-deformation:*]{} An unlimited number of more complicated deformations (retaining the symmetry $q\rightleftharpoons q^{-1}$ and the same classical limit $x$) is evidently possible. 0.25cm [*Example. 1.*]{} $$\begin{array}{l} x \rightarrow [x_{1}]_{p_{1}}-[x_{2}]_{p_{2}},\;\;\;\;\;\;\;\;\; (x_{1}-x_{2}=x) \end{array}$$ 0.25cm [*Example. 2.*]{} $$\begin{array}{l} x \rightarrow [x]_{p}{[y]_{p_{1}} \over [y]_{p_{2}}} \end{array}$$ Apart from simple $q$-factors ($x\rightarrow [x]_{p}\;q^{f(x)}$, $f(x)$ being some non-singular function of $x$ and possibly other parameters) we have, as yet, encountered only the types $(2)$ and $(3)$ (but possibly with more than one $y$-factor in $(3)$). Further study may permit the classification of all the relevant ones. Let us now go back to our initial statement. The well-known Gelfand–Zetlin matrix elements $[1]$ for irreps. of $SU(N)$ are square roots of ratios of products of integer factors.
[*A minimal $q$-deformation of each factor*]{} $$\begin{array}{l} \left ( {x_{1}\;x_{2}\;x_{3}\cdots \over y_{1}\;y_{2}\;y_{3}\cdots } \right )^{1/2} \rightarrow \left ( {[x_{1}]\;[x_{2}]\;[x_{3}]\cdots \over [y_{1}]\;[y_{2}]\;[y_{3}]\cdots } \right )^{1/2} \end{array}$$ gives the corresponding element for ${\cal U}_{q}(SU(N))$ for generic $q$. For $q$ a root of unity periodic representations are obtained by introducing suitable [*fractional parts*]{} for each $x$ and $y$ $[2]$. Relatively simple modifications yield other classes of representations $[3,\;4]$. One can of course introduce a unitary transformation after deforming as in $(4)$ to obtain complicated matrix elements. But all representations, at least for real positive $q$, can be obtained as in $(4)$. Transformations can only introduce spurious non-minimalities masking the basic simplicity. In $q$-deforming $\;$orthogonal algebras the $\;$well-known $\;$prescriptions for ${\cal U}_{q}(SU(2))$ suffice for $SO(3)(\approx SU(2))$ and $SO(4)\approx (SU(2) \otimes SU(2))$. But even for $SO(4)$ problems (and non-minimalities) arise if one tries to $q$-deform directly the canonical Gelfand–Zetlin matrix elements $[5]$. The first intrinsically non-trivial case is ${\cal U}_{q}(SO(5))$ which we discuss here showing exactly where and how non-minimalities enter and analysing their implications. ${\cal U}_{q}(SO(5))$: Corresponding to the two unequal roots one has two $q$-deformed Chevalley triplets. The standard Drinfeld-Jimbo Hopf algebra ( omitting the coproducts, counits and antipodes ) becomes in our conventions $[6,\;7]$, $$\begin{array}{ll} q^{\pm h_{1}} e_{1} = q^{\pm 1} e_{1} q^{\pm h_{1}}, & q^{\pm h_{1}} f_{1} = q^{\mp 1} f_{1} q^{\pm h_{1}}, \\ q^{\pm 2 h_{2}} e_{1} = q^{\mp 1} e_{1} q^{\pm 2h_{2}}, & q^{\pm 2 h_{2}} f_{1} = q^{\pm 1} f_{1} q^{\pm 2h_{2}}, \\ q^{\pm h_{1}} e_{2} = q^{\mp 1} e_{2} q^{\pm h_{1}}, & q^{\pm h_{1}} f_{2} = q^{\pm 1} f_{2} q^{\pm h_{1}}, \\ q^{\pm h_{2}} e_{2} = q^{\pm 1} e_{2} q^{\pm h_{2}}, & q^{\pm h_{2}} f_{2} = q^{\mp 1} f_{2} q^{\pm h_{2}}, \\ [e_{1} , f_{2}] = 0, & [e_{2} , f_{1}] = 0, \\ [e_{1} , f_{1} ]=[2 h_{1}] & \end{array}$$ and $$\begin{array}{ll} [ e_{2},f_{2} ]=\;[2 h_{2}]_{2}, & \\ e_{2} e_{3}^{(\pm)} = q^{\mp 2} e_{3}^{(\pm)} e_{2}, & f_{3}^{(\pm)} f_{2} = q^{\mp 2} f_{2} f_{3}^{(\pm)} \\ [e_{1} , e_{4}]=0, & [f_{1} ,f_{4}]=0 \end{array}$$ where $$\begin{array}{l} e_{3}^{(\pm)} = q^{\pm 1} e_{1} e_{2}- q^{\mp 1} e_{2} e_{1},\;\;\;\;\;\;\; f_{3}^{(\pm)} = q^{\pm 1} f_{2} f_{1}- q^{\mp 1} f_{1} f_{2}, \\ e_{4}= q^{-1} e_{1} e_{3}^{(+)} - q\;e_{3}^{(+)} e_{1}= q\;e_{1} e_{3}^{(-)} - q^{-1} e_{3}^{(-)} e_{1}, \\ f_{4}= q^{-1} f_{3}^{(+)} f_{1} - q\;f_{1} f_{3}^{(+)}= q\;f_{3}^{(-)} f_{1} - q^{-1} f_{1} f_{3}^{(-)}. \end{array}$$ We define $(M, \;K,\;M_{2},\;M_{4})$ through $$\begin{array}{ll} q^{\pm M}=q^{\pm h_{1}}, & q^{\pm (K-M)}=q^{\pm h_{2}} \\ q^{\pm M_{2}}=q^{\pm h_{2}}, & q^{\pm M_{4}}=q^{\pm (h_{1}+h_{2})} \\ \end{array}$$ The fundamental Casimir (classically quadratic in the Cartan-Weyl generators) can now be written $[6,\;7]$ for arbitrary $q$ as $$\begin{array}{ll} A &= {1\over [2]} \Bigl\lbrace \bigl( f_{1}e_{1}+ [M][M+1]\bigl) {[2 K+3]_{2}\over [2 K+3]} + [K][K+3]\Bigr\rbrace \cr \\ & +\bigl(f_{2}e_{2}+{1\over [2]^{2}} f_{4}e_{4}\bigl)+ {1 \over [2]^{2}}\bigl(f_{3}^{(+)}e_{3}^{(+)} q^{2M+1} + f_{3}^{(-)}e_{3}^{(-)}q^{-2M-1}\bigl). \end{array}$$ For brevity we consider in this talk only generic $q$, real positive. The case $q$ a root of unity has been discussed in $[6]$. 
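For orientation, two elementary properties of these brackets, immediate from the definition of $[x]_{p}$ given earlier, may help in reading the formulas above: $$\begin{array}{l} [x]_{2}={[2x] \over [2]},\;\;\;\;\;\;\;\;\;\;\;\; [x]_{p}\;\rightarrow\; x \;\;\hbox{as}\;\; q\rightarrow 1. \end{array}$$ In particular the factor $[2K+3]_{2}/[2K+3]$ appearing in the Casimir $A$ tends to $1$ in the classical limit, so that $A$ reduces to its classical (quadratic) counterpart, consistently with the expansion of its eigenvalue discussed towards the end of this talk.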
For generic $q$ the irreps. are labelled by two (half) integers $$\begin{array}{l} n_{1} \geq n_{2} \end{array}$$ One has the following general result $[7]$. The state annihilated by $e_{1}$, $e_{2}$ corresponds to eigenvalue $n_{2}$ of $M$ and $n_{1}$ of $K$. Hence on this space $$\begin{array}{l} A = {1\over [2]} \Bigl\lbrace [n_{1}][n_{1}+3]+ [n_{2}][n_{2}+1]{[2n_{1}+3]_{2}\over [2n_{1}+3]}\Bigl\rbrace\; \hbox{{\bf 1}} \end{array}$$ where is the unit matrix corresponding to the dimension $$\begin{array}{l} {1\over 6}(2n_{2}+1)(2n_{1}+3)(n_{1}+n_{2}+2)(n_{1}-n_{2}+1) \end{array}$$ For $n_{2}=0,\;{1\over 2}$ and $n_{1}$ $(11)$ reduces to the respective results in $[6]$. The factor $[2\;n_{1}+3]_{2} /[2\;n_{1}+3]$ in $(11)$ is a particularly interesting example of the non-minimal case $(3)$. Its implications will be analysed at the end. After all the $SU(N)$ and $SO(3)$, $SO(4)$ one encounters unequal roots for the first time for $SO(5)$. In constructing matrix elements one can associate the well-known $SU(2)$ structure either with the Chevalley triplet ($e_{1}$, $f_{1}$, $h_{1}$) or with the triplet ($e_{2}$, $f_{2}$, $h_{2}$). The consequences are very different and even more so concerning $q$-deformations. Non-minimality and non-simple lacing (unequal roots) will appear, at least in this example, associated together. The well-known Gelfand-Zetlin basis and matrix elements $[1]$ are quite unsuitable (for the orthogonal case) as starting point for $q$-deformation. The situation is entirely different from that of the unitary case. The reasons were discussed in $[6]$. After this remark I now introduce the two bases starting with the Chevalley triplets 1 and 2 respectively. 0.5cm [**Basis I.**]{} Standard ${\cal U}_{\displaystyle{q}}(SU(2))$ structure for ($e_{1}$, $f_{1}$, $q^{\pm h_{1}}$), invariants ($n_{1}$, $n_{2}$), variable indices ($j$, $m$, $k$, $l$). The domain of the indices are $[7]$: 0.25cm (i) For ($n_{1},\;n_{2}$) integers $$\begin{array}{l} j = 0,\;1,\cdots,\;n_{1}-1,\;n_{1},\;\;\;\;\;\;\;\; m = -j,\;-j+1, \cdots,\;j-1,\;j \\ k = -l,\;-l+2, \cdots,\;l-2,\;l,\;\;\;\;\;\;\;\; l = 0,\;1,\;2\;\cdots \\ j+l = n_{1}-n_{2},\;n_{1}-n_{2}+1, \cdots,\;n_{1}+n_{2} \\ j-l-{1\over2}\bigl(1-(-1)^{n_{1}+n_{2}-j-l}\bigl)=-n_{1}+n_{2}, \;-n_{1}+n_{2}+2, \cdots,\;n_{1}-n_{2}. \end{array}$$ 0.25cm (ii) For ($n_{1},\;n_{2}$) half-integers $$\begin{array}{l} j = {1\over 2},\;{3\over 2},\cdots,\;n_{1}-1,\;n_{1},\;\;\;\;\;\;\;\; m = -j,\;-j+1, \cdots,\;j-1,\;j \\ k = -l,\;-l+1, \cdots,\;l-1,\;l,\;\;\;\;\;\;\;\; l = {1\over 2},\;{3\over 2},\cdots \\ j+l = n_{1}-n_{2}+1,\;n_{1}-n_{2}+3, \cdots,\;n_{1}+n_{2} \\ j-l=-n_{1}+n_{2},\;-n_{1}+n_{2}+2, \cdots,\;n_{1}-n_{2}. \end{array}$$ The dimension is given by $(12)$ for both cases. 
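As a quick sanity check of the dimension formula $(12)$, the lowest admissible labels give $$\begin{array}{l} (n_{1},n_{2})=(1,0)\;\rightarrow\;5,\;\;\;\;\;\;\;\;\;(n_{1},n_{2})=({1\over 2},{1\over 2})\;\rightarrow\;4,\;\;\;\;\;\;\;\;\;(n_{1},n_{2})=(1,1)\;\rightarrow\;10, \end{array}$$ i.e. the dimensions of the vector, spinor and adjoint representations of $SO(5)$ respectively, as expected.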
The matrix elements are (suppressing $n_{1}$, $n_{2}$ in the state labels) $$\begin{array}{l} q^{\pm M } \vert j\;m\;k\;l\rangle = q^{\pm m } \vert j\;m\;k\;l\rangle \\ q^{\pm K } \vert j\;m\;k\;l\rangle = q^{\pm k} \vert j\;m\;k\;l\rangle \\ e_{1} \vert j\;m\;k\;l\rangle = ([j-m]\;[j+m+1])^{1/2}\vert j\;m+1\;k\;l \rangle \\ e_{2} \vert j\;m\;k\;l\rangle= ([j-m+1][j-m+2])^{1/2}\sum_{ l'}\;a(j,k,l,l') \vert j+1\;m-1\;k+1\;l'\rangle \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+ (\;[j+m][j+m-1])^{1/2}\sum_{ l'}\;b(j,k,l,l') \vert j-1\;m-1\;k+1\;l'\rangle \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+(\;[j+m][j-m+1])^{1/2}\sum_{ l'}\;c(j,k,l,l') \vert j\;m-1\;k+1\;l'\rangle \\ \end{array}$$ We consider only real solutions of the matrix elements when for any two states $|x\rangle$, $|y\rangle$ $$\begin{array}{l} \langle y | f_{i} |x\rangle = \langle x | e_{i} |y\rangle,\;\;\;\;\;\;\; (i=1,\;2) \end{array}$$ As yet solutions have been obtained $[6]$ for arbitrary (half) integer $n_{1}$ only for the extreme values of $n_{2}$, $$\begin{array}{l} n_{2}=0\;\;\;\hbox{or}\;\;\;{1\over 2} \end{array}$$ and $$\begin{array}{l} n_{2}=n_{1} \end{array}$$ For these cases $l$-dependence is trivial. One labels the states as $|j\;m\;k\rangle$. Even classical $(q=1)$ solutions are not available for the general case. Referring to $[6]$ for complete solutions of the cases $(17)$ I now present the solution of $(18)$ for comparing it to the corresponding one in Basis II to follow. For $n_{2}=n_{1}=n$ (integer or half-integer), suppressing trivial $l$-dependence $$\begin{array}{l} a(j,k) =b(j+1,-k-1)=(q+q^{-1})^{-1} \Biggl({[n-j]_{2}\;[n+j+2]_{2}\;[j+k+1]\; [j+k+2]\over [2j+3]\;[2j+1]\;[j+1]_{2}^{2}}\Biggl)^{1/2} \\ c(j,k) =(q+q^{-1})^{-1} [n+1]_{2} {([j-k]\;[j+k+1])^{1/2} \over [j+1]_{2} \;[j]_{2}} \end{array}$$ The dimension is $$\begin{array}{l} {1\over 3}(n+1)(2n+1)(2n+3) \end{array}$$ Comparing with the limit $q=1$, it is evident that each factor undergoes a (pseudo) minimal deformation of type $(1)$. The situation will change in the basis to follow. 0.5cm [**Basis II.**]{} Standard ${\cal U}_{q^{2}}(SU(2))$ structure for ($e_{2}$, $f_{2}$, $q^{\pm h_{2}}$), invariants ($n_{1}$, $n_{2}$), variable indices ($j_{2}$, $m_{2}$, $j_{4}$, $m_{4}$). The domain of the indices are $[7]$ (for integer and half-integer $n_{1}$, $n_{2}$): $$\begin{array}{l} j_{2} = 0,\;{1\over 2},\;1,\cdots,\;{n_{1}+n_{2}\over 2},\;\;\;\;\;\;\;\; m_{2} = -j_{2},\;-j_{2}+1, \cdots,\;j_{2}-1,\;j_{2} \\ j_{4} = 0,\;{1\over 2},\;1,\cdots,\;{n_{1}+n_{2}\over 2},\;\;\;\;\;\;\;\; m_{4} = -j_{4},\;-j_{4}+1, \cdots,\;j_{4}-1,\;j_{4} \\ j_{2}+j_{4} = n_{2},\;n_{2}+1,\cdots,\;n_{1} \\ j_{2}-j_{4} = -n_{2},\;-n_{2}+1,\cdots,\;n_{2}. \end{array}$$ The dimension is given by $(12)$. 
The matrix elements are $$\begin{array}{l} q^{\pm M_{2}} \vert j_{2}\;m_{2}\;j_{4}\;m_{4}\rangle = q^{\pm m_{2}} \vert j_{2}\;m_{2}\;j_{4}\;m_{4}\rangle \\ q^{\pm M_{4}} \vert j_{2}\;m_{2}\;j_{4}\;m_{4}\rangle = q^{\pm m_{4}} \vert j_{2}\;m_{2}\;j_{4}\;m_{4}\rangle \\ e_{2} \vert j_{2}\;m_{2}\;j_{4}\;m_{4} \rangle = ([j_{2} - m_{2}]_{2} [j_{2}+m_{2}+1]_{2})^{1/2} \vert j_{2}\;m_{2}+1\;j_{4}\;m_{4} \rangle \\ e_{1} \vert j_{2}\;m_{2}\;j_{4}\;m_{4}\rangle= \sum_{\epsilon,\; \epsilon '}\;\; ([j_{2}-\epsilon\;m_{2}+{1+\epsilon \over 2}]_{2})^{1/2}\; \\ \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\times\; c_{(\epsilon,\epsilon ')}(j_{2},j_{4},m_{4}) \;\vert j_{2}+{\epsilon \over 2}\;\;m_{2}-{1\over 2}\;\;j_{4}+ {\epsilon ' \over 2}\;\;m_{4}+{1\over 2}\rangle \end{array}$$ with, as in $(16)$, $$\begin{array}{l} \langle y | f_{i} |x\rangle = \langle x | e_{i} |y\rangle,\;\;\;\;\;\;\; (i=1,\;2) \end{array}$$ Now a classical solution is available $[8]$. In our notation this is $$\begin{array}{l} c_{(\epsilon,\epsilon ')}(j_{2},j_{4},m_{4})=(j_{4}+\epsilon '\;m_{4}+ {1+\epsilon ' \over 2})^{1/2} c_{(\epsilon,\epsilon ')}(j_{2},j_{4})\;\;\; (\epsilon,\epsilon '=\pm 1) \\ c_{(\epsilon,\epsilon ')}(j_{2},j_{4})=\epsilon\; \epsilon '\; c_{(-\epsilon,-\epsilon ')}(j_{2}+{\epsilon \over 2},j_{4}+ {\epsilon ' \over 2}) \end{array}$$ where $$\begin{array}{l} c_{(++)}(j_{2},\;j_{4})=\biggl({(n_{1}+j_{2}+j_{4}+3)(n_{1}-j_{2}-j_{4}) (j_{2}+j_{4}+n_{2}+2)(j_{2}+j_{4}-n_{2}+1) \over (2j_{2}+1)\;(2j_{2}+2)\; (2j_{4}+1)\;(2j_{4}+2)}\biggl )^{1/2} \\ c_{(+-)}(j_{2},\;j_{4})=\biggl({(n_{1}+j_{2}-j_{4}+2)(n_{1}-j_{2}+j_{4}+1) (j_{2}-j_{4}+n_{2}+1)(j_{4}-j_{2}+n_{2}) \over (2j_{2}+1)\;(2j_{2}+2)\; (2j_{4})\;(2j_{4}+1)}\biggl )^{1/2}. \end{array}$$ (see the comments in $[7]$ concerning the relation to $[8]$). But now $q$-deformation is the problem. So far solutions have been obtained for $$\begin{array}{l} n_{2}=0 \end{array}$$ and $$\begin{array}{l} n_{2}=n_{1}=n \end{array}$$ Referring to $[7]$ for $(26)$ I now reproduce only the solution for $(27)$. For $n_{1}=n_{2}=n$ $$\begin{aligned} j_{2}+j_{4}=n,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;&j_{2}=0,\;{1\over 2},\cdots,\;n \\ c_{(\epsilon \epsilon)}(j_{2},\;j_{4},\;m_{4})=0. & \\\end{aligned}$$ and $$\begin{aligned} c_{(\epsilon , -\epsilon)}(j_{2},\;j_{4},\;m_{4})=([n+1]_{2}- [j_{2}+\epsilon\;m_{4}+{1 \over 2}(1+\epsilon)]_{2})^{1/2}\; c_{(\epsilon , -\epsilon)}(j_{2})\end{aligned}$$ with $$\begin{array}{l} c_{(+-)}(j_{2})=-c_{(-+)}(j_{2}+{1\over 2})=\biggl({[2j_{2}+1]\;[2j_{2}+2]\over [2j_{2}+1]_{2}\;[2j_{2}+2]_{2}}\biggl)^{1/2}. \end{array}$$ The $m_{4}$-dependence in $(28)$ is of type $(2)$ with $$x_{1}=n+1,\;\;\;\;x_{2}=j_{2}+\epsilon m_{4}+{1 \over 2}(1+\epsilon)$$ so that $$\begin{array}{l} x=x_{1}-x_{2}=j_{4}+\epsilon ' m_{4} +{1\over 2}(1+\epsilon') \end{array}$$ consistensly with $(24)$. The $c(j_{2})$’s in $(28)$ are of type $(3)$ with $x=1$ but double $y$-factors. [*If one tries to solve using only (pseudo) minimal deformations of type $(1)$ one runs into contradictions*]{}. Thus non-minimality is essential for this basis. This basis, in turn, seems to be essential for providing access to certain interesting sectors. Let me just mention two such points. \(i) Suitably adapting familiar continuation techniques Basis I leads to ${\cal U}_{q}(SO(3,2))$ while Basis II is needed to arrive at ${\cal U}_{q}(SO(4,1))$. 
\(ii) Under suitable contraction procedures quite different $q$-deformed inhomogeneous algebras are obtained from the two bases respectively (see the comments in $[6]$ and $[7]$). It is not possible to discuss these aspects here. But they suffice to indicate the potential interest of a general solution of $(22)$ for arbitrary $q$. (In fact once solutions are found for generic $q$ our method of fractional parts explained in $[3]$ and $[4]$ and already used for Basis I in $[6]$ will readily yield solutions for $q$ a root of unity.) The general solution will also permit a better understanding of the role of non-minimality. This role is not merely formal. If there is indeed some physical application, the physical content of different types of deformations will be different. As $q$ moves away from unity they will respond differently. Thus, to take an example, for $q=e^{\delta}$ and $x=x_{1}-x_{2}$ $$\begin{array}{l} [x_{1}]_{p_{1}}-[x_{2}]_{p_{2}}=x+{1\over 6}\delta^{2}(p_{1}^{2} x_{1}(x_{1}^{2}-1)-p_{2}^{2}x_{2}(x_{2}^{2}-1))+\cdots \end{array}$$ In this context the non-minimality of type $(3)$ noted in $(11)$ also has a striking consequence. For $q=e^{\delta}$, $$\begin{array}{l} {1\over [2]} \Bigl\lbrace [n_{1}][n_{1}+3]+ [n_{2}][n_{2}+1]{[2n_{1}+3]_{2}\over [2n_{1}+3]}\Bigl\rbrace=A_{2}+ \delta^{2}\;A_{4}+\cdots \end{array}$$ where $$\begin{array}{l} A_{2}={1\over 2}(n_{1}(n_{1}+3)+n_{2}(n_{2}+1)) \end{array}$$ $$\begin{array}{l} A_{4}=4\;n_{2}(n_{2}+1)(n_{1}+1)(n_{1}+2)+\cdots \end{array}$$ The other terms of $A_{4}$ are very easily obtained. Let us, however, concentrate on the first term, a direct consequence of the factor $[2n_{1}+3]_{2}/ [2n_{1}+3]$ in $(31)$. $A_{2}$ is just the well-known eigenvalue of the first classical Casimir (quadratic in the Cartan-Weyl generators) for the irrep. ($n_{1}$, $n_{2}$). The first term of $A_{4}$ is the classical eigenvalue of the second (quadratic) Casimir operator. [*This is an illustration of the general result announced in my first talk $[9]$. The $q$-deformed quadratic Casimir alone completely characterizes the irreps. ($n_{1}$, $n_{2}$). We note morever that this is here achieved through a typical non-minimality.*]{} Consider now the consequence of the same non-minimal structure in the context of contraction. Certain aspects of contraction of ${\cal U}_{q}(SO(5))$ are discussed in $[6]$. Here let us just note that the eigenvalue of the contracted Casimir ( for $q > 1$ for example ) is obtained by dividing the l.h.s. of $(31)$ by the leading term of $[n_{1}][n_{1}+3]$ as $n_{1}\rightarrow \infty$ multiplied by a constant $\lambda^{-2}$, i.e. by $$\begin{array}{l} {1 \over \lambda^{2}[2]}{q^{2n_{1}+3} \over (q-q^{-1})^{2}} \end{array}$$ and taking the limit. This gives an eigenvalue $$\begin{array}{l} \lambda^{2}\lbrace 1+ {(q-q^{-1})^{2} \over (q+q^{-1})}[n_{2}][n_{2}+1] \rbrace \end{array}$$ A general solution for ${\cal U}_{q}(SO(5))$ on a suitable basis can lead through contraction to a successful construction of representations of ${\cal U}_{q}(E(4))$ (the $q$-deformed $4$-dimensional Euclidean algebra) for arbitrary $q$. Then one has to see if a suitable analytic continuation to $q$-deformed Poincaré algebra is possible through this approach. This is one of our main goals. This remains to be done. [*But (35) shows that it will include the following remarkable feature. 
The $q$-deformed “mass-like” operator (the $q$-deformation of the sum of squares of the translations) will have eigenvalues depending on the “spin-like” parameter $n_{2}$ as in $(35)$*]{}. This is reminiscent of a famous feature of $SU(6)$ type models. This seeping of internal discrete quantum numbers into the “mass-like” spectrum seems to be a typical feature of $q$-deformations $[10]$. But for the orthogonal symmetry (at least for the present example) this turns out to be an intriguing consequence of non-minimality. One need not inject ans${\ddot{a}}$tze to construct a mass spectrum depending on internal quantum numbers. One just solves the mathematical problem of constructing representations explicitly and the result is there. References {#references .unnumbered} ========== [99]{} I.M. Gelfand, R.A. Milnos and Z.Ya. Shapiro, Representations of Rotation and Lorentz Groups; Pergman, New-york, 1963. (Supplements). D. Arnaudon and A. Chakrabarti. Comm. Math. Phys. 139, 461 (1991) B. Abdesselam, D. Arnaudon and A. Chakrabarti, Representations of ${\cal U}_{q}(SU(N))$ at roots of unity, q-alg/9504006 ( to be published in J. Phys. A: Math. Gen. ) B. Abdesselam, Atypical Representations of ${\cal U}_{q}(SU(N))$ at roots of unity ( in preparation). A. Chakrabarti, Jour. Math. Phys. 34, 1964 (1993). A. Chakrabarti, J.Math.Phys. 35, 4247 (1994). B. Abdesselam, D. Arnaudon and A. Chakrabarti, J. Phys. A: Math. Gen. 28 (1995) 3701-3708. J.W.B. Hughes, J.Math.Phys. 24, 1015 (1983). A. Chakrabarti, talk presented in the Satellite Meeting (Nakai 1995). A. Chakrabarti, (i) Jour. Math. Phys. 32, 1227 (1991). \(ii) ${\cal U}_{q}(IU(n))$: $q$-deformations of inhomogeneous unitary algebras, Proceedings of Wigner Symp. II, Goslar 1991. [^1]: *URA 14-36 du CNRS, associée à l’E.N.S. de Lyon, et au L.A.P.P. d’Annecy-le-Vieux.* [^2]: *Talk presented at the Nankai workshop, Tianjin, 1995 by A. Chakrabarti.*
--- abstract: 'We investigate the geometry of approximates in multiplicative Diophantine approximation. Our main tool is a new averaging result for Siegel transforms on the space of unimodular lattices in ${{\mathbb}R}^n$ which is of independent interest.' address: - 'J.S.A.: Department of Mathematics, University of Illinois Urbana-Champaign, 1409 W. Green Street, Urbana, IL 61801, USA' - 'A.G.: School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400005 India' - 'J.T.: School of Mathematics, University of Bristol, University Walk, Bristol, BS8 1TW UK' author: - 'Jayadev S. Athreya' - Anish Ghosh - Jimmy Tseng title: Spherical averages of Siegel transforms for higher rank diagonal actions and applications --- [^1] [^2] [^3] Introduction ============ The main result in the present paper is a new averaging theorem for Siegel transforms on the homogeneous space ${\operatorname{SL}}_{n}({{\mathbb}R})/{\operatorname{SL}}_{n}({{\mathbb}Z})$. Such results have found several applications in number theory and indeed our motivation is to investigate the distribution of approximates in certain foundational results in Diophantine approximation. In [@AGT1], we studied the phenomenon of *spiraling* of approximates in Dirichlet’s theorem and obtained a number of distribution results for approximates. In the present paper, we continue our investigations in this subject and present a new multi parameter averaging result for Siegel transforms and as a consequence, obtain new results on the geometry of approximates in *multiplicative* and *weighted* Diophantine approximation. We briefly recall the setup in [@AGT1] and then state our main results. The bulk of the paper is concerned with the proof of Theorem \[theorem:siegel:equidistUpper\], our result on averages of Siegel transforms. The general principle that equidistribution of spherical averages implies distribution results for approximates applies in a wide variety of situations. In the final section, we briefly survey some such situations. Dirichlet’s theorem and spiraling --------------------------------- Let $\alpha_{ij}, 1 \leq i \leq m, 1\leq j \leq n$ be real numbers and $Q > 1$. Then Dirichlet’s theorem in Diophantine approximation states that there exist integers $q_1,\dots, q_m, p_1,\dots, p_n$ such that $$\label{d0} 1 \leq \max\{|q_1|,\dots,|q_m|\} \leq Q$$ and $$\label{d1} \max_{1 \leq i \leq n} |\alpha_{i1}q_1 + \dots + \alpha_{im}q_m - p_i| \leq Q^{-m/n}.$$ In our earlier work [@AGT1], we studied the problem of *spiraling* of the approximates appearing in Dirichlet’s theorem and showed as a consequence, that on average, the directions of approximates spiral in a uniformly distributed fashion on the unit sphere of one lower dimension. In fact, the problem can be recast as a special case of a more general equidistribution result in the space of lattices. As far as we are aware, this is the only work addressing the natural question of how the approximates appearing in Dirichlet’s theorem are distributed. In [@AGT1], we considered vectors rather than linear forms although the proof goes through for linear forms with very minor modifications. 
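To fix ideas before passing to the lattice reformulation below, note that in the simplest case $m = n = 1$ Dirichlet's theorem above reads: for every real $\alpha$ and every $Q > 1$ there are integers $p, q$ with $1 \leq |q| \leq Q$ and $|\alpha q - p| \leq Q^{-1}$. A standard illustration: $$\alpha = \sqrt{2}, \quad Q = 5: \qquad q = 5, \; p = 7, \qquad |5\sqrt{2} - 7| \approx 0.071 \leq 1/5.$$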
Given ${\bf x} \in {{\mathbb}R}^{d}$, we form the associated unimodular lattice in ${{\mathbb}R}^{d+1}$ $$\nonumber \Lambda_{\bf x} := \begin{pmatrix} {\operatorname{Id}}_{d} & {\bf x}\\0 & 1 \end{pmatrix} {\mathbb}Z^{d+1} = \left\{\begin{pmatrix} q{\bf x} - {\bf p}\\ q \end{pmatrix} ~:~{\bf p} \in {\mathbb}Z^{d}, q \in {\mathbb}Z\right\}.$$ Then we can view the approximates $({\bf p}, q)$ of ${\bf x}$ appearing in Dirichlet’s Theorem as points of the lattice $\Lambda_{\bf x}$ in the region $$\label{cone} R := \left\{{\bf v} = \begin{pmatrix}{\bf v}_1\\ v_2 \end{pmatrix} \in {\mathbb}R^{d} \times {\mathbb}R~:~\|{\bf v}_1\||v_2|^{1/d} \leq 1\right\}. $$ The set $R$ is a thinning region around the $v_2$-axis, and the following sets are used to study the distribution of lattice approximates in $R$. Let $$\label{defR1} R_{\epsilon, T} := \left\{ {\bf v} \in R~:~ \epsilon T \le v_2 \le T \right\}$$ and, for a subset $A$ of ${\mathbb}S^{d-1}$ with zero measure boundary, $$\label{defR2} R_{A, \epsilon, T} := \left\{ {\bf v} \in R_{\epsilon, T}~:~ \frac{{\bf v}_1}{\|{\bf v}_1\|} \in A \right\}.$$ For a unimodular lattice ${\Lambda}$, define $$N(\Lambda, \epsilon, T) = \#\{\Lambda \cap R_{\epsilon, T}\}$$ and $$N(\Lambda, A, \epsilon, T) = \#\{\Lambda \cap R_{A, \epsilon, T}\}.$$ Let $dk$ denote Haar measure on $K := K_{d+1}:={\operatorname{SO}}_{d+1}({\mathbb}R)$, and let $X_{d+1} := {\operatorname{SL}}_{d+1}({\mathbb}R)/{\operatorname{SL}}_{d+1}({\mathbb}Z)$. In [@AGT1], we proved \[AGT1\] For every $\Lambda \in X_{d+1}$, $A \subset {\mathbb}S^{d-1}$ as above, and for every $\epsilon > 0$, $$\label{main-1} \lim_{T \rightarrow \infty} \frac{\int_{K} N(k^{-1}\Lambda, A, \epsilon, T)~{\mathrm{d}{k}}}{\int_{K} N(k^{-1}\Lambda, \epsilon, T)~{\mathrm{d}{k}}}= {\operatorname{vol}}(A).$$ The main tool in proving Theorem \[AGT1\] is an equidistribution result for spherical averages. Given a lattice $\Lambda$ in ${{\mathbb}R}^{d+1}$ and a bounded Riemann-integrable function $f$ with compact support on ${\mathbb}R^{d+1}$, denote by $\widehat{f}$ its *Siegel transform*: $$\widehat{f}(\Lambda) := \sum_{\bf v \in \Lambda \backslash \{\boldsymbol{0}\}} f(\bf v).$$ Then \[theorem:siegel:equidist\] Let $f$ be a bounded Riemann-integrable function of compact support on ${{\mathbb}R}^{d+1}$. Then for any ${\Lambda}\in X_{d+1}$, $$\lim_{t \to \infty} \int_{K_{d+1}} \widehat{f}(g_t k {\Lambda}) ~{\mathrm{d}{k}} = \int_{X_{d+1}}\widehat{f}~{\mathrm{d}{\mu}}.$$ Multiplicative and weighted variants ------------------------------------ Dirichlet’s theorem lends itself to several interesting generalisations. Here is a *multiplicative* analogue which can be proved using either Dirichlet’s original approach or Minkowski’s geometry of numbers. With notation as above, there exist integers $q_1,\dots, q_m, p_1,\dots, p_n$ such that $$\label{md0} \left(\prod_{1 \leq j \leq m}\max\{1,|q_j|\}\right)^{1/m} \leq Q$$ and $$\label{md1} \left(\prod_{1 \leq i \leq n} |\alpha_{i1}q_1 + \dots + \alpha_{im}q_m - p_i|\right)^{1/n} \leq Q^{-m/n}.$$ As a corollary, it follows that there are infinitely many $q_1,\dots, q_m$ such that $$\label{md2} \left(\prod_{1 \leq i \leq n} |\alpha_{i1}q_1 + \dots + \alpha_{im}q_m - p_i|\right) \leq \left(\prod_{1 \leq j \leq m}\max\{1,|q_j|\}\right)^{-1}$$ for some $p_1,\dots, p_n$.\ The study of Diophantine inequalities using the multiplicative “norm" as above instead of the supremum norm is referred to as *multiplicative Diophantine approximation*. 
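As a purely illustrative aside (ours, not the paper’s), the counting functions $N(\Lambda_{\bf x}, \epsilon, T)$ introduced above can be computed directly in the simplest case $d=1$, where membership of a lattice point $(qx-p,\,q)$ in $R_{\epsilon,T}$ amounts to $|qx-p|\,q \le 1$ with $\epsilon T \le q \le T$:

```python
import math

def count_approximates(x, eps, T):
    """N(Lambda_x, eps, T) for d = 1: count pairs (p, q) with
    eps*T <= q <= T and |q*x - p| * q <= 1.  For q >= 2 at most one
    integer p can satisfy this, so testing the nearest integer suffices."""
    count = 0
    for q in range(math.ceil(eps * T), math.floor(T) + 1):
        if abs(q * x - round(q * x)) * q <= 1.0:
            count += 1
    return count

x = (1 + 5 ** 0.5) / 2            # golden ratio; any irrational works here
for T in (10 ** 2, 10 ** 3, 10 ** 4):
    print(T, count_approximates(x, 0.1, T))
```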
Multiplicative Diophantine approximation is considered more difficult and is much less understood than its standard counterpart. For instance, arguably the most emblematic open problem in metric Diophantine approximation, namely the Littlewood conjecture, is a problem in this genre. We refer the reader to the nice survey [@Bugeaud] by Bugeaud for an overview of the theory. There have been several important advances recently, many arising from applications of homogeneous dynamics to number theory. We mention the work of Kleinbock and Margulis [@KM] settling the Baker-Sprindzhuk conjecture as well as the work of Einsiedler-Katok-Lindenstrauss making dramatic progress towards Littlewood’s conjecture.\
Another variation of Diophantine approximation is developed as follows. Let $\alpha_{ij}$, $1 \leq i \leq n$, $1\leq j \leq m$, be real numbers and let ${\bf r} = (r_1, \dots, r_n) \in {{\mathbb}R}^n$ and ${\bf s} = (s_1,\dots, s_m) \in {{\mathbb}R}^m$ be probability vectors. Recall that a *probability vector* has nonnegative real components, the sum of which is equal to $1$. Then a weighted version of Dirichlet’s theorem states that there exist infinitely many integers $q_1,\dots, q_m$ such that $$\label{wd1} \max_{1 \leq i \leq n} |\alpha_{i1}q_1 + \dots + \alpha_{im}q_m - p_i|^{1/r_i} \leq \left(\max_{1 \leq j \leq m}|q_j|^{1/s_j}\right)^{-1}$$ for some $p_1,\dots, p_n$. The subject of *weighted* Diophantine approximation has also witnessed significant progress of late. We refer the reader to the works of Kleinbock and Weiss [@KW10; @KW14] as well as the resolution of Schmidt’s conjecture on weighted badly approximable vectors due to Badziahin-Pollington-Velani [@BDV].\

Spiraling
---------

In this paper, we study the distribution of approximates in the multiplicative setting as well as the setting of Diophantine approximation with weights. Again, as far as we are aware, these are the first results of their kind. While our strategy remains the same as in [@AGT1], our main tool, an equidistribution theorem for Siegel transforms on homogeneous spaces (Theorem \[theorem:siegel:equidist\]), is new, and new inputs are required for the proof. Equidistribution results of this kind have found many applications (cf. [@KM], [@EMM], [@MS1; @MS2]) in number theory. We hope our result will be of interest to both dynamicists and number theorists.

The setup
---------

Let $\ell\geq1$ be an integer. Define functions ${{\mathbb}R}^\ell \rightarrow {{\mathbb}R}_{\geq 0}$ as follows: $$\begin{aligned} \|\boldsymbol{v}\|_{\boldsymbol{p}} := \max_{i=1, \cdots, \ell} |v_i|^{1/p_i} \quad \textrm{ and } \quad \|\boldsymbol{v}\|_{{\operatorname{pr}}} := \prod_{i=1}^\ell |v_i| \end{aligned}$$ where $\boldsymbol{p} \in {{\mathbb}R}^\ell$ is a probability vector. Let $m,n \geq 1$ be integers and $d:=m+n$. Let $e_1, \cdots, e_m$ be the standard basis for ${{\mathbb}R}^m$ and $e_1, \cdots, e_d$ be the standard basis for ${{\mathbb}R}^m \times {{\mathbb}R}^n = {{\mathbb}R}^d$. Fix probability vectors $\boldsymbol{r} \in {{\mathbb}R}^m$ and $\boldsymbol{s} \in {{\mathbb}R}^n$; these vectors are also referred to as *weights* in the literature. Let $$g^{(\boldsymbol{r})}_t :={\operatorname{diag}}(e^{r_1t}, \cdots ,e^{r_m t}) \in {\operatorname{GL}}_{m}({\mathbb}R),$$ and let ${\mathbb}S^{m-1}$ denote the $(m-1)$-dimensional unit sphere centered at the origin.
For a subset $\widetilde{A}$ of ${\mathbb}S^{m-1}$, the union of all rays in ${{\mathbb}R}^m$ through each point of $\widetilde{A}$ is called the *cone in ${{\mathbb}R}^m$ through $\widetilde{A}$* and denoted by ${\mathcal C}\widetilde{A}$. The region of interest for Diophantine approximation with weights is $$R:= R^{(\boldsymbol{r}, \boldsymbol{s})}:=\bigg{\{}\boldsymbol{v}=\begin{pmatrix} \boldsymbol{v}_1 \\ \boldsymbol{v}_2 \end{pmatrix} \in {{\mathbb}R}^m \times {{\mathbb}R}^n : 0<\|\boldsymbol{v}_1\|_{\boldsymbol{r} } \|\boldsymbol{v}_2\|_{\boldsymbol{s}} \leq 1 \bigg{\}}.$$ Fix an $0<\epsilon<1,$ $T >0$, and a subset $A$ of ${\mathbb}S^{m-1}$ with zero measure boundary. The subsets that concern us, in particular, are $$R_{\epsilon, T} := \left\{ \boldsymbol{v} \in R~:~ \epsilon T \le \|\boldsymbol{v}_2\|_{\boldsymbol{s}} \le T \right\} \quad \textrm{ and } \quad R_{A, \epsilon, T} := \left\{ \boldsymbol{v} \in R_{\epsilon, T}~:~ \boldsymbol{v}_1\in g^{(\boldsymbol{r})}_{-\log(T)} ({\mathcal C}A) \right\}.$$ The subset $R_{\epsilon, T}$ is analogous to the subset above which played a role in [@AGT1]. Indeed if we consider the special case of $\boldsymbol{r}$ equal to $(1/m, \cdots, 1/m)$, then the set $R_{A, \epsilon, T}$ is equal to $\left\{ \boldsymbol{v} \in R_{\epsilon, T}~:~ \frac{\boldsymbol{v}_1}{\|\boldsymbol{v}_1\|_{2} } \in A \right\}$, which was considered in [@AGT1]. The reason that our formulation in terms of cones is the appropriate generalization is as follows. Let us again consider an arbitrary $\boldsymbol{r}$. Consider the slices of $R$ given by the equations $$\|\boldsymbol{v}_1\|_{\boldsymbol{r} } = 1/p$$ for a real number $p>1$. To map the slice given by $p$ to the one given by $p'\geq p$, apply the contracting (and, in general, nonuniformly contracting) automorphism $g_{\log(p)-\log(p')}^{(\boldsymbol{r})}$ to the slice. Now $g_{-t}^{(\boldsymbol{r})}$ takes ${\mathbb}S^{m-1}$ into ellipsoids, whose eccentricities are increasing as $t$ increases. It is reasonable that the distribution of directions respects the action of $g_{-t}^{(\boldsymbol{r})}$—that this holds is the content of our result, Theorem \[thmWeightedDASphereAve\].\ The regions of interest for multiplicative Diophantine approximation are $$P:=\bigg{\{}\boldsymbol{v}=\begin{pmatrix} \boldsymbol{v}_1 \\ \boldsymbol{v}_2 \end{pmatrix} \in {{\mathbb}R}^m \times {{\mathbb}R}^n : 0<\|\boldsymbol{v}_1\|_{{\operatorname{pr}}} \|\boldsymbol{v}_2\|_{{\operatorname{pr}}} \leq 1 \bigg{\}},$$ $$P_{\epsilon, T} := \left\{ \boldsymbol{v} \in P~:~ \epsilon T \le \|\boldsymbol{v}_2\|_{{\operatorname{pr}}} \le T \right\} \quad \textrm{ and } \quad P_{A, \epsilon, T} := \left\{ \boldsymbol{v} \in P_{\epsilon, T}~:~ \boldsymbol{v}_1 \in g^{(\boldsymbol{r})}_{-\log(T)} ({\mathcal C}A) \right\}.$$ The region $P$ is sometimes referred to as a *star body*. For the special case of $\boldsymbol{r}$ equal to $(1/m, \cdots, 1/m)$, the set $P_{A, \epsilon, T}$ is equal to $\left\{ \boldsymbol{v} \in P_{\epsilon, T}~:~ \frac{\boldsymbol{v}_1}{\|\boldsymbol{v}_1\|_{2} } \in A \right\}.$ Now, unlike for Diophantine approximation with weights, the $m$-volume of $P_{1,1}$ is infinite. Let ${{\mathbb}P}_i$ denote the coordinate codimension-one hyperplane in ${{\mathbb}R}^m$ normal to $e_i$. Then $${{\mathbb}P}_i \cap {{\mathbb}S}^{m-1} =: {{\mathbb}S}_i$$ are *great spheres of ${{\mathbb}S}^{m-1}$*; namely, ${{\mathbb}S}_i = k_i {{\mathbb}S}^{m-2}$ for some $k_i \in {\operatorname{SO}}_m({{\mathbb}R})$. 
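For readers who want to experiment, the following minimal Python sketch (ours; not part of the paper, and omitting the directional constraints that define $R_{A,\epsilon,T}$ and $P_{A,\epsilon,T}$) records the two quasi-norms and membership in the regions $R_{\epsilon,T}$ and $P_{\epsilon,T}$ defined above:

```python
import numpy as np

def norm_weighted(v, p):
    """||v||_p = max_i |v_i|^(1/p_i) for a probability vector p."""
    v, p = np.asarray(v, float), np.asarray(p, float)
    return float(np.max(np.abs(v) ** (1.0 / p)))

def norm_product(v):
    """||v||_pr = prod_i |v_i|."""
    return float(np.prod(np.abs(np.asarray(v, float))))

def in_R(v1, v2, r, s, eps, T):
    """Membership test for the weighted region R_{eps,T}."""
    a, b = norm_weighted(v1, r), norm_weighted(v2, s)
    return 0 < a * b <= 1 and eps * T <= b <= T

def in_P(v1, v2, eps, T):
    """Membership test for the multiplicative region P_{eps,T}."""
    a, b = norm_product(v1), norm_product(v2)
    return 0 < a * b <= 1 and eps * T <= b <= T

r, s = [0.3, 0.7], [0.5, 0.5]
print(in_R([0.1, 0.2], [3.0, -2.0], r, s, eps=0.1, T=20))   # True for these values
```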
For any $\delta>0$, let $${{\mathbb}S}^{(\delta)}_i := ({{\mathbb}P}_i \times [-\delta, \delta]) \cap {{\mathbb}S}^{m-1}$$ denote the $\delta$-thickening of ${{\mathbb}S}_i$ on ${{\mathbb}S}^{m-1}$. By elementary calculus, it is easy to see that the ${{\mathbb}P}_i$ point in the directions in which $P_{1,1}$ has regions with infinite volume (see also the Appendix). Radially projecting $P_{1,1}$ onto $${\mathcal S}:={\mathcal S}(\delta) := {{\mathbb}S}^{m-1} \backslash \cup_{i=1}^m {{\mathbb}S}^{(\delta)}_i$$ it is easy to see that ${\mathcal C}{\mathcal S}\cap P_{1,1}$ has finite $m$-volume for every $\delta>0$. We also note that the $g_{-t}^{(\boldsymbol{r})}$-action contracts slices of $P$ in the same way as it does those of $R$ and that it preserves each of the coordinate planes: $g_{-t}^{(\boldsymbol{r})}({{\mathbb}P}_i) = {{\mathbb}P}_i$; consequently, the action of $g_{-t}^{(\boldsymbol{r})}$ on ${\mathcal C}{\mathcal S}\cap P_{1,1}$ keeps the $m$-volume finite. By continuity in $t$, the $m$-volume of the slices with $\epsilon T\leq t\leq T$ has a maximum for all fixed $1\geq \epsilon >0$ and $T>0$, and Riemann integration implies that $$\begin{aligned} \label{eqnFiniteVolSMAD} {\operatorname{vol}}_{{{\mathbb}R}^d}(P_{{\mathcal S}, \epsilon, T})< \infty.\end{aligned}$$ For Theorem \[thmMultiDASphereAve\] below, we will only consider the sets $$P_{{\mathcal S}, \epsilon, T} \quad \textrm{ and } \quad P_{A, \epsilon, T}$$ for $A$ with zero measure boundary contained in ${\mathcal S}(\delta)$ for some $\delta >0$. For Theorem \[thmMultiDASphereAveCuspSet\], we will consider some sets outside of ${\mathcal S}$.

Statement of results for lattice approximates
---------------------------------------------

Let ${\mathrm{d}{k}}$ denote the probability Haar measure on $K := K_d := {\operatorname{SO}}_d({{\mathbb}R})$. Our main number-theoretic results are three results on the averaged spiraling of lattice approximates: one in the setting of Diophantine approximation with weights and two in the setting of multiplicative Diophantine approximation. We point out that our proof of Theorems \[thmWeightedDASphereAve\] and \[thmMultiDASphereAve\] shows that the corresponding limits for the numerator and for the denominator hold independently. One consequence is that other ratios may be obtained.

\[thmWeightedDASphereAve\] For every unimodular lattice $\Lambda \in X_d$, subset $A \subset {{\mathbb}S}^{m-1}$ with zero measure boundary, and $\epsilon >0$, we have that $$\lim_{T \to \infty} \frac{\int_{K}\#\{k \Lambda \cap R_{A, \epsilon, T}\}~{\mathrm{d}{k}}}{\int_{K} \#\{k \Lambda \cap R_{\epsilon, T}\}~{\mathrm{d}{k}}} = \frac{{\operatorname{vol}}_{{{\mathbb}R}^d}(R_{A, \epsilon,1})}{{\operatorname{vol}}_{{{\mathbb}R}^d}(R_{\epsilon,1})}.$$

The special case of setting $\boldsymbol{r}$ equal to $(1/m, \cdots, 1/m)$ is, itself, already a generalization of [@AGT1 Theorem 1.4], except that the function $\|\cdot\|_{(1/m, \cdots, 1/m)}$ is (a power of) the sup norm instead of the Euclidean norm used in [@AGT1 Theorem 1.4]. Here, we obtain that the limit of the ratio is $$\frac{{\operatorname{vol}}_{{{\mathbb}R}^m}(R_{1,1} \cap {\mathcal C}A)}{{\operatorname{vol}}_{{{\mathbb}R}^m}(R_{1,1})},$$ where ${\operatorname{vol}}_{{{\mathbb}R}^m}(R_{1,1}) = 2^m$. Note that, as mentioned, the sets $R_{A, \epsilon, T}$ for the special case reduce to their counterparts in [@AGT1]. To obtain the exact generalization of [@AGT1 Theorem 1.4], replace the function $\|\cdot\|_{(1/m, \cdots, 1/m)}$ by the Euclidean norm.
Then the proof of the theorem will also give this generalization and the conclusion is that the limit of the ratios is ${\operatorname{vol}}_{{{\mathbb}S}^{m-1}}(A)$. Note that, in all cases, the function $\|\cdot\|_{\boldsymbol{s}}$ can be for an arbitrary probability $n$-vector $\boldsymbol{s}$. We now state our results in the setting of multiplicative Diophantine approximation. \[thmMultiDASphereAve\] For every unimodular lattice $\Lambda \in X_d$, $\delta >0$, subset $A \subset {\mathcal S}(\delta)=:{\mathcal S}$ with zero measure boundary, and $\epsilon >0$, we have that $$\lim_{T \to \infty} \frac{\int_{K}\#\{k \Lambda \cap P_{A, \epsilon, T}\}~{\mathrm{d}{k}}}{\int_{K} \#\{k \Lambda \cap P_{{\mathcal S},\epsilon, T}\}~{\mathrm{d}{k}}} = \frac{{\operatorname{vol}}_{{{\mathbb}R}^d}(P_{A, \epsilon,1})}{{\operatorname{vol}}_{{{\mathbb}R}^d}(P_{{\mathcal S},\epsilon,1})}$$ \[thmMultiDASphereAveCuspSet\] For every unimodular lattice $\Lambda \in X_d$ and open subset $A \subset {{\mathbb}S}^{m-1}$ such that $$A \cap (\cup_{i=1}^m {{\mathbb}S}_i) \neq \emptyset,$$ we have that $$\lim_{T \to \infty} \int_{K}\#\{k \Lambda \cap P_{A, \epsilon, T}\}~{\mathrm{d}{k}} = \infty.$$ Theorem \[thmMultiDASphereAveCuspSet\] tells us that on average there are arbitrarily small neighborhoods of directions (which we know explicitly) for which every unimodular lattice has infinitely many elements in our star body. To prove these theorems, we need our main ergodic result on equidistribution of Siegel translates, Theorem \[theorem:siegel:equidist\]. We note that the spiraling results for multiplicative and weighted Diophantine approximation follow by applying the Theorems above to the unimodular lattice $$\begin{pmatrix}{\operatorname{Id}}_{m \times m} & \alpha\\0 & {\operatorname{Id}}_{n \times n} \end{pmatrix}{\mathbb}Z^{d}$$ attached to a matrix $\alpha = (\alpha_{ij})$ as usual. ### Acknowledgements {#acknowledgements .unnumbered} Part of this work was completed during the Group Actions and Number theory (GAN) programme at the Isaac Newton Institute for Mathematical Sciences. We thank the INI for providing a nice venue. Equidistribution on the space of lattices {#sec:lattices} ========================================= Given a unimodular lattice $\Lambda$ in ${{\mathbb}R}^{d}$ and a bounded Riemann-integrable function $f$ with compact support on ${\mathbb}R^{d}$, denote by $\widehat{f}$ its *Siegel transform*[^4]: $$\widehat{f}(\Lambda) := \sum_{\bf v \in \Lambda \backslash \{\boldsymbol{0}\}} f(\bf v).$$ Let $\mu = \mu_{d}$ be the probability measure on $X_{d} := {\operatorname{SL}}_{d}({\mathbb}R)/{\operatorname{SL}}_{d}({\mathbb}Z)$ induced by the Haar measure on ${\operatorname{SL}}_{d}({{\mathbb}R})$ and ${\mathrm{d}{\bf v}}$ denote the usual volume measure on ${{\mathbb}R}^{d}$. We recall the classical Siegel Mean Value Theorem [@Siegel]: Let $f$ be as above.[^5] Then $\widehat{f} \in L^{1}(X_{d}, \mu)$ and $$\int_{{{\mathbb}R}^{d}} f ~{\mathrm{d}{\bf v}} = \int_{X_{d}} \widehat{f} ~{\mathrm{d}{\mu}}.$$ Note that if $f$ is the indicator function of a set $A \backslash \{\boldsymbol{0}\}$, then $\hat{f}(\Lambda)$ is simply the number of points in $\Lambda \cap (A \backslash \{\boldsymbol{0}\})$. Let $$g_t:=g^{(\boldsymbol{r}, \boldsymbol{s})}_t :={\operatorname{diag}}(e^{r_1t}, \cdots ,e^{r_m t},e^{-s_1t}, \cdots,e^{-s_n t}) \in {\operatorname{SL}}_{d}({\mathbb}R)$$ and $e_1, \cdots, e_{d}$ be the standard basis of ${\mathbb}R^{d}$. 
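As an illustration (ours, not the paper’s) of the objects just introduced, the sketch below evaluates the Siegel transform $\widehat{{\bf 1}}_E$ of the indicator of a box $E$ by brute-force enumeration of lattice points, and builds the diagonal flow $g_t^{(\boldsymbol r,\boldsymbol s)}$; it is only meant for small dimensions and for boxes contained in the enumeration window.

```python
import numpy as np
from itertools import product

def siegel_transform_indicator(basis, box, coeff_range=10):
    """Evaluate \\hat{1}_E(Lambda) = #(Lambda ∩ E \\ {0}) for the lattice
    Lambda = basis @ Z^d and the box E = prod_i [lo_i, hi_i], by enumerating
    integer coefficient vectors in a finite window.  Only correct when the
    window is large enough that every lattice point of E is reached."""
    d = basis.shape[0]
    lo = np.array([b[0] for b in box])
    hi = np.array([b[1] for b in box])
    count = 0
    for c in product(range(-coeff_range, coeff_range + 1), repeat=d):
        if not any(c):
            continue                      # skip the origin
        v = basis @ np.array(c, dtype=float)
        if np.all(v >= lo) and np.all(v <= hi):
            count += 1
    return count

def g_t(t, r, s):
    """g_t^{(r,s)} = diag(e^{r_1 t}, ..., e^{r_m t}, e^{-s_1 t}, ..., e^{-s_n t})."""
    return np.diag(np.exp(np.concatenate([t * np.asarray(r), -t * np.asarray(s)])))

basis = np.eye(2)                         # the lattice Z^2
box = [(-3.5, 3.5), (-3.5, 3.5)]
print(siegel_transform_indicator(basis, box))    # 48 nonzero lattice points
print(np.round(g_t(0.5, [1.0], [1.0]), 3))       # a 2x2 diagonal matrix of determinant 1
```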
We write ${\bf 1}_A$ for the indicator function of a set $A$, so that $\widehat{{\bf 1}}_A$ denotes its Siegel transform. Setting $t$ so that $e^{t} = T$ gives $$g_t R_{\epsilon, T} = R_{\epsilon, 1} =: R_{\epsilon} \quad \textrm{ and } \quad g_t P_{\epsilon, T} = P_{\epsilon, 1} =: P_{\epsilon}$$ and $$g_t R_{A, \epsilon, T} = R_{A, \epsilon, 1} =: R_{A, \epsilon} \quad \textrm{ and } \quad g_t P_{A, \epsilon, T} = P_{A, \epsilon, 1} =: P_{A, \epsilon}.$$ Given a unimodular lattice $\Lambda \in {\operatorname{SL}}_d({{\mathbb}R}) / {\operatorname{SL}}_d({{\mathbb}Z})$, a simple computation shows that $$\label{eq:Sphere} \#\{k \Lambda \cap R_{\epsilon, T}\} = \widehat{{\bf 1}}_{R_{\epsilon}}(g_{t} k \Lambda) \quad \textrm{ and } \quad \#\{k \Lambda \cap P_{{\mathcal S}, \epsilon, T}\} = \widehat{{\bf 1}}_{P_{{\mathcal S}, \epsilon}}(g_{t} k \Lambda)$$ $$\label{eq:A} \#\{k \Lambda \cap R_{A, \epsilon, T}\} = \widehat{{\bf 1}}_{R_{A,\epsilon}}(g_{t} k \Lambda) \quad \textrm{ and } \quad \#\{k \Lambda \cap P_{A,\epsilon, T}\} = \widehat{{\bf 1}}_{P_{A,\epsilon}}(g_{t} k \Lambda).$$

Statement of results for Siegel transforms {#sec:siegel:equidist}
------------------------------------------

To prove Theorems \[thmWeightedDASphereAve\] and \[thmMultiDASphereAve\], we need to show the equidistribution of the Siegel transforms of the sets $R_{A, \epsilon}$, $P_{A, \epsilon}$, $R_{\epsilon}$, and $P_{{\mathcal S},\epsilon}$ with respect to averages over $g^{(\boldsymbol{r}, \boldsymbol{s})}_t$-translates of $K$. The main ergodic tool in this setting is our fourth main theorem, a result on the multiparameter spherical averages of Siegel transforms:

\[theorem:siegel:equidist\] Let $f$ be a bounded Riemann-integrable function of compact support on ${{\mathbb}R}^{d}$. Then for any ${\Lambda}\in X_{d}$, $$\lim_{t \to \infty} \int_{K_{d}} \widehat{f}(g^{(\boldsymbol{r}, \boldsymbol{s})}_t k {\Lambda}) ~{\mathrm{d}{k}} = \int_{X_{d}}\widehat{f}~{\mathrm{d}{\mu}}.$$

The above theorem is the generalization to the multiparameter case of our theorem for the single parameter case [@AGT1 Theorem 2.2]. Unlike in the single parameter case, where the proof can be assembled from the work of Kleinbock-Margulis [@KM Appendix], the multiparameter case cannot, as far as we are aware. Instead, we generalize our proof of [@AGT1 Theorem 2.2]. As in [@AGT1], the substantial part of the argument lies in the upper bound.

\[theorem:siegel:equidistUpper\] Let $f$ be a bounded function of compact support in ${{\mathbb}R}^{d}$ whose set of discontinuities has zero Lebesgue measure. Then for any ${\Lambda}\in X_{d}$, $$\lim_{t \to \infty} \int_{K_{d}} \widehat{f}(g^{(\boldsymbol{r}, \boldsymbol{s})}_t k {\Lambda}) ~{\mathrm{d}{k}} \leq \int_{X_{d}}\widehat{f}~{\mathrm{d}{\mu}}.$$

The assumption that $f$ has compact support can be replaced with that of $f \in L^1({\mathbb}R^{d})$—the other assumptions are still, however, necessary for the proof.

Let $f$ be a bounded Riemann-integrable function of compact support in ${{\mathbb}R}^{d}$. Then for any ${\Lambda}\in X_{d}$, $$\lim_{t \to \infty} \int_{K_{d}} \widehat{f}(g^{(\boldsymbol{r}, \boldsymbol{s})}_t k {\Lambda}) ~{\mathrm{d}{k}} \leq \int_{X_{d}}\widehat{f}~{\mathrm{d}{\mu}}.$$

Immediate from the theorem and the Lebesgue criterion. As mentioned in [@AGT1], the lower bound follows either from the methods in [@KleinMarg] or by applying the following equidistribution theorem (Theorem \[theorem:DRS\]) of Duke, Rudnick and Sarnak (cf.
[@DRS]; see also Eskin and McMullen [@EMc] and Shah [@Shah]) and then approximating the Siegel transform $\widehat{f}$ from below by $h \in C_c(X_{d})$.

\[theorem:DRS\] Let $G$ be a non-compact semisimple Lie group and let $K$ be a maximal compact subgroup of $G$. Let $\Gamma$ be a lattice in $G$, let $\lambda$ be the probability Haar measure on $G/\Gamma$, and let $\nu$ be any probability measure on $K$ which is absolutely continuous with respect to a Haar measure on $K$. Let $\{a_n\}$ be a sequence of elements of $G$ without accumulation points. Then for any $x \in G/\Gamma$ and any $h \in C_{c}(G/\Gamma)$, $$\lim_{n \to \infty} \int_{K} h(a_n k x)~{\mathrm{d}{\nu}}(k) = \int_{G/\Gamma}h~{\mathrm{d}{\lambda}}.$$

One can replace ${\mathrm{d}{k}}$ by ${\mathrm{d}{\nu}}(k)$ in Theorems \[theorem:siegel:equidist\] and \[theorem:siegel:equidistUpper\] without any changes to the proofs.

Proof of Theorems \[thmWeightedDASphereAve\] and \[thmMultiDASphereAve\] {#secProofMainThms}
------------------------------------------------------------------------

We prove Theorems \[thmWeightedDASphereAve\] and \[thmMultiDASphereAve\] using Theorem \[theorem:siegel:equidist\], while deferring the proof of the latter to Section \[secSiegelEquidisProof\]. Applying Theorem \[theorem:siegel:equidist\] to the indicator function of $R_{A, \epsilon}$, we obtain $$\lim_{t \to \infty} \int_{K} \widehat{{\bf 1}}_{R_{A, \epsilon}}(g^{(\boldsymbol{r}, \boldsymbol{s})}_t k \Lambda){\mathrm{d}{k}} =\int_{X_{d}} \widehat{{\bf 1}}_{R_{A, \epsilon}}{\mathrm{d}{\mu}} = {\operatorname{vol}}_{{{\mathbb}R}^d}(R_{A, \epsilon}),$$ where we have applied Siegel’s mean value theorem in the last equality.[^6] Doing likewise for $R_{\epsilon}$, $P_{A, \epsilon}$, and $P_{{\mathcal S},\epsilon}$, we obtain $$\begin{aligned} \lim_{T \to \infty} \frac{\int_{K}\#\{k \Lambda \cap R_{A, \epsilon, T}\}~{\mathrm{d}{k}}}{\int_{K} \#\{k \Lambda \cap R_{\epsilon, T}\}~{\mathrm{d}{k}}} &= \frac{{\operatorname{vol}}_{{{\mathbb}R}^d}(R_{A, \epsilon})}{{\operatorname{vol}}_{{{\mathbb}R}^d}(R_{\epsilon})} \\ \lim_{T \to \infty} \frac{\int_{K} \#\{k \Lambda \cap P_{A,\epsilon, T}\}~{\mathrm{d}{k}}}{\int_{K} \#\{k \Lambda \cap P_{{\mathcal S}, \epsilon, T}\}~{\mathrm{d}{k}}} &= \frac{{\operatorname{vol}}_{{{\mathbb}R}^d}(P_{A, \epsilon})}{{\operatorname{vol}}_{{{\mathbb}R}^d}(P_{{\mathcal S}, \epsilon})}, \end{aligned}$$ which proves our desired results. Note that (\[eqnFiniteVolSMAD\]) with $T=1$ gives that ${\operatorname{vol}}_{{{\mathbb}R}^d}(P_{{\mathcal S}, \epsilon})<\infty$.

Proof of Theorem \[thmMultiDASphereAveCuspSet\]
-----------------------------------------------

As in Section \[secProofMainThms\], we use Theorem \[theorem:siegel:equidist\] before its proof. Let $\{\delta_i\}$ be a sequence of positive real numbers decreasing to $0$. Then $$A \supset \cup_i A \cap {\mathcal S}(\delta_i).$$ Let ${\mathcal C}_i := {\mathcal C}(A \cap {\mathcal S}(\delta_i))$. Applying Theorem \[theorem:siegel:equidist\], we have $$\lim_{T \to \infty} \int_{K}\#\{k \Lambda \cap P_{A \cap {\mathcal S}(\delta_i), \epsilon, T}\}~{\mathrm{d}{k}} = {\operatorname{vol}}_{{{\mathbb}R}^d}(P_{A \cap {\mathcal S}(\delta_i), \epsilon}) =O\bigg({\operatorname{vol}}_{{{\mathbb}R}^m}({\mathcal C}_i)\bigg),$$ which $\rightarrow \infty$ as $i \rightarrow \infty$ by Lemma \[lemmInfiniteVolCusp\].
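The divergence supplied by Lemma \[lemmInfiniteVolCusp\] (proved in the Appendix) can be checked numerically in the plane. The sketch below (ours; purely illustrative and not part of the argument) integrates, in polar coordinates, the area of the cone through a small arc around $e_1$ intersected with $\{|v_1 v_2|\le 1\}$ and truncated at radius $R$; the truncated area keeps growing as $R$ increases.

```python
import numpy as np

def truncated_cone_area(R, half_width=0.3, num=400000):
    """Area of (cone through the arc of angular half-width `half_width`
    around e_1) ∩ {|v1*v2| <= 1} ∩ {|v| <= R} in R^2, via the Riemann sum
    for  area = 1/2 * ∫ min(R^2, 1/|sin t cos t|) dt."""
    theta = np.linspace(-half_width, half_width, num)
    dtheta = theta[1] - theta[0]
    radial_sq = np.minimum(R ** 2, 1.0 / np.maximum(np.abs(np.sin(theta) * np.cos(theta)), 1e-300))
    return 0.5 * np.sum(radial_sq) * dtheta

for R in (1e1, 1e3, 1e5, 1e7):
    print(f"R = {R:.0e}  truncated area = {truncated_cone_area(R):.2f}")   # grows without bound
```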
Proof of Theorem \[theorem:siegel:equidistUpper\] {#secSiegelEquidisProof}
=================================================

We adapt our proof in [@AGT1 Section 3] from the single parameter case (i.e. the diagonal action has ${{\mathbb}R}$-rank $1$) to the multiparameter case (i.e. the diagonal action has any allowed ${{\mathbb}R}$-rank). Recall that our diagonal action is $$g^{(\boldsymbol{r}, \boldsymbol{s})}_t =:g_t.$$ As mentioned, to prove Theorem \[theorem:siegel:equidist\], we need only show the upper bound (Theorem \[theorem:siegel:equidistUpper\]): $$\lim_{t \to \infty} \int_{K_{d}} \widehat{f}(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq \int_{X_{d}}\widehat{f}~{\mathrm{d}{\mu}}.$$ Fix a unimodular lattice $\Lambda \in X_d$. The strategy of the proof is to approximate using step functions on balls. We will divide the proof into four types of multiparameter actions:

1. $\boldsymbol{r} := (1/m, \cdots, 1/m)$ and $n=1$.

2. $\boldsymbol{r}$ is an arbitrary probability $m$-vector and $n=1$.

3. $\boldsymbol{r}$ is an arbitrary probability $m$-vector and $\boldsymbol{s}$ is a probability $n$-vector such that there exists a unique entry $j$ for which $s_j = \|\boldsymbol{s}\|$, where $\|\cdot \|$ is the sup norm.

4. $\boldsymbol{r}$ is an arbitrary probability $m$-vector and $\boldsymbol{s}$ is an arbitrary probability $n$-vector.

The first type is just our single parameter case [@AGT1 Theorem 2.2].

Proof for the second type of multiparameter {#sec2typeMultpara}
-------------------------------------------

In this section, $\boldsymbol{r}$ is an arbitrary probability $m$-vector and $n=1$. Using [@AGT1 Section 3.4] without change, we will approximate using step functions on balls, where we use the norm on ${{\mathbb}R}^{d} = {{\mathbb}R}^m \times {{\mathbb}R}$ given by the maximum of the Euclidean norm in ${{\mathbb}R}^m = \mbox{span}(e_1, \cdots, e_m)$ and the absolute value in ${{\mathbb}R}= \mbox{span}(e_{d})$. Hence, balls will be open regions of ${{\mathbb}R}^{d}$, which we also refer to as *rods* or *solid cylinders*. As in [@AGT1], we need four cases: balls centered at $\boldsymbol{0} \in {{\mathbb}R}^{d}$, balls centered in ${\operatorname{span}}(e_{d}) \backslash\{\boldsymbol{0}\}$, balls centered in ${\operatorname{span}}(e_1, \cdots, e_m) \backslash\{\boldsymbol{0}\}$, and all other balls. Since we will approximate using step functions, it suffices (as we had shown in [@AGT1 Section 3.4]) to assume that the balls in the second case do not meet $\boldsymbol{0}$ and in the last case do not meet ${\operatorname{span}}(e_{d}) \cup {\operatorname{span}}(e_1, \cdots, e_m)$.[^7] Let $E := B(\boldsymbol{w}, r)$ be any such ball and $\chi_E$ be its characteristic function. By the monotone convergence theorem, we have $$\int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} =\sum_{\boldsymbol{v} \in {\Lambda}\backslash \{\boldsymbol{0}\} }\int_{K_{d}} \chi_{k^{-1} g_t^{-1} E}(\boldsymbol{v}) ~{\mathrm{d}{k}}.$$ It is more convenient to prove the second and fourth cases together and before the others. Let $E$ be a rod in either of these two cases. Let $r$ be small. Let $R:=e^t$. Fix $R$, or equivalently $t$, to be a large value. Now $g_t^{-1} E$ is also a rod, but narrow in the directions given by ${{\mathbb}R}^m$ and long in the direction given by $e_d$.
Recall from [@AGT1 Section 3] that we have $$\begin{aligned} \label{eqnInvarofRotationonSphere}\int_{K_{d}} \chi_{k^{-1} g_t^{-1} E}(\boldsymbol{v}) ~{\mathrm{d}{k}} =: A_R^E(\|\boldsymbol{v}\|)\end{aligned}$$ and $$\begin{aligned} A_R^E(\tau) = \frac {{\operatorname{vol}}_{\tau {{\mathbb}S}^m}(\tau {{\mathbb}S}^m \cap g_t^{-1} E)}{{\operatorname{vol}}_{\tau{{\mathbb}S}^m}(\tau{{\mathbb}S}^m)}. \end{aligned}$$ Also recall, from [@AGT1 Section 3], the definition of a *cap* ${\frak C}(\tau)$, namely the intersection of the rod $g_t^{-1} E$ with the sphere $\tau {{\mathbb}S}^{m}$. Now, unlike in [@AGT1], the caps are no longer spherical, but, for fixed $R$, are ellipsoidal of fixed eccentricity. All our geometric considerations are for a fixed $R$ (which is only allowed to $\rightarrow \infty$ at the end). In particular, $A_R^E(\tau)$ is a strictly decreasing smooth function with respect to $\tau$. Let $B_{{\operatorname{Euc}}}(\boldsymbol{0}, \tau)$ denote a ball of radius $\tau$ in ${{\mathbb}R}^{d}$ with respect to the Euclidean norm. Now it follows from the formula for $A_R^E$ that $$\sum_{\boldsymbol{v} \in {\Lambda}\backslash \{\boldsymbol{0}\} } A_R^E(\|\boldsymbol{v}\|) \leq \int_{\tau_-}^{\tau_+} \#\big(B_{{\operatorname{Euc}}}(\boldsymbol{0}, \tau) \cap{\Lambda}\backslash \{\boldsymbol{0}\}\big) ~(-{\mathrm{d}{A}}_R^E(\tau))$$ where the integral is the Riemann-Stieltjes integral and the integrability of the function $\#\big(B_{{\operatorname{Euc}}}(\boldsymbol{0}, \tau) \cap{\Lambda}\backslash \{\boldsymbol{0}\}\big)$ follows from its monotonicity and the continuity and monotonicity of $A_R^E(\tau)$. The rest of the proof is identical to that in [@AGT1 Section 3] and shows $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E).$$ Finally, we prove the first and third case together. Let $E$ be a rod in either of these two cases. The difference between these two cases and the second and fourth cases is that the rod extends in both the positive $e_d$ and negative $e_d$ directions. As the lattice $\Lambda$ is fixed, there is a ball $B_{{\operatorname{Euc}}}(\boldsymbol{0}, \tau_0)$ in ${{\mathbb}R}^d$ that does not meet $\Lambda \backslash \{\boldsymbol{0}\}$ for some $\tau_0>0$ depending only on $\Lambda$. Therefore, we can consider the two ends separately. The proof is the same as in [@AGT1 Section 3.3], except that $\mathfrak{B}$ is not a sphere, but an ellipsoid of fixed eccentricity depending on $R$ (which, recall, is fixed until the end of the proof), but this does not affect the proof. Consequently, for the second type of multiparameter, we can conclude $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E).$$

Proof for the third type of multiparameter {#sec3typeMultpara}
------------------------------------------

In this section, $\boldsymbol{r}$ is an arbitrary probability $m$-vector and $\boldsymbol{s}$ is a probability $n$-vector such that there exists a unique entry $j$ for which $s_j = \|\boldsymbol{s}\|$, where $\|\cdot \|$ is the sup norm. On the other hand, for the rods that we define for this multiparameter type, we will use the norm on ${{\mathbb}R}^{d} = {{\mathbb}R}^m \times {{\mathbb}R}^n$ given by the maximum of the Euclidean norm in $\mbox{span}(e_1, \cdots, e_{m+j-1}, e_{m+j+1}, \cdots, e_{d})$ and the absolute value in $\mbox{span}(e_{m+j})$.
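Before continuing with the case analysis for this type, a small Monte Carlo sketch (ours, not part of the proof, and using an axis-aligned box-shaped rod for simplicity) illustrates the quantity $A_R^E(\tau)$ introduced above: it is the fraction of the sphere of radius $\tau$ covered by the rod, and it decreases as $\tau$ grows.

```python
import numpy as np

def cap_fraction(tau, halfwidths, samples=400000, seed=1):
    """Monte Carlo estimate of the fraction of the sphere of radius `tau`
    in R^d lying inside the axis-aligned rod {|v_i| <= halfwidths[i]};
    a simplified, box-shaped stand-in for A_R^E(tau)."""
    rng = np.random.default_rng(seed)
    d = len(halfwidths)
    pts = rng.normal(size=(samples, d))
    pts *= tau / np.linalg.norm(pts, axis=1, keepdims=True)   # uniform points on tau * S^{d-1}
    inside = np.all(np.abs(pts) <= np.asarray(halfwidths), axis=1)
    return inside.mean()

halfwidths = [0.5, 0.5, 50.0]    # narrow in two directions, long in the last
for tau in (1.0, 5.0, 20.0):
    print(tau, cap_fraction(tau, halfwidths))
```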
As before, we have four cases: balls centered at $\boldsymbol{0} \in {{\mathbb}R}^{d}$, balls centered in ${\operatorname{span}}(e_{m+j}) \backslash\{\boldsymbol{0}\}$, balls centered in ${\operatorname{span}}(e_1, \cdots, e_{m+j-1}, e_{m+j+1}, \cdots, e_{d})\backslash\{\boldsymbol{0}\}$, and all other balls (again, Footnote \[FootnoteCase24\] applies). Again, we may assume that the balls in the second case do not meet $\boldsymbol{0}$ and in the last case do not meet ${\operatorname{span}}(e_{m+j}) \cup {\operatorname{span}}(e_1, \cdots, e_{m+j-1}, e_{m+j+1}, \cdots, e_{d})$. Now $g_t^{-1}$ has a unique largest expanding direction, namely along $e_{m+j}$. Replace the role of $e_d$ from Section \[sec2typeMultpara\] with $e_{m+j}$. Let $R = e^{s_jt}$. Fix a large $R$; then the analysis of the geometry of $g_t^{-1} E$ is analogous to that in Section \[sec2typeMultpara\] because, for a fixed large $R$, the rod is much longer along the $e_{m+j}$ direction than along any other. The only difference is that there exists a minimum sphere radius $\widetilde{\tau}(R)$ above which the analysis of the geometry is valid, because some directions are expanding (but less than in the $e_{m+j}$ direction). However, for $R$ large, $\widetilde{\tau}(R)$ is small in comparison to the length of the rod $\tau_+(R)$ (which is on the order of $R$). In particular, $\lim_{R \rightarrow \infty} \widetilde{\tau}(R)/ \tau_+(R) =0$. Consequently (as shown, for example, in [@AGT1 Section 3.3]), the error is $O(R^{-1})$, which does not affect the proof. The conclusion, in all four cases, is $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E).$$

Proof for the fourth type of multiparameter
-------------------------------------------

In this section, $\boldsymbol{r}$ is an arbitrary probability $m$-vector and $\boldsymbol{s}$ is an arbitrary probability $n$-vector. We may assume without loss of generality that there exist indices $1 \leq j_1< \cdots< j_\ell \leq n$ such that $s_{j_1} = \cdots = s_{j_\ell} = \|\boldsymbol{s}\|=:\lambda$ and $2\leq \ell \leq n$. (Again $\|\cdot\|$ denotes the sup norm.) Let $\widetilde{\lambda}$ denote the largest component of $\boldsymbol{s}$ strictly less than $\lambda$, or, if no such component exists, set $\widetilde{\lambda}=0$. Let us denote this set of indices by $J$ and the remaining indices by $J^c$, and note that $J \sqcup J^c = \{1, \cdots, n\}$. The main difference, and problem, with this case is that the caps are no longer relatively small in relation to the largest dimension of the rod. To take care of this problem, we adapt the proof in Section \[sec3typeMultpara\] in two ways: the first for the analogs of the first and third cases, and the second for the analogs of the second and fourth cases. We use two types of balls/rods. For the balls/rods that we define for this multiparameter type for the first and third cases, we will use the norm on ${{\mathbb}R}^{d} = {{\mathbb}R}^m \times {{\mathbb}R}^n$ given by the maximum of the Euclidean norm in $$\mbox{span}(\bigcup_{i=1}^m e_i \cup \bigcup_{j \in J^c} e_{m+j})$$ and the sup norm in $$\mbox{span}(\bigcup_{j \in J} e_{m+j}).$$ For the balls/rods that we define for this multiparameter type for the second and fourth cases, we use the sup norm until almost the end of the proof (again, Footnote \[FootnoteCase24\] applies).
As before, we have the four cases: balls centered at $\boldsymbol{0} \in {{\mathbb}R}^{d}$, balls centered in $\mbox{span}(\bigcup_{j \in J} e_{m+j}) \backslash\{\boldsymbol{0}\}$, balls centered in $\mbox{span}(\bigcup_{i=1}^m e_i \cup \bigcup_{j \in J^c} e_{m+j})\backslash\{\boldsymbol{0}\}$, and all other balls. Again, we may assume that the balls in the second case do not meet $\boldsymbol{0}$ and in the last case do not meet $\mbox{span}(\bigcup_{i=1}^m e_i \cup \bigcup_{j \in J^c} e_{m+j}) \cup \mbox{span}(\bigcup_{j \in J} e_{m+j})$. Let $E := B(\boldsymbol{w}, r)$. We prove each case in turn—for convenience of exposition, we prove the cases in the order first, third, second, and fourth. ### The first case: balls centered at $\boldsymbol{0}$ Let $R = e^{\lambda t}$. Fix a large $R$. Consider the rod $g_t^{-1} E$. The directions $J$ are all expanded to a radius of $Rr$. All other directions are expanding less or contracting. As in Section \[sec3typeMultpara\], there exists a minimal radius $\widetilde{\tau}(R)$ larger than which the analysis of the geometry is valid and we can choose $\widetilde{\tau}(R) =3 e^{\widetilde{\lambda}t}$; hence, we have $\lim_{R \rightarrow \infty} \widetilde{\tau}(R)/ Rr =0$, which implies we can ignore radius smaller than $\widetilde{\tau}(R)$. As mentioned, caps ${\frak C}(\tau)$ are no longer small, but this does not affect the analysis of the geometry from Section \[sec3typeMultpara\] up to the inequality $$\begin{aligned} \sum_{\boldsymbol{v} \in {\Lambda}\backslash \{\boldsymbol{0}\} } A_R^E(\|\boldsymbol{v}\|) &\leq O(R^{-\ell}) + d \int_{\widetilde{\tau}}^{Rr} (1+ \varepsilon) {\operatorname{vol}}(B_{{\operatorname{Euc}}}(\boldsymbol{0}, 1))\frac{C(\widetilde{\tau})}{{{\operatorname{vol}}_{{{\mathbb}S}^d}({{\mathbb}S}^d)}} ~{\mathrm{d}{\tau}} \\ & = O(R^{-\ell}) + d (1 + \varepsilon) \frac {{\operatorname{vol}}(B_{{\operatorname{Euc}}}(\boldsymbol{0}, 1))}{{{\operatorname{vol}}_{{{\mathbb}S}^d}({{\mathbb}S}^d)}} C(\widetilde{\tau}) (Rr - \widetilde{\tau})\end{aligned}$$ where $C(\tau)={\operatorname{vol}}_{{{\mathbb}S}^d}{\frak C}(\tau)$ and the $O(R^{-\ell})$ comes from $\tau < \widetilde{\tau}(R)$. Now $C(\widetilde{\tau}) (Rr - \widetilde{\tau})$ is the volume of the rod with a (relatively) small hole missing. Letting $R \rightarrow \infty$ and $\varepsilon \rightarrow 0$ yields our desired result: $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E).$$ ### The third case: balls centered at $\mbox{span}(\bigcup_{i=1}^m e_i \cup \bigcup_{j \in J^c} e_{m+j})\backslash\{\boldsymbol{0}\}$ The proof is similar to the first case. In any expanding directions $\bigcup_{j \in J} e_{m+j}$, the components of $\boldsymbol{w}$ are zero and hence $g_t^{-1} \boldsymbol{w}$ has at most expansion at a rate of $e^{\widetilde{\lambda}t}$. Let $R = e^{\lambda t}$. Fix a large $R$. Let $\widetilde{\varepsilon}:= \widetilde{\varepsilon}(R):=\|g_t^{-1} \boldsymbol{w}\|/R$. 
Hence, $\lim_{R \rightarrow \infty} \widetilde{\varepsilon}(R)= 0.$ Consequently, using a proof analogous to that of the first case for a slightly larger rod (replacing $r$ with $(1 + 2\widetilde{\varepsilon})r$), we have $$\begin{aligned} \sum_{\boldsymbol{v} \in {\Lambda}\backslash \{\boldsymbol{0}\} } A_R^E(\|\boldsymbol{v}\|) &\leq O(R^{-\ell})+ d \int_{\widetilde{\tau}}^{\tau_+} (1+ \varepsilon) {\operatorname{vol}}(B_{{\operatorname{Euc}}}(\boldsymbol{0}, 1))\frac{C(\widetilde{\tau})}{{{\operatorname{vol}}_{{{\mathbb}S}^d}({{\mathbb}S}^d)}} ~{\mathrm{d}{\tau}} \end{aligned}$$ where $\tau_+ := Rr (1+ 2\widetilde{\varepsilon})$ and $C(\widetilde{\tau}) (\tau_+ - \widetilde{\tau}) \rightarrow {\operatorname{vol}}_{{{\mathbb}R}^d}(E)$ as $R \rightarrow \infty$. This yields our desired result: $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E).$$

### The second case: balls centered in $\mbox{span}(\bigcup_{j \in J} e_{m+j}) \backslash\{\boldsymbol{0}\}$

Of the indices in $J$ pick one, say $j_1$. Let us first consider the special case that $\boldsymbol{w} = w e_{m+j_1}$ for some $w \neq 0$. This index will play the role of $d$ from Section \[sec2typeMultpara\]. Let $I=\{1, \cdots, m\} \cup \{m+j : j \in J^c\}$ and $R = e^{\lambda t}$. For the second (and fourth) cases, we will assume an additional condition (which we later show does not affect the generality of our result): for a fixed $\alpha \geq 1$, we only consider balls $E$ for which $$\begin{aligned} \label{eqnRadBnd}\frac{{\operatorname{dist}}\left(\boldsymbol{w}, {\operatorname{span}}(\bigcup_{i \in I} e_i)\right) - r}{r}\geq \alpha\end{aligned}$$ holds. Recall that our ball $E$ is given by the sup norm—it is a $d$-cube. Its translate $E - \boldsymbol{w}$ has exactly one vertex $\boldsymbol{p}$ with all positive coordinates. Let us change $E$ into a “half-closed” ball $F$ by adjoining to $E$ the union of all the $(d-1)$-dimensional hyperfaces of the cube having $\boldsymbol{p} + \boldsymbol{w}$ as a vertex. Any half-closed ball will be constructed like this. We will refer to $F$ and its $g_t$-translates as *half-closed rods* or simply *rods* if the context is clear. Fix a large $R$. Consider the rod $g_t^{-1} F$ and one of the $(d-1)$-dimensional faces that is normal to $e_{m+j_1}$—call this face ${\mathcal F}$ and note that it is a $(d-1)$-dimensional box. Choose a large natural number $N$. Partition the smallest side length of ${\mathcal F}$ into $N$ segments of length $L$. For each of the other side lengths in ${\mathcal F}$, partition into segments whose length is nearest to $L$. This partitions ${\mathcal F}$ into ${\mathcal N}(N)$ boxes with the same side lengths, each of which can be contained in a $(d-1)$-dimensional cube of side length $2L < 2/N$. Let us index these little boxes by $k$. The cartesian products of these little boxes with the $e_{m+j_1}$-coordinate interval of $g_t^{-1} F$ are rods, which we make into half-closed rods in the way specified above. This is a partition of $g_t^{-1} F$ such that there is only one direction, namely $e_{m+j_1}$, that is long. To each element of this partition, cases two and four of Section \[sec2typeMultpara\] apply (the fact that each element is a half-closed rod as opposed to an open rod does not affect the proof in Section \[sec2typeMultpara\]).
Since this is a partition, elements are pairwise disjoint and we may sum over each element of the partition to obtain $$\begin{aligned} \sum_{\boldsymbol{v} \in {\Lambda}\backslash \{\boldsymbol{0}\} } A_R^F(\|\boldsymbol{v}\|) &\leq d \sum_{k=1}^{{\mathcal N}(N)} \int_{\tau_k^-}^{\tau_k^+} (1+ \varepsilon) {\operatorname{vol}}(B_{{\operatorname{Euc}}}(\boldsymbol{0}, 1))\frac{C_k(\tau_k^-)}{{{\operatorname{vol}}_{{{\mathbb}S}^d}({{\mathbb}S}^d)}} ~{\mathrm{d}{\tau}} \\ & = d (1 + \varepsilon) \frac {{\operatorname{vol}}(B_{{\operatorname{Euc}}}(\boldsymbol{0}, 1))}{{{\operatorname{vol}}_{{{\mathbb}S}^d}({{\mathbb}S}^d)}} \sum_{k=1}^{{\mathcal N}(N)} C_k(\tau_k^-) (\tau_k^+ - \tau_k^-) \end{aligned}$$ where $\tau_k^-$ and $\tau_k^+$ are, up to $O(R^{-1})$, the minimum and maximum radii such that $\tau {{\mathbb}S}^d$ meets the $k$-th partition element and $C_k(\tau)$ is the volume of the cap of the $k$-th element, i.e. $C_k(\tau) = {\operatorname{vol}}_{\tau {{\mathbb}S}^d}({\frak C}_k(\tau))$ where ${\frak C}_k(\tau)$ is the intersection of the $k$-th partition element with $\tau {{\mathbb}S}^d$. Within $O(R^{-1})$, $\tau_k^+ - \tau_k^-$ is the length of the rod $g_t^{-1} F$ along the $e_{m+j_1}$ direction. Now $C_k(\tau_k^-) (\tau_k^+ - \tau_k^-)$ is the volume of an element that has length along $e_{m+j_1}$ within $O(R^{-1})$ of the length along $e_{m+j_1}$ of $g_t^{-1} F$, but with cross-section volume $C_k(\tau_k^-)$. Since (\[eqnRadBnd\]) holds in the second case, a direct calculation (using trigonometry) gives that $$\begin{aligned} \label{eqnLargeCapEst}C_k(\tau_k^-) \leq \widetilde{\gamma} \frac{{\operatorname{vol}}_{{{\mathbb}R}^{d-1}}({\frak B}_k)}{\sin^{d-1}(\pi/2 - \arcsin(1/\alpha))} \end{aligned}$$ where $\widetilde{\gamma} >1$ is a number depending only on $N$ and $\alpha$ such that $\widetilde{\gamma} \searrow 1$ as $N, \alpha \rightarrow \infty$ and ${\frak B}_k$ is the intersection of the $(d-1)$-dimensional hyperplane normal to $e_{m+j_1}$ with the $k$-th partition element. Consequently, for large $N, \alpha$, $$\sum_{k=1}^{{\mathcal N}(N)} C_k(\tau_k^-) (\tau_k^+ - \tau_k^-)$$ is arbitrarily close to ${\operatorname{vol}}_{{{\mathbb}R}^d}(E)$. As $ \frac {{\operatorname{vol}}(B_{{\operatorname{Euc}}}(\boldsymbol{0}, 1))}{{{\operatorname{vol}}_{{{\mathbb}S}^d}({{\mathbb}S}^d)}}= \frac 1 {d+1}$, letting $R \rightarrow \infty$ and $\varepsilon \rightarrow 0$, we have our desired result for the special case: $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E),$$ up to the restrictions that the balls are now half-closed and that (\[eqnRadBnd\]) must hold. Likewise, we have the same conclusion for $\boldsymbol{w} = w e_{m+j}$ for any $j \in J$. We now consider the second case in general. We may assume that $\boldsymbol{w} \in \mbox{span}(\bigcup_{j \in J} e_{m+j})=:{{\mathbb}P}_J$ but not on any of the coordinate axes. Let $\boldsymbol{q}:=\boldsymbol{q}(R)$ denote the point of $\overline{g_t^{-1} F}$ with smallest Euclidean norm. By convexity, it is easy to see that the point $\boldsymbol{q}$ is unique (for fixed $R$) and that $\boldsymbol{q} \in \mbox{span}(\bigcup_{j \in J} e_{m+j})$. We remark that $\boldsymbol{q}$ is an eigenvector of $g_t^{-1}$ and thus the direction of $\boldsymbol{q}$ is fixed for all $R$. Let $\|\cdot\|_J$ be the sup norm in ${{\mathbb}P}_J$.
Rotate a coordinate axis to the direction of $\boldsymbol{q}$—doing this to a half-closed $\|\cdot\|_J$-ball of radius $\beta$ yields a rotated half-closed $\ell$-cube ${\mathcal{C}}(\beta)$ with side length $2 \beta$. Cover $F \cap {{\mathbb}P}_J$ by a partition $\bigcup_u {\mathcal{C}}_u(\beta)$ of affine translates of ${\mathcal{C}}(\beta)$, where $\beta>0$ is a constant so small that ${\operatorname{vol}}_{{{\mathbb}R}^\ell}(F \cap {{\mathbb}P}_J)$ is less than, but as close as desired to, ${\operatorname{vol}}_{{{\mathbb}R}^\ell}(\bigcup_u {\mathcal{C}}_u(\beta))$. For each ${\mathcal{C}}_u(\beta)$, take the cartesian product with the other directions of $F$ to obtain $\widetilde{F}_u(\beta)$. Then ${\operatorname{vol}}_{{{\mathbb}R}^d}(F)$ is less than, but as close as desired to, ${\operatorname{vol}}_{{{\mathbb}R}^d}(\bigcup_u \widetilde{F}_u(\beta))$. Choose $t$ so large that $e^{\lambda t}\beta$ is larger than the $R$ chosen in the special case (where the center is on the axis) above—this gives us a much larger $R$ for this, the second case in general. Now the special case holds for each $\widetilde{F}_u(\beta)$; applying it to each, summing over the partition, and noting that the volume of the partition is arbitrarily close to ${\operatorname{vol}}_{{{\mathbb}R}^d}(E)$ yields the second case in general: $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E),$$ up to the restrictions that the balls are now half-closed and that (\[eqnRadBnd\]) must hold.

### The fourth case: all other balls

This is an adaptation of the second case. The difference is that $\boldsymbol{q} \notin {{\mathbb}P}_J$. Let $\|\cdot\|_I$ be the sup norm in ${\operatorname{span}}(\bigcup_{i \in I} e_{i}) =: {{\mathbb}P}_I$ and let $\boldsymbol{q}_I$ and $\boldsymbol{q}_J$ be the orthogonal projections of $\boldsymbol{q}$ onto ${{\mathbb}P}_I$ and ${{\mathbb}P}_J$, respectively. Then $$\frac{\|\boldsymbol{q}_I(R)\|_I}{\|\boldsymbol{q}_J(R)\|_J} \rightarrow 0$$ as $R \rightarrow \infty$. Consequently, for $R$ large enough, (\[eqnLargeCapEst\]) holds and thus the proof of the second case also applies to this case, allowing us to conclude: $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E),$$ up to the restrictions that the balls are now half-closed and that (\[eqnRadBnd\]) must hold.

### Finishing the second and fourth cases

We wish to prove the second and fourth cases for the balls defined for the first and third cases (i.e. in terms of the product of Euclidean norms). To remove the restriction of half-closed rods, consider the measure zero boundaries of the half-closed rods at each stage. Using [@AGT1 Lemma 3.5] to approximate this measure zero set and the method of handling the null term from [@AGT1 Section 3.4], we can remove this restriction. To remove the restriction given by (\[eqnRadBnd\]), we note that [@AGT1 Lemmas 3.1 and 3.5] apply to the balls of the second and fourth case with the restriction (\[eqnRadBnd\]) because $\alpha$ is fixed and the ball given by the product of the Euclidean norms does not meet ${{\mathbb}P}_I$. This is all that is needed to apply [@AGT1 Section 3.4]. Doing so allows us to conclude $$\lim_{t \rightarrow \infty} \int_{K_{d}} \widehat{\chi}_E(g_t k {\Lambda}) ~{\mathrm{d}{k}} \leq{\operatorname{vol}}_{{{\mathbb}R}^d}(E),$$ where $E$ is a ball in the same norm as for the first and third case.
Finishing the proof of Theorem \[theorem:siegel:equidistUpper\]
---------------------------------------------------------------

For each multiparameter type, we apply [@AGT1 Section 3.4] without change.

Appendix
========

We prove Lemma \[lemmInfiniteVolCusp\]. Recall that $\|\boldsymbol{v}\|_{{\operatorname{pr}}} := \prod_{i=1}^\ell |v_i|$; let $\ell=m$ in this section. Let $$S := \{ \boldsymbol{v} \in {{\mathbb}R}^m :\|\boldsymbol{v}\|_{{\operatorname{pr}}} \leq 1\}.$$

\[lemmInfiniteVolCusp\] Let $\boldsymbol{w}$ be on a great sphere of ${{\mathbb}S}^{m-1}$. Let $A:=B(\boldsymbol{w}, r) \cap {{\mathbb}S}^{m-1}$ for some $r>0$. Then $${\operatorname{vol}}_{{{\mathbb}R}^m}({\mathcal C}A \cap S) = \infty.$$

For $m=2$, a great sphere is simply the intersection of a coordinate axis with the circle; elementary calculus gives the result. We may therefore assume that $m \geq 3$. Without loss of generality, we may assume that $r$ is small. There are two cases. First assume that $\overline{A}$ does not meet any coordinate axes. Then there exists exactly one coordinate in which the points in $\overline{A}$ may have small absolute value. By reordering indices if necessary, we may assume that the $m$-th coordinate is the one that has small absolute values. In the other directions, the absolute values are bounded away from $0$. In other words, there is a constant $c>0$ such that $$\prod_{i=1}^{m-1}|v_i| \geq c$$ for all $\boldsymbol{v} \in \overline{A}$. Note that ${\operatorname{vol}}_{{{\mathbb}S}^{m-1}}(A) =O(r^{m-1})$ and that $$\prod_{i=1}^{m-1}|v_i| = O({\operatorname{vol}}_{{{\mathbb}S}^{m-1}}(A))$$ for all $\boldsymbol{v} \in \overline{A}$ (because $r$ is small). Consequently, for large $\tau$, we have that $$\prod_{i=1}^{m-1}|\tau v_i| = O({\operatorname{vol}}_{\tau {{\mathbb}S}^{m-1}}(\tau A)).$$ After perhaps cutting off the part of the cone nearest to the origin, we have that ${\mathcal C}A \cap S$ is the graph of the function over $A$ determined by $\prod_{i=1}^{m}|x_i| =1$, giving us that $ {\operatorname{vol}}_{\tau {{\mathbb}S}^{m-1}}(\tau A \cap S) |\tau v_m| =O(1)$. This implies that $|v_m| \tau^m \leq O(1)$. Riemann integration now gives $${\operatorname{vol}}_{{{\mathbb}R}^m}({\mathcal C}A \cap S) = const \int_{1}^\infty {\operatorname{vol}}_{\tau {{\mathbb}S}^{m-1}}(\tau A \cap S)~{\mathrm{d}{\tau}} = const \int_{1}^\infty \frac 1 {|\tau v_m|}~{\mathrm{d}{\tau}} \geq const \int_{1}^\infty \tau^{m-1}~{\mathrm{d}{\tau}} = \infty.$$ We note that $const$ depends on how close $A$ is to a coordinate axis. The other case is when $\overline{A}$ meets coordinate axes. Since $r$ is small, it may only meet one. Pick an open ball $\widetilde{B} \subset A$ that avoids the axis and apply the previous proof.

Concluding remarks and generalisations
======================================

As we have noted in [@AGT1], Theorem \[theorem:siegel:equidist\] is reminiscent of some results in the literature, for instance the work of Eskin-Margulis-Mozes, where the authors average over a different compact group. Closest to our work is the result of Kleinbock-Margulis, who provide a proof in [@KM Corollary A.8] of a very general averaging result for one parameter flows, or more precisely, flows with the property “EM” on quotients of semisimple groups by irreducible lattices. We refer the reader to [@KM] for precise statements and the definition of the property “EM”. The main tool in loc. cit. is exponential decay of correlation coefficients of Hölder vectors.
In [@AGT1], we provide a new, elementary proof of the above theorem for one parameter flows on ${\operatorname{SL}}_{n}({{\mathbb}R})/{\operatorname{SL}}_{n}({{\mathbb}Z})$. On the other hand, as we have mentioned, the higher rank averaging result in the present paper is new. Several authors have investigated variations of Dirichlet’s theorem in the context of number fields. Let $k$ be a number field which we assume to be totally real for convenience, let $S$ be the set of infinite places of $k$ and let $O_S$ be the ring of $S$-integers of $k$. For $x \in k^n$, we define the $S$-height to be $$h_{S}(x) := \prod_{v \in S} \max_{1 \leq i \leq n}\{|x_i|_v\}$$ where $|~|_v$ denotes the $v$-adic valuation on the completion $k_v$ of $k$. In [@Burger], Burger proves[^8] that for every $x_v \in k_v$ and $Q > c(k)^{(1+n)/n}$, there exist $q \in O^{n}_{S}$ and $p \in O_S$ such that $h_{S}(q) \leq Q$ and $$\prod_{v \in S}|x_v q - p|_{v} \leq c(k)^{1+n}Q^{-n}$$ where $c(k) = \Delta^{1/2d}_{k}$, $d$ is the degree of the number field and $\Delta_k$ is the discriminant of $k$. Using the methods of [@AGT1] and the above-mentioned theorem of Kleinbock-Margulis applied to $$G = \prod_{v \in S}{\operatorname{SL}}_{n+1}(k_v), \Gamma = {\operatorname{SL}}_{n+1}(O_S)$$ and $g_t = ({\operatorname{diag}}(e^{nt}, e^{-t}, \dots, e^{-t}))_{v \in S}$, one can obtain equidistribution of approximates for the analogue of Dirichlet’s theorem for number fields as above. Other generalisations of Dirichlet’s theorem have been established by W. M. Schmidt [@Schmidt-num], Quême [@Queme], and Hattori [@Hattori]; see also [@EGL], where another analogue of Dirichlet’s theorem and an analogue of badly approximable vectors in number fields are investigated. It would not be difficult to obtain equidistribution of approximates in each of these cases with appropriate choices of $G$, $\Gamma$ and $g_t$. However, multiplicative and weighted versions, as well as versions for more ($p$-adic) places, do not follow from the above methods. It would be an interesting problem to generalise our higher rank result in this paper to include other semisimple Lie groups and also to include finite, i.e. $p$-adic, places.

[99]{}

Jayadev S. Athreya, Anish Ghosh, and Jimmy Tseng, *Spherical averages of Siegel transforms and spiraling of lattice approximations*, to appear in J. London Math. Soc.

D. Badziahin, A. Pollington, S. Velani, *On a problem in simultaneous Diophantine approximation: Schmidt’s conjecture*, Ann. of Math. 174 (2011), 1837–1883.

Yann Bugeaud, *Multiplicative Diophantine approximation*, Dynamical systems and Diophantine approximation, 105–125, Semin. Congr., 19, Soc. Math. France, Paris, 2009.

E. Burger, *Homogeneous Diophantine approximation in S-integers*, Pacific J. Math. 152 (1992), no. 2, 211–253.

L. G. P. Dirichlet, *Verallgemeinerung eines Satzes aus der Lehre von den Kettenbrüchen nebst einigen Anwendungen auf die Theorie der Zahlen*, S.-B. Preuss. Akad. Wiss. (1842), 93–95.

W. Duke, Z. Rudnick and P. Sarnak, *Density of integer points on affine homogeneous varieties*, Duke Math. J. 71 (1993), no. 1, 143–179.

M. Einsiedler, A. Ghosh and B. Lytle, *Badly approximable vectors, $C^{1}$ curves, and number fields*, to appear in Ergodic Theory and Dynamical Systems.

A. Eskin, G. Margulis and S. Mozes, *Upper bounds and asymptotics in a quantitative version of the Oppenheim conjecture*, Ann. of Math. 147 (1998), no. 1, 93–141.

A. Eskin and C. McMullen, *Mixing, counting, and equidistribution in Lie groups*, Duke Math. J. 71 (1993), no. 1, 181–209.

T.
Hattori, *Some Diophantine approximation inequalities and products of hyperbolic spaces*, J. Math. Soc. Japan 59 (2007), no. 1, 239–264.

D. Kleinbock and G. Margulis, *Bounded orbits of nonquasiunipotent flows on homogeneous spaces*, Amer. Math. Soc. Transl. (1996), v. 171, 141–172.

D. Kleinbock and G. Margulis, *Logarithm laws for flows on homogeneous spaces*, Invent. Math. 138 (1999), 451–494.

D. Kleinbock and B. Weiss, *Modified Schmidt games and Diophantine approximation with weights*, Advances in Math. 223 (2010), 1276–1298.

D. Kleinbock and B. Weiss, *Modified Schmidt games and a conjecture of Margulis*, J. Mod. Dyn. 7 (2014), 429–460.

J. Marklof and A. Strömbergsson, *The Boltzmann-Grad limit of the periodic Lorentz gas*, Ann. of Math. 174 (2011), 225–298.

J. Marklof and A. Strömbergsson, *The distribution of free path lengths in the periodic Lorentz gas and related lattice point problems*, Ann. of Math. 172 (2010), 1949–2033.

R. Quême, *On Diophantine approximation by algebraic numbers of a given number field: a new generalization of Dirichlet approximation theorem*, Journées Arithmétiques, 1989 (Luminy, 1989), Astérisque No. 198-200 (1991), 273–283 (1992).

W. M. Schmidt, *Asymptotic formulae for point lattices of bounded determinant and subspaces of bounded height*, Duke Math. J. 35 (1968), 327–339.

W. M. Schmidt, *Simultaneous approximation to algebraic numbers by elements of a number field*, Monatsh. Math. 79 (1975), 55–66.

W. M. Schmidt, *Diophantine approximation*, Lecture Notes in Mathematics, vol. 785, Springer-Verlag, Berlin, 1980.

Nimish Shah, *Limit distributions of expanding translates of certain orbits on homogeneous spaces*, Proc. Indian Acad. Sci., Math. Sci. 106 (1996), 105–125.

C. L. Siegel, *A mean value theorem in geometry of numbers*, Ann. of Math. 46 (1945), 340–347.

[^1]: J.S.A. partially supported by NSF grant DMS 1069153, and NSF grants DMS 1107452, 1107263, 1107367 “RNMS: GEometric structures And Representation varieties” (the GEAR Network).

[^2]: A.G. partially supported by an ISF-UGC grant.

[^3]: J.T. acknowledges that the research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 291147.

[^4]: One could define the Siegel transform only over primitive lattice points, in which case results analogous to Theorems \[theorem:siegel:equidist\] and \[theorem:siegel:equidistUpper\] also hold (using, essentially, the same proof).

[^5]: This condition can be generalized to $f \in L^1({\mathbb}R^{d})$.

[^6]: A proof that $\widehat{{\bf 1}}_{R_{A, \epsilon}}, \widehat{{\bf 1}}_{R_{\epsilon}}, \widehat{{\bf 1}}_{P_{A, \epsilon}}, \widehat{{\bf 1}}_{P_{{\mathcal S}, \epsilon}}$ are Riemann-integrable is analogous to that in [@AGT1 Footnote 4].

[^7]: We note that the second and the fourth cases already suffice to show Theorems \[thmWeightedDASphereAve\], \[thmMultiDASphereAve\], and \[thmMultiDASphereAveCuspSet\].\[FootnoteCase24\]

[^8]: He actually proves his result for matrices with entries in $k_v$.
--- abstract: | Many sorts of structured data are commonly stored in a multi-relational format of interrelated tables. Under this relational model, exploratory data analysis can be done by using relational queries. As an example, in the Internet Movie Database (IMDb) a query can be used to check whether the average rank of action movies is higher than the average rank of drama movies. We consider the problem of assessing whether the results returned by such a query are statistically significant or just a random artifact of the structure in the data. Our approach is based on randomizing the tables occurring in the queries and repeating the original query on the randomized tables. It turns out that there is no unique way of randomizing in multi-relational data. We propose several randomization techniques, study their properties, and show how to find out which queries or hypotheses about our data result in statistically significant information. We give results on real and generated data and show how the significance of some queries varies between different randomizations. author: - 'Markus Ojala [^1]' - 'Gemma C. Garriga' - 'Aristides Gionis [^2]' - Heikki Mannila date: 'June 30, 2009' title: Query Significance in Databases via Randomizations --- Introduction ============ The question of evaluating whether certain hypotheses made from observed data are significant or not is one of the oldest problems in statistics. Statistical significance reduces an observed result (statistic) to a $p$-value that gives the probability of observing the same result at random when a certain null hypothesis is true. If this $p$-value is sufficiently small, we can reject the null hypothesis. The technical challenge of defining an exact $p$-value for a given hypothesis is typically resolved by studying the null distribution of the test analytically; for example, the well-known chi-squared test is based on statistics that follow a chi-square distribution under the null hypothesis. Alternatively, when analytical solutions are not possible or are hard to state exactly, the null distribution can be defined via permutation tests. These useful statistical concepts have been used for years in experimental fields such as medicine, biology, geology or physics, to name a few. Many of these considerations have also been extended to the data mining and database communities. In one of the first papers about association rules, Brin et al. [@silverstein98beyond] considered measuring the significance of rules via the chi-squared test, and from there many other papers followed—see e.g. [@775053] for a comprehensive survey. More recently, the approach of defining randomization tests to assess data mining results was introduced for binary data [@swaprand-gionisetal], and for real-valued data [@conf/sdm/OjalaVKHM08].
[$$\begin{aligned} \mathrm{GM} & = & \{(\mathrm{Romance},m_1), (\mathrm{Romance},m_2),(\mathrm{Drama},m_3), \\ & & \ (\mathrm{Drama},m_4),(\mathrm{Drama},m_5),(\mathrm{Drama},m_6), \\ & & \ (\mathrm{Drama},m_7),(\mathrm{History},m_6),(\mathrm{History},m_7)\} \\[.5em] \mathrm{MD} & = &\{(m_1,\mathrm{C.~Waitt}), (m_2,\mathrm{C.~Waitt}), (m_3,\mathrm{C.~Waitt}), \\ & & \ (m_4,\mathrm{C.~Waitt}), (m_5,\mathrm{C.~Waitt}), (m_6,\mathrm{T.~George}), \\ & & \ (m_7,\mathrm{T.~George})\} \\[.5em] \mathrm{DA} & = & \{(\mathrm{C.~Waitt},30), (\mathrm{T.~George}, 60)\} \end{aligned}$$]{} Abstracting a bit from the question of how significant patterns are in the data, we introduce here the statistical testing framework to databases and the exploratory task of querying the relations of the database. The question of understanding what we know and what we believe about our dataset becomes tricky when the data is highly structured and interrelated. Structured data is everywhere: examples are the Internet Movie Database (IMDb), or the DBLP computer science bibliography, and indeed, most of today’s information systems are actually relational databases. In IMDb, e.g., basic entities are directors, movies, genres, ranks or years; in addition, we have relations such as directors direct movies, movies are classified by a genre, movies are ranked with some quality criteria, and directors are born in a certain year. Each of these relations is represented in a separate table which relates to others through their common attribute values. A simple toy example is given in Figure \[fig:toyex0\]. In multi-relational databases, users and applications access the data via queries. E.g., a query can be made to check the average age of directors of history movies, or the average age of directors of romance movies. In the toy example of Figure \[fig:toyex0\], the first query returns a value of 60, while the second query returns a value of 30. Usually, the answer returned by the query is assumed as a fact, thus implying some conventional wisdom—for this toy example we might be tempted to believe that directors of romance movies are younger than directors of history movies. But, should we really believe that this hypothesis is significant from the data? If we knew that all history movies are also classified as drama movies, would the value of 60 still have the same importance? Or, if we knew that the same director has participated in both romance and drama movies? We study whether the results returned by queries are significant or just a random artifact due to the structure in the data. Our statistical tool is randomizations and the approach is simple: randomize certain relations occurring in the queries and repeat the original query in the random samples. This provides an empirical $p$-value, and, as in basic statistics, we can reject or accept our hypothesis linked to the query. The goal behind this idea is to provide an understanding of how the structure of the data affects the significance of the information we derive from our queries. If certain structures or patterns remain after simple randomizations (e.g., the fact that history movies are also drama movies in the toy example), the answers of a query that rely on such patterns should be regarded as not significant. It turns out that there is no unique way of randomizing in multi-relational data, and indeed, it is difficult to give a fully satisfactory answer about which randomizations are more important than others. 
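Before turning to the randomizations themselves, the toy example above can be made concrete in a few lines of Python. This is an illustrative rendering of ours (not part of the paper's implementation), storing each relation as a set of tuples; the reported averages match the values of 60 and 30 quoted above.

```python
# Toy relations of the movie example, stored as sets of tuples.
GM = {("Romance", "m1"), ("Romance", "m2"), ("Drama", "m3"), ("Drama", "m4"),
      ("Drama", "m5"), ("Drama", "m6"), ("Drama", "m7"),
      ("History", "m6"), ("History", "m7")}
MD = {("m1", "C. Waitt"), ("m2", "C. Waitt"), ("m3", "C. Waitt"),
      ("m4", "C. Waitt"), ("m5", "C. Waitt"),
      ("m6", "T. George"), ("m7", "T. George")}
DA = {("C. Waitt", 30), ("T. George", 60)}

def avg_director_age(genre):
    """Average age of the directors of the movies in a given genre."""
    movies = {m for (g, m) in GM if g == genre}
    directors = {d for (m, d) in MD if m in movies}
    ages = [a for (d, a) in DA if d in directors]
    return sum(ages) / len(ages)

print(avg_director_age("History"))  # 60.0
print(avg_director_age("Romance"))  # 30.0
```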
We study several randomization methods and show the combinatorial properties of the null distributions on multiple tables. Our contribution makes a first step towards understanding how the significance of a query is linked to the structure hidden in the data; randomizations are a sound statistical tool to make such a connection. We believe this is an important problem of interest to both the database and data mining communities. We present experimental results on synthetic data, and show the usability of the method for several queries in real datasets. Problem statement {#sect:probdef} ================= Let $A$ be a binary relation $A \subseteq I \times J$ between sets $I$ and $J$. In the market basket application, for example, $I$ could be a set of customers and $J$ a set of products. A binary relation $A \subseteq I \times J$ identifies which customers from $I$ buy which products from $J$. Notice that every binary relation can be seen as a binary matrix describing the occurrences between the row set $I$ and column set $J$, see Figure \[fig:toyex1\] for examples. Let $\{A_1,\ldots,A_n\}$ be a set of $n$ binary relations representing some structured data. This relational model is very general. It applies, for example, to a movie database system, as shown in Figure \[fig:toyex1\]. The representation of the same example as a sequence of bipartite graphs is depicted in Figure \[fig:toyex1:graph\]. ![The bipartite graph representation of the movie database shown in Figure \[fig:toyex1\]. The graph shows all the possible paths from the source nodes, $\mathrm{Genre}$, to the destination nodes, $\mathrm{Age}$.[]{data-label="fig:toyex1:graph"}](toy-example){width="6cm"} The basic operator to combine relations is the [*natural join*]{}. Conceptually, a join between two relations $A$ and $B$, denoted $A {{\negthickspace}\Join{\negthickspace}}B$, combines all entries from $A$ and $B$ that share common attribute values to return a composition of the relations. For example, given $(i,j) \in A$, $(j,k) \in B$ and $(j,k') \in B$, we have $(i,j,k) \in A {{\negthickspace}\Join{\negthickspace}}B$ and also $(i,j,k') \in A {{\negthickspace}\Join{\negthickspace}}B$. The join operator is associative over a set of relations and its result explicitly represents all existing paths between the occurring relations. For example, the natural join of the three tables in Figure \[fig:toyex1\] returns a tuple for each path there is between Genre and Age. For an [*ordered*]{} subset of binary relations from the database $S \subseteq \{A_1,\ldots,A_n\}$, we use ${\Join{\negthickspace}}S$ to denote the final join between all elements in $S$. The order in $S$ is to ensure a join of consistent relations; we assume that $S$ in ${\Join{\negthickspace}}S$ is always implicitly ordered. A [*query*]{} $q$ is applied to the join of a subset of the relations in the database $S \subseteq \{A_1,\ldots,A_n\}$. The result of a query is denoted by $q({\Join{\negthickspace}}S)$. We say that $S$ is the set of relations occurring in the query. A query can be described with the operators of projection and selection [@ramakrirshnan03], applied to a join ${\Join{\negthickspace}}S$. Projection is a unary operator $\pi_{X}({\Join{\negthickspace}}S)$ that restricts tuples of ${\Join{\negthickspace}}S$ to attributes in $X$. Selection is a unary operator $\sigma_\varphi({\Join{\negthickspace}}S)$ where $\varphi$ is a propositional formula. The operator selects all tuples in the relation ${\Join{\negthickspace}}S$ for which $\varphi$ holds. 
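The following short Python sketch (ours, and deliberately simplified: attributes are identified by position rather than by name) mirrors these three operators on binary relations stored as sets of tuples.

```python
def join(relations):
    """Natural join of an ordered list of binary relations (sets of pairs):
    the result contains one tuple per path through the chain of relations."""
    paths = {tup for tup in relations[0]}
    for rel in relations[1:]:
        paths = {p + (c,) for p in paths for (b, c) in rel if p[-1] == b}
    return paths

def select(tuples, position, value):
    """Selection: keep the tuples whose attribute at `position` equals `value`."""
    return {t for t in tuples if t[position] == value}

def project(tuples, positions):
    """Projection: restrict tuples to the given positions (duplicates collapse)."""
    return {tuple(t[i] for i in positions) for t in tuples}
```

With the toy relations of Figure \[fig:toyex1\], for instance, `join([GM, MD, DA])` returns one tuple per path from Genre to Age, which is exactly the behaviour described above.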
Consider the movie database in Figure \[fig:toyex1\]. A possible query is: select drama movies and project movie and age of its director. We can write this query as follows, $$q_1 = \pi_{\text{Movie,Age}}(\sigma_{\text{Genre = Drama}}(\mathrm{GM} {{\negthickspace}\Join{\negthickspace}}\mathrm{MD} {{\negthickspace}\Join{\negthickspace}}\mathrm{DA}))$$ The result of query $q_1$ is a set of pairs: $\{(m_3,30),(m_4,30),$ $(m_5,30),(m_6,60),(m_7,60)\}$. Another very similar query is: select drama movies and project age only. That is, $$q_2 = \pi_{\text{Age}}(\sigma_{\text{Genre = Drama}}(\mathrm{GM} {{\negthickspace}\Join{\negthickspace}}\mathrm{MD} {{\negthickspace}\Join{\negthickspace}}\mathrm{DA}))$$ Query $q_2$ returns:$\{30,60\}$. Although queries $q_1$ and $q_2$ are very similar, the projection made by $q_2$ on only Age, has eliminated repeated values. The results of query $q_1$ tell us how many paths there are between directors of Drama and Age, while in query $q_2$ we only know if a path exists or not. Our goal is to assess whether the results returned by a query provide significant information about our hypothesis on the data. For simplicity, a [*statistic*]{} $f$ is required to map the results of a query to a single real value. We assume this function $f$ is provided by the user together with the query; they define the hypothesis on the data the user wants to test. Examples of this statistic are the average of the returned results, or the number of tuples in the answer, but indeed $f$ can be any general function returning a real value. For example, the average value of Age in query $q_1$ is 42.5 (i.e., the average age of directors of Drama weighted by the number of directed movies). Then, we may want to know whether that average age is interesting or not. Another two-tailed hypothesis is whether that average is significantly different from the average age of directors of romance movies. Formally, our problem reads as follows. Given a set of binary relations $\{A_1,$ $\ldots,A_n\}$ of structured data and a query $q$ on some occurring $S \subseteq \{A_1,$ $\ldots,A_n\}$, is the value of $f(q({\Join{\negthickspace}}S))$ for a statistic $f$, significant (in some sense to be made more specific later)? Overview of the method ====================== In this section we present an overview of the approach and describe the intuition behind it. We show how our method can be used to test the significance of queries and to uncover the structurally important relations in the data. Significance testing via randomizations --------------------------------------- We approach the problem of testing the statistical significance of the query via randomizations. Randomizations have been widely used as a method to generate samples from null distributions. For example, in medical studies it is customary to measure the effect of a certain drug via permutation tests between the control and case group [@Good:2000]. For short, let $R = {\Join{\negthickspace}}S$ for some $S \subseteq \{A_1,\ldots,A_n\}$. To assess the significance of $f(q(R))$, we generate randomized versions of $R$ and run the same query over the samples. Let $\hat{\mathcal{R}} = \{\hat{R}_1,\ldots,\hat{R}_k\}$ be a set of randomizations of $R$. We will specify in Section \[subsec:randtypes\] how to generate such randomized versions of $R$. Then the one-tailed [*empirical $p$-value*]{} of $f(q(R))$ with the hypothesis of $f(q(R))$ being small is, $$\frac{|\{\hat{R} \in \hat{\mathcal{R}}: f(q(\hat{R})) \leq f(q(R))\}|+1}{k+1}. 
\label{eq:emppvalue}$$ This definition represents the fraction of randomized samples having a smaller value of the statistic $f$. If the $p$-value is small, e.g., below a threshold value $\alpha=0.05$, we can say that the value of $f(q(R))$ is significant in the original data. The one-tailed $p$-value with the hypothesis of $f$ being large and the two-tailed $p$-value are defined similarly. Where to randomize? {#subsec:whrtorand} ------------------- The challenge is how to generate the set $\hat{\mathcal{R}}$, that is, the different randomized versions of $R={\Join{\negthickspace}}S$, to compute the empirical $p$-value. Consider the toy example in Figure \[fig:toyex1\]. Suppose we want to evaluate whether the average age of the directors of drama movies, as in query $q_2$ of Section \[sect:probdef\], is young. A first naive approach is to directly randomize the binary matrix obtained from the boolean product of all relations from Genre to Age. The boolean product tells us whether there is a path from the set of nodes of Genre to the set of nodes of Age, as required by query $q_2$. A traditional permutation test[^3] on this new matrix shown in Figure \[fig:booleanprod\](a) can produce only two possible random samples: either the original matrix, or a matrix where the age values between Romance and History are swapped. For the particular case of romance movies with the hypothesis of the age being small, we would obtain a $p$-value close to 0.5 (i.e. 50% of the randomized samples would have the same value as the original). Thus the result is not significant. Indeed, under such randomization none of the three genres would test significantly small, nor large, nor different. Alternatively, we could apply a permutation test on the contingency table of paths [@art/ChenDHL05], shown in Figure \[fig:booleanprod\](b). This table gives the number of paths between Genre and Age, as required by $q_1$. The hypotheses related to our queries would never be significant under those permutation tests either. The problem of these naive approaches is that they ignore the structure of the relations occurring in the query. In our toy example there are three binary relations participating in the query: GM, MD and DA. Indeed these relationships convey some structure on the data: the relation GM shows that all history movies are also drama movies; the relation MD shows that all movies from Drama and Romance have been directed by the same person. How do these structures affect the significance of the results in a query? In queries involving multiple binary relations, there is no unique way to randomize. To assess the structural effect that each relation from $S$ has over the query $q({\Join{\negthickspace}}S)$, we should randomize only the corresponding relation. That is, the different randomizations of ${\Join{\negthickspace}}S$ are obtained by randomizing a single relation $A \in S$ while keeping the rest fixed. More formally, the random samples of ${\Join{\negthickspace}}S$, when only $A \in S$ is randomized, are defined as follows: $$\hat{\mathcal{R}}_A = \{{\Join{\negthickspace}}\text{T}\cup \hat{A}\ |\ \hat{A} \in \hat{\mathcal{A}} \text{ and } T = S\backslash A\},$$ where $\hat{\mathcal{A}} = \{\hat{A}_1,\ldots,\hat{A}_k\}$ is the set of randomized versions of the original $A \in S$. In Section \[subsec:randtypes\] we describe the different randomization techniques to obtain such samples.
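In code, the procedure can be sketched as follows. This is a minimal, illustrative Python rendering of ours (not the authors' implementation); `evaluate` stands for the composition $f(q({\Join{\negthickspace}}\cdot))$, and `randomize` is a placeholder for one of the concrete randomization schemes described in Section \[subsec:randtypes\].

```python
import random

def empirical_p_value(relations, idx, randomize, evaluate, k=999):
    """One-tailed empirical p-value (hypothesis: the statistic is small) obtained
    by randomizing only relations[idx] while keeping the other relations fixed."""
    observed = evaluate(relations)
    count = 0
    for _ in range(k):
        sample = list(relations)
        sample[idx] = randomize(sample[idx])
        if evaluate(sample) <= observed:
            count += 1
    return (count + 1) / (k + 1)

# Placeholder randomization: randomly re-pair the left and right attribute values.
# (The schemes used in the paper -- swap randomization and row/column permutations --
#  preserve more of the relation's structure; see the next section.)
def shuffle_pairs(relation):
    lefts, rights = zip(*relation)
    return set(zip(lefts, random.sample(rights, len(rights))))
```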
Finally, these randomized samples $\hat{\mathcal{R}}_A$ will be used to compute the corresponding $p$-value, as described in Equation \[eq:emppvalue\]. Observe that for a given query involving relations in $S$, we can obtain one $p$-value for each $A \in S$ we randomize (while keeping $S\backslash A$ fixed). Each $p$-value is interesting as it measures the structural effect that the participant relation $A$ has on the significance of the result of the query. The sketch of the method is described in Algorithm \[alg:method\]. The basis of our proposal can be found in traditional statistics under the name of restricted randomizations (see e.g. [@Good:2000]; these are typically used to test whether a treatment variable has an effect on a response variable).

**Algorithm \[alg:method\].**
*Input:* a set of binary relations $S \subseteq \{A_1,\ldots,A_n\}$, a query $q({\Join{\negthickspace}}S)$ and a hypothesis over the statistic $f(q({\Join{\negthickspace}}S))$.
*Output:* a set of $p$-values.
For each relation $A \in S$:

1. Obtain $k$ random samples of $A$, $\hat{\mathcal{A}} = \{\hat{A}_1,\ldots,\hat{A}_k\}$. \[alg:qs:obt\]
2. Let $\hat{\mathcal{R}}_A = \{{\Join{\negthickspace}}\text{T}\cup \hat{A}\ |\ \hat{A} \in \hat{\mathcal{A}} \text{ and } T = S\backslash A\}$. \[alg:qs:comb\]
3. Compute the $p$-value using the random samples $\hat{\mathcal{R}}_A$.

Example {#sub:example} ------- We now study the toy example in Figure \[fig:toyex1\]. Consider a query defined as $q_1$ from Section \[sect:probdef\], but evaluated for each of the three genres. The first hypothesis, that romance movies are directed by young directors, obtains a $p$-value of 0.131 when randomizing on GM, a $p$-value of 0.494 on MD, and a $p$-value of 0.495 on DA. The hypothesis is not significant under any randomization, but we observe that randomizing on GM yields the smallest $p$-value for this query. The hypothesis that history movies are directed by old directors obtains $p$-values 0.269, 0.045, 0.495 when randomizing on GM, MD, DA, respectively. Thus the hypothesis is significant considering the structure in relation MD: all non-history movies are directed by the same person. Finally, the hypothesis that drama movies are directed by young directors is not significant in any of the randomizations, always with a $p$-value close to $1$ when randomizing on GM or MD, and a $p$-value of 0.495 when randomizing on DA. In summary: the age value of 30 associated to romance movies is close to being significant when randomizing on GM because Romance is a non-intersecting genre with Drama and History; the age value of 60 associated to history movies is significant when randomizing on MD because, when focusing on the directors, the history movies are non-intersecting with the romance and drama movies—all romance and drama movies are directed by the same person; also, the relation DA always swaps with equal probability, because of its one-to-one structure. In the next section we will better understand the reasons behind these observations. Randomizations in multi-relational model ======================================== This section describes how to obtain random samples for a single relation $A$ (line \[alg:qs:obt\] in Algorithm \[alg:method\]), and presents the combinatorial properties of combining such samples with the other relations in the query (line \[alg:qs:comb\] in Algorithm \[alg:method\]). Types of randomization {#subsec:randtypes} ---------------------- Given a binary relation $A$ we use three different types of randomization to obtain random samples from $A$.
The running times and space consumptions of the methods are linear in the size of the relation $A$. - [*Swap randomization*]{} of $A$, as used in [@cobb03; @swaprand-gionisetal], produces random samples of $A$ that preserve the row and column sums. The algorithm starts from $A$ and performs local swaps interchanging a pair of 1’s with a pair of 0’s preserving the row and column sums. Technically, a local swap consists of selecting entries $(i,j),(k,l) \in A$ such that $(i,l),(k,j) \notin A$, and swapping the elements so that $(i,j),(k,l) \notin A$ and $(i,l),(k,j) \in A$. On the bipartite graph representation of the relation $A$, a local swap represents a flip between two independent edges. ![image](sw1){width="2.3cm"} ![image](sw2){width="2.3cm"} A sequence of swaps is performed until the data mixes sufficiently enough in a Markov chain approach [@besag; @besagclifford89], and therefore, a random sample of $A$ is obtained. We use ten times the number of ones in the matrix as the number of swaps, which suffices for the convergence of the chain [@swaprand-gionisetal]. We denote the set of all random samples reached via swap randomization of $A$ as $\sw(A)$. - [*Row permutation*]{} of $A$ permutes the order of the rows of $A$. We denote the set of all random samples reached via row permutation of $A$ as $\rp(A)$. - [*Column permutation*]{} of $A$ permutes the order of the columns of $A$. We denote the set of all random samples reached via column permutation of $A$ as $\cp(A)$. Note, particularly, that $\sw(A)$, $\rp(A)$ and $\cp(A)$ refer to sets of matrices. The relationship between swap randomizations and permutations can be stated as follows. Let $A$ be a binary matrix. Then: - $\rp(A) =\sw(I) \cdot A$, where $I$ is an identity matrix; - $\cp(A) = A \cdot \sw(I)$, where $I$ is an identity matrix; - if $A$ has one 1 in each row, then $\sw(A) = \rp(A)$; if $A$ has one 1 in each column, then $\sw(A) = \cp(A)$. \[prop0\] Note that $\sw(I)$, for identity matrix $I$, can produce any swap permutation matrix with uniform distribution. Thus, we have that the boolean product $\sw(I) \cdot A$ produces all permutations for the rows of $A$ and similarly, $A \cdot \sw(I)$ produces all permutations of the columns of $A$. Intuitively, these row (or column) permutations can be seen as a random re-assignment of the row (or column) names in $A$. While the swap randomization has been used in [@swaprand-gionisetal] to assess the data mining results on a single binary relation, the new randomizations, corresponding to row and column permutations, do not make sense in such a context. The row or column permutation of a matrix does not change any of the frequent pattern solutions in the new randomized matrix. These permutations only make sense in a multi-relational data model, where the permuted matrices are combined with other relations. Both row and column permutation of a single relation change the global paths from the source nodes to destination nodes in the query graph, and thus, the evaluation of the query can change on the randomized data. Properties ---------- Next we study the properties of combining the obtained random samples with the other relations in the query. For simplicity, we study the case of queries with only two occurring relations $q(A{{\negthickspace}\Join{\negthickspace}}B)$ and use boolean product as a simplification of the natural join. 
For notational convenience, we overload the boolean product for the sets of binary matrices, e.g., $\sw(A) \cdot \sw(B)$ represents the boolean product of each pair of elements $A \in \sw(A)$ and $B\in \sw(B)$. The following inclusions with swap randomization follow immediately after the definitions. All other inclusions do not hold. The inclusions can also be proper in all cases. Let $A,B$ be binary matrices. Then: - $A \cdot B \subseteq \sw(A) \cdot B \subseteq \sw(A) \cdot \sw(B)$; - $A \cdot B \subseteq A \cdot \sw(B) \subseteq \sw(A) \cdot \sw(B)$; - $A \cdot B \subseteq \sw(A \cdot B)$. \[prop1\] Proposition \[prop1\] tells us that the set of samples that can be obtained by randomizing two relations is larger than by randomizing only one relation. As discussed in Section \[subsec:whrtorand\], we prefer to randomize a single table at a time in order to control much better the structural effect the randomized relation has on the query. Additionally, we know that the set of randomized samples $\sw(A) \cdot B$ is different from the set $A \cdot \sw(B)$, thus it makes sense to do them both separately. Next we present several properties relating swap randomization to row and column permutations. Let $A,B$ be binary relations. If $B$ is a one-to-one relation, then $A \cdot \sw(B) = \cp(A)$. If $A$ is a one-to-one relation, then $\sw(A) \cdot B = \rp(B)$. \[prop2\] Proposition \[prop2\] follows immediately from Proposition \[prop0\]. In real world datasets, it is quite common to have one-to-one relations. For example, the ages of the directors in the example in Figure \[fig:toyex1\] are one-to-one. Thus swap randomization of the relation DA produces the same set of samples as the column permutation of MD. Let $A,B$ binary relations. Then: - $\cp(A \cdot B) = A \cdot \cp(B)$ - $\rp(A \cdot B) = \rp(A) \cdot B$ - $\cp(A) \cdot B = A \cdot \rp(B) = \cp(A) \cdot \rp(B)$ \[prop3\] This means that column and row permutations do not make sense in more than one relation, e.g., $A \cdot \cp(B \cdot C \cdot D) \cdot E = A \cdot B \cdot C \cdot \cp(D) \cdot E$. The last property of Proposition \[prop3\] states that only one permutation, either column permutation on $A$ or row permutation on $B$, is indeed necessary. Finally, we give an implication of Proposition \[prop0\] that reduces the number of different randomizations considerably. Let $A,B$ be binary relations. Then: $A \cdot \sw(I) \cdot B = \cp(A)\cdot B = A \cdot \rp(B)$, where $I$ is an identity matrix. \[prop4\] Hence, we prefer to use the notation with the identity matrix $I$ to refer to the row and column permutations. The operation $A \cdot \sw(I) \cdot B$ randomizes the boolean product, whereas the operations $\sw(A) \cdot B$ and $A \cdot \sw(B)$ randomize the original data. From this perspective, $A \cdot \sw(I) \cdot B$ tells about the significance of the combination operation, while $\sw(A) \cdot B$ tells whether the structure in $A$ is significant. To sum up, we have the following result: For a query $q(A {{\negthickspace}\Join{\negthickspace}}B)$, there exist three different randomizations: (i) $\sw(A)$ while keeping $B$ fixed; (ii) $\sw(B)$ while keeping $A$ fixed; (iii) $\sw(I)$ where $I$ is an identity relation between the columns of $A$ and the rows of $B$. Notice that if $A$ or $B$ are one-to-one relations, then randomization (iii) will be the same as (i) or (ii) respectively. Each randomization provides a set of samples from where we can compute a $p$-value for our query (hypothesis on the data). 
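To make the three randomizations concrete, here is a small NumPy sketch of ours (not the authors' Java/MATLAB implementation); `swap_randomize` follows the local-swap procedure of Section \[subsec:randtypes\], using a number of swap attempts proportional to the number of 1s.

```python
import numpy as np

def swap_randomize(A, rng, swaps_per_one=10):
    """Swap randomization of a binary matrix: local swaps that preserve the
    row and column sums (roughly 10x the number of 1s as swap attempts)."""
    A = A.copy()
    ones = [tuple(p) for p in np.argwhere(A)]
    for _ in range(swaps_per_one * len(ones)):
        a, b = rng.choice(len(ones), size=2, replace=False)
        (i, j), (k, l) = ones[a], ones[b]
        if i != k and j != l and A[i, l] == 0 and A[k, j] == 0:
            A[i, j] = A[k, l] = 0
            A[i, l] = A[k, j] = 1
            ones[a], ones[b] = (i, l), (k, j)
    return A

def boolean_product(A, B):
    """Boolean matrix product: entry (i, k) is 1 iff A[i, j] = B[j, k] = 1 for some j."""
    return ((A @ B) > 0).astype(int)

def randomized_products(A, B, rng):
    """The three randomizations of a query q(A |><| B): sw(A).B, A.sw(B), A.sw(I).B."""
    I = np.eye(A.shape[1], dtype=int)  # identity between A's columns and B's rows
    return {
        "sw(A).B": boolean_product(swap_randomize(A, rng), B),
        "A.sw(B)": boolean_product(A, swap_randomize(B, rng)),
        "A.sw(I).B": boolean_product(boolean_product(A, swap_randomize(I, rng)), B),
    }
```

Note that, as stated above, if $A$ or $B$ is one-to-one then randomization (iii) coincides with (i) or (ii), respectively, so fewer distinct sample sets need to be generated.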
Every $p$-value is interesting as it shows how the structure of the randomized relation affects the significance. Example revisited ----------------- The $p$-values reported in Section \[sub:example\] for the toy example in Figure \[fig:toyex1\], correspond to swap randomization of the binary tables GM, or MD, or DA, respectively. Indeed because MD has one single 1 in each row, we have that $\text{GM} \cdot \sw(I) \cdot \text{MD} \cdot \text{DA}$ is equal to $\text{GM} \cdot \sw(\text{MD}) \cdot \text{DA}$. Similarly, because DA is a one-to-one relation, we have $\text{GM} \cdot \text{MD} \cdot \sw(I) \cdot \text{DA}$ equals $\text{GM} \cdot \text{MD} \cdot \sw(\text{DA})$. Thus, for this example, only swap randomization in the three tables is necessary. Interestingly, we can understand better now the $p$-values reported in Section \[sub:example\]. On the relation GM, drama movies and history movies have no independent edges to swap between them. Therefore, the pattern of History implying Drama tends to remain in random samples. As a result, the $p$-value of the hypothesis related to history or drama movies is not significant. On the other hand, the $p$-value related to romance movies becomes close to being significant because, for this genre, the null distribution diverges more from the original. The fact that there are only two romance movies raises this $p$-value slightly above the 0.05 threshold. Similar explanation goes when randomizing MD. When looking at MD, local swaps can interchange at most two edges between movies of the young director C. Waitt and movies of the not-so-young director T. George. Actually, in all random samples coming from MD we observe that C. Waitt has always at least three movies from either drama or romance. As a result, neither drama nor romance can be significant—in the null distribution they are always closely linked to a young director as in the original data. Yet, history movies directed by T. George have more local swaps that would create a diverging null distribution—most of the samples in the null distribution have the history movies connected to the age of 30. The hypothesis of history movies being directed by a not-so-young person is then significant. Studying path distributions =========================== For a query $q(A {{\negthickspace}\Join{\negthickspace}}\ldots {{\negthickspace}\Join{\negthickspace}}B)$ where $A \subseteq I \times L$ and $B \subseteq J \times K$, let $P = A \ast \ldots \ast B$ be the matrix product of all relations participating in $q$. This corresponds to the contingency table of paths from origin $I$ to destination nodes $K$. An example is shown in Figure \[fig:booleanprod\](b) for the toy data of Figure \[fig:toyex1\]. For all types of queries, the significance of the result is closely related to the path distributions between nodes $I$ and $K$. For example, suppose we want to test whether the average age of history-movie directors is large. In the original data of Figure \[fig:toyex1:graph\] there are two paths from History to the age of 60 and no path to the age of 30. It is sensible to assume that if we had random samples where paths are mainly swapped the other way round, the hypothesis would be significant. Naturally, a simple way to visualize whether there exists an interesting finding in the data is to compare the path distribution of $P$ with the expected path distribution on the given random samples. The larger the change, the more significant the result would tend to be. 
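A possible way to carry out this comparison numerically is sketched below (an illustrative NumPy snippet of ours; `randomize` can be, for instance, the local-swap procedure sketched earlier).

```python
import numpy as np

def path_counts(matrices):
    """Contingency table of paths: the ordinary (integer) matrix product of the
    binary matrices counts the paths from the source to the destination nodes."""
    P = matrices[0]
    for R in matrices[1:]:
        P = P @ R
    return P

def expected_path_counts(matrices, idx, randomize, rng, n_samples=1000):
    """Monte Carlo estimate of the expected path counts when only matrices[idx]
    is randomized and the remaining relations are kept fixed."""
    total = np.zeros(path_counts(matrices).shape, dtype=float)
    for _ in range(n_samples):
        sample = list(matrices)
        sample[idx] = randomize(sample[idx], rng)
        total += path_counts(sample)
    return total / n_samples
```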
The following three matrices show the expectation of the paths when swap randomizing relation GM, MD or DA, respectively, for the example in Figure \[fig:toyex1\]: $$E[\sw(\text{GM}) \ast \text{MD} \ast \text{DA}] = \begin{pmatrix} \mathbf{0.849} & \mathbf{1.151}\\ 3.269 & 1.731 \\ 0.882 & 1.118 \end{pmatrix}, \quad E[\text{GM} \ast \sw(\text{MD}) \ast \text{DA}] = \begin{pmatrix} 1.413 & 0.587 \\ 3.587 & 1.413 \\ \mathbf{1.455} & \mathbf{0.545} \end{pmatrix}, \quad E[\text{GM} \ast \text{MD} \ast \sw(\text{DA})] = \begin{pmatrix} 0.984 & 1.016 \\ 2.492 & 2.508 \\ 1.016 & 0.984 \end{pmatrix}.$$ The genre that swaps most of its paths under randomizations with GM is Romance. History swaps the paths from the age of 60 to the age of 30 when randomizing on MD. Randomization on DA distributes paths fifty-fifty for each genre. The $p$-values obtained there were always close to 0.5. Empirical results ================= In this section we present empirical results on synthetic and real datasets. Our real dataset is **MovieLens**, which is very similar to IMDb. In all cases, we calculate the empirical $p$-values over 999 randomized samples and use the threshold of $\alpha =0.05$ to determine the query significance. The randomization methods are fast in practice. In our experiments, producing one randomized sample took approximately the same time as evaluating the query. With the tested datasets, the times for producing one sample were at most a few seconds with Java implementations integrated with MATLAB on a 2.2GHz Opteron. The time and space consumption of the methods scale linearly in the size of the relation. In large-scale applications, a smaller number of randomized samples can be used to calculate the empirical $p$-values. For example, 30 samples is usually sufficient in a preliminary significance analysis. This corresponds to approximately a 30-fold increase in the evaluation time. Synthetic dataset ----------------- To motivate our approach and to better understand why randomizations are consistent with the inferences about our hypothesis, we generate a synthetic dataset to simulate relations of users, movies and genres. We will be interested in testing the following hypothesis. \[query:synth:mldtomtw\] Men watch different types of movies than women. The relations occurring in the query are: Gender$\times$User (SU), User$\times$Movie (UM) and Movie$\times$Genre (MG). For studying the behavior of randomizations, we generate the tables SU, UM and MG to make our hypothesis clearly significant. We let SU contain 30 men and 20 women, thus SU is a $2\times50$ binary table where the first 30 values in the first row and the last 20 values in the second row are 1s. We generate UM to be a $50\times100$ binary table where men watch any of the first 60 movies with a probability of 0.40 and any of the last 40 movies with a probability of 0.05. To create a strong pattern, we let the probabilities of a female watching movies be the other way round. Finally, we generate MG as a $100\times6$ binary table where the first three genres will be considered to be manly and the last three genres will be considered to be womanly. For each movie in the relation, we select two genres as follows: for the first 60 movies we select a genre from the manly genres with a probability of 0.9 and from the womanly genres with a probability of 0.1. For the last 40 movies the probabilities are the other way round.
So, each movie has at most two genres, because if we happen to select the same genre for a movie twice, then we say that the movie has only one genre. Next we create the anti-tables from those above, called rSU, rUM and rMG. These anti-tables will not contain any structure at all; they are random. We let rSU be a $2\times50$ binary table with 30 men and 20 women where the order of the users is random. We generate rUM to be a $50\times100$ table with each element being 1 with a probability of $(0.40+0.05)/2$. And we let rMG be formed similarly to MG but with the two genres for each movie assigned uniformly with replacement. The goal of this experiment is to study how the $p$-values of Hyp \[query:synth:mldtomtw\] change when the original significant tables SU, UM and MG are combined with one of these non-significant tables. Figure \[fig:synt:paths:distr\] shows the contingency table of paths for those combinations. We notice that using the original tables SU, UM and MG (Figure \[fig:synt:paths:distr\](a)) clearly produces a significant difference between the types of movies that males and females watch. By replacing one of the original tables with a random version, the pattern seems to disappear. Still, we cannot clearly see from the path distributions which of the underlying tables mainly breaks the original structure. We would like to check with our tests whether randomizing in the proper tables will tell us where the pattern is broken. For the test, we use the following statistic. \[stat:synth:mldtomtw\] $L_1$ distance between the distribution of genres of the movies that men and women have watched. This statistic is the sum of the absolute differences between the proportions of paths of men and women, as shown in Figure \[fig:synt:paths:distr\] for each of the combinations. The original value of the statistic with the tables SU, UM and MG is 1.23, implying a clear difference between males and females. When one of the tables SU, UM and MG is replaced with a corresponding anti-table, the value of the $L_1$ statistic is around 0.1.

| A   | B   | C   | $(I_{AB})$ | $(B)$     | $(I_{BC})$ | $(C)$     |
|-----|-----|-----|------------|-----------|------------|-----------|
| SU  | UM  | MG  | 0.001      | 0.001     | 0.001      | 0.001     |
| rSU | UM  | MG  | **0.517**  | 0.030     | 0.013      | 0.003     |
| SU  | rUM | MG  | **0.282**  | **0.279** | **0.155**  | 0.124     |
| SU  | UM  | rMG | 0.001      | 0.001     | **0.704**  | **0.727** |

: Significance testing of Hyp \[query:synth:mldtomtw\] for the different table combinations and randomizations.[]{data-label="table:sign:synth"}

In Table \[table:sign:synth\] we show the results of the several significance tests for the hypothesis Hyp \[query:synth:mldtomtw\] on the several combined tables. There is a clear connection between the structure of the relations A, B and C occurring in the query and the $p$-values obtained by randomizing in different relations. As expected, the empirical $p$-value of Hyp \[query:synth:mldtomtw\] with tables SU, UM and MG is significant with randomizations in all tables. On the other hand, when one of the clearly-structured tables SU, UM or MG is replaced by the anti-table rSU, rUM or rMG, respectively, we obtain large empirical $p$-values for those randomizations that touch the anti-tables (see the bold values of Table \[table:sign:synth\]). This illustrates how randomizations can reveal the structural effects behind the significance of a query. MovieLens dataset ----------------- The **MovieLens** data is collected through the MovieLens web site ([movielens.umn.edu](movielens.umn.edu)).
The downloadable data is already cleaned up, i.e., users who had fewer than 20 ratings or did not have complete demographic information were removed from the data set. In all, the data consists of 100,000 ratings (valued from 1 to 5) from 943 users on 1,682 movies. Each user has rated at least 20 movies, and the demographic information for the users corresponds to the attributes age, gender, occupation and zip code. For each movie we have the title, release year and a list of genres. Furthermore, we interpret that if a user has rated a movie, it means that he or she has watched it. This corresponds to the binary table named UM. We do not use the information of the ratings in any other way. In Table \[table:movielens:summary\] we summarize the binary relations in the **MovieLens** dataset. The table UA is just an identity matrix which maps the users to their ages; thus, two different columns of the table UA may correspond to the same age. Handling numerical values in this way guarantees that two users having the same age are not combined into a single user after a join and a projection.

| Relation | Description            | Rows | Cols | \# of 1’s/row |
|----------|------------------------|------|------|---------------|
| UM       | User$\times$Movie      | 943  | 1680 | 106           |
| MG       | Movie$\times$Genre     | 1680 | 18   | 1.7           |
| UO       | User$\times$Occupation | 943  | 21   | 1             |
| US       | User$\times$Gender     | 943  | 2    | 1             |
| UA       | User$\times$Age        | 943  | 943  | 1             |

: Summary of tables in the MovieLens dataset. The table $\mathrm{UA}$ is an identity map between users and their ages. We denote a transpose by reversing the relation name.[]{data-label="table:movielens:summary"}

Next, we go through a few queries on the dataset and analyze their significance. \[query:mldtomtw\] Men watch different types of movies than women. \[stat:mldtomtw\] $L_1$ distance between the distribution of genres of the movies that men and women have watched. In Table \[table:sign:query:mldtomtw\] we give the empirical $p$-values for Hyp \[query:mldtomtw\]. Each row shows the relation being randomized for obtaining the corresponding $p$-value. The query associated with the hypothesis traverses the relations Gender $\times$ User $\times$ Movie $\times$ Genre, corresponding to relations SU, UM and MG. There are five different types of randomizations of the query, each of which produces its own $p$-value. The results in Table \[table:sign:query:mldtomtw\] show that Hyp \[query:mldtomtw\] is significant with respect to all different randomizations.
|  | Mean (Std) | $p$-value |
|---|---|---|
| $\mathrm{SU} {{\negthickspace}\Join{\negthickspace}}\mathrm{UM} {{\negthickspace}\Join{\negthickspace}}\mathrm{MG}$ | 0.16 |  |
| $\sw(\mathrm{SU}) {{\negthickspace}\Join{\negthickspace}}\mathrm{UM} {{\negthickspace}\Join{\negthickspace}}\mathrm{MG}$ | 0.03 (0.01) | 0.001 |
| $\mathrm{SU} {{\negthickspace}\Join{\negthickspace}}\sw(I) {{\negthickspace}\Join{\negthickspace}}\mathrm{UM} {{\negthickspace}\Join{\negthickspace}}\mathrm{MG}$ | 0.03 (0.01) | 0.001 |
| $\mathrm{SU} {{\negthickspace}\Join{\negthickspace}}\sw(\mathrm{UM}) {{\negthickspace}\Join{\negthickspace}}\mathrm{MG}$ | 0.01 (0.00) | 0.001 |
| $\mathrm{SU} {{\negthickspace}\Join{\negthickspace}}\mathrm{UM} {{\negthickspace}\Join{\negthickspace}}\sw(I){{\negthickspace}\Join{\negthickspace}}\mathrm{MG}$ | 0.03 (0.01) | 0.001 |
| $\mathrm{SU} {{\negthickspace}\Join{\negthickspace}}\mathrm{UM} {{\negthickspace}\Join{\negthickspace}}\sw(\mathrm{MG})$ | 0.02 (0.00) | 0.001 |

: Significance evaluation of Hyp \[query:mldtomtw\]. Mean and std are the average and standard deviation of Statistic \[stat:mldtomtw\] in the original input data (first row) and several randomizations.[]{data-label="table:sign:query:mldtomtw"}

Indeed, the results on Hyp \[query:mldtomtw\] seem to indicate that men watch movies of different genres than women. All randomizations are consistent. We will next analyze which genres separate men and women. We repeat the following hypothesis (with associated query) for each genre $G$. \[query:mlggmtw\] Men watch genre $G$ more (or less) than women. \[stat:mlggmtw\] The difference between the %-proportions of the movies from genre $G$ among all the movies men and women have watched. Notice this statistic is similar to Statistic \[stat:mldtomtw\] but now we only look at the difference for the specific genre $G$. The empirical $p$-values of the significance tests of Hyp \[query:mlggmtw\] are given in Table \[table:sign:query:mlggmtw\]. Again we find that randomizing in different relations produces fairly similar results in general. We can observe that men watch significantly more, for example, action and sci-fi movies than women, whereas women watch significantly more romance and drama movies than men. Interestingly, we can say that the popularity of mystery and documentary movies does not depend on the gender. Actually, the genres which have the smallest number of movies are the least significant ones. The genres with the fewest movies are fantasy (with 22 movies), film-noir (24), western (27), animation (41) and documentary (50).

| $G$ | Orig. | (SU) | ($I_1$) | (UM) | ($I_2$) | (MG) |
|------------|------|-------|-------|-------|-------|-------|
| Action | 2.5 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Sci-fi | 1.5 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Thriller | 1.1 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Adventure | 0.8 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Crime | 0.6 | 0.002 | 0.001 | 0.001 | 0.001 | 0.002 |
| War | 0.5 | 0.002 | 0.001 | 0.001 | 0.004 | 0.002 |
| Horror | 0.4 | 0.019 | 0.018 | 0.001 | 0.011 | 0.020 |
| Western | 0.2 | 0.001 | 0.001 | 0.001 | 0.005 | 0.003 |
| Film-noir | 0.1 | 0.012 | 0.009 | 0.001 | 0.054 | 0.058 |
| Mystery | 0.0 | 0.392 | 0.401 | 0.395 | 0.424 | 0.469 |
| Document. | 0.0 | 0.404 | 0.392 | 0.391 | 0.468 | 0.489 |
| Fantasy | -0.1 | 0.064 | 0.070 | 0.051 | 0.243 | 0.201 |
| Animation | -0.2 | 0.032 | 0.033 | 0.001 | 0.027 | 0.018 |
| Musical | -0.5 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Children’s | -1.0 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Comedy | -1.3 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Drama | -2.3 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |
| Romance | -2.3 | 0.001 | 0.001 | 0.001 | 0.001 | 0.001 |

: Significance evaluation of Hyp \[query:mlggmtw\] for each genre $G$. Orig. is the original value of Statistic \[stat:mlggmtw\].[]{data-label="table:sign:query:mlggmtw"}

Next we study users by their occupation. \[query:uwagowdtomtou\] The users with occupation $O$ watch different types of movies than other users. \[stat:uwagowdtomtou\] $L_1$ distance between the distributions of genres of the movies watched by users with occupation $O$ and users with other occupations. The results of the significance tests are given in Table \[table:sign:query:uwagowdtomtou\]. When evaluating the associated query, we find that randomizing in different relations matters for that query. For most of the occupations, Hyp \[query:uwagowdtomtou\] is not significant when randomizing on $\sw(\mathrm{OU}){{\negthickspace}\Join{\negthickspace}}\mathrm{UM}{{\negthickspace}\Join{\negthickspace}}\mathrm{MG}$ or on $\mathrm{OU}{{\negthickspace}\Join{\negthickspace}}\sw(I){{\negthickspace}\Join{\negthickspace}}\mathrm{UM}{{\negthickspace}\Join{\negthickspace}}\mathrm{MG}$. For the other randomizations we have that all occupations, except for homemakers, exhibit significance of the hypothesis. We observe that the largest occupation groups of librarians (51), educators (95) and students (196) have the most significant empirical $p$-values for the query, with all types of randomizations. We could infer that those types of users watch different genres than other users.

| Occupation | Orig. | Mean | (Std) | $p$-val. | Mean | (Std) | $p$-val. |
|------------|-------|------|--------|-----------|------|--------|-----------|
| None | 0.23 | 0.13 | (0.05) | **0.038** | 0.07 | (0.01) | 0.001 |
| Librarian | 0.18 | 0.05 | (0.02) | **0.001** | 0.04 | (0.01) | 0.001 |
| Retired | 0.18 | 0.10 | (0.04) | **0.040** | 0.05 | (0.01) | 0.001 |
| Homemaker | 0.17 | 0.14 | (0.05) | 0.269 | 0.15 | (0.03) | **0.226** |
| Doctor | 0.15 | 0.14 | (0.05) | 0.373 | 0.08 | (0.02) | 0.001 |
| Entert. | 0.14 | 0.09 | (0.03) | 0.073 | 0.04 | (0.01) | 0.001 |
| Educator | 0.13 | 0.04 | (0.01) | **0.001** | 0.03 | (0.01) | 0.001 |
| Lawyer | 0.13 | 0.11 | (0.04) | 0.237 | 0.05 | (0.01) | 0.001 |
| Salesman | 0.12 | 0.11 | (0.04) | 0.330 | 0.06 | (0.01) | 0.001 |
| Healthcare | 0.12 | 0.09 | (0.03) | 0.211 | 0.04 | (0.01) | 0.001 |
| Student | 0.11 | 0.03 | (0.01) | **0.001** | 0.03 | (0.01) | 0.001 |
| Scientist | 0.11 | 0.07 | (0.02) | 0.052 | 0.05 | (0.01) | 0.001 |
| Artist | 0.10 | 0.07 | (0.03) | 0.130 | 0.04 | (0.01) | 0.001 |
| Technician | 0.10 | 0.07 | (0.03) | 0.183 | 0.03 | (0.01) | 0.001 |
| Programmer | 0.08 | 0.05 | (0.02) | **0.025** | 0.03 | (0.01) | 0.001 |
| Engineer | 0.08 | 0.05 | (0.02) | **0.034** | 0.03 | (0.01) | 0.001 |
| Marketing | 0.08 | 0.07 | (0.03) | 0.340 | 0.05 | (0.01) | 0.006 |
| Writer | 0.08 | 0.06 | (0.02) | 0.122 | 0.03 | (0.01) | 0.001 |
| Executive | 0.07 | 0.07 | (0.02) | 0.337 | 0.04 | (0.01) | 0.001 |
| Administr. | 0.05 | 0.04 | (0.02) | 0.367 | 0.02 | (0.01) | 0.001 |
| Other | 0.04 | 0.04 | (0.01) | 0.483 | 0.02 | (0.00) | 0.002 |

: Significance evaluation of Hyp \[query:uwagowdtomtou\]. Orig. is the original value of Statistic \[stat:uwagowdtomtou\].[]{data-label="table:sign:query:uwagowdtomtou"}

\[query:aauwhwmggs\] The average age of the users who have watched movies of a given genre is significant. \[stat:aauwhwmggs\] Weighted average age of the users who have watched movies of the given genre. The results of assessing Hyp \[query:aauwhwmggs\] are given in Table \[table:sign:query:aauwhwmggs\].
The empirical $p$-values of the queries depend largely on the type of randomization used. By randomizing the ages of the users, that is, $\sw(\mathrm{AU}){{\negthickspace}\Join{\negthickspace}}\mathrm{UM}{{\negthickspace}\Join{\negthickspace}}\mathrm{MG}$, the genres for which the average age of the watchers is originally around 34 years are not significant. This makes sense when compared to the average age over all users, which is 34.1 years. Notice that in the query the average is weighted by the number of movies watched by the user. Thus randomizing the table AU tests the connection between the ages and the users. The other randomizations tell us that the results on western, musical, crime and fantasy are not significant, whereas the results on the other genres are significant. Thus the inner structure of the User$\times$Movie and Movie$\times$Genre relations explains the results of our query. The average ages of the users of the genres with a star in Table \[table:sign:query:aauwhwmggs\] were significant with all types of randomizations.

| Genre | Orig. | (AU) | (UM) | $\sw(I_2)$ | (MG) |
|--------------|------|-----------|-----------|-----------|-----------|
| Film-noir\* | 35.8 | 0.001 | 0.001 | 0.003 | 0.001 |
| Documentary | 35.0 | **0.134** | 0.001 | 0.001 | 0.001 |
| Mystery | 34.3 | **0.197** | 0.001 | 0.004 | 0.001 |
| War | 34.2 | **0.308** | 0.001 | 0.004 | 0.001 |
| Drama | 34.1 | **0.493** | 0.001 | 0.001 | 0.001 |
| Western | 33.8 | **0.307** | 0.001 | **0.168** | **0.060** |
| Romance\* | 33.4 | 0.024 | 0.001 | 0.039 | 0.002 |
| Musical | 33.0 | 0.016 | **0.253** | **0.469** | **0.257** |
| Crime | 32.6 | 0.001 | 0.001 | **0.181** | **0.411** |
| Comedy\* | 32.5 | 0.001 | 0.001 | 0.003 | 0.007 |
| Thriller\* | 32.2 | 0.001 | 0.001 | 0.003 | 0.004 |
| Adventure\* | 32.0 | 0.001 | 0.001 | 0.001 | 0.006 |
| Fantasy | 32.0 | 0.002 | 0.001 | **0.130** | **0.164** |
| Children’s\* | 31.8 | 0.001 | 0.001 | 0.002 | 0.001 |
| Sci-fi\* | 31.8 | 0.001 | 0.001 | 0.001 | 0.003 |
| Action\* | 31.7 | 0.001 | 0.001 | 0.001 | 0.001 |
| Horror\* | 31.1 | 0.001 | 0.001 | 0.001 | 0.001 |
| Animation\* | 30.9 | 0.001 | 0.001 | 0.004 | 0.002 |

: Significance evaluation of Hyp \[query:aauwhwmggs\]. Orig. is the original value of Statistic \[stat:aauwhwmggs\]; genres marked with a star were significant with all types of randomizations.[]{data-label="table:sign:query:aauwhwmggs"}

Related work ============ Obviously, there is a large amount of statistical literature about hypothesis testing [@casella01; @Good:2000]. For the particular case of data mining, many papers work on the significance of association rules and other patterns [@silverstein98beyond; @775053]. In recent years, the framework of randomizations has been introduced to the data mining community to test the significance of patterns: the papers [@cobb03; @swaprand-gionisetal] deal with randomizations on binary data, and the work in [@conf/sdm/OjalaVKHM08] studies randomizations on real-valued data. For another type of approach to measuring $p$-values for patterns, see [@Webb07]. A related work that studies permutations on networks and how these affect the significance of patterns is [@art/KashtanIMA04]. Sub-sampling methods such as bootstrapping [@Efron79] use randomization to study the properties of the underlying distribution instead of testing the data against some null model. Finally, database theory mainly studies query processing and optimization for different kinds of complex data [@DBLP:conf/pods/MoorSAV08; @DBLP:conf/pods/JhaRS08]. To the best of our knowledge there is no work that directly addresses the problem presented in this paper. Conclusions and future work =========================== We have addressed the problem of assessing the significance of queries made for the exploratory analysis of relational databases.
Each query, together with the associated statistic, defines the hypothesis to test on our data. Our mathematical tool for deciding significance is randomization. It turns out that in multi-relational data there is no unique way to randomize. We propose to randomize the tables occurring in the queries one at a time, and obtain a set of $p$-values for each randomization. Each $p$-value tells us the structural impact of the randomized table on the query. For example, if certain structures or patterns remain after the randomizations, the answers of a query that rely on such patterns should not be significant. Experiments with synthetic data showed that, for well-defined significant patterns, randomizations uncover which tables from our database are key in significance testing. For real datasets, we tested several hypotheses to show the usability of the method. Still, we found out that in real data it is difficult to give a fully satisfactory answer about how to use all the obtained $p$-values to draw the correct inference. Our contribution makes an important first step towards understanding how the structure hidden in the data makes some hypotheses more significant than others, but still, a lot of interesting future work needs to be done: the study of the combinatorial properties and their connection to the significance of queries and patterns. [10]{} J. Besag. Markov chain Monte Carlo methods for statistical inference. <http://www.ims.nus.edu.sg/Programs/mcmc/files/besag_tl.pdf>, 2004. J. Besag and P. Clifford. Generalized Monte Carlo significance tests. , 76(4):633–642, 1989. G. Casella and R. Berger. *Statistical Inference*. Duxbury, 2001. Y. Chen, P. Diaconis, S. P. Holmes, and J. S. Liu. Sequential MC methods for statistical analysis of tables. , 100(469):109–120, 2005. G. W. Cobb and Y.-P. Chen. An application of Markov chain Monte Carlo to community ecology. , 110:265–288, 2003. O. de Moor, D. Sereni, P. Avgustinov, and M. Verbaere. Type inference for datalog and its application to query optimisation. In [*PODS’08*]{}, pages 291–300, 2008. B. Efron. Bootstrap methods: Another look at the jackknife. , 7(1):1–26, 1979. A. Gionis, H. Mannila, T. Mielikäinen, and P. Tsaparas. Assessing data mining results via swap randomization. , 1(3), 2007. P. Good. *Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses*, 2nd edition. Springer, 2000. A. Jha, V. Rastogi, and D. Suciu. Query evaluation with softkey constraints. In [*PODS’08*]{}, pages 119–128, 2008. N. Kashtan, S. Itzkovitz, R. Milo, and U. Alon. Efficient sampling algorithm for estimating subgraph concentrations and detecting network motifs. , 20(11):1746–1758, 2004. M. Ojala, N. Vuokko, A. Kallio, N. Haiminen, and H. Mannila. Randomization of real-valued matrices for assessing the significance of data mining results. In [*SDM’08*]{}, pages 494–505, 2008. R. Ramakrishnan and J. Gehrke. *Database Management Systems*. McGraw-Hill Higher Ed., 2003. C. Silverstein, S. Brin, and R. Motwani. Beyond market baskets: Generalizing association rules to dependence rules. , 2(1):39–68, 1998. P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the right interestingness measure for association patterns. In [*KDD ’02*]{}, pages 32–41, 2002. G. I. Webb. Discovering significant patterns. , 68(1):1–33, 2007.

[^1]: HIIT, Department of Information and Computer Science, Helsinki University of Technology, Finland

[^2]: Yahoo! Research, Barcelona, Spain

[^3]: A traditional permutation test would swap any values in the matrix, while keeping the row and column sums fixed. In binary data this is called swap randomization.
--- abstract: 'The rise of Big Data has led to new demands for Machine Learning (ML) systems to learn complex models with millions to billions of parameters, that promise adequate capacity to digest massive datasets and offer powerful predictive analytics (such as high-dimensional latent features, intermediate representations, and decision functions) thereupon. In order to run ML algorithms at such scales, on a distributed cluster with 10s to 1000s of machines, it is often the case that significant engineering efforts are required — and one might fairly ask if such engineering truly falls within the domain of ML research or not. Taking the view that Big ML systems can benefit greatly from ML-rooted statistical and algorithmic insights — and that ML researchers should therefore not shy away from such systems design — we discuss a series of principles and strategies distilled from our recent efforts on industrial-scale ML solutions. These principles and strategies span a continuum from application, to engineering, and to theoretical research and development of Big ML systems and architectures, with the goal of understanding how to make them efficient, generally-applicable, and supported with convergence and scaling guarantees. They concern four key questions which traditionally receive little attention in ML research: How to distribute an ML program over a cluster? How to bridge ML computation with inter-machine communication? How to perform such communication? What should be communicated between machines? By exposing underlying statistical and algorithmic characteristics unique to ML programs but not typically seen in traditional computer programs, and by dissecting successful cases to reveal how we have harnessed these principles to design and develop both high-performance distributed ML software as well as general-purpose ML frameworks, we present opportunities for ML researchers and practitioners to further shape and grow the area that lies between ML and systems.' author: - | Eric P. Xing, Qirong Ho, Pengtao Xie, Wei Dai\ School of Computer Science, Carnegie Mellon University bibliography: - 'literature.bib' title: Strategies and Principles of Distributed Machine Learning on Big Data ---
--- abstract: 'Let $F$ be an algebraically closed field of characteristic zero. We consider the question which subsets of $M_n(F)$ can be images of noncommutative polynomials. We prove that a noncommutative polynomial $f$ has only finitely many similarity orbits modulo nonzero scalar multiplication in its image if and only if $f$ is power-central. The union of the zero matrix and a standard open set closed under conjugation by $GL_n(F)$ and nonzero scalar multiplication is shown to be the image of a noncommutative polynomial. We investigate the density of the images with respect to the Zariski topology. We also answer Lvov’s conjecture for multilinear Lie polynomials of degree at most $4$ affirmatively.' address: 'Institute of Mathematics, Physics, and Mechanics, Ljubljana, Slovenia' author: - Špela Špenko title: On the image of a noncommutative polynomial --- [^1] Introduction ============ Let $n$ be a (fixed) integer $\ge 2$, and let $F$ be a field. We will be concerned with the following problem: [*Which subsets of $M_n(F)$ are images of noncommutative polynomials?*]{} According to [@Bel], this question was “reputedly raised by Kaplansky". By the image of a (noncommutative) polynomial $f=f(x_1,\ldots,x_d)$ we mean, of course, the set ${\mathrm{im}}(f) = \{f(a_1,\ldots,a_d)\,|\, a_1,\ldots,a_d\in M_n(F)\}$. An obvious necessary condition for a subset $S$ of $M_n(F)$ to be equal to ${\mathrm{im}}(f)$ for some $f$ is that $S$ is closed under conjugation by invertible matrices, i.e., $tSt^{-1}\subseteq S$ for every invertible $t\in M_n(F)$. Chuang [@Ch] proved that if $F$ is a finite field, $0\in S$, and if we consider only polynomials with zero constant term, then this condition is also sufficient. This is not true for infinite fields. Say, the set of all square zero matrices cannot be the image of a polynomial [@Ch Example, p. 294]. We will consider the case where $F$ is an algebraically closed field of characteristic 0. From now on let $M_n$ stand for $M_n(F)$, $M_n^0$ for the space of all trace zero matrices in $M_n$, and $GL_n$ for the group of all invertible matrices in $M_n$. If $f$ is a polynomial identity, then ${\mathrm{im}}(f) =\{0\}$. Another important situation where ${\mathrm{im}}(f)$ is “small" is when $f$ is a central polynomial; then ${\mathrm{im}}(f)$ consists of scalar matrices. What are other possible small images? When considering this question, one has to take into account that if $a\in {\mathrm{im}}(f)$, then the similarity orbit of $a$ is also contained in ${\mathrm{im}}(f)$. The images of many polynomials (for example the homogeneous ones) are also closed under scalar multiplication. Accordingly, let us denote $a^\sim = \{\lambda tat^{-1}\,|\, t\in { GL_n},\lambda\in F\}$. Is it possible that ${\mathrm{im}}(f)\subseteq a^\sim$ for some nonscalar matrix $a$? In the $n=2$ case the answer comes easily: ${\mathrm{im}}(x_1x_2-x_2x_1)^3 = \left (\begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array} \right)^{\sim}$. One can check this by an easy computation, but the concept behind this example is that the polynomial $(x_1x_2-x_2x_1)^2$ is central, so that ${\mathrm{im}}(x_1x_2-x_2x_1)^3$ can consist only of those trace zero matrices whose determinant is nonzero. Let us also mention that $x_1x_2 - x_2x_1$ also has a relatively small image, namely $M_2^0=\left( \begin{array}{cc} 1 & 0 \\ 0 & -1 \end{array}\right)^\sim\bigcup \left( \begin{array}{cc} 0 & 1 \\ 0 & 0\end{array}\right)^\sim$. 
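For the $n=2$ case just mentioned, the centrality of $(x_1x_2-x_2x_1)^2$ can be verified in one line; the following computation is a standard consequence of the Cayley–Hamilton theorem and is included only as an illustration. For any $a,b\in M_2$, the commutator $c=ab-ba$ has zero trace, so its characteristic polynomial gives $$c^{2}-{\mathrm{tr}}(c)\,c+\det(c)\,{\bf 1}=c^{2}+\det(c)\,{\bf 1}=0,$$ hence $c^{2}=-\det(c)\,{\bf 1}$ is a scalar matrix and $c^{3}=-\det(c)\,c$ is a scalar multiple of the trace zero matrix $c$.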
Return now to an arbitrary $n$, and let us make the following definition: A polynomial $f$ is [*finite on $M_n$*]{} if there exist $a_1,\ldots,a_k\in M_n$ such that $\{0\}\neq{\mathrm{im}}(f) \subseteq a_1^\sim\bigcup \cdots \bigcup a_k^\sim$. Next, let $j$ be a positive integer that divides $n$, choose a primitive $j$-th root of unity $\mu_j$, denote by ${\bf 1}_r$, $r=\frac{n}{j}$, the identity matrix in $M_r$, and finally, denote by ${\bf w}_j$ the diagonal matrix in $M_n$ having ${\bf 1}_r,\mu_j{\bf 1}_r,\ldots,\mu_j^{j-1}{\bf 1}_r$ on the diagonal. Our first main result is: \[Theorem \[koncen\], Corollary \[pot\]\] \[koncenint\] A polynomial $f$ is finite on $M_n$ if and only if there exists a positive integer $j$ dividing $n$ such that $f^j$ is central on $M_n$ and $f^i$ is not central for $1\le i < j$. In this case ${\mathrm{im}}(f) \subseteq {\bf w}_j^\sim\bigcup n_2^\sim\bigcup \cdots \bigcup n_k^\sim$, where $n_i$ are nilpotent matrices. Moreover, ${\mathrm{im}}(f^{j+1})\subseteq {\bf w}_j^\sim$. It is worth mentioning that the existence of polynomials whose certain powers are central is an interesting question that has been studied by several authors (see, e.g., [@Alb; @AS; @Sal]). Yet not everything is fully understood. Recall that a subset $U$ of $F^{n^2}$ ($\cong M_n$) is said to be a standard open set (with respect to the Zariski topology) if there exists $p\in F[z_1,\ldots,z_{n^2}]$ such that $U=\{(u_1,\ldots,u_{n^2})\in F^{n^2}\,|\,p(u_1,\ldots,u_{n^2}) \ne 0\}$. Our second main result is \[stdint\] If $U$ is a standard open set in $F^{n^2}$ that is closed under nonzero scalar multiplication and conjugation by invertible matrices, then $U\cup \{0\}={\mathrm{im}}(f)$ for some polynomial $f$. A simple concrete example of such a set $U$ is $GL_n$. The third topic that we consider is the density of ${\mathrm{im}}(f)$ (with respect to the Zariski topology of $F^{n^2}\cong M_n$). Given a noncommutative polynomial $f=f(x_1,\ldots,x_d)$, we can consider ${\mathrm{tr}}(f)$ as a commutative polynomial in $n^2d$ indeterminates. The density of ${\mathrm{im}}(f)$ can be characterized as follows. \[genericint\] The image of a polynomial $f$ is dense in $M_n$ (resp. $M_n^0$ if ${\mathrm{tr}}(f)=0$) if and only if the polynomials ${\mathrm{tr}}(f),{\mathrm{tr}}(f^2),\dots,{\mathrm{tr}}(f^n)$ (resp. ${\mathrm{tr}}(f^2),\dots,{\mathrm{tr}}(f^n)$) are algebraically independent. Our original motivation for studying the density was the question by Lvov asking whether the image of a multilinear polynomial is a linear space. This was shown to be true for $n= 2$ by Kanel-Belov, Malev and Rowen [@Bel]. In general this problem is, to the best of our knowledge, open. If the answer was positive, then either ${\mathrm{im}}(f)=M_n$ or ${\mathrm{im}}(f)=M_n^0$ would hold for every multilinear polynomial $f$ that is neither an identity nor central (see [@BK] or [@Bel]). Establishing the density could be an important intermediate step for proving these equalities. On the other hand, as it will be apparent from the next paragraph, this would be sufficient for some applications. Motivated by Lvov’s problem and Theorems \[koncenint\] and \[genericint\], we have posed ourselves the following two questions concerning a multilinear polynomial $f$. It has turned out that versions of the first one had already been discussed before (see [@Ler; @Ro]). (Q1) If there exists $k\ge 2$ such that $f^k$ is central for $M_n$, $n\ne 2 $, is then $f$ central? 
(Q2) If there exists $k\ge 2$ such that ${\mathrm{tr}}(f^k)$ vanishes on $M_n$, $n \ne 2$, is then $f$ an identity? (Incidentally, the condition that ${\mathrm{tr}}(f^k)$ vanishes on $M_n$ is equivalent to the condition that $f^k$ is the sum of commutators and an identity [@BK].) Note that an affirmative answer to Lvov’s question implies that both (Q1) and (Q2) have affirmative answers. Moreover, to establish the latter it would be enough to know only that ${\mathrm{im}}(f)\cap M_n^0$ is dense in $M_n^0$. Further, since ${\bf w}_j$ has trace zero, one can easily deduce from the last assertion of Theorem \[koncenint\] that an affirmative answer to (Q2) implies an affirmative answer to (Q1). Unfortunately, we were unable to solve any of these two questions, so we leave them as open problems. We have only solved the dimension-free version of (Q2): If $f $ is a nonzero multilinear polynomial, then $f^k$, $k\ge 2$, is not a sum of commutators. In the final part we prove a result giving a small evidence that the answer to Lvov’s question may be affirmative. If $f$ is a nonzero multilinear Lie polynomial of degree at most $4$, then ${\mathrm{im}}(f)=M_n^0$. preliminaries {#pre} ============= A polynomial $f(x_1,\dots,x_d)$ in the free associative algebra $F{\langle\ushort X\rangle}=F\langle x_1, . . . , x_d\rangle$ is called a polynomial identity of $M_n$ if $f(a_1, . . . , a_d) = 0$ for all $a_1, . . . , a_d \in M_n$; $f\in F\langle x_1, . . . , x_d\rangle$ is a central polynomial of $M_n$ if $f(a_1, . . . , a_d) \in F {\bf1}$ for any $a_1, . . . , a_d\in M_n$ but $f$ is not a polynomial identity of $M_n$. By $GM(n)$ we denote the algebra of generic matrices over $F$, which is a domain by Amitsur’s theorem [@Row Theorem 3.26]. $UD(n)$ stands for the generic division ring. The trace of a matrix can be expressed as a quotient of two central polynomials and can be therefore viewed as an element of $UD(n)$ (see [@Row Corollary 1.4.13, Exercise 1.4.9]). Since we will need some properties of this expression we repeat here the form we need. \[tr\] There exist a multilinear central polynomial $c_0$ and central polynomials $c_1,\dots,c_n$, such that $${\mathrm{tr}}(a^i)c_0(x_1,\dots,x_t)=c_i(x_1,\dots,x_t,a)$$ for every $a,x_1,\dots,x_{t}\in M_n$, where $t=2n^2$. By replacing $a$ by $f(y_1,\dots,y_d)$ we can therefore determine the traces of evaluations of $f$. It is well-known that the coefficients of the characteristic polynomial can be expressed through the traces as follows: \[CH0\] There exist $\alpha_{(j_1,\dots,j_n)}\in {\mathbb{Q}}$ such that the characteristic polynomial can be written as $$x^n+\sum_{j=1}^n\sum_{j_1+\dots+j_n=j}\alpha_{(j_1,\dots,j_n)}{\mathrm{tr}}(x^{j_1})\cdots {\mathrm{tr}}(x^{j_n})x^{n-j}.$$ A consequence of the above description of the characteristic polynomial is a well-known fact that a matrix is nilpotent if and only if the trace of each of its powers is zero. The scalars from Proposition \[CH0\] can be deduced from Newton’s formulas, but we do not need their explicit form. However, note that we have a bijective polynomial map from $F^n$ to $F^n$, whose inverse is also a polynomial map, which maps coefficients of the characteristic polynomial of any matrix $x$ into its “trace" tuple, $({\mathrm{tr}}(x),\dots,{\mathrm{tr}}(x^n))$. Let us record an easy lemma for future reference. \[simsled\] Let $p$ be a symmetric polynomial in $n$ variables. 
If $f(x)=p(\lambda_1(x),\dots,\lambda_n(x))$, where $\lambda_1(x),\dots,\lambda_n(x)$ are the eigenvalues of a matrix $x\in M_n$, then $f(x)=q({\mathrm{tr}}(x),\dots,{\mathrm{tr}}(x^n))$ for some polynomial $q$. Since $p$ is a symmetric polynomial, it can be expressed as a polynomial in the elementary symmetric polynomials $e_1,\dots,e_n$ by the fundamental theorem of symmetric polynomials. Thus, $$f(x)=\tilde{p}\Big (e_1\big(\lambda_1(x),\dots,\lambda_n(x)\big),\dots,e_n\big(\lambda_1(x),\dots,\lambda_n(x)\big)\Big).$$ Therefore it suffices to prove that $e_i(\lambda_1(x),\dots,\lambda_n(x))=q({\mathrm{tr}}(x),\dots,{\mathrm{tr}}(x^n))$ for some polynomial $q$. Since $\lambda_1(x),\dots,\lambda_n(x)$ are the eigenvalues of a matrix $x$, they are the zeros of the characteristic polynomial of $x$, hence by Vieta’s formulas $e_i(\lambda_1(x),\dots,\lambda_n(x))$ equals the coefficient at $x^{n-i}$ in the characteristic polynomial of $x$. The assertion of the lemma follows for every coefficient can be expressed as a polynomial in the traces of powers of $x$ by Proposition \[CH0\]. finite polynomials {#pog3} ================== In this section we want to find the “smallest possible" images of polynomials evaluated on $M_n$. If we want a set $S\subseteq M_n$ to be the image of a polynomial we have to require that it is closed under conjugation by invertible matrices. Hence, one possible criterion for the smallness of the image would be the number of similarity orbits contained in it. Therefore one may be inclined to study polynomials that have just a finite number of similarity orbits in their image. However, the images of many polynomials (for example homogeneous as $F$ is algebraically closed) are closed under scalar multiplication, therefore we take only the orbits modulo scalar multiplication (by nonzero scalars) into account. (In this section the expression “modulo scalar multiplication" will always mean modulo scalar multiplication by nonzero scalars.) In this way we arrive at the definition of a *finite polynomial*, as given in the introduction. Central polynomials of $M_n$ have only one nonzero similarity orbit in their image modulo scalar multiplication and we will see that finite polynomials are in close relation with them. \[powerc\] If $f^j$ is a central polynomial of $M_n$ for some $j\geq 1$, then $f$ is finite on $M_n$. If $b=f({\underline}{a})$ for some ${\underline}{a}\in M_n^d$, then the Jordan form of $b$ is either diagonal or nilpotent. If it is diagonal, then it is a scalar multiple of a matrix having $j$-th roots of unity on the diagonal. There are only finitely many such matrices modulo scalar multiplication. Also the number of similarity orbits of nilpotent matrices modulo scalar multiplication is finite. Thus, $f$ is finite. The goal of this section is to prove the converse of this simple observation. Let us introduce a family of matrices that plays an important role in the next theorem. For every $j$ dividing $n$ choose a primitive $j$-th root of unity $\mu_j$ and define the matrix $${\bf w}_j= \begin{pmatrix} { \bf 1}_r& & & \\ & \mu_j {\bf 1}_r & & \\ & & \ddots & \\ & & &\mu_j^{j-1}{\bf 1}_r\\ \end{pmatrix},$$ where $r=\frac{n}{j}$ and ${\bf 1}_r$ denotes the $r\times r$ identity matrix. A polynomial $f$ is said to be *$j$-central* on $M_n$ if $f^j$ is a central polynomial, while smaller powers of $f$ are not central. We call a polynomial *power-central* if it is $j$-central for some $j>1$. 
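Before the theorem, a small numerical aside (not part of any proof) may help: the matrix ${\bf w}_j$ satisfies ${\mathrm{tr}}({\bf w}_j^k)=0$ for $1\le k<j$ while ${\bf w}_j^j$ is the identity, which is exactly the trace pattern of the values of a $j$-central polynomial that the argument below exploits. A minimal illustrative sketch:

```python
import numpy as np

def w(n, j):
    """Block-diagonal matrix w_j in M_n: blocks 1, mu, ..., mu^(j-1), with mu
    a primitive j-th root of unity and each block of size r = n // j."""
    assert n % j == 0
    r, mu = n // j, np.exp(2j * np.pi / j)
    return np.diag(np.repeat(mu ** np.arange(j), r))

n, j = 6, 3
W = w(n, j)
for k in range(1, j + 1):
    print(f"tr(w_{j}^{k}) =", np.round(np.trace(np.linalg.matrix_power(W, k)), 10))
# tr(w_j^k) vanishes for 1 <= k < j and equals n for k = j (w_j^j = identity),
# matching the condition tr(f^k) = 0 for k < j of a j-central polynomial.
```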
\[koncen\] A polynomial $f$ is finite on $M_n$ if and only if there exists $j\in \mathbb {N}$ such that $f$ is $j$-central on $M_n$. Moreover, in this case every nonnilpotent matrix in ${\mathrm{im}}(f)$ is similar to a scalar multiple of ${\bf w}_j$. Let $a_1,\dots,a_l$ be the representatives of distinct nonnilpotent similarity orbits of ${\mathrm{im}}(f)$ on $M_n$ modulo scalar multiplication. For each $a_i$ we set $j_i=\min\{j\;|{\mathrm{tr}}(a_i^j)\neq 0\}$ (such $j$ exists since $a_i$ is not nilpotent) and let $\alpha_{ik}=\frac{{\mathrm{tr}}(a_i^k)^{j_i}}{{\mathrm{tr}}(a_i^{j_i})^k}$ for $1\leq k\leq n;$ these scalars carry the information about the coefficients of the characteristic polynomial of $a_i$. Note that $\alpha_{ik}=0$ for $1\le k\le j_{i}-1$. The trace polynomial $$\sum_{k=1}^n({\mathrm{tr}}(x^k)^{j_i}-\alpha_{ik}{\mathrm{tr}}(x^{j_i})^k)x_k$$ vanishes if we substitute a scalar multiple of $a_i$ for $x$ and arbitrary $b_1,\dots,b_n\in M_n$ for $x_1,\dots,x_n$. Since every nonnilpotent matrix in ${\mathrm{im}}(f)$ is similar to a scalar multiple of $a_i$ for some $i$ and the trace of powers of nilpotent matrices is zero, the following identity holds in $UD(n)$ (according to Proposition \[tr\], all ${\mathrm{tr}}(f^k)$ lie in $UD(n)$): $$\prod_{i=1}^l\Big( \sum_{k=1}^n({\mathrm{tr}}(f^k)^{j_i}-\alpha_{ik}{\mathrm{tr}}(f^{j_i})^k)x_k\Big)=0.$$ Since $UD(n)$ is a division ring, one of the factors in the product equals zero in $UD(n)$. Hence there exists $i$ such that $\sum_{k=1}^n({\mathrm{tr}}(f^k)^{j_i}-\alpha_{ik}{\mathrm{tr}}(f^{j_i})^k)x_k=0$ and so ${\mathrm{tr}}(f^k)^{j_i}-\alpha_{ik}{\mathrm{tr}}(f^{j_i})^k=0$ in $UD(n)$ for every $1\le k\le n$. For simplicity of notation we write $j, \alpha_k$ instead of $j_i,\alpha_{ik}$, respectively, for $1\leq k\leq n$. We will first consider the case when $\alpha_1\neq 0$, i.e., $j=1$ and ${\mathrm{tr}}(f)\neq 0$. Then the characteristic polynomial of $f$ can be expressed as $$\label{CH} f^n+\sum_{j=1}^n\beta_j{\mathrm{tr}}(f)^jf^{n-j}=0$$ for some $\beta_1,\dots,\beta_n\in F$ (see Proposition \[CH0\]). Let $\lambda_1,\dots,\lambda_n\in {F}$ be zeros of the polynomial $x^n+\sum_{j=1}^n\beta_j x^{n-j}$. Then we can factorize (\[CH\]) in $UD(n)$ as $$\prod_{k=1}^n(f-\lambda_k{\mathrm{tr}}(f))=0.$$ This is an identity in $UD(n)$, hence $f-\lambda_k{\mathrm{tr}}(f)=0$ for some $1\le k\le n$, implying that $f$ is a central polynomial. Now we consider the general case. We have $\alpha_j\neq 0$ for some $1\leq j\leq n$ and $\alpha_k=0$ for $1\leq k\leq j-1$. Then ${\mathrm{tr}}(f^j)\neq 0$ and $f^j$ is also finite, so we can just repeat the first part of the proof for $f^j$ from which it follows that $f^j$ is a central polynomial. In this case ${\mathrm{tr}}(f^k)=0$ for all $1\le k <j$, therefore $f^k$ is not central. So far we have proved that $f$ is $j$-central for some $j\ge 1$ and ${\mathrm{tr}}(f^k)=0$ for all $1\le k <j$. It remains to prove that nonnilpotent matrices in ${\mathrm{im}}(f)$ are similar to a scalar multiple of ${\bf w}_j$. The values of $f$ on $M_n$ can be nilpotent matrices and matrices for which the Jordan form has (modulo scalar multiplication) just powers of the primitive $j$-th root $\mu_j$ of unity on the diagonal. For simplicity of notation we write $\mu$ instead of $\mu_j$. Take a nonnilpotent matrix $a\in {\mathrm{im}}(f)$. 
We are reduced to proving that the eigenvalues of $a$ are equal to $\lambda,\lambda\mu,\dots,\lambda\mu^{j-1 }$ for some $0\neq\lambda\in {F}$ (depending on $a$) and all have the same algebraic multiplicity $\frac{n}{j}$. Recall that ${\mathrm{tr}}(f^k)=0$ for $k<j$. Hence, if $k_i$ is the multiplicity of $\mu^i$ in the characteristic polynomial of $a\in {\mathrm{im}}(f)$, then we have $$\begin{array}{*{3}{c@{\:+\:}}c@{\;=\;}c} k_0 & k_1\mu & \dots & k_{j-1}\mu^{j-1} & 0\\ k_0 & k_1\mu^2 & \dots & k_{j-1}(\mu^{j-1})^2 & 0\\ \multicolumn{5}{c}{\dotfill}\\ k_0 & k_1 \mu^{j-1} & \dots & k_{j-1}(\mu^{j-1})^{j-1} & 0. \end{array}$$ The above equations can be rewritten as $\sum_{i=1}^{j-1}k_i(\mu^t)^i=-k_0 $, $1\leq t\leq j-1$. Having fixed $k_0$, the system of equations in variables $k_1,\dots,k_{j-1}$ will have a unique solution if and only if the determinant of $((\mu^t)^i)$, $1\leq t,i\leq j-1$, is different from zero. Since $\mu^t$, $1\leq t\leq j-1$, are distinct, the Vandermonde argument shows that it is nonzero indeed. Thus, $k_i=k_0$ for every $1\leq i\leq j-1$ is the unique solution. Hence, every nonnilpotent matrix in ${\mathrm{im}}(f)$ is similar to a scalar multiple of the matrix ${\bf w}_j$. The converse follows from Lemma \[powerc\]. \[pot\] If $f$ is $j$-central on $M_n$ for some $j\in {\mathbb{N}}$, then ${\mathrm{im}}(f^m)$ for $m\geq j$ consists of scalar multiples of exactly one similarity orbit generated by ${\bf w}_j^m$. Since every nonnilpotent matrix in ${\mathrm{im}}(f)$ is similar to a scalar multiple of ${\bf w}_j$, its $m$-th power is similar to a scalar multiple of ${\bf w}_j^m$. If $f({\underline}{a})^m$, $m\geq j$, is nilpotent, so is $f({\underline}{a})^j$. In this case $f({\underline}{a})^j=0$ due to the centrality of $f^j$. Hence, ${\mathrm{im}}(f^m)$, $m\geq j$, does not contain nonzero nilpotent matrices. Power-central polynomials are important in the structure theory of division algebras. The question whether $M_p({\mathbb{Q}})$ has a power-central polynomial for a prime $p$ is equivalent to the long-standing open question whether division algebras of degree $p$ are cyclic. This is known to be true for $p\leq 3$. An example of $2$-central polynomial on $M_2(K)$ for an arbitrary field $K$ is $[x,y]$, which is also multilinear. The truth of Lvov’s conjecture would imply that there are no multilinear power-central polynomials on $M_n(K)$ for $n\ge 3$. While it is easy to see that multilinear $j$-central polynomials for $j>2$ do not exist over ${\mathbb{Q}}$ (see, e.g., [@Ler]), the same question over an algebraically closed field $F$ remains open. If $f$ is $j$-central, then ${\mathrm{tr}}(f^2)=0$ if $j>2$, and ${\mathrm{tr}}(f^3)=0$ if $j=2$. Thus, if for multilinear polynomials $f,g$, the identity ${\mathrm{tr}}(f^2)=0$ implies $f=0$ (in UD(n)) and the identity ${\mathrm{tr}}(g^3)=0$ implies $g=0$ (in UD(n)), then it would follow that there do not exist multilinear noncentral power-central polynomials. (See also Section \[tr\^2\].) standard open sets as images of polynomials =========================================== We will show that if $U$ is a Zariski open subset of $F^{n^2}$, defined as the nonvanishing set of a polynomial in $F[x_{11},\dots,x_{nn}]$ satisfying some natural conditions, then there exists a polynomial $f$ such that ${\mathrm{im}}(f)=U\cup \{0\}$. We will first prove that this is true for the most prominent example of such a set, $GL_n$. 
We follow the standard notation and denote by $V(p)$ the set of zeros of a polynomial $p$, $V(p)=\{(u_1,\dots,u_k)\in F^k|\; p(u_1,\dots,u_k)=0\}$, and by $D(p)=\{(u_1,\dots,u_k)\in F^k|\; p(u_1,\dots,u_k)\neq 0\}$ the complement of $V(p)$. For a subset $V$ of $F^k$ we define $I(V)$ to be the ideal of all polynomials vanishing on $V$, $I(V)=\{p\in F[z_1,\dots,z_k]\,|\; p(u)=0 \text{ for all } u\in V\}$. In this section we will use some basic facts from algebraic geometry which can be found in any standard textbook. There exists a noncommutative polynomial $f$ such that ${\mathrm{im}}(f)=GL_n\cup \{0\}$ on $M_n$. As $\det(x)$ is a polynomial in the traces of powers of $x$, it can be expressed as the quotient of two central polynomials due to Proposition \[tr\]. We can write $\det(x)=\frac{c(x_1,\dots,x_t,x)}{c_0(x_1,\dots,x_t)^n}$, where $c,c_0$ are central polynomials, $c_0$ is multilinear and $t=2n^2$. Note that if we choose $a_1,\dots,a_{t}$ such that $c_0(a_1,\dots,a_{t})\neq 0$, then $\det(x)\neq 0$ if and only if $c(a_1,\dots,a_t,x)\neq 0$. Define $f=c(x_1,\dots,x_t,x)x$. As $c_0$ is multilinear, $c$ is homogeneous in the first variable. Therefore $a\in {\mathrm{im}}(f)$ forces $F a\subseteq {\mathrm{im}}(f)$ because $F$ is algebraically closed. Hence, the image of $f$ consists of all invertible matrices and the zero matrix. In this section we will consider (commutative) polynomials and polynomial maps on $F^{n^2}$. Since these maps will often be evaluated on $n\times n$ matrices, we denote the variables by $x_{11},\dots,x_{nn}$. Let $X$ denote the matrix corresponding to the $n^2$-tuple $(x_{11},\dots,x_{nn})$. By a slight abuse of notation we will sometimes regard a polynomial map $p:F^{n^2}\to F^k$ as a map from $M_n$ to $F^k$. For example, $p(x_{11},\dots,x_{nn})=x_{11}+x_{22}+\dots+x_{nn}$ can be seen as a map from $M_n$ to $F$, assigning to every matrix in $M_n$ its trace. In this case we write $p(x_{11},\dots,x_{nn})={\mathrm{tr}}(X)$ or even $p(X)={\mathrm{tr}}(X)$. We say that a polynomial map $p$ from $F^{n^2}$ to $F^{n^2}$ is a *trace polynomial* if $p(x_{11},\dots,x_{nn})=P(X,{\mathrm{tr}}(X){\bf 1},\dots,{\mathrm{tr}}(X^n){\bf 1})$ for some polynomial $P(z_0,\dots,z_n)$ with zero constant term. A polynomial $p:F^{n^2}\to F$ is a *pure trace polynomial* if $p(x_{11},\dots,x_{nn})=P({\mathrm{tr}}(X),\dots,{\mathrm{tr}}(X^n))$. (In the previous example we have $P(z_0,\dots,z_n)=z_1$.) Recall that a polynomial $p:F^{n^2}\to F$ is called a matrix invariant if $p(X)=p(S X S^{-1})$ for every $S\in GL_n$, where $p(S X S^{-1})$ denotes the map that first conjugates the matrix $X$ corresponding to the $n^2$-tuple $(x_{11},\dots,x_{nn})$ with $S$ and then applies $p$ to the $n^2$-tuple corresponding to the matrix $SXS^{-1}$. Matrix invariants are exactly the pure trace polynomials [@Spring Theorem 1.5.7]. We will use this correspondence without further reference. \[nicsled\] If $V$ is the zero set in $F^{n^2}$ of trace polynomials $p_1,\dots,p_l$ and if $V$ is closed under scalar multiplication, then there exists a noncommutative homogeneous polynomial $f$ such that ${\mathrm{im}}(f)=V^\mathsf{c}\cup\{0\}$. Let $p_i(x_{11},\dots,x_{nn})=P_i(X,{\mathrm{tr}}(X){\bf 1},\dots,{\mathrm{tr}}(X^n){\bf 1})$ for a polynomial $P_i(z_0,z_1,\dots,z_n)$, $1\le i\le l$. Let us write ${\mathrm{tr}}(X^i)=\frac{c_i(X_1,\dots,X_t,X)}{c_0(X_1,\dots,X_t)}$ where $c_0,c_i$, $1\le i\le n$, are polynomials from Proposition \[tr\].
We replace $P_i(X,{\mathrm{tr}}(X){\bf 1},\dots,{\mathrm{tr}}(X^n){\bf 1})$, $1\leq i\leq l$, with $Q_i(X,Y_i)={\mathrm{tr}}(P_i(X,{\mathrm{tr}}(X){\bf 1},\dots,{\mathrm{tr}}(X^n){\bf 1})Y_i)$, $1\le i\le l$, which map to $F$. Let $r_i-1$ be the degree of the polynomial $P_i(z_0,z_1,\dots,z_n)$ treated as a polynomial in the last $n$ variables, $z_1,\dots,z_n$. Then $c_0(X_1,\dots,X_t)^{r_i}Q_i(X,Y_i)$ is a central polynomial. We denote ${\underline}{X}_i=(X_{i1},\dots,X_{it})$, $1\le i\le l$, ${\underline}{Y}=(Y_1,\dots,Y_l)$. Then $c({\underline}{X}_1,\dots,{\underline}{X}_l,X,{\underline}{Y})=\sum_{i=1}^l c_0({\underline}{X}_i)^{r_i}Q_i(X,Y_i)$ is a sum of central polynomials and therefore a central polynomial. If $A\in V$ we have $c({\underline}{A}_1,\dots,{\underline}{A}_l,A,{\underline}{B})=0$ for any choice of matrices $A_{ij},B_i$. On the other hand, suppose that $A\not\in V$, hence $p_i(a_{11},\dots,a_{nn})\neq 0$ for some $i$. Consequently, there exists $B\in M_n$ such that $Q_i(A,B)\neq 0$. If we choose $A_{i1},\dots,A_{it}$ such that $c_0({\underline}{A}_i)\neq 0$ and write ${\underline}{B}$ for the $l$-tuple that has $B$ on the $i$-th place and zero elsewhere, then $c(0,\dots,0,{\underline}{A}_i,0,\dots,0,A,{\underline}{B})=\mu {\bf 1}$ for some $0\neq \mu \in F$. Let $$f({\underline}{X}_1,\dots,{\underline}{X}_l,X,{\underline}{Y})=c({\underline}{X}_1,\dots,{\underline}{X}_l,X,{\underline}{Y})X.$$ By construction, $\mu a\in {\mathrm{im}}(f)$ and since $Q_i(X,Y_i)$ is linear in $Y_i$, all scalar multiples of $A$ belong to the image of $f$ (indeed, $\lambda\mu A=c_0(A_1,\dots,A_t)^{r_i}Q_i(A,\lambda B)A=f(0,\dots,0,{\underline}{A}_i,0,\dots,0,A,\lambda{\underline}{B})$ for every $\lambda \in F$). Hence the image of $f$ equals $V^c\cup\{0\}$. Since $V$ is closed under scalar multiplication we can assume that $p_i$, $1\le i\le l$, are homogeneous, since otherwise we can replace them by their homogeneous components. These also belong to $I(V)$, which can be easily seen by the Vandermonde argument. The homogeneous components of $p_i$ are also trace polynomials, which follows by comparing both sides of the equality $p_i(\lambda x_{11},\dots,\lambda x_{nn})=P_i(\lambda X,{\mathrm{tr}}(\lambda X),\dots,{\mathrm{tr}}((\lambda X)^n))$. Hence, we can assume that $P_i(X,{\mathrm{tr}}(X),\dots,{\mathrm{tr}}(X^n))$ are homogeneous polynomials of degree $d_i$. We denote $d=\max\{d_i+r_it,\;1\le i\le l\}$. If we replace $Q_i(X,Y_i)$ in the above construction by $Q_i(X,Y_i^{d-d_i-r_it+1})$ then $f$ becomes a homogeneous polynomial of degree $d+1$. Noting that $F$ being algebraically closed guarantees that ${\mathrm{im}}(f)$ is closed under scalar multiplication it is easy to verify that the above proof remains valid with polynomials $Q_i(X,Y_i^{d-d_i-r_it+1})$ replacing polynomials $Q_i(X,Y_i)$. We illustrate this result with some examples of sets that can be realized as images of noncommutative polynomials. \(a) The union of matrices that are not nilpotent of the nilindex less or equal to $k$ and the zero matrix is the image of a noncommutative polynomial. The matrices whose $k$-th power equals zero are closed under conjugation by $GL_n$ and under scalar multiplication, and they are the zero set of the (trace) polynomial $X^k$. Hence, we can apply Lemma \[nicsled\]. \(b) Matrices with at most $k$ distinct eigenvalues, $0\le k\le n-1$, are also the zero set of trace polynomials. 
Define polynomials $p_0(X)=X$, $q_l(z_1,\dots,z_{l+1})=\prod_{1\le i<j\le l+1}(z_i-z_j)^2$ and $$p_l(X)=\sum_{1\le i_1<\dots< i_{l+1}\le n} q_l(\lambda_{i_1}(X),\dots,\lambda_{i_{l+1}}(X)),\qquad 1\le l\le n-1,$$ where $\lambda_i(X)$, $1\le i\le n$, are the eigenvalues of a matrix $X$. Note that the polynomials on the right-hand side of the above definition of $p_l$, $1\le l\le n-1$, are symmetric polynomials in the eigenvalues of the matrix $X$, and thus pure trace polynomials by Lemma \[simsled\]. The polynomials $p_l$, $k\le l\le n-1$, define the desired variety. Indeed, $p_{n-1}(X)$ is the discriminant of $X$ and a matrix $A$ is a zero of $p_{n-1}$ if and only if $A$ has at most $n-1$ distinct eigenvalues. Then we can proceed by reverse induction to show that the common zeros of $p_{n-1},\dots,p_k$ are the matrices that have at most $k$ distinct eigenvalues supposing that the common zeros of $p_{n-1},\dots,p_{k+1}$ are the matrices that have at most $k+1$ distinct eigenvalues. If $A$ is a zero of $p_{n-1},\dots,p_{k+1}$, i.e. $A$ has at most $k+1$ distinct eigenvalues by the induction hypothesis, then $p_k(A)$ is a scalar multiple of $q_k(\lambda_{1},\dots,\lambda_{{k+1}})\neq 0$ where $\lambda_1,\dots,\lambda_{k+1}$ are possible distinct eigenvalues of $A$. Therefore, $A$ is a zero of $p_k$ if the evaluation of $q_k$ in this $k+1$-tuple is equal to zero, i.e. if $A$ has at most $k$ distinct eigenvalues. By Lemma \[nicsled\], the matrices with at least $k$ distinct eigenvalues together with the zero matrix form the image of a noncommutative polynomial for every $1\le k\le n$. \(c) Define trace polynomials $t_i(X)={\mathrm{tr}}(X^i)X-{\mathrm{tr}}(X)X^i$ for $2\leq i\leq n$. Let a matrix $A$ be a zero of $t_2,\dots,t_n$. Since $A$ is a zero of $t_2$, $A$ is a scalar multiple of an idempotent or ${\mathrm{tr}}(A)=0$. In the second case, ${\mathrm{tr}}(A^i)=0$, $1\le i\le n$, since $A$ is a zero of $t_i$, $2\le i\le n$. Thus, the variety defined by $t_i$, $2\le i\le n$, contains precisely the scalar multiples of idempotents and nilpotent matrices (only these have the trace of all powers equal to zero). Consequently, the complement of this variety, matrices that are not scalar multiples of an idempotent and not nilpotent, together with the zero matrix equals the image of a noncommutative polynomial. We will give two proofs of the following theorem. The first one might lead to possible generalizations, while we find the second one, based on the idea suggested to us by Klemen Šivic, is quite interesting. We first introduce some notation and prove a lemma that will play a role also in the subsequent section. Let $\phi:M_n\to F^n$ be the map that assigns to every matrix the coefficients of its characteristic polynomial. More precisely, if $x^n+\alpha_1x^{n-1}+\cdots+\alpha_n$ is the characteristic polynomial of a matrix $a$, then $\phi(a)=(\alpha_1,\dots,\alpha_n)$. Note that $\phi$ is a surjective polynomial map. \[fi\] If $Z$ is a proper closed subset of $M_n$ that is closed under conjugation by $GL_n$, then $\phi(Z)$ is contained in a proper closed subset of $F^n$. Since the closure of similarity orbits of the set $D$ of all diagonal matrices equals $M_n$, $Z\cap D$ is also a proper closed subset of $D\cong F^n$. Hence $\dim(Z\cap D)<n$. Therefore $\dim(\overline{\phi(Z\cap D)})<n$, which implies that $\overline{\phi(Z\cap D)}$ is a proper closed set of $F^n$. Denote by $\tilde{D}$ the set of all diagonalizable matrices. 
As $Z$ is closed under conjugation by $GL_n$, $\phi(Z\cap \tilde{D})=\phi(Z\cap D)$. Decompose $\phi(Z)=\phi(Z\cap \tilde{D})\cup\phi(Z\cap \tilde{D}^\mathsf{c})$ and notice that $\phi(Z\cap \tilde{D}^c)$ is a subset of the proper closed subset of the variety defined by the discriminant, $V(\operatorname{disc})$. Hence the closure of $\phi(Z)$ is a proper closed subset of $F^n$. \[hiper\] Let $p$ be a commutative polynomial in $n^2$ variables. If $V(p)\subset F^{n^2}$ is closed under conjugation by invertible matrices then $p$ is a pure trace polynomial. By Lemma \[fi\], $\phi(V(p))$ is contained in a proper closed subset of $F^n$. It thus belongs to $V(f)$ for some polynomial $f$. Define $\tilde{f}(X)=f(\alpha_1(X),\dots,\alpha_n(X))$ where $\alpha_1(X),\dots,\alpha_n(X)$ are the coefficients of the characteristic polynomial of a matrix $X$. As we have a bijective polynomial correspondence between the “trace" tuple of a matrix $X$, $({\mathrm{tr}}(X),\dots,{\mathrm{tr}}(X^n))$, and its “characteristic" coefficients, $(\alpha_1(X),\dots,\alpha_n(X))$, (see Section \[pre\]), $\tilde{f}$ is a pure trace polynomial. We have $V(p)\subset V(\tilde{f})$ and by Hilbert’s Nullstellensatz $\tilde{f}^n=pq$ for some $n\in {\mathbb{N}}$ and some polynomial $q$. Since $\tilde{f}$ is a pure trace polynomial we have $\tilde{f}(SXS^{-1})=\tilde{f}(X)$ for every $S\in GL_n$, $X\in M_n$, and, in consequence, $p(SXS^{-1})q(SXS^{-1})=p(X)q(X)$. If $S=(s_{ij})$, then $S^{-1}=\frac{1}{\det(S)}S'$, where $S'$ is a matrix which elements are polynomial functions in $s_{ij}$, $1\le i,j\le n$. Thus, we can choose $k,l\in {\mathbb{N}}$ such that $\overline{p}(S,X)=\det(S)^k p(SXS^{-1})$, $\overline{q}(S,X)=\det(S)^l q(SXS^{-1})$ are polynomials. As $F[x_{11},\dots,x_{nn},s_{11},\dots,s_{nn}]$ is a unique factorization domain we conclude from $\overline{p}(S,X)\overline{q}(S,X)=\det(S)^{k+l}p(X)q(X)$ that $\overline{p}(S,X)=\det(S)^mp_1(X)$ for some $m\in {\mathbb{Z}}$ and some polynomial $p_1$, and hence $p(SXS^{-1})=\det(S)^{m-k} p_1(X)$. Setting $S=1$ yields $p_1=p$, and, in consequence, $p(S)=\det(S)^{m-k}p(S)$ for every $S\in GL_n$, which implies $m=k$. (Indeed, $\det(S)^{m-k}$ has to be equal to $1$ on the open set $D(p)\cap GL_n$, and therefore on the whole $M_n$.) Hence, $p$ is a matrix invariant and according to the characterization of matrix invariants a pure trace polynomial. Firstly, we can assume that $p$ is irreducible. To see this we only need to observe that all irreducible components $V_i$ of $V(p)=\bigcup V_i$ are closed under conjugation by invertible matrices. Take $X\in V_i$, then the variety $V_X=\overline{\{SXS^{-1},\;S\in GL_n\}}$ is rationally parametrized, and therefore irreducible (see, e.g., [@CLO Proposition 4.5.6]). Hence, we have $V_X\subseteq V_i$ for every $X\in V_i$, so $V_i$ is closed under conjugation by invertible matrices. In the rest of the proof we therefore assume $p$ to be irreducible. We fix an invertible matrix $S$ and define a polynomial $p_S(x_{11},\dots,x_{nn})=p(SXS^{-1})$, which means that we first conjugate the matrix $X$ corresponding to the $n^2$-tuple $(x_{11},\dots,x_{nn})$ with $S$ and then apply $p$ on the $n^2$-tuple corresponding to the matrix $SXS^{-1}$. According to the assumption of the theorem, $p$ and $p_S$ have equal zeros. Hence, $V(p)=V(p_S)$. As $p$ and hence also $p_S$ are irreducible, we have $p_S=\alpha_S p$ for some scalar $\alpha_S\in F$ by Hilbert’s Nullstellensatz. 
We shall have established the lemma if we prove that $\alpha_S=1$ for every $S\in GL_n$. Indeed, then we can use the characterization of matrix invariants. We have $p(SXS^{-1})=\alpha_S p(X)$ for every $S\in GL_n$, $X\in M_n$. In particular, $p(S)=\alpha_S p(S)$, which implies $\alpha_S=1$ for every $S\in U=GL_n\cap D(p)$. Then for every $X\in M_n$ the polynomials $p(SX)$ and $p(XS)$ in $n^2$ variables $s_{11},\dots,s_{nn}$ equal on $U$. Since $U$ is a dense subset of $F^{n^2}$, they are equal. Thus $\alpha_S=1$ for every $S\in GL_n$. The next corollary rephrases the last statement in the language of invariant theory. If for a polynomial $p:F^{n^2}\to F$ and for every $X\in M_n,S\in GL_n$ we have $p(SXS^{-1})=0$ if and only if $p(X)=0$, then $p$ is a matrix invariant. Having established Lemma \[nicsled\] and Theorem $\ref{hiper}$, we can now state the main result of this section. \[std\] Let $U=D(p)$ be a standard open set in $F^{n^2}$ closed under conjugation by $GL_n$ and nonzero scalar multiplication. There exists a noncommutative homogeneous polynomial $f$ such that ${\mathrm{im}}(f)=U\cup \{0\}$. To generalize this theorem to arbitrary open subsets of $F^{n^2}$ that are closed under conjugation by $GL_n$ and scalar multiplication with the similar approach (employing Lemma \[nicsled\]), one would need to prove that every variety that is closed under conjugation by $GL_n$ and under scalar multiplication can be determined by trace polynomials. (Those trace polynomials may include some extra variables. It is easy to adjust the proof of Lemma \[nicsled\] to that slightly more general context. See Example \[min2\] below.) However, we do not know whether this is true or not. \[min2\] Let $V$ be the set of all matrices having minimal polynomial of degree at most 2. This is a closed set since each of its element is a zero of the Capelli polynomial $C_5(1,X,X^2,Y,Z)$ for arbitrary $Y,Z\in M_n$, and due to [@Row Theorem 1.4.34] for $X\not\in V$ there exist $Y,Z\in M_n$ such that $C_5(1,X,X^2,Y,Z)\neq 0$. Hence $c_0(X_1,\dots,X_t){\mathrm{tr}}(C_5(1,X,X^2,Y,Z)Y_1)X$ has in its image exactly the zero matrix and matrices whose minimal polynomial has degree at least 3. density ======= Each noncommutative polynomial $f$ in $d$ variables gives rise to a function $f:M_n^d\to M_n$. In this section we will consider this function as a polynomial map in $n^2d$ variables. We will be concerned with some topological aspects of its image on $M_n$. We discuss the sufficient conditions for establishing the “dense counterpart" of Lvov’s conjecture. By this we mean the question whether the image of a multilinear polynomial $f$ on $M_n$ is dense in $M_n$ or in $M_n^0$, assuming that $f$ is neither a polynomial identity nor a central polynomial of $M_n$. Recall that the map $\phi:M_n\to F^n$, introduced in the previous section, assigns to every matrix the coefficients of its characteristic polynomial. The restriction of $\phi$ to $M_n^0$ will be denoted by $\phi_0$. Identifying $\{0\}\times F^{n-1}$ with $F^{n-1}$, we may and we shall consider $\phi_0$ as a map into $F^{n-1}$. By saying that ${\mathrm{im}}(f)$ is dense in $F^{n^2-1}$ we mean that the image of $f$ is dense in $M_n^0$, an ($n^2-1$)-dimensional space over $F$, with the inherited topology from $F^{n^2}$. \[gost\] Let $f$ be a noncommutative polynomial. Then ${\mathrm{im}}(f)$ is dense in $F^{n^2}$ (resp. $F^{n^2-1}$ if ${\mathrm{tr}}(f)=0$) if and only if ${\mathrm{im}}(\phi(f))$ (resp. ${\mathrm{im}}(\phi_0(f))$) is dense in $F^n$ (resp. $F^{n-1}$). 
Assume that ${\mathrm{im}}(\phi(f))$ is dense in $F^n$. Denote by $Z$ the Zariski closure of ${\mathrm{im}}(f)$. As ${\mathrm{im}}(f)$ is closed under conjugation by $GL_n$ so is $Z$, thus we can apply Lemma \[fi\] to derive that $Z= F^{n^2}.$ Conversely, if ${\mathrm{im}}(f)$ is dense in $F^{n^2}$ then ${\mathrm{im}}(\phi(f))$ is dense in $F^n$ since $\phi$ is a surjective continuous map. The respective part can be handled in much the same way, the only difference being the analysis of respective maps within the framework of $M_n^0$. Let $f$ be a noncommutative polynomial depending on $d$ variables. In the following corollary we regard ${\mathrm{tr}}(f)$ as a commutative polynomial in $n^2d$ commutative variables. \[generic\] The image of a polynomial $f$ is dense in $F^{n^2}$ (resp. $F^{n^2-1}$ if ${\mathrm{tr}}(f)=0$) if and only if ${\mathrm{tr}}(f),\dots,{\mathrm{tr}}(f^n)$ (resp. ${\mathrm{tr}}(f^2),\dots,{\mathrm{tr}}(f^n)$) are algebraically independent. We have a bijective polynomial map from $F^n$ to $F^n$ (whose inverse is also a polynomial map), which maps the coefficients of the characteristic polynomial of an arbitrary matrix $a\in M_n$ to its “trace" tuple, $({\mathrm{tr}}(a),\dots,{\mathrm{tr}}(a^n))$ (see Section \[pre\]). Hence ${\mathrm{tr}}(f),\dots,{\mathrm{tr}}(f^n)$ are algebraically independent if and only if the coefficients of the characteristic polynomial of $f$ are algebraically independent. Assume that the coefficients of the characteristic polynomial of $f$ are algebraically dependent. Then the image of $\phi(f)$ is contained in a proper algebraic subvariety in $F^n$, which is in particular not dense in $F^n$, therefore ${\mathrm{im}}(f)$ cannot be dense in $F^{n^2}$. To prove the converse assume that the coefficients of the characteristic polynomial of $f$ are algebraically independent. Then the closure of ${\mathrm{im}}(\phi(f))$ cannot be a proper subvariety and is thus dense in $F^n$. We can now apply Lemma \[gost\] to conclude the proof of the first part. The respective part of Lemma \[gost\] yields in the same manner as above the respective part of this corollary. Let $X$ be an irreducible algebraic variety. Recall that the closure of the image of a polynomial map $p:X\to F^k$ is an irreducible algebraic variety. Thus, if $p(X)\cap (F^{k-1}\times \{0\})$ is dense in $F^{k-1}\times \{0\}$ and $p(X)\not\subseteq F^{k-1}\times \{0\}$ then the (Zariski) closure $Z$ of $p(X)$ equals $F^k$. (Suppose on contrary that $Z=V(p_1,\dots,p_l)\neq F^k$. We can assume that $p_1(z_1,\dots,z_k)\neq \alpha z_k^r$ for $r\in {\mathbb{N}}$, $\alpha\in F$. Write $p_1(z_1,\dots,z_k)=\sum_{i=0}^m q_{i}(z_1,\dots,z_{k-1})z_k^i$ and note that $q_{0}$ equals zero since $Z\cap (F^{k-1}\times\{0\})=F^{k-1}\times\{0\}$. Thus, there exists the maximal $r\ge 1$ such that we can write $p_1(z_1,\dots,z_k)=q(z_1,\dots,z_k)z_k^{r}$ for some nonconstant polynomial $q$. Hence, $V(p_1)=V(q)\cup V(z_k)$ and, by assumptions and choice of $r$, $Z\neq V(q)\cap Z \neq \emptyset$, $Z\neq V(z_k)\cap Z \neq \emptyset$. We derived a contradiction, $Z=Z\cap V(p_1)=(Z\cap V(q))\cup (Z\cap V(z_k))$.) In the next lemma we will see how the image of a polynomial $f$ evaluated on $M_{n-1}$ impacts ${\mathrm{im}}(f)$ on $M_n$. In order to distinguish between these images we write ${\mathrm{im}}_k(f)$ for ${\mathrm{im}}(f)$ evaluated on $M_k$. We identify $M_{n-1}$ with $\left(\begin{array}{ccc} M_{n-1}& 0\\ 0 & 0 \end{array}\right) $ inside $M_n$. 
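Before the next lemma, note that the criterion of Corollary \[generic\] can at least be probed numerically: in characteristic zero, polynomials are algebraically independent precisely when their Jacobian has full rank at a generic point, and full rank at a random point already certifies independence (up to rounding). The sketch below is purely illustrative and not part of the paper; it checks the criterion for $f=x_1x_2$ on $M_2$ by finite differences.

```python
import numpy as np

def traces_of_f(v, n=2):
    """v holds the 2*n*n entries of two n x n matrices x1, x2;
    return (tr(f), ..., tr(f^n)) for f = x1 x2."""
    A, B = v[:n * n].reshape(n, n), v[n * n:].reshape(n, n)
    f = A @ B
    return np.array([np.trace(np.linalg.matrix_power(f, k)) for k in range(1, n + 1)])

rng = np.random.default_rng(2)
v0, eps = rng.normal(size=8), 1e-6

# Finite-difference Jacobian of (tr(f), tr(f^2)) with respect to the 8 entries.
J = np.column_stack([(traces_of_f(v0 + eps * e) - traces_of_f(v0 - eps * e)) / (2 * eps)
                     for e in np.eye(8)])
print("Jacobian rank:", np.linalg.matrix_rank(J, tol=1e-4))   # expect 2 = n
# Full rank at a (generic) point certifies algebraic independence in
# characteristic zero, so im(x1 x2) is dense in M_2 by Corollary [generic].
```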
\[n-1,n\] If ${\mathrm{im}}_{n-1}(f)\cap M_{n-1}^0$ is dense in $M_{n-1}^0$ then ${\mathrm{im}}_n(f)$ is dense in $M_n^0$. If, additionally, ${\mathrm{im}}_n(f)\not\subseteq M_n^0$ then ${\mathrm{im}}_n(f)$ is dense in $M_n$. Assume that ${\mathrm{im}}_{n-1}(f)\cap M_{n-1}^0$ is dense in $M_{n-1}^0$. Therefore ${\mathrm{im}}_n(\phi(f))\cap (\{0\}\times F^{n-2}\times \{0\})$ is dense in $\{0\}\times F^{n-2}\times \{0\}$. (The last component of the polynomial map $\phi(f)$ is $\det(f)$.) According to the discussion preceding the lemma, ${\mathrm{im}}(\phi(f))\cap(\{0\}\times F^{n-1})$ is dense in $\{0\}\times F^{n-1}$ if it contains an invertible matrix. The latter was observed in [@LeZhou Theorem 2.4]. Thus, ${\mathrm{im}}_n(f)\cap M_n^0$ is dense in $M_n^0\cong F^{n^2-1}$ by Lemma \[gost\]. If, additionally, there exists a matrix in the image of $f$ with nonzero trace, ${\mathrm{im}}_n(f)$ is dense in $M_n$ by the above discussion identifying $M_n^0$ with $F^{n^2-1}$. If a multilinear polynomial $f$ is neither a polynomial identity nor a central polynomial of $M_2$, then ${\mathrm{im}}(f)$ is dense in $M_n$ for every $n\geq 2$. Apply [@Bel Theorem 2] and Lemma \[n-1,n\]. In view of Lemma \[n-1,n\] it would suffice to verify the density version of Lvov’s conjecture for a polynomial $f$ evaluated on $M_n$ for those $n$ for which $f$ is a polynomial identity or a central polynomial of $M_{n-1}$ but is not a polynomial identity or a central polynomial of $M_n$. The first step in this direction may be to establish the density of the image of the standard polynomials $St_n$. The following questions arise when trying to establish a connection between Lvov’s conjecture and its dense counterpart. Does the density of ${\mathrm{im}}(f)$ in $M_n$ or in $M_n^0$ for a multilinear polynomial $f$ imply that ${\mathrm{im}}(f)=M_n$ or $M_n^0$, respectively? Is the image of a multilinear polynomial closed in $F^{n^2}$? The image of a homogeneous polynomial is not necessarily closed in $F^{n^2}$. Lemma \[nicsled\] provides examples of such homogeneous polynomials. We have been dealing with the Zariski topology; however, if the underlying field $F$ equals ${\mathbb{C}}$, the field of complex numbers, all statements remain valid when we replace the Zariski topology with the (more familiar) Euclidean topology. This rests on the result from algebraic geometry (see, e.g., [@Mil Theorem 10.2]) asserting that the image of a polynomial map $g$ contains a Zariski open set of its closure. Thus, if the image of $g:{\mathbb{C}}^m\to {\mathbb{C}}^k$ is dense in the Zariski topology, then it contains a dense Zariski open subset, which is clearly open and also dense in ${\mathbb{C}}^k$ in the Euclidean topology. (Indeed, its complement, which is a set of zeros of some polynomials, cannot contain an open set.) Consequently, the image of a polynomial map $g:{\mathbb{C}}^m\to{\mathbb{C}}^k$ is dense in the Zariski topology in ${\mathbb{C}}^k$ if and only if it is dense in the Euclidean topology. However, the question whether the image of a multilinear polynomial $f:M_n({\mathbb{C}})^{d}\to M_n({\mathbb{C}})$ is closed in the Euclidean topology in $M_n({\mathbb{C}})$ might be approachable with tools of complex analysis. zero trace squares of polynomials {#tr^2} ================================= Let $f$ be a polynomial that is not an identity of $M_n$. The simplest situation where the conditions of Corollary \[generic\] are not fulfilled is when ${\mathrm{tr}}(f^2)=0$. Let us first show that this can actually occur.
The proof of the next proposition is due to Igor Klep who has kindly allowed us to include it here. Let $n=2^m \ell$, where $\ell>1$ is odd. Then there exists a multihomogeneous polynomial $f$ which is not a polynomial identity of $M_n$ with ${\mathrm{tr}}(f^2)=0$ on $M_n$. Consider the universal division algebra ${\mathcal D}=UD(n)$. ${\mathcal D}$ comes equipped with the reduced trace ${\mathrm{tr}}:{\mathcal D}\to{\mathcal Z}=Z({\mathcal D})$. We claim that the quadratic trace form $q: x\mapsto {\mathrm{tr}}(x^2)$ on ${\mathcal D}$ is isotropic. Since $\ell>1$, there is an odd degree extension $K$ of ${\mathcal Z}$ such that ${\mathcal D}\otimes_{{\mathcal Z}} K = M_\ell(K{\otimes}_{{\mathcal Z}}{\mathcal D}')$ where ${\mathcal D}'$ is a division ring (see, e.g., [@Row Theorem 3.1.40]). The natural extension $q_K$ of $q$ to $M_\ell(K{\otimes}_{{\mathcal Z}}{\mathcal D}')$ is obviously isotropic, i.e., there is $A\in M_\ell(K{\otimes}_{{\mathcal Z}}1)$ with $q_K(A)={\mathrm{tr}}(A^2)=0$. Hence by Springer’s theorem [@EKM Corollary 18.5], $q$ is isotropic as well. There exists $0\neq y\in{\mathcal D}$ with ${\mathrm{tr}}(y^2)=0$. We have $y=f c^{-1}$ for some $f\in GM(n)$ and $c\in Z(GM(n))$. Replacing $y$ by $c y$, we may assume without loss of generality that $y\in GM(n)$. There is $f\in F{\langle\ushort X\rangle}$ whose image in $GM(n)$ coincides with $y$. By the universal property of the reduced trace on ${\mathcal D}$, ${\mathrm{tr}}(y^2)=0$ translates into ${\mathrm{tr}}(f({\ushort a})^2)=0$ for all $n$-tuples ${\ushort a}$ of $n\times n$ matrices over $F$. By (multi)homogenizing we can even achieve that $0\neq f$ is multihomogeneous. As explained in the introduction of the paper, one would expect that multilinear polynomials that are not identities cannot satisfy ${\mathrm{tr}}(f^k)=0$ for $k\ge 2$. Unfortunately, we are able to prove this only in the dimension-free setting. That is, we consider the situation where $f$ satisfies ${\mathrm{tr}}(f^k)=0$ on $M_n$ for every $n\ge 1$. This is equivalent to the condition that $f^k$ is a sum of commutators [@BK Corollary 4.8]. \[vsotakom\] If $f \in F{\langle\ushort X\rangle}$ is a nonzero multilinear polynomial, then $f^k$, $k\geq 2$, is not a sum of commutators. To avoid notational difficulties, we will consider only the case where $k=2$. The modifications needed to cover the general case are rather obvious. If $f^2$ is a sum of commutators then ${\mathrm{tr}}(f^2)=0$ in matrix algebras of arbitrary dimension. Let us write $f=f(x_1,\dots,x_d)=\sum f_i x_1 g_i$. Since ${\mathrm{tr}}(f(x+y,x_2,\dots,x_d)^2-f(x,x_2,\dots,x_d)^2-f(y,x_2,\dots,x_d)^2)=0$, we have ${\mathrm{tr}}(x(\sum_{i,j}g_i f_j y g_j f_i))=0$. This implies that $\sum_{i,j}g_i f_j x_1 g_j f_i$ is a polynomial identity for every matrix algebra, so it has to be trivial. Denote by $f^*$ the Razmyslov transform of $f$ with respect to $x_1$, $f^*=\sum g_i x_1 f_i$. We have $f^*(f(x_1,x_2,\dots,x_d),x_2,\dots,x_d)=0$ in the free algebra $F{\langle\ushort X\rangle}$, which further yields $f^*=0$. Indeed, suppose $f^*\neq 0$ and choose monomials $m_1,m_2$ with nonzero coefficients in $f$ and $f^*$, respectively, which are minimal with respect to the first appearance of $x_1$. Then the coefficient of the monomial $m_2(m_1,x_2,\dots,x_d)$ in the polynomial $f^*(f(x_1,x_2,\dots,x_d),x_2,\dots,x_d)$ is nonzero, a contradiction. Hence, $f^*$ has to be zero, which leads to the contradiction $f=0$ ($f^*=0$ if and only if $f=0$, see, e.g., [@For Proposition 12]). From the proof we deduce that only the linearity in one variable is needed.
However, for general polynomials we were not able to find out whether $f^k$ can be a sum of commutators. In any case, this problem can be just a test for a more general question (see Question \[M8\]). Let $M_\infty$ denote the algebra of all infinite matrices with finitely many nonzero entries. We write $M_\infty ^0$ for the set of elements in $M_\infty$ with zero trace, where the trace is defined as the sum of diagonal entries. \[M8\] Is the image of an arbitrary noncommutative polynomial $f$ on $M_\infty$ a dense subset (in the Zariski topology) of $M_\infty$ or of $M_\infty^0$? Is ${\mathrm{im}}(f)=M_\infty$ or ${\mathrm{im}}(f)=M_\infty^0$? If $f^k$ is a sum of commutators for some polynomial $f$ and some $k>1$, then ${\mathrm{im}}(f)$ on $M_\infty$ is not dense in $M_\infty$, hence such a polynomial $f$ would provide a counterexample to the above question. The question about the density in the sense of the Jacobson density theorem was settled in [@C-L]. Lie polynomials of degree $2,3,4$ ================================= We prove that Lvov’s conjecture holds for multilinear Lie polynomials of degree less or equal to $4$. We use the right-normed notation, $[x_n,\dots,x_1]$ denotes $[x_n,[x_{n-1},[\dots[x_2,x_1]]\dots]$. \[baza\] If $f$ is of the form $f(x_1,\dots,x_d)=[x_{i_1},x_{i_2},\dots,x_{i_{k-1}},x_1]$, where $2\leq i_j\leq d$, then ${\mathrm{im}}(f)=M_n^0$. Choose a diagonal matrix $s$ with distinct diagonal entries $\lambda_i$, $1\leq i\leq n$. Then $f(x,s,\dots,s)_{ij}=\pm (\lambda_i-\lambda_j)^{k-1}x_{ij}$, where $x=(x_{ij})$. Thus, ${\mathrm{im}}(f)$ contains all matrices with zero diagonal entries. Since ${\mathrm{im}}(f)$ is closed under conjugation and every matrix with zero trace is similar to a matrix with zero diagonal (see, e.g., [@Shoda]), we have ${\mathrm{im}}(f)=M_n^0$. If $f$ is a Lie polynomial of degree 2, $f=\alpha [x_1,x_2]$, $\alpha\neq 0$, it has been known for a long time [@Shoda; @Albert] that ${\mathrm{im}}(f)=M_n^0$. We list this as a lemma for the sake of reference. \[2\] If $f$ is a Lie polynomial of degree 2, then ${\mathrm{im}}(f)=M_n^0$. \[3\] If $f$ is a multilinear Lie polynomial of degree $3$, then ${\mathrm{im}}(f)=M_n^0$. We can assume that $f(x,y,z)=[z,y,x]+\alpha[y,z,x]$. If we take $x=z$, we have $f(x,y,x)=[x,y,x]=-[x,x,y]$. We apply Lemma \[baza\] to conclude $M_n^0={\mathrm{im}}(f(x,y,x))\subseteq {\mathrm{im}}(f)\subseteq M_n^0$. \[4\] If $f$ is a multilinear Lie polynomial of degree $4$, then ${\mathrm{im}}(f)=M_n^0$. It is easy to see that the monomials $[x_i,x_j,x_k,x_1]$, $\{i,j,k\}=\{2,3,4\}$, form a basis of multilinear Lie polynomials of degree 4. We can assume that $$\begin{aligned} f(x_1,x_2,x_3,x_4)&=&[x_4,x_3,x_2,x_1]+\alpha_1[x_3,x_4,x_2,x_1]+\alpha_2[x_4,x_2,x_3,x_1]+\alpha_3[x_2,x_4,x_3,x_1]+\\ &&\alpha_4[x_3,x_2,x_4,x_1]+\alpha_5[x_2,x_3,x_4,x_1].\end{aligned}$$ Consider $f(x,x,x,y)=(\alpha_4+\alpha_5)[x,x,y,x]=-(\alpha_4+\alpha_5)[x,x,x,y]$. Due to Lemma \[baza\] we can assume $\alpha_4+\alpha_5= 0$. Similarly, setting $x_1=x_2=x_4$ and $x_1=x_3=x_4$ yields $\alpha_2+\alpha_3=0$ and $1+\alpha_1=0$, respectively. Hence we can write $$\begin{aligned} f(x_1,x_2,x_3,x_4)&=&[x_4,x_3,x_2,x_1]-[x_3,x_4,x_2,x_1]+\alpha_2([x_4,x_2,x_3,x_1]-[x_2,x_4,x_3,x_1])+\\ & &\alpha_4([x_3,x_2,x_4,x_1]-[x_2,x_3,x_4,x_1])\\ &=&[[x_4,x_3],x_2,x_1]+\alpha_2[[x_4,x_2],x_3,x_1]+\alpha_4[[x_3,x_2],x_4,x_1].\end{aligned}$$ Then $f(x,y,y,z)=(1+\alpha_2)[[z,y],y,x]=(1+\alpha_2)[[z,y],[y,x]]$. 
It follows from the proof of [@Smith Theorem 1] that any matrix with zero trace except for the rank-one matrix is similar to a commutator of two matrices with zero diagonal. Choose a diagonal matrix $s$ with distinct diagonal entries. Take $a\in M_n^0$ with rank at least 2. Since ${\mathrm{im}}(f)$ is closed under conjugation by $GL_n$, we may assume that $a=[b,c]$ where $b,c\in M_n$ have zero diagonal. Hence, we can write $b=[b',s],\; c=[c',s]$ for some $b',c'\in M_n$. Thus $(1+\alpha_2)a=f(b',s,s,c')$. If $a\in M_n^0$ has rank one, then $a$ is similar to the matrix unit $e_{12}$ and $(1+\alpha_2)e_{12}=f(e_{21},e_{12},e_{12},-\frac{1}{2}e_{11})$. Hence, $1+\alpha_2=0$ or ${\mathrm{im}}(f)$ contains all matrices with zero trace. Therefore we can assume $1+\alpha_2=0$. If we set $x_1=x_4$ we get in a similar way that $1-\alpha_2=0$, which leads to a contradiction. Hence, $M_n^0\subseteq {\mathrm{im}}(f)\subseteq M_n^0$ yields the desired conclusion. \[Lie\] If $f$ is a nonzero multilinear Lie polynomial of degree at most $4$, then ${\mathrm{im}}(f)=M_n^0$. Apply Lemmas \[2\], \[3\], \[4\]. [**Acknowledgement.**]{} The author would like to thank to her supervisor Matej Brešar for many fruitful discussions and insightful comments, and to Igor Klep and Klemen Šivic for generous sharing of ideas. [99]{} A. A. Albert, [*Structure of algebras*]{}, AMS Coll. Pub., vol. 24, AMS, Providence, RI, 1961. A. A. Albert, B. Muckenhoupt, On matrices of trace 0, [*Michigan Math. J.*]{} [**1**]{} (1957), 1-3. S. A. Amitsur, D. J. Saltman, Generic abelian crossed products and $p$-algebras, [*J.Algebra*]{} [**51**]{} (1978), 76–87. M. Brešar, I. Klep, Values of noncommutative polynomials, Lie skew-ideals and tracial Nullstellensätze, [*Math. Res. Lett.*]{} [**16**]{} (2009), 605–626. C.-L. Chuang, On ranges of polynomials in finite matrix rings, [*Proc. Amer. Math. Soc.*]{} [**140**]{} (1990), 293-302. C.-L. Chuang, T.-K. Lee, Density of polynomial maps, [*Canad. Math. Bull.*]{} [**53**]{} (2010), 223-229. D. Cox, J. Little, D. O’Shea, [*Ideals, varieties, and algorithms*]{}, Springer-Verlag, 2007. R. Elman, N. Karpenko, A. Merkurjev, [*The algebraic and geometric theory of quadratic forms*]{}, Amer. Math. Soc. Colloq. Publ., vol. 56, Amer. Math. Soc., Providence, RI, 2008. E. Formanek, [*The polynomial identities and invariants of $n\times n$ matrices*]{}, CBMS Reg. Conf. Ser. Math., vol. 78, Amer. Math. Soc., Providence, RI, 1991, Published for the Conference Board of the Mathematical Sciences, Washington, DC. A. Kanel-Belov, S. Malev, L. H. Rowen, The images of non-commutative polynomials evaluated on $2\times2$ matrices, [*Proc. Amer. Math. Soc.*]{} [**140**]{} (2012), 465-478. T.-K. Lee, Y. Zhou, Right ideals generated by an idempotent of finite rank, [*Linear Algebra Appl.*]{} [**431**]{} (2009), 2118-2126. U. Leron, Nil and power-central polynomials in rings, [*Trans. Amer. Math. Soc.*]{} [**202**]{} (1975), 97–103. J. S. Milne, [*Algebraic geometry*]{}, 2012. Retrieved from http://www.jmilne.org/math/CourseNotes/AG.pdf. L. H. Rowen, Universal PI-algebras and algebras of generic matrices, [*Israel J. Math.*]{} [**18**]{} (1974), 65–74. L. H. Rowen, [*Polynomial identities in ring theory*]{}, Pure and Applied Mathematics [**84**]{}, Academic Press, Inc., 1980. D. J. Saltman, On $p$-power central polynomials, [*Proc. Amer. Math. Soc.*]{} [**78**]{} (1980), 11–13. K. Shoda, Einige Sätze über Matrizen, [*Jap. J. Math.*]{} [**13**]{} (1936), 361-365. J. H. 
Smith, Commutators of nilpotent matrices, [*Linear and Multilinear Algebra*]{} [**4**]{} (1976), 17-19. T. A. Springer, [*Invariant theory*]{}, Springer-Verlag, 1977. [^1]: 2010 [*Math. Subj. Class.*]{} Primary: 16R30; Secondary: 16K20, 16R99,16S50.
--- abstract: 'The neutron elastic magnetic form factor $G_M^n$ has been extracted from quasielastic scattering from deuterium in the CEBAF Large Acceptance Spectrometer, CLAS [@CLAS]. The kinematic coverage of the measurement is continuous over a broad range, extending from below 1 $\rm{GeV^2}$ to nearly 5 $\rm{GeV^2}$ in four-momentum transfer squared. High precision is achieved by employing a ratio technique in which most uncertainties cancel, and by a simultaneous in-situ calibration of the neutron detection efficiency, the largest correction to the data. Preliminary results are shown with statistical errors only.' address: - | Thomas Jefferson National Accelerator Facility, 12000 Jefferson Ave.,\ Newport News, VA, 23606, USA - 'Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, USA' author: - 'W. K. Brooks [^1] and J. D. Lachniet[^2] for the CLAS Collaboration.' title: Precise Determination of the Neutron Magnetic Form Factor to Higher $Q^2$ --- INTRODUCTION ============ The elastic form factors of the proton and neutron are fundamental quantities which have been studied for decades. The dominant features of the larger form factors $G_M^p$, $G_E^p$, and $G_M^n$ were established in the 1960’s: the dipole form $G_{dipole} = (1+Q^2/0.71)^{-2}$ gives a good description, corresponding to an exponential falloff in the spatial densities of charge and magnetization. In the intervening decades, obtaining higher precision measurements of these quantities has been one thrust of the field, while new directions have also emerged, especially over the past decade. These include precise measurements of the neutron electric form factor [@Gen], and extractions of the strange electric and magnetic form factors for the proton [@G0], as well as time-like form factors [@timelike]. In addition to experimental progress, there has been renewed theoretical interest on several fronts [@Kees]. First, models of the nucleon ground state can often be used to predict several of these quantities, and it has proven to be very difficult to describe all of the modern data simultaneously in a single model approach. Second, lattice calculations are now becoming feasible in the few-GeV$^2$ range, and over the next decade these calculations will become increasingly precise. Finally, since elastic form factors are a limiting case of the generalized parton distributions (GPDs), they can be used to constrain GPD models. For this purpose, high precision and a large $Q^2$ coverage is quite important [@Kroll]. At present the neutron magnetic form factor at larger $Q^2$ is known much more poorly than the proton form factors. THE CLAS MEASUREMENT ==================== The present measurement [@E94-017] makes use of quasielastic scattering on deuterium where final state protons and neutrons are detected. The ratio of $^2\rm{H}(e,e'n)$ to $^2\rm{H}(e,e'p)$ in quasi-free kinematics is approximately equal to the ratio of elastic scattering from the free neutron and proton. The ratio is: $$\begin{aligned} R_D ~~ = ~~ {{d\sigma \over d\Omega}[^2\rm{H}(e,e'n)_{QE}] \over {d\sigma \over d\Omega}[^2\rm{H}(e,e'p)_{QE}] } ~~ = ~~ a \cdot R_{free} ~~ = ~~ a \cdot { {(G_E^n)^2+\tau(G_M^n)^2 \over 1+\tau} +2\tau(G_M^n)^2\tan^2({\theta\over2}) \over {(G_E^p)^2+\tau(G_M^p)^2 \over 1+\tau} +2\tau(G_M^p)^2\tan^2({\theta\over2}) }\end{aligned}$$ Using deuteron models one can accurately compute the correction factor $a(Q^2,\theta _{pq})$, which is nearly unity for quasielastic kinematics and higher $Q^2$. 
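Concretely, once $a(Q^2,\theta_{pq})$, $G_E^n$, $G_E^p$, and $G_M^p$ are supplied, the ratio above can be solved for $(G_M^n)^2$. The sketch below only illustrates that algebra; the dipole parameterizations, the neglect of $G_E^n$, and all numerical values are assumptions made for the illustration and are not inputs of the actual analysis.

```python
import numpy as np

M_N = 0.939                   # nucleon mass in GeV (approximate)
MU_P, MU_N = 2.793, -1.913    # magnetic moments in nuclear magnetons

def gmn_from_ratio(R_D, Q2, theta, a=1.0, GEn=0.0):
    """Invert the quasielastic ratio for |G_M^n|.  a(Q2, theta_pq) and G_E^n
    are externally supplied; dipole forms are used for the proton form
    factors purely for illustration (Q2 in GeV^2)."""
    tau = Q2 / (4.0 * M_N**2)
    t2 = np.tan(theta / 2.0) ** 2
    G_D = (1.0 + Q2 / 0.71) ** -2
    GEp, GMp = G_D, MU_P * G_D
    denom_p = (GEp**2 + tau * GMp**2) / (1 + tau) + 2 * tau * GMp**2 * t2
    # R_D = a * R_free, and the numerator of R_free is
    #   GEn^2/(1+tau) + GMn^2 * (tau/(1+tau) + 2*tau*tan^2(theta/2))
    gmn2 = ((R_D / a) * denom_p - GEn**2 / (1 + tau)) / (tau / (1 + tau) + 2 * tau * t2)
    return np.sqrt(gmn2)

# Round trip: build R_D from an assumed G_M^n (neglecting G_E^n), recover it.
Q2, theta = 2.5, np.radians(30.0)
tau, G_D = Q2 / (4 * M_N**2), (1 + Q2 / 0.71) ** -2
t2 = np.tan(theta / 2) ** 2
GMn_true = MU_N * G_D
R_D = (GMn_true**2 * (tau / (1 + tau) + 2 * tau * t2)) / (
    (G_D**2 + tau * (MU_P * G_D) ** 2) / (1 + tau) + 2 * tau * (MU_P * G_D) ** 2 * t2)
print(round(gmn_from_ratio(R_D, Q2, theta), 5), "vs", round(abs(GMn_true), 5))
```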
The value of $G_M^n(Q^2)$ is then obtained from the measured value of $R_D$ and the experimentally known values of $G_E^n(Q^2)$, $G_M^p(Q^2)$, and $G_E^p(Q^2)$; this method has been used previously [@Sick]. The $(e,e'n)$ and $(e,e'p)$ reactions were measured at the same time from the same target. Use of the ratio $R_D$ under these circumstances reduces or eliminates several experimental uncertainties, such as those associated with the luminosity measurement or radiative corrections. The remaining major correction is for the detection efficiency of the neutron. Neutron Detection Efficiency ---------------------------- Neutrons were measured in three CLAS scintillator-based detectors: the forward-angle and large-angle electromagnetic shower calorimeters, and the time-of-flight scintillators. The efficiency measurement was performed using tagged neutrons from the $^1\rm{H}(e,e'\pi^+)X$ reaction where the mass of the final state $M_X$ was chosen to be that of the neutron. Since the precise value of the detection efficiency can vary with time-dependent and rate-dependent quantities such as photomultiplier tube gain, the detection efficiency was measured *simultaneously* with the primary deuterium measurement. Two separate targets were positioned in the beam at the same time, one for deuterium and the other for hydrogen, separated by less than 5 cm. A plot of the resulting neutron detection efficiency is shown in Fig. \[fig:brooks\_nde\]. The main plot shows the results for the forward electromagnetic shower calorimeter, while the insets show the results for the large angle calorimeter and the time of flight scintillators. Overlapping Measurements ------------------------ The CLAS extraction of $G_M^n(Q^2)$ actually consists of multiple overlapping measurements. The time of flight scintillators cover the full angular range of the spectrometer, while the calorimeters cover subsets of these angles, thus $G_M^n(Q^2)$ can be obtained from two independent measures of the neutron detection efficiency. In addition, the experiment was carried out with two different beam energies that had overlapping coverage in $Q^2$, so that the detection of the protons of a given $Q^2$ took place in two different regions of the drift chambers. As a result, essentially four measurements of $G_M^n(Q^2)$ have been obtained from the CLAS data that potentially could have four independent sets of systematic errors. In practice these four measurements are consistent within the statistical errors, suggesting that the systematic errors are well-controlled and small. Systematic Uncertainties ------------------------ The final evaluation of the systematic uncertainties for this measurement has not been performed, and therefore only statistical uncertainties are presented. It is anticipated that several systematic uncertainties will contribute at the percent level, with a number of others contributing at a fraction of a percent. The larger uncertainties are expected to be due to the neutron detection efficiency determination, the two-photon-exchange portion of the radiative correction, uncertainties in $G_E^n(Q^2)$, $G_M^p(Q^2)$, and $G_E^p(Q^2)$, and suppression of inelastic background. 
The smaller uncertainties are expected to be due to the proton detection efficiency (measured by elastic scattering from the hydrogen target), the remnant of the radiative corrections not cancelling in the ratio, the theoretical correction $a(Q^2)$ for quasi-free scattering, the definition of the fiducial volume for neutrons and protons, and a number of other small contributions. It is expected that the ultimate uncertainties will range from two to three percent over the full range in $Q^2$. Preliminary Results ------------------- The preliminary results are shown in Fig. \[fig:brooks\_results\] together with a sample of existing data. The error bars shown are due only to statistical uncertainties. The data shown are the weighted averages of the four overlapping individual measurements discussed above. Because these results are preliminary, it is necessary to be cautious about the conclusions drawn, since few-percent shifts in the results are still possible. Nonetheless, a few features are noteworthy. First, the quality and coverage of the data represent a very substantial improvement over the existing world’s data set. Second, the dipole form appears to give a good representation of the data over the $Q^2$ range measured; at higher $Q^2$ this is at variance with parameterizations based on previous data, which tend to show a more strongly decreasing trend for $G_M^n /(\mu_n G_{dipole})$ with increasing $Q^2$. At face value, the lowest $Q^2$ points appear to disagree with previous high-precision data. However, these data are too preliminary to draw this conclusion. The lowest four points, unlike all others on the plot, are not an average over multiple measurements, and they are near the edge of the detector acceptance. Some further study is required to establish the final centroids and uncertainties for these points. [9]{} B. Mecking et al., Nucl. Inst. and Meth. A 503 (2003) 513. R. Madey et al., Phys. Rev. Lett. 91 (2003) 122002; G. Warren et al., Phys. Rev. Lett. 92 (2004) 042301, and references therein; see also M. Seimetz, these proceedings. K. S. Kumar, P. A. Souder, Prog. Part. Nucl. Phys. 45 (2000) S333; see also K. Paschke, J. Martin, and S. Baunack, these proceedings. M. Ambrogiani et al., Phys. Rev. D 60 (1999) 032002. C. E. Hyde-Wright and Kees de Jager, Annu. Rev. Nucl. Part. Sci. 54 (2004) 217; H. Gao, Int. J. Mod. Phys. E 12 (2003) 1; Erratum-ibid. E 12 (2003) 567; see also D. Day, these proceedings. M. Diehl et al., hep-ph/0408173; P. Kroll, private communication. Experiment E94-017, W. K. Brooks and M. F. Vineyard, spokespersons. G. Kubon et al., Phys. Lett. B 524 (2002) 26; see also I. Sick, these proceedings. [^1]: This work was supported by DOE contract DE-AC05-84ER40150 Modification No. M175, under which the Southeastern Universities Research Association (SURA) operates the Thomas Jefferson National Accelerator Facility. [^2]: This work was supported under DOE contract DE-FG02-87ER40315.
--- author: - | Danny Dolev\ The Hebrew University\ Jerusalem, Israel\ [email protected] - | Christoph Lenzen\ Department for Computer Science and Applied Mathematics\ Weizmann Institute of Science, Israel\ [email protected] - | Shir Peled\ The Hebrew University\ Jerusalem, Israel\ [email protected] bibliography: - '../triangles.bib' title: '“Tri, Tri again”: Finding Triangles and Small Subgraphs in a Distributed Setting' --- > *‘Tis a lesson you should heed:* > > *Try, try, try again.* > > *If at first you don’t succeed,* > > *Try, try, try again.* > > *(William Edward Hickson, 19th century educational writer)*
--- abstract: 'We continue to study the rank functions of tropical matrices. In this paper, we explain how to reduce the computation of ranks for matrices over the ‘supertropical semifield’ to the standard tropical case. Using a counting approach, we prove the existence of a $01$-matrix with many ones and without large all-one submatrices, and we combine these results to construct an $n\times n$ matrix with tropical rank $o(n^{0.5+\varepsilon})$ and Kapranov rank $n-o(n)$.' author: - Yaroslav Shitov title: A separation between tropical matrix ranks --- The *tropical* arithmetic operations on $\R$ are $(a,b)\to\min\{a,b\}$ and $(a,b)\to a+b$. One can complete this algebraic structure with an element neutral with respect to addition (it plays the role of an infinite positive element and is denoted by $\infty$) and get the structure $(\Ro,\min,+)$ known as the *tropical semiring*. This semiring and related structures have been studied since the 1960s because of their applications in optimization theory [@Vor]; tropical methods also arise naturally in algebraic geometry and lead to important developments in the field (see e.g. [@BH; @CDPR; @Mikh]). Other applications of tropical mathematics include operations research [@CG], discrete event systems [@BCOQ], automata theory [@Sim], and optimal control [@KM; @McE]. This paper is a continuation of the study of tropical rank functions initiated in [@DSS] and developed in [@CJR; @mylaa; @mytrb; @myproc]. Namely, we are going to focus on the tropical rank and Kapranov rank—the functions arising from the context of tropical algebraic geometry. (It should be mentioned that there are many more rank functions of tropical matrices that are being extensively studied in the literature, see [@AGG; @GS] and references therein.) We refer the reader to [@CJR; @DSS; @mytrb] for definitions and a detailed discussion of the motivation behind these concepts; the rank functions also admit combinatorial descriptions, and we are going to recall them in Section 2. For the purpose of this introduction, we briefly recall that the *tropical rank* of a matrix is the topological dimension of the tropical convex hull of its columns, and the *Kapranov rank* is the smallest dimension of tropical linear spaces containing these columns. As we will see, one needs to specify a field $\mathbb{F}$ to give the definition of Kapranov rank, and the corresponding function is referred to as the Kapranov rank *over* $\mathbb{F}$. It is very well known that these functions can be different, but the tropical rank cannot exceed any of the Kapranov ranks (see [@DSS]). The papers [@CJR; @DSS; @mytrb] contain a resolution of the following question: For which $d,n,r$ does every $d\times n$ matrix of tropical rank less than $r$ have Kapranov rank (over $\mathbb{C}$) less than $r$ as well? This condition is equivalent to the $r\times r$ minors of a $d\times n$ matrix of variables being what is called a *tropical basis* of the ideal that they generate. The paper [@DSS] contains an example showing that the $4\times4$ minors of a $7\times 7$ matrix are not a tropical basis, and it was asked whether the same is true for the $4\times4$ minors of a $5\times 5$ matrix. The authors of [@CJR] proved that the $4\times 4$ minors of a $5\times 5$ matrix do form a tropical basis, and a complete description of the tuples $(d,n,r)$ for which the answer to the above question is positive was given in [@mytrb]. The result below is valid for the Kapranov rank function computed with respect to any infinite field.
\[thrmytrb\] Let $d,n,r$ be positive integers, $r\leq\min\{d,n\}$. The $d$-by-$n$ matrices with tropical rank less than $r$ always have Kapranov rank less than $r$ if and only if one of the following conditions holds: \(1) $r\leq3$; \(2) $r=\min\{d,n\}$; \(3) $r=4$ and $\min\{d,n\}\leq6$. The answer to the same question but for finite fields remains unknown, but the corresponding characterization should be different from the one given in Theorem \[thrmytrb\]. In fact, Example 2.7 in [@mylaa] shows that matrices of tropical rank two and larger Kapranov rank exist for any finite ground field. As we see, there are a lot of results describing the cases when the tropical and Kapranov ranks are equal; on the opposite end, there is a result (see [@KR]) stating that a matrix of tropical rank three can have arbitrarily large Kapranov rank. No non-trivial analogue of this result is known if we compare the behavior of the Kapranov rank functions taken over different fields. (See also Question 5 in Section 8 of [@DSS] and Problem 4.1 in [@LGAG].) For which $d$ does there exist a matrix with rational Kapranov rank three and real Kapranov rank $d$? Clearly, this may be possible only if $d\geqslant 3$, and for $d=3$ the answer is trivially positive. The answer is also positive for $d=4$, and to see this, one can construct the *cocircuit matrix* as in [@DSS] for the matroid corresponding to the *Perles configuration* (see page 94 of [@Grun]). Nothing is known for $d\geqslant 5$. We note that the difference between the tropical rank and Kapranov rank in the above mentioned example is of the order of $0.5\sqrt{n}$, for an $n\times n$ tropical matrix (see Theorem 2.4 in [@KR]). One can also construct a sequence of $n\times n$ matrices whose tropical rank and Kapranov rank differ by $\varepsilon n$ as a diagonal matrix whose diagonal blocks are equal to any fixed matrix with different ranks. In our paper, we improve these bounds to the asymptotically best possible separation of $n-o(n)$; our result is valid for the Kapranov ranks over all fields. \[thrmainthis\] For all $n>1000$, $\alpha\in(0,0.1)$, there is an $n\times n$ matrix $A$ such that $$\operatorname{tropical\,\,rank\,\,}(A)\leqslant\frac{4\sqrt{n}\ln n}{\alpha^2}\,\,\,\,\mbox{and}\,\,\,\,\operatorname{Kapranov\,\,rank\,\,}(A)\geqslant n(1-\alpha).$$ Taking $n\to\infty$ and choosing $\alpha_n$ to be $(\ln n)^{-1}$ or any other sequence sufficiently slowly decreasing to $0$, we get an $n-o(n)$ separation between the tropical rank and Kapranov rank of an $n\times n$ tropical matrix. Since the Kapranov rank is a lower bound for the tropical factorization rank (see [@DSS]), Theorem \[thrmainthis\] gives an $n-o(n)$ separation between the tropical rank and factorization rank as well. A similar question is wide open for separations between the conventional rank and factorization rank of nonnegative matrices, and related problems have important applications in optimization and computational complexity theory (see [@FMPTdW]). The following version of this problem is open in both the tropical and nonnegative settings. Let $k$ be a fixed constant. Is it correct that, for all $n,m$ satisfying $m\geqslant n\geqslant k$, there exists an $n\times m$ matrix with tropical rank $k$ and tropical factorization rank $n$? Does there exist a nonnegative $n\times m$ matrix with conventional rank $k$ and nonnegative rank $n$? 
The ‘nonnegative part’ of this question has negative answer for $k\leqslant 3$ (this is easy for $k\leqslant 2$ as shown in [@CR], and the case of $k=3$ has been done in [@PP; @my7x7]). If $k\geqslant 4$, this problem remains open, see Question 1 in [@Hru]. The ‘tropical part’ is open already for $k=3$, and the problem is non-trivial even in the case $k\leqslant 2$ for which Theorem 4.6 in [@myspb] gives a negative answer. Preliminaries. The rank of a supertropical matrix ================================================= This paper was inspired by the idea of *symmetrized semirings* (see [@AGG; @Plus]), which are intended to give an analogue of subtraction for those semirings that are not rings. The *symmetrized tropical semiring* is essentially the set $\Ro\times\Ro$, and we may think of a pair $(r_1,r_2)$ as a formal subtraction $r_1\ominus r_2$. The tropical operations can be naturally extended to the symmetrized setting as $(r_1,r_2)\oplus(s_1,s_2)=(\min\{r_1,r_2\},\min\{s_1,s_2\})$ and $(r_1,r_2)\odot(s_1,s_2)=(\min\{r_1+s_1,r_2+s_2\},\min\{r_1+s_2, r_2+s_1\})$ because $\min$ is the tropical addition and $+$ is the tropical multiplication. A related structure was introduced by Izhakian and Rowen in [@Izh; @IRIR1] and became known as the ‘*supertropical semifield*’. Their structure ‘*is somehow reminiscent of the symmetrized max-plus semiring, and has two kind of elements, the “real” ones (which can be identified to elements of the max-plus semiring and some “ghost” elements which are similar to the “balanced” ones,*’ as Akian, Gaubert, and Guterman wrote in [@AGG]. We decided to write this paper in terms of the ‘*supertropical*’ structure because it seems to have become more popular nowadays due to a considerable amount of papers on the topic written by Izhakian, Rowen, and their colleagues (see also [@IKR; @IKR2] and references therein). As said above, the structure introduced by Izhakian and Rowen belongs to the class most commonly known as ‘*supertropical semifields*’ (see [@IKR]), but since it contains non-zero elements without multiplicative inverses, it is not an actual semifield according to the standard definition of the latter. We denote this structure by $\S=(\R^{\tau}\cup\R^{\gamma}\cup\{\infty\},\oplus,\odot)$, where $\infty$ is an infinite positive element, and $\R^\tau$, $\R^\gamma$ are two copies of $\R$ whose elements are called in the literature ‘*tangible*’ and ‘*ghost*’, respectively. Assuming $i,j\in\{\tau,\gamma\}$, $s\in\S$, $a,b\in \R$ and $a>b$, we define the operations by $\infty\oplus s=s\oplus \infty=s$, $\infty\odot s=s\odot \infty=\infty$; $b^{j}\oplus a^{i}=a^{i}\oplus b^{j}=b^{j}$, $b^{i}\oplus b^{j}=b^\gamma$; $a^{i}\odot b^{j}=(a+b)^{\alpha}$, where $\alpha=\tau$ if $i=j=\tau$, and $\alpha=\gamma$ otherwise. One can check that $\oplus$ and $\odot$ are commutative and associative operations, and distributivity also holds. Moreover, there is a homeomorphism $\nu$ from $\S$ to the tropical semiring $\Ro$ defined by $\infty\to\infty$ and $a^{i}\to a$. One writes $c\models d$ if either $c=d$ or $c=d\oplus g$, for some ghost element $g$; this relation is known in the literature as ‘*ghost surpassing*’ relation. Let $A=(a_{ij})$ be an $n\times n$ supertropical matrix; its *permanent* is $$\operatorname{per} A=\bigoplus_{\sigma\in S_n}A(1|\sigma_1)\odot\ldots\odot A(n|\sigma_n),$$ where $S_n$ denotes the symmetric group on $\{1,\ldots,n\}$ and $A(p|q)$ denotes the entry in the $p$th row and $q$th column of $A$. 
This matrix is said to be *tropically non-singular* if $\operatorname{per} A$ is tangible and *tropically singular* otherwise. \[defntr\] The *tropical rank* of a supertropical matrix is the largest size of its non-singular square submatrix. In order to recall the definition of *Kapranov rank*, we need to introduce the field $\F=\mathbb{F}\{\{t\}\}$ of *generalized Puiseux series*. The elements of $\F$ are formal sums $a(t)=\sum_{e\in\mathbb{R}} a_et^{e}$ which have coefficients $a_e$ in $\mathbb{F}$ and whose *support* $\operatorname{Supp}(a)=\{e\in\mathbb{R}: a_e\neq0\}$ is well-ordered (which means that every non-empty subset of $\operatorname{Supp}(a)$ has a minimal element). The *tropicalization* mapping $\deg:K\rightarrow\Ro$ sends a series $a$ to the exponent of its *leading term*; in other words, we define $\deg a=\min\operatorname{Supp}(a)$ and $\deg 0=\infty$. \[defKap\] The *Kapranov rank* of a supertropical matrix $A$ is the smallest possible rank of a matrix $L$ whose entries are in $\F$ and which satisfies $A\models\deg L$. (Such a matrix $L$ is to be called a *lifting* of $A$.) \[remr1\] If $A$ is a supertropical matrix without ghost elements, then these rank functions match those of conventional tropical matrices as introduced and studied in [@CJR; @DSS; @mylaa; @mytrb]. Namely, the tropical rank and Kapranov rank of a matrix $T$ with entries in $\Ro$ coincide with those in Definitions \[defntr\] and \[defKap\] if the elements in $\R$ are replaced by their tangible copies. We finalize the section by proving two results using well known techniques. \[statexp\] Let $A$ be an $n\times n$ supertropical matrix satisfying $A_{11}=A_{21}=0\t$ and $A_{31}=\ldots=A_{n1}=\infty$. Let $B$ be the $(n-1)\times(n-1)$ matrix obtained from $A$ by removing the first column and replacing the first two rows by their (supertropical) sum. Then $\operatorname{per} A=\operatorname{per} B$. Let us expand $\operatorname{per} A$ with the first column (as in Lemma 3.2 in [@PH]). Since $\infty$ and $0\t$ are neutral with respect to $\oplus$ and $\odot$, respectively, we get that $\operatorname{per} A$ is the sum of the permanents of the $(1,1)$ and $(2,1)$ cofactors of $A$. Since the permanent of a matrix is linear in its rows (see Lemma 3.13 in [@PH]), the result follows. \[lemeas\] Let $A$ be a non-singular supertropical $n\times n$ matrix and $a,b\in\R$. Then one of the columns of $A$ can be replaced by $v=(a\t\,b\t\,\infty\ldots\infty)^\top$ so that the resulting matrix remains non-singular. Moreover, we can choose a column in which one of the first two entries is tangible. Since permutations of rows and columns and their scaling by elements in $\R^\tau$ cannot affect the non-singularity, we can apply the *Hungarian algorithm* for the assignment problem corresponding to the matrix $\nu(A)$, see [@Kuhn] for details. Therefore, we can assume without loss of generality that the diagonal entries of $A$ are equal to $0^\tau$, and the off-diagonal entries are positive. If $a<b$, then we choose the first column to be replaced by $v$, and otherwise we replace the second column. Reducing the supertropical case to tropical matrices {#secred} ==================================================== Let $S\in\S^{I\times J}$ be a supertropical matrix, where $I$ and $J$ denote the row and column indexing sets which we assume to be disjoint. Let us denote by $I^1$, $I^2$ two copies of the set $I$, and $i^1, i^2$ will stand for the elements that correspond to $i\in I$ in these copies. 
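Before setting up the reduction, the following small sketch (in Python; the `(value, is_tangible)` encoding, the use of `None` for $\infty$, and the helper names are our own illustrative conventions, not notation from the text) makes the supertropical operations and the permanent of the previous section concrete. It enumerates $S_n$ directly, so it is only meant for very small matrices.

```python
from itertools import permutations

INF = None                      # plays the role of the infinite element

def s_add(x, y):
    """Supertropical addition: minimum, with ties becoming ghosts."""
    if x is INF: return y
    if y is INF: return x
    (a, ta), (b, tb) = x, y
    if a < b: return (a, ta)
    if b < a: return (b, tb)
    return (a, False)           # b^i + b^j = b^gamma

def s_mul(x, y):
    """Supertropical multiplication: ordinary +, tangible only if both factors are."""
    if x is INF or y is INF: return INF
    (a, ta), (b, tb) = x, y
    return (a + b, ta and tb)

def per(A):
    """Brute-force supertropical permanent, summing over all permutations."""
    n = len(A)
    total = INF
    for sigma in permutations(range(n)):
        term = (0, True)        # 0^tau is neutral for multiplication
        for i in range(n):
            term = s_mul(term, A[i][sigma[i]])
        total = s_add(total, term)
    return total

def nonsingular(A):
    p = per(A)
    return p is not INF and p[1]    # non-singular iff the permanent is tangible

tang = lambda v: (v, True)
ghost = lambda v: (v, False)

# The 3x3 matrix Sigma(A) from the example below has a ghost permanent (0^gamma),
# so it is tropically singular.
Sigma_A = [[ghost(0), tang(2), tang(1)],
           [tang(2), ghost(0), tang(2)],
           [tang(2), tang(1), ghost(0)]]
print(nonsingular(Sigma_A))     # False
```

With the row copies $I^1$, $I^2$ introduced above, the reduction itself is defined next.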
\[def1111\] We call a tropical matrix $T$ *symmetrized* if it has $I^1\cup I^2$ as row indexing set and $I\cup J$ as column indexing set (where $I,I^1, I^2, J$ are as above), and satisfies $T(i^1|i)=T(i^2|i)=0$, $T(i^1|\hat{\iota})=T(i^2|\hat{\iota})=\infty$ for all $i\in I$, $\hat{\iota}\in I\setminus\{i\}$. \[def11111\] Let $T$ be a matrix as in the above definition. We define $\Sigma\in\S^{I\times J}$ as the matrix obtained from $T$ by putting on the $i$th place the (supertropical) sum of the $i^1$th and $i^2$th rows of $T$ and removing the columns with indexes in $I$. The relation between $T$ and $\Sigma(T)$ can be understood in terms of the symmetrized tropical semiring discussed above. As we see, $T$ is obtained from $\Sigma(T)$ by replacing every row with a pair of rows which can be thought of as a vector over the symmetrized semiring corresponding to the row of $T$. Let us illustrate this construction with the example essentially appeared in the paper [@myarx1] published in 2010. (Namely, the matrix $\T(A)$ below is the one from Example 2.1 in [@myarx1] up to permutations of rows and columns and replacing the $4$’s by the $\infty$’s, which is not crucial for the argument given in [@myarx1].) \[exam1111\] Consider the symmetrized tropical matrix $$A=\begin{pmatrix} 0&\infty&\infty&0&2&1\\ \infty&0&\infty&2&0&2\\ \infty&\infty&0&2&1&0\\ 0&\infty&\infty&0&\infty&\infty\\ \infty&0&\infty&\infty&0&\infty\\ \infty&\infty&0&\infty&\infty&0 \end{pmatrix}$$ and its supertropical counterpart $$\Sigma(A)=\begin{pmatrix} 0^\gamma&2^\tau&1^\tau\\ 2^\tau&0^\gamma&2^\tau\\ 2^\tau&1^\tau&0^\gamma \end{pmatrix}$$ constructed as in Definition \[def11111\]. If the rows of $\Sigma(A)$ had indexes $1,2,3$ (from top to bottom) and its columns had indexes $4,5,6$ (from left to right), then the rows of $A$ are indexed with $1^1,2^1,3^1,1^2,2^2,3^2$, and the columns of $A$ with $1,2,3,4,5,6$. One can check it directly that the tropical rank and Kapranov rank of the matrix $\Sigma(A)$ above are $1$ and $2$, respectively; it is proven in [@myarx1] that the respective ranks of $A$ are $4$ and $5$. We can begin proving the main results of this section, which give a general relation between the ranks of $A$ and $\Sigma(A)$. \[equalKap\] Let $T$ be a matrix as in Definition \[def1111\]. If $|\mathbb{F}|\geqslant3$, then $$\operatorname{Kapranov\,\, rank\,\,}T=\operatorname{Kapranov\,\, rank\,\,}\Sigma(T)+|I|.$$ Let $\mathcal{L}$ be a lifting of $T$ with smallest possible rank $r$; row scalings allow us to assume that $\mathcal{L}(i^1|i)=\mathcal{L}(i^2|i)=1$ for all $i\in I$. Let $L'$ be the matrix obtained from $\mathcal{L}$ by subtracting, for any $i$, the $i^1$th row from $i^2$th row. The ranks of $\mathcal{L}$ and $L'$ are equal, and we have $$\label{eqLLLL}L'=\left(\begin{array}{c|c} \mathcal{I}&*\\\hline \mathcal{O}&L \end{array}\right),$$ where $\mathcal{I}$ and $\mathcal{O}$ are, respectively the unit and zero $|I|\times|I|$ matrices, $L$ is a lifting of $\Sigma(T)$, and $*$ stands for a matrix we need not specify. Therefore, the Kapranov rank of $\Sigma(T)$ is at most $r-|I|$. Conversely, consider a lifting $L$ of $\Sigma(T)$ with smallest possible rank $\rho$. We define the matrix $\mathcal{L}\in\F^{(I^1\cup I^2)\times(I\cup J)}$ as follows. 
For all $i\in I$, $j\in J$, $\hat{\iota}\in I\setminus\{i\}$, we set \(i) $\mathcal{L}(i^1|i)=\mathcal{L}(i^2|i)=1$ and $\mathcal{L}(i^1|\hat{\iota})=\mathcal{L}(i^2|\hat{\iota})=0$, \(ii) $\mathcal{L}(i^1|j)=\zeta t^s$, $\mathcal{L}(i^2|j)=L(i|j)+\zeta t^s$ if $T(i^1|j)=T(i^2|j)=s$, \(iii) $\mathcal{L}(i^1|j)=t^s$, $\mathcal{L}(i^2|j)=L(i|j)+t^s$ if $s=T(i^1|j)>T(i^2|j)$, \(iv) $\mathcal{L}(i^1|j)=t^s-L(i|j)$, $\mathcal{L}(i^2|j)=t^s$ if $T(i^1|j)<T(i^2|j)=s$. Since the ground field $\mathbb{F}$ contains more than two elements, we can avoid cancellation of leading (degree-$s$) terms in $L(i|j)+\zeta t^s$ by choosing an appropriate non-zero value of $\zeta$ in $\mathbb{F}$. As we see, the constructed matrix $\mathcal{L}$ is a lifting of $T$, and in order to compute its rank we subtract, as above, the $i^1$th row from $i^2$th row, for any $i$. We get the matrix $L'$ as in , so the rank of $\mathcal{L}$ equals $\rho+|I|$. \[equaltrop\] Let $T$ be a matrix as in Definition \[def1111\]. Then $$\operatorname{tropical\,\, rank\,\,}T=\operatorname{tropical\,\, rank\,\,}\Sigma(T)+|I|.$$ Denote by $I_0,J_0$ the sets of row and column indexes of a largest non-singular submatrix $C$ of $\Sigma(T)$. Denoting $|I|=n$ and $|I_0|=|J_0|=r$, we observe that the submatrix of $T$ formed by the rows with indexes in $I_0^2\cup I^1$ and columns with indexes in $I\cup J_0$ looks like (with upper left block having row indexes in $I_0^2\cup I_0^1$ and column indexes in $I\setminus I_0$) $$\label{eqTTTT} \left(\begin{array}{c|c} \mathcal{Z}_{2r\times(n-r)}&T'\\\hline \mathcal{U}_{n-r}&* \end{array}\right), %\begin{equation}\label{eqTTTT}\left(\begin{array}{c|c|c} %\mathcal{Z}_{r\times(n-r)}&\mathcal{U}_r&T_1\\\hline %\mathcal{Z}_{r\times(n-r)}&\mathcal{U}_r&T_2\\\hline %\mathcal{U}_{n-r}&*&* %\end{array}\right),$$ where $\mathcal{U}$ is the tropical unit matrix (the one with $0$’s on the diagonal and $\infty$’s everywhere else), $\mathcal{Z}$ is the all-$\infty$ matrix, and $T'$ is a symmetrized matrix such that $\Sigma(T')=C$. The permanent of  equals $\per(T')$, which in turn, according to Proposition \[statexp\], equals $\per(C)$. Since $C$ is non-singular, it has a tangible permanent, and so does the matrix , which is therefore non-singular as well. In particular, we get a ‘$\geqslant$’ inequality for the values in the formulation of the lemma. In order to prove the ‘$\leqslant$’ inequality, we use Lemma \[lemeas\] and observe that any of the largest tropically non-singular submatrices of $T$ can be reduced to the form , and the proof can be finalized as in the previous paragraph. Constructing a tropical matrix ============================== In this section, we explain how to construct a tropical matrix with small tropical rank and large Kapranov rank if we are given a $0-1$ matrix as below. \[defgood\] Numbers $(d,k,r,u)$ are said to be a *good* tuple if there exists an $d\times (kd-d)$ matrix $\mathcal{M}$ of zeros and ones such that \(1) at least $u$ entries of $\mathcal{M}$ are ones; \(2) any $\rho\times\rho$ submatrix of $\mathcal{M}$ contains a zero unless $\rho<r$. We enumerate the rows and columns of $\mathcal{M}$ by disjoint sets $I$ and $J$ and construct the tropical matrix $\Phi=\Phi(\mathcal{M})$ as follows. 
Its rows are indexed with $\{1,\ldots,k\}\times I$, its columns with $I\cup J$, and its entries are \(1) $\Phi(\alpha,i|i)=0$ and $\Phi(\alpha,i|\hat{\imath})=\infty$ if $\hat{\imath}\in I\setminus\{i\}$; \(2) $\Phi(\alpha,i|j)=0$ if $j\in J$ and $\mathcal{M}(i|j)=0$; \(3) $\Phi(\alpha,i|j)=a_{ij\alpha}$ if $j\in J$ and $\mathcal{M}(i|j)=1$, where $(a_{ij\alpha})$ are a family of numbers in $[1,1+1/(kd)]$ that are linearly independent over $\mathbb{Q}$. Notice that $\Phi$ is an $n\times n$ matrix with $n=kd$. \[lemkapb\] $\operatorname{Kapranov\,\, rank\,\,}\Phi\geqslant n-\sqrt{n^2-ku}$. Any lifting of $\Phi$ has $ku$ entries with degrees in $(a_{ij\alpha})$, and since these degrees are linearly independent over $\mathbb{Q}$, the corresponding entries should be algebraically independent over $\mathbb{F}$. It remains to note that any matrix of rank $\rho$ has transcendence degree at most $2n\rho-\rho^2$ and resolve the inequality $2n\rho-\rho^2\geqslant ku$ for $\rho$. \[lemtropb\] $\operatorname{Tropical\,\, rank\,\,}\Phi\leqslant d+kr$. Let $H$ be a square submatrix of $\Phi$ of size greater than $d+kr$. We need to check that $H$ is singular, that is, that the permanent of $H$ seen as a supertropical matrix is either $\infty$ or a ghost. By Dirichlet’s principle, there is a subset $\mathcal{I}\subset I$ of cardinality $\rho\geqslant r$ such that, for any $i\in\mathcal{I}$, there are two distinct pairs $(\alpha_i,i)$ and $(\beta_i,i)$ which appear to be row indexes of $H$. We denote by $H_0$ the submatrix of $\Phi$ formed by the rows with indexes in $\bigcup_{i\in\mathcal{I}}\{(\alpha_i,i),(\beta_i,i)\}$; we will be done if we manage to show that every $2\rho\times 2\rho$ submatrix of $H_0$ is singular. By Theorem \[equaltrop\], we need to show that every $\rho\times \rho$ submatrix of $\Sigma(H_0)$ is singular. Removing the columns of $\Sigma(H_0)$ consisting of $\infty$-entries, we get a matrix $\mathcal{H}\in\S^{\mathcal{I}\times J}$ such that \(1) $\mathcal{H}(i|j)=0^\gamma$ if $\mathcal{M}(i|j)=0$, and \(2) $\mathcal{H}(i|j)=a_{ij}^\tau$ with $a_{ij}\in[1,1+1/n]$ if $\mathcal{M}(i|j)=1$. Since every $\rho\times \rho$ submatrix of $\mathcal{M}$ contains a zero, the permanent of every $\rho\times \rho$ submatrix of $\mathcal{H}$ should have a summand $g^\gamma$ with $g\leqslant 0+(r-1)(1+1/n)<r$. Therefore, the products of tangible entries do not contribute to the permanent of any $r\times r$ submatrix of $\mathcal{H}$. Many authors (see [@CJR; @DSS]) consider tropical matrices with finite entries only. We note that the bounds as in Lemmas \[lemkapb\] and \[lemtropb\] will still hold if we replace every $\infty$ in the entries of $\Phi$ by $2$. In fact, the resulting matrix can be obtained as $D\odot\Phi$, where $D$ is the $n\times n$ matrix with $0$’s on the diagonal and $2$’s everywhere else. Of course, the tropical rank of $D\odot\Phi$ is at most that of $\Phi$ (see Theorem 9.4 in [@AGG]), and the proof of Lemma \[lemkapb\] reads equally well if we replace $\Phi$ by $D\odot\Phi$. Constructing the matrix $\mathcal{M}$ ===================================== In this section, we give a probabilistic construction of the matrix as in Definition \[defgood\] and finalize the proof of Theorem \[thrmainthis\]. \[lemgood\] Let $q\in(0,0.1)$ and $d\geqslant 2$. Then the numbers $$k=d,\mbox{$ $ $ $ $ $ $ $}r=4\ln d/q,\mbox{$ $ $ $ $ $ $ $}u=(1-q-d^{-1.5})(d^3-d^2)$$ are a good tuple in the sense of Definition \[defgood\]. 
Let $X$ be a random $d\times(d^2-d)$ matrix with independent entries each of which is either $0$ or $1$, and the probability of $0$ is $q$. According to Hoeffding’s inequality (see Theorem 1 in [@Hoeff]), the probability that the number of $1$-entries of $X$ does not exceed $u$ is at most $\exp(-2d^{-3}(d^3-d^2))<0.5$. Therefore, the condition (1) as in Definition \[defgood\] fails with probability less than $0.5$. We proceed with condition (2), whose negation means that $X$ has an $\lceil r\rceil\times\lceil r\rceil$ submatrix of all ones. The probability that this happens with any particular such submatrix is at most $(1-q)^{r^2}$, and the number of these submatrices does not exceed $(d^2-d)^{r+1}d^{r+1}$. Therefore, the condition (2) fails with probability at most $$d^{3r+3}(1-q)^{r^2}<e^{\frac{12(\ln d)^2}{q}+3\ln d+\frac{16(\ln d)^2\ln (1-q)}{q^2}}<e^{\frac{(\ln d)^2}{q}\left(15+16\frac{\ln (1-q)}{q}\right)},$$ which is also less than $0.5$ because $\ln (1-q)/q<-1$. Now we can complete the proof of Theorem \[thrmainthis\]. We define $d=\lfloor\sqrt{n}\rfloor$ and write $q=\left(\alpha-2n^{-0.25}\right)^2$. We can assume without loss of generality that the bound for the tropical rank is less than $n$ because otherwise the result is trivial. In particular, we have that $\alpha>2n^{-0.25}\sqrt{\ln n}$. The numbers $d,k,r,u$ as in Lemma \[lemgood\] allow us to construct a $d^2\times d^2$ matrix $\Phi$ satisfying the assumptions of Definition \[defgood\]; we complete $\Phi$ to an $n\times n$ matrix $\Phi_0$ by adding the copies of existing rows and columns. According to Lemma \[lemkapb\], the Kapranov rank of $\Phi_0$ is at least $$d^2-\sqrt{d^4-d^4(1-d^{-1})(1-q-d^{-1.5})}\geqslant d^2\left(1-\sqrt{q}-\sqrt{d^{-1}+d^{-1.5}}\right),$$ which is greater than or equal to $n(1-\sqrt{q}-2n^{-0.25})=n(1-\alpha)$. By Lemma \[lemtropb\], the tropical rank of $\Phi_0$ does not exceed $$\frac{4d\ln d}{q}+d\leqslant\frac{2.4\sqrt{n}\ln n}{q}\leqslant\frac{4\sqrt{n}\ln n}{\alpha^2}.$$ [99]{} Linear independence over tropical semirings and beyond, *Contemporary Mathematics* 495 (2009) 1–38. F. Babaee, J. Huh, A tropical approach to a generalized Hodge conjecture for positive currents, *Duke Math. J.* 166 (2017) 2749–2813. F. Baccelli, G. Cohen, G.J. Olsder, J.P. Quadrat, *Synchronization and Linearity*, Wiley, 1992. M. Chan, A. N. Jensen, E. Rubei, The 4x4 minors of a 5xn matrix are a tropical basis, *Linear Algebra Appl.* 435 (2011) 1598–1611. J. E. Cohen, U. G. Rothblum, Nonnegative ranks, decompositions, and factorizations of nonnegative matrices, *Linear Algebra Appl.* 190 (1993) 149–168. F. Cools, J. Draisma, S. Payne, E. Robeva, A tropical proof of the Brill-Noether theorem, *Adv. Math.* 230 (2012) 759-776. R. A. Cuninghame-Green, Minimax algebra, volume 166 of *Lecture Notes in Economics and Mathematical Systems*, Springer-Verlag, Berlin, 1979. M. Develin, F. Santos, B. Sturmfels,On the rank of a tropical matrix, in *Discrete and Computational Geometry* (E. Goodman, J. Pach and E. Welzl, eds.), MSRI Publications, Cambridge Univ. Press, 2005. S. Fiorini, S. Massar, S. Pokutta, H.R. Tiwary, R. de Wolf, Linear vs. semidefinite extended formulations: exponential separation and strong lower bounds, in *Proc. 44th Symposium on Theory of Computation*, ACM, 2012. B. Grünbaum, *Arrangements of hyperplanes*. Springer, New York, 2003. A. Guterman, Ya. Shitov, Rank functions of tropical matrices, *Linear Algebra Appl.* 498 (2016) 326–348. W. 
Hoeffding, Probability inequalities for sums of bounded random variables, *J. Am. Stat. Assoc.* 58 (1963) 13–30. P. Hrubeš, On the nonnegative rank of distance matrices, *Inform. Process. Lett.* 112 (2012) 457–461. Z. Izhakian, Tropical arithmetic and matrix algebra, *Commun. Algebra* 37 (2009) 1445–1468. Z. Izhakian, L. Rowen, Supertropical algebra, *Adv. Math.* 225 (2010) 2222–2286. Z. Izhakian, M. Knebusch, L. Rowen, Supertropical linear algebra, *Pacific J. Math.* 266 (2013) 43–75. Z. Izhakian, M. Knebusch, L. Rowen, Layered tropical mathematics, *J. Algebra* 416 (2014) 200–273. Z. Izhakian, A. Niv, L. Rowen, Supertropical $SL_n$, *Linear Multilinear A.* (2017). K. H. Kim, N. F. Roush, Kapranov rank vs. tropical rank, *Proc. Amer. Math. Soc.* 134 (2006) 2487–2494. V. N. Kolokoltsov, V. P. Maslov, *Idempotent analysis and applications*, Kluwer Academic Publishers, 1997. H. W. Kuhn, The Hungarian Method for the assignment problem, *Nav. Res. Logist. Q.* 2 (1955) 83–97. Z. Li, Y. Gao, M. Arav, F. Gong, W. Gao, F. J. Hall, H. van der Holst, Sign patterns with minimum rank 2 and upper bounds on minimum ranks, *Linear Multilinear A.* 61 (2013) 895–908. W. M. McEneaney, Max-plus methods for nonlinear control and estimation, Systems $\verb"&"$ Control: Foundations $\verb"&"$ Applications, Birkhäuser Boston Inc., Boston, MA, 2006. G. Mikhalkin, Enumerative tropical algebraic geometry in $\R^2$, *J. Amer. Math. Soc.* 18 (2005) 313–377. M. Plus. Linear systems in $(\max,+)$-algebra, in *Proceedings of the 29th Conference on Decision and Control*, Honolulu, 1990. P. L. Poplin, R. E. Hartwig, Determinantal identities over commutative semirings, *Linear Algebra Appl.* 387 (2004) 99–132. A. Padrol, J. Pfeifle, Polygons as Sections of Higher-Dimensional Polytopes, *Electron. J. Comb.* 22 (2015) 1.24. I. Simon, Limited Subsets of a Free Monoid, in *Proc. 19th Annual Symposium on Foundations of Computer Science*, Piscataway, N.J., Institute of Electrical and Electronics Engineers, 1978. Ya. Shitov, Example of a 6-by-6 Matrix with Different Tropical and Kapranov Ranks, preprint (2010) arXiv:1012.5507. Ya. Shitov, On the Kapranov ranks of tropical matrices, *Linear Algebra Appl.* 436 (2012) 3247–3253. Ya. Shitov, When do the r-by-r minors of a matrix form a tropical basis? *J. Combin. Theory A* 120 (2013) 1166–1201. Ya. Shitov, Mixed subdivisions and ranks of tropical matrices, *Proc. Amer. Math. Soc.* 142 (2014) 15–19. Ya. Shitov, An upper bound for nonnegative rank, *J. Comb. Theory A* 122 (2014) 126–132. Ya. Shitov, Tropical semimodules of dimension two, *St. Petersb. Math. J.* 26 (2015) 341–350. N. N. Vorobyev, Extremal algebra of positive matrices, *Elektron. Informationsverarbeitung und Kybernetik* 3 (1967) 39–71.
--- abstract: 'We introduce a simple protocol for adaptive quantum state tomography, which reduces the worst-case infidelity ($1-F({\hat{\rho}},\rho)$) between the estimate and the true state from $O(1/\sqrt{N})$ to $O(1/N)$. It uses a single adaptation step and just one extra measurement setting. In a linear optical qubit experiment, we demonstrate a full order of magnitude reduction in infidelity (from $0.1\%$ to $0.01\%$) for a modest number of samples ($N\approx 3\times10^4$).' author: - 'D.H. Mahler' - 'Lee A. Rozema' - Ardavan Darabi - Christopher Ferrie - 'Robin Blume-Kohout' - 'A.M. Steinberg' bibliography: - '../Abib.bib' title: Adaptive quantum state tomography improves accuracy quadratically --- Quantum information processing requires reliable, repeatable preparation and transformation of quantum states. *Quantum state tomography* is used to identify the density matrix $\rho$ that was prepared by such a process. No finite ensemble of $N$ samples is sufficient to uniquely identify $\rho$, so we *estimate* it, reporting either a single state ${\hat{\rho}}$ that is “close” to $\rho$ with high probability [@HradilPRA97; @ParisBook04; @RBKNJP10; @RBKPRL10; @GrossPRL10], or a confidence region of nonzero radius that contains $\rho$ with high probability [@ChristandlPRL12; @RBK12]. Both approaches must accept some inaccuracy (the discrepancy between ${\hat{\rho}}$ and $\rho$) or imprecision (the diameter of the confidence region). The universal goal of state tomography is to minimize this discrepancy, which has been quantified with various metrics (e.g., trace norm, fidelity, relative entropy, etc.). In this paper, we focus on the particularly well-motivated *quantum infidelity*, $$1-F({\hat{\rho}},\rho)=1-{\mathrm{Tr}}\left(\sqrt{\sqrt{\rho}{\hat{\rho}}\sqrt{\rho}}\right)^2,$$ and show that as $N\rightarrow\infty$, *adaptive* tomography reduces expected infidelity from $O(1/\sqrt{N})$ to $O(1/N)$. Unlike alternative metrics, $1-F({\hat{\rho}},\rho)$ quantifies an important operational quantity: *how many copies are required to reliably distinguish ${\hat{\rho}}$ from $\rho$?*. Without doing justice to the rich body of research behind this simple statement (e.g., [@WoottersPRD81; @HelstromBook; @FuchsThesis; @FuchsIEEE99; @CalsamigliaPRA08; @AudenaertPRL07]…), we summarize as follows. The discrepancy between ${\hat{\rho}}$ and $\rho$ given a *single sample* is well described by the trace distance, $|{\hat{\rho}}-\rho|_1$. But tomography (i) requires $N\gg1$ samples; (ii) is used to predict experiments on $N\gg1$ samples; and (iii) yields errors that cannot be detected without $N\gg1$ samples. So the operationally relevant quantity is $\left|{\hat{\rho}}^{\otimes N} - \rho^{\otimes N}\right|_1$, which for $N\gg1$ behaves as $1-e^{-D({\hat{\rho}},\rho)N}$. The exponent $D$ is the *quantum Chernoff bound* [@AudenaertPRL07], and $N\approx D\log(1/\epsilon)$ samples are necessary and sufficient to distinguish $\rho$ from ${\hat{\rho}}$ with confidence $1-\epsilon$. $D$ is tightly bounded by the logarithm of the fidelity (see [@CalsamigliaPRA08], Eq. 28); when $1-F({\hat{\rho}},\rho) \ll 1$ (which should always be true in tomography!), $-\log(F)\approx 1-F$ and $$\frac{1-F}{2} \leq D \leq 1-F. \label{eq:Chernoffboundbound}$$ Thus, $1-F$ really does (almost uniquely) quantify tomographic inaccuracy; $N \approx [1-F({\hat{\rho}},\rho)]^{-1}$ samples are (up to a factor of 2) necessary and sufficient [^1] to falsify ${\hat{\rho}}$. 
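For concreteness, the infidelity used throughout can be evaluated numerically in a few lines; the sketch below (ours, in Python, using `scipy.linalg.sqrtm`) implements the definition directly.

```python
import numpy as np
from scipy.linalg import sqrtm

def infidelity(rho_hat, rho):
    """1 - F, with F = (Tr sqrt( sqrt(rho) rho_hat sqrt(rho) ))^2."""
    s = sqrtm(rho)
    F = np.real(np.trace(sqrtm(s @ rho_hat @ s))) ** 2
    return 1.0 - F

# Example: a pure state vs. a slightly depolarized copy of it
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
rho_hat = 0.99 * rho + 0.01 * np.eye(2) / 2
print(infidelity(rho_hat, rho))   # ~0.005
```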
In contrast, Hilbert-Schmidt- and trace-distance have no such $N$-sample meaning, and give wildly misleading metrics of tomographic error. We show that standard tomography with static measurements can’t beat $1-F = O(1/\sqrt{N})$ as $N\to\infty$ for a large and important class of states, then introduce and explain a simple adaptive protocol that achieves $1-F = O(1/N)$ for every state. Finally, we demonstrate this effect in a linear optical experiment, achieving a 10-fold improvement in infidelity (from 0.1% to 0.01% with $N=3\times 10^4$ measurements) over standard tomography. We believe this protocol will have wide application, particularly in situations where the rate of data collection is small, such as post-selected optical systems (e.g. [@JWP2011], where data were collected at approximately $9$ measurements per hour). Adaptivity has been proposed in various contexts. Single-step adaptive tomography was first analyzed by [@GillPRA00], then refined in [@BaganPRA06; @BaganPRL06; @HuszarPRA12]. A scheme similar to ours (and its efficacy for *pure* states) was analyzed in [@RehacekPRA04]. Ref. [@OkamotoPRL12] recently treated state estimation as parameter estimation, obtaining results complementary, but largely orthogonal, to those reported here. Here, we present both an experimental demonstration and simple, self-contained derivation of: (1) why quantum fidelity is significant; (2) why adaptive tomography achieves far better infidelity; and (3) how the adaptation should be done. We optimize *worst-case* infidelity over all states, not just pure states [@RehacekPRA04] or specific ensembles of mixed states (e.g. Ref. [@BaganPRL06] achieved high average fidelity, but low fidelity on nearly-pure states). Adaptive tomography =================== Static tomography uses data from a fixed set of measurements. Different measurements yield subtly different tomographic accuracy [@DeBurghPRA08], but to leading order, “good” protocols for single-qubit tomography provide equal information [@ScottJPA06] about every component of the unknown density matrix $\rho$, $$\rho = \frac12\left(\Id + {\left\langle\sigma_x\right\rangle}\sigma_x + {\left\langle\sigma_y\right\rangle}\sigma_y + {\left\langle\sigma_z\right\rangle}\sigma_z\right). \label{eq:xyz}$$ The canonical example involves measuring the three Pauli operators ($\sigma_x$, $\sigma_y$, $\sigma_z$). This minimizes the variance of the estimator ${\hat{\rho}}$ – but not the expected infidelity, for two reasons. First, *the variance of the estimate ${\hat{\rho}}$ depends also on $\rho$ itself*. Consider the linear inversion estimator ${\hat{\rho}}_{\mathrm{lin}}$, defined by estimating ${\left\langle\sigma_z\right\rangle} = \frac{n_\uparrow - n_\downarrow}{n_\uparrow + n_\downarrow}$ (and similarly for ${\left\langle\sigma_x\right\rangle}$ and ${\left\langle\sigma_y\right\rangle}$), and substituting into Eq. \[eq:xyz\]. Each measurement behaves like $N/3$ flips of a coin with bias $p_k = \frac12(1+{\left\langle\sigma_k\right\rangle})$, and yields $$\begin{aligned} \hat{p_k} &=& p_k \pm \sqrt{\frac{3}{N}}\sqrt{p_k(1-p_k)} \label{eq:pvariance}\\ \Rightarrow {\left\langle\sigma_k\right\rangle}_{\mathrm{estimated}} &=& {\left\langle\sigma_k\right\rangle}_{\mathrm{true}} \pm \sqrt{\frac{3}{2N}}\sqrt{1-{\left\langle\sigma_k\right\rangle}^2}. \label{eq:variance}\end{aligned}$$ When ${\left\langle\sigma_k\right\rangle}\approx 0$, its estimate has a large variance – but when ${\left\langle\sigma_k\right\rangle}\approx\pm1$, the variance is very small. 
As a result, the variance of ${\hat{\rho}}$ around $\rho$ is anisotropic and $\rho$-dependent (see Fig. \[fig1\]a). Second, *the dependence of infidelity on the error, $\Delta = {\hat{\rho}}-\rho$, also varies with $\rho$*. Infidelity is hypersensitive to misestimation of small eigenvalues. A Taylor expansion of $1-F({\hat{\rho}},\rho)$ yields (in terms of $\rho$’s eigenbasis $\{{\left| i \right\rangle}\}$), $$1-F(\rho,\rho+\epsilon\Delta) = \frac14\sum_{i,j}{\frac{{{\left\langle i \right|}\Delta{\left| j \right\rangle}}^2}{{{\left\langle i \right|}\rho{\left| i \right\rangle}}+{{\left\langle j \right|}\rho{\left| j \right\rangle}}}} + O(\Delta^3). \label{eq:Taylor}$$ Infidelity is quadratic in $\Delta$ – except that as an eigenvalue ${{\left\langle i \right|}\rho{\left| i \right\rangle}}$ approaches $0$, its sensitivity to ${{\left\langle i \right|}\Delta{\left| i \right\rangle}}$ diverges; $1-F$ becomes *linear* [^2] in $\Delta$: $$1-F(\rho,\rho+\epsilon\Delta) = \epsilon\sum_{i:\ {{\left\langle i \right|}\rho{\left| i \right\rangle}}=0}{{{\left\langle i \right|}\Delta{\left| i \right\rangle}}} + O(\Delta^2).$$ To minimize infidelity, we must accurately estimate the small eigenvalues of $\rho$, particularly those that are (or appear to be) zero. For states deep within the Bloch sphere, static tomography achieves infidelity of $O(1/N)$ [@GillPRA00; @SugiyamaNJP12]. Typical errors scale as $|\Delta| = O(1/\sqrt{N})$ (Eq. \[eq:variance\]), and infidelity scales as $1-F = O(|\Delta|^2)$. But for states with eigenvalues less than $O(1/\sqrt{N})$, infidelity scales as $O(1/\sqrt{N})$. Quantum information processing relies on nearly-pure states, so this poor scaling is significant. ![Two features of qubit tomography with Pauli measurements (shown for an equatorial cross-section of the Bloch sphere): **(a)** The distribution or “scatter” of any unbiased estimator ${\hat{\rho}}$ (depicted by dull red ellipses) varies with the true state $\rho$ (black stars at the center of ellipses); **(b)** The expected infidelity between ${\hat{\rho}}$ and $\rho$ as a function of $\rho$. Within the Bloch sphere, the expected infidelity is $O\left(1/N\right)$. But in a thin shell of nearly-pure states (of thickness $O\left(1/\sqrt{N}\right)$), it scales as $O\left(1/\sqrt{N}\right)$ – *except* when $\rho$ is aligned with a measurement axis (Pauli $X$, $Y$, or $Z$).\[fig1\] ](cartoon){width="\FCW"} To achieve better performance, we observe that if $\rho$ is diagonal in one of the measured bases (e.g., $\sigma_z$), then infidelity *always* scales as $O(1/N)$. The increased sensitivity of $1-F$ to error in small eigenvalues (Eq. \[eq:Taylor\]) is precisely canceled by the reduced inaccuracy that accompanies a highly biased measurement-outcome distribution (Eq. \[eq:variance\]). This suggests an obvious (if naïve) solution: we should simply ensure that we measure the diagonal basis of $\rho$! This is unreasonable – knowing $\rho$ would render tomography pointless. But we *can* perform standard tomography on $N_0<N$ samples, get a preliminary estimate ${\hat{\rho}}_0$, and measure the remaining $N-N_0$ samples so that one basis diagonalizes ${\hat{\rho}}_0$. This measurement will not diagonalize $\rho$ exactly, but if $N_0\gg1$ it will be fairly close. The angle $\theta$ between the eigenbases of $\rho$ and ${\hat{\rho}}_0$ is $O(|\Delta|) = O(1/\sqrt{N_0})$. 
This implies that if $\rho$ has an eigenvector ${\left| \psi_k \right\rangle}$ with eigenvalue $\lambda_k=0$, then the corresponding measurement outcome ${| \phi_k\rangle\!\langle \phi_k |}$ will have probability at most $p_k = \sin^2\theta \approx \theta^2 = O(1/N_0)$. Since we make this measurement on $O(N-N_0)$ copies [^3], the final error in the estimated $\hat{p_k}$ (and therefore in the eigenvalue $\lambda_k$) is $O(1/\sqrt{N_0(N-N_0)})$. So using a constant fraction $N_0 = \alpha N$ of the available samples for the preliminary estimation should yield $O(1/N)$ infidelity for *all* states. A similar protocol was suggested in Ref. [@BaganPRL06], but that analysis concluded that $N_0\propto N^p$ for $p\geq \frac23$ would be sufficient. This works for *average* infidelity over a particular ensemble, but yields $1-F = O(N^{-5/6})$ for almost all nearly-pure states. Simulation results ================== We performed numerical simulations of single-qubit tomography using four different protocols: (1) standard fixed-measurement tomography; (2) adaptive tomography with $N_0 = N^{2/3}$, as proposed in [@BaganPRL06]; (3) adaptive tomography with $N_0 = \alpha N$ (for a range of $\alpha$); and (4) “known basis” tomography, wherein we cheat by aligning our measurement frame with $\rho$’s eigenbasis (for all $N$ samples). We simulated many true states $\rho$, but present a representative case: a pure state with $(\langle\sigma_x\rangle,\langle\sigma_y\rangle,\langle\sigma_z\rangle)=(0.5,1/\sqrt{2},0.5)$, $$\label{eq:diagstate} {\left| \nearrow \right\rangle} = \frac{1}{2}\left(\begin{array}{c} \sqrt{3} \\ \frac{1}{\sqrt{3}}-\frac{2i}{\sqrt{6}}\end{array} \right)$$ Our results are not particularly sensitive to the exact estimator used; we used maximum-likelihood estimation (MLE) with a quadratic approximation to the negative loglikelihood function: $${l}(\rho)=-\log{\mathcal{L}}(\rho) \approx \sum^{3}_{k=1}\frac{N_k({\mathrm{Tr}}[\rho E_k]-f_k)^2}{f_k(1-f_k)},$$ where $f_k = n_k/N_k$ are the observed frequencies of the $+1$ eigenvectors of the three Pauli operators $\sigma_k$, $E_k$ is the corresponding projector, and $N_k$ is the number of samples on which $\sigma_k$ was measured. Convex optimization (in MATLAB [@yalmip]) was used to find ${{\hat{\rho}}_\mathrm{MLE}}$. Results were averaged over many (typically 150) randomly generated measurement records. Figure \[fig2\] shows average infidelity versus $N$. We fit these simulated data to power laws of the form $1-F = \beta N^p$, and found $p=-0.513 \pm 0.006$ (for static tomography), $p=-0.868 \pm 0.008$ (for adaptive tomography with $N_0=N^{2/3}$), $p=-0.980 \pm 0.006$ (for adaptive tomography with $N_0=0.5N$), and $p=-0.993 \pm 0.09$ (for known-basis tomography). These results are not significantly different [^4] from the predictions of the simple theory ($p=-\frac12, -\frac56, -1$, and $-1$, respectively). The borderline-significant discrepancy is, we believe, due to boundary effects (${{\hat{\rho}}_\mathrm{MLE}}$ is constrained to be positive). We also varied $\alpha=N_0/N$ (Fig. \[fig2\], inset) and found that $\alpha=\frac12$ optimizes the prefactor ($\beta$). ![Average infidelity $1-F({\hat{\rho}},\rho)$ vs. sample size $N$ for Monte Carlo simulations of four different tomographic protocols: standard tomography (black), the procedure proposed in [@BaganPRL06] using $N_0=N^{2/3}$ (red), our procedure using $N_0 = N/2$ (blue), and “known basis” tomography (green). 
Both adaptive procedures clearly outperform static tomography, but our procedure also outperforms the $N_0=N^{2/3}$ approach, and matches the asymptotic scaling of known-basis tomography. The inset shows the dependence of the prefactor ($\beta$) on $\alpha=N_0/N$. \[fig2\]](Simulations-with-inset){width="\FCW"} Experimental results ==================== We implemented our protocol experimentally in linear optics (Fig. \[fig3\]). Photon pairs were created using type-1 spontaneous parametric downconversion in a nonlinear crystal. One of these photons was sent immediately to a single photon counting module (SPCM) to act as a trigger. The second photon was sent through a Glan-Thomson polarizer to prepare it in a state of very pure linear polarization. Computer-controlled waveplates were first used to prepare the polarization state of the photon, and subsequently used in tandem with a polarization beamsplitter to project onto any state on the Bloch sphere. ![\[fig3\] Spontaneous parametric downconversion is performed by pumping a nonlinear BBO crystal with linearly polarized light. One photon is sent directly to a detector as a trigger. A rotation using a quarter-half waveplate combination prepares the other photon in any desired polarization state. Finally, a projective measurement onto any axis of the Bloch sphere is performed by a quarter-half waveplate combination followed by a polarizing beamsplitter. The measurement waveplates are connected to a computer to enable adaptation.](ExperimentC.pdf) We compared static and adaptive tomography protocols on a measured state given (in the H/V basis) by $$\rho = \left(\begin{array}{cc} 0.7711 & 0.2010 + 0.3624i \\ 0.2010 - 0.3624i & 0.2289 \end{array}\right),$$ which has purity ${\mathrm{Tr}}(\rho^2) = 0.991$ and fidelity $F=0.992$ with ${\left| \nearrow \right\rangle}$ (see Eq. \[eq:diagstate\]). We identified $\rho$ to within an uncertainty which is at most $O(1/\sqrt{\tilde{N}})$ using one very long ($\tilde{N}=10^7$) static tomography experiment, whose overwhelming size ensures accuracy sufficient to calibrate the other experiments, all of which involve $N\leq3\times 10^4$ photons. Our “standard” (static) protocol involved repeatedly preparing our target state, collecting $N/3$ photons at each of the three measurement settings corresponding to $\sigma_x$, $\sigma_y$, and $\sigma_z$, and computing ${{\hat{\rho}}_\mathrm{MLE}}$ as outlined in [@JamesPRA01]. Each data point in Fig. \[fig4\]a represents an average over many ($\sim\!150$) repetitions. To do adaptive tomography, we measured $N_0=N/2$ photons, used the data to generate an ML estimate ${\hat{\rho}}_0$, then rotated the measurement bases so that one diagonalized ${\hat{\rho}}_0$. So, if the preliminary estimate is $${\hat{\rho}}_0 = \lambda_1{| \psi_1\rangle\!\langle \psi_1 |} + \lambda_2{| \psi_2\rangle\!\langle \psi_2 |},$$ we define $\vert \psi_{3/4} \rangle = (1/\sqrt{2})(\vert\psi_1\rangle \pm \vert \psi_2\rangle)$ and $\vert \psi_{5/6}\rangle = (1/\sqrt{2})(\vert\psi_1\rangle \pm i\vert \psi_2\rangle)$, and then measure the bases $\{\{\vert\psi_1\rangle,\vert \psi_2\rangle\},\{\vert\psi_3\rangle,\vert \psi_4\rangle\},\{\vert\psi_5\rangle,\vert \psi_6\rangle\}\}$. We measured the remaining $N-N_0$ photons in these new bases and constructed a final ML estimate using the data from both phases. We fit a power law ($1-F = \beta N^p$) to the average infidelity of each protocol (Fig. \[fig4\]a), and found $p=-0.51 \pm 0.02$ for standard tomography, $p=-0.71 \pm 0.04$ for the procedure of Ref. 
[@BaganPRL06], and $p=-0.90 \pm 0.04$ for our adaptive procedure. Our data generally match the theory; adaptive tomography outperforms standard tomography by an order of magnitude even for modest ($\sim10^4$) $N$. Experiments that achieve very low infidelities ($\sim10^{-4}$) show small but statistically significant deviations from theory, which we believe can be explained by waveplate misalignment – fluctuations on the order of $10^{-3}$ radians are sufficient to reproduce the observed deviations in simulations. For a detailed discussion of systematic error and how it affects our results, please see the supplementary material. ![Experimental data: a) The average infidelity $1-F({\hat{\rho}},\rho)$ for the three tomographic protocols shown in Fig. \[fig2\] vs. the number of samples $N$. Each average is over 150 different realizations of the experiment. b) Average infidelity $1-F({\hat{\rho}},\rho)$ for standard tomography (black) and *reduced* adaptive tomography (blue) is plotted versus $N$. Each average is over 200 different realizations of the experiment; error bars are the standard deviation of the mean of these samples.\[fig4\] ](AllData.png){width="\FCW"} There is an even simpler adaptive procedure. After obtaining a preliminary estimate ${\hat{\rho}}_0$, we measured *all* of the remaining $N/2$ samples in the diagonal basis of ${\hat{\rho}}_0$, neglecting the second and third bases presented in the previous section’s protocol. This *reduced adaptive tomography* procedure requires just one extra measurement setting (full adaptive tomography requires three), but achieves the same $O(\frac{1}{N})$ infidelity (Fig. \[fig4\]b). The best fits to the exponent $p$ in $1-F =\beta N^p$ are $p=-0.51 \pm 0.02$ for standard tomography and $p=-0.88 \pm 0.05$ for reduced adaptive tomography (not significantly different from the results shown in Fig. \[fig4\]a). In higher dimensional systems, reduced adaptive tomography should provide even greater efficiency advantages. Discussion ========== We demonstrated two easily implemented adaptive tomography procedures that achieve $1-F({\hat{\rho}},\rho) = O(1/N)$ for *every* qubit state. In contrast, any static tomography protocol will yield infidelity $O(1/\sqrt{N})$ for most nearly-pure states. Our simplest procedure requires only one more measurement setting than standard tomography. We see almost no reason not to use reduced adaptive tomography in future experiments. Previous work [@BaganPRL06] optimized average fidelity over Bures measure, a very respectable choice [@HubnerPLA92; @PetzJMP96; @ZyczkowskiPRA05]. Unfortunately, the “hard-to-estimate” states lie in a thin shell at the surface of the Bloch sphere, whose Bures measure vanishes as $N\to\infty$. So although the scheme with $N_0\propto N^{2/3}$ proposed in [@BaganPRL06] achieves Bures-average infidelity $O(1/N)$, it achieves only $O(1/N^{5/6})$ infidelity for nearly all of the (important) nearly-pure states [^5]. The $O(1/N)$ infidelity scaling achieved by our scheme is optimal, but the constant can surely be improved – i.e., if our scheme has asymptotic error $\alpha/N$, a more sophisticated scheme can achieve $\alpha'/N$ with $\alpha'<\alpha$. The absolutely optimal protocol requires joint measurements on all $N$ samples [@MassarPRL95], and will outperform any local measurement. 
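To make the protocols above concrete, here is a minimal Monte Carlo sketch (our own illustration in Python, not the MATLAB/MLE analysis used for the figures): it uses simple linear inversion instead of MLE, keeps only the adapted-frame data in the final adaptive estimate, and evaluates fidelity with the qubit closed form $F=\mathrm{Tr}(\rho\sigma)+2\sqrt{\det\rho\,\det\sigma}$; the helper names and the crude projection back into the Bloch ball are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [X, Y, Z]

def bloch_to_rho(r):
    return 0.5 * (I2 + sum(ri * P for ri, P in zip(r, PAULIS)))

def fidelity(rho, sigma):
    # closed form for qubits: F = Tr(rho sigma) + 2 sqrt(det rho det sigma)
    return float(np.real(np.trace(rho @ sigma)
                         + 2 * np.sqrt(np.linalg.det(rho) * np.linalg.det(sigma))))

def estimate(rho, frame, n_per_axis):
    """Linear-inversion Bloch-vector estimate, measuring along each row of `frame`."""
    r_est = np.zeros(3)
    for u in frame:
        proj = 0.5 * (I2 + sum(ui * P for ui, P in zip(u, PAULIS)))
        p = np.clip(np.real(np.trace(rho @ proj)), 0.0, 1.0)
        k = rng.binomial(n_per_axis, p)
        r_est += (2.0 * k / n_per_axis - 1.0) * np.asarray(u)
    if np.linalg.norm(r_est) > 1:          # crude projection back into the Bloch ball
        r_est /= np.linalg.norm(r_est)
    return r_est

def frame_from(axis):
    """Orthonormal frame whose last row is `axis`, so one basis diagonalizes the estimate."""
    a = axis / np.linalg.norm(axis)
    v = np.array([1.0, 0.0, 0.0]) if abs(a[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    b = np.cross(a, v); b /= np.linalg.norm(b)
    return np.array([b, np.cross(a, b), a])

rho = bloch_to_rho([0.5, 1 / np.sqrt(2), 0.5])   # Bloch vector of the target state
N = 30000

r_static = estimate(rho, np.eye(3), N // 3)      # static: N/3 samples per Pauli axis
r0 = estimate(rho, np.eye(3), N // 6)            # stage 1: N/2 samples in the Pauli frame
r_adapt = estimate(rho, frame_from(r0), N // 6)  # stage 2: N/2 samples in the adapted frame

print("static   1-F:", 1 - fidelity(bloch_to_rho(r_static), rho))
print("adaptive 1-F:", 1 - fidelity(bloch_to_rho(r_adapt), rho))
```

On typical runs this reproduces the qualitative behaviour described above: the adapted-frame estimate has a markedly smaller infidelity than the static one, even though this simplified version discards the stage-one data rather than pooling everything into a single ML fit.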
There is undoubtedly some marginal benefit to adapting more than once, but we have shown that a single adaptation is sufficient to achieve $O(1/N)$ scaling. DHM, LAR, AD, and AMS thank NSERC and CIFAR for support, and Alan Stummer for designing the coincidence circuit. CF was supported in part by NSF Grant Nos. PHY-1212445 and PHY-1005540 and an NSERC PDF. RBK was supported by the LDRD program at Sandia National Laboratories, a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000. Supplementary Material ====================== In our paper, we attribute certain properties of our experimental data to “systematic errors”. The purpose of this Supplement is to provide more detail on the role that systematic errors play in our experimental results, and their interplay with static and adaptive tomography. “Systematic errors” is a broad term, incorporating almost everything that can go wrong with an experiment, so we consider several forms of it. We begin by briefly discussing *frame misalignment*, where adaptive tomography yields no advantage, but almost nothing else can either. We then consider some systematic errors in measurement that can be detected and mitigated, and demonstrate through simulations that they affect static tomography and adaptive tomography differently. Using a one-parameter fit and a model of our experiment, we show that waveplate-alignment errors of around $1.5\times10^{-3}$ radians reproduce our experimental results remarkably well. To wrap up, we examine the asymptotic scaling of tomographic infidelity for three different models of systematic error, and conclude that adaptive tomography mitigates these forms of systematic error much better than static tomography can. **Frame Misalignment**: The most systematic of errors is a fixed misalignment of reference frames. In our linear optics experiment, where a set of waveplates and a polarizing beamsplitter are used to measure the polarization state of light, this means that all the optical elements are misaligned by the same amount. Varying tomographic strategies can have no effect on this sort of error. In fact, this kind of frame misalignment cannot be detected at all within the experiment. It is equivalent to a change of gauge, has no operational consequences in this context, and is not interesting. Instead, let us consider some errors that, while less “systematic” than frame misalignment, are also more detectable – and therefore potentially sensitive to different tomographic strategies. ![ Average infidelity vs. sample size ($N$) for simulations with systematic errors (of Model 1 type; see text) on the order of $E=10^{-2}$. The infidelity decreases with increasing $N$ up to a point, after which it flattens out after hitting a ’noise floor’. The noise floor occurs at a lower average infidelity for adaptive tomography than for static tomography. \[fig11\]](point5degrees){width="\FCW"} **Waveplate Misalignment**: A very important source of errors for our experiment is the alignment angle of the waveplates that measure photon polarization states. Our experiment’s reference frame is defined by the polarizing beam-splitter used to make the final projective measurement. To change the basis of the measurement, waveplates are mechanically rotated by a motor with good but finite accuracy. 
Every time we change the measurement basis, the motor’s finite accuracy causes a slight misalignment of the waveplates – and the ensuing photodetections correspond to a measurement of a basis slightly different from the one we intended to measure. Our terminology for discussing these errors is as follows. We performed (and, in this Supplement, we simulate) a large number of *experiments*. A single experiment (or *experimental run*) comprises the production of $N$ identically prepared photons. Within an experiment, we implement several (3 to 6) *measurement settings*. Each measurement setting corresponds to (i) adjusting the waveplate, then (ii) measuring a large number of photons without any adjustments. In static tomography, a single experiment includes three measurement settings (projections onto the X, Y, and Z axes of the Bloch sphere), in each of which $N/3$ samples are measured. In adaptive tomography, a single experiment includes six measurement settings (the same initial set of three, and then three more in a rotated frame), each applied to $N/6$ states. We consider three different models of systematic error in waveplate alignment. 1. **Model 1**. Each time a waveplate (whose purpose is to make a measurement) is moved to a new angle $\theta$, it ends up instead aligned at angle $\theta + \delta\theta$, where $\delta\theta$ is a Gaussian random variable with zero mean and standard deviation $E$. Thus, in each experimental run, each measurement setting is misaligned by an independent random angle. This angle persists over many samples in the same experimental run, but not across multiple experimental runs. 2. **Model 2**. Each time a waveplate is moved, it misses its target $\theta$ by a random angle $\delta\theta$ that is fixed for each *experiment*, rather than for each measurement setting within the experiment. This model is mathematically equivalent to (and can be taken to represent) a small misalignment of the polarizing beam splitter. We take $\delta\theta$, which is fixed for each individual experiment, to be a Gaussian random variable with zero mean and standard deviation $E$. 3. **Model 3**. The waveplates are misaligned by an angle $\delta\theta$ that is fixed for each experiment (as in Model 2), but each experiment has the same fixed misalignment. In this model, $\delta\theta = E$ is not a random variable. We believe that Model 1 best represents our experiment. The waveplate motors used in our experiment have a finite precision, and every time we change their angle, they return to a factory-set “home” position before realigning (which eliminates or at least minimizes correlation between successive alignment errors). Thus, the waveplate angle picks up a different random error each time it is moved to a new measurement setting. We simulated the effect of Models 1-3 on adaptive and static tomography. In each simulation, 200 independent experimental runs were generated (each involving at least 3 measurement settings, with many identically prepared photons measured at each setting). We averaged the tomographic infidelity of these 200 runs to characterize the effect of random waveplate errors in each model. ![Average infidelity vs. sample size ($N$) for simulations of adaptive tomography with systematic errors (of Model 1 type; see text) on the order of $E=10^{-3}$. Also plotted is the region over which experimental data was taken (see main body of text) and a line of best fit for this region. 
\[fig22\]](point15degrees){width="\FCW"} **Results:** Figure \[fig11\] shows error (average infidelity) versus sample size ($N$) for Model 1 with $E=0.5$ degrees ($\sim 10^{-2}$ radians). As the sample size increases, statistical errors decrease, and so average infidelity decreases. However, the error reaches a clear noise floor as systematic errors begin to dominate. This floor is higher for standard tomography than for adaptive tomography. We conclude that adaptive tomography is less sensitive than standard tomography to systematic errors. Since the alignment errors vary randomly from experiment to experiment, an astute experimentalist might achieve higher accuracy by repeating each measurement setting many ($M$) times, resetting the waveplate each time. This would work, reducing the infidelity by a factor of $1/\sqrt{M}$, but it adds significantly to the experimental difficulty and complexity. For example, if the waveplates have a precision of $\sim0.5$ degrees, then when these systematic errors dominate over statistical errors (see Figure \[fig11\]), the infidelity of static tomography saturates at $10^{-2}$, while for adaptive tomography it saturates at $10^{-3}$. Achieving the same $10^{-3}$ accuracy with static tomography would require repeating each measurement $M=100$ times. Or, the experimentalist could just use adaptive tomography, and achieve it with only a single extra waveplate setting. ![**Location of the noise floor ($1-F$) for Models 1,2,3 (from top to bottom).** For each of the three Models, we plot the average infidelity as $N\to\infty$ (to ensure that systematic errors dominate) vs. $E$ (the magnitude of systematic error). Data points are results of simulation, and lines are lines of best fit. \[fig33\] ](many_noises "fig:"){width="\FCW"} ![**Location of the noise floor ($1-F$) for Models 1,2,3 (from top to bottom).** For each of the three Models, we plot the average infidelity as $N\to\infty$ (to ensure that systematic errors dominate) vs. $E$ (the magnitude of systematic error). Data points are results of simulation, and lines are lines of best fit. \[fig33\] ](OneNoisePerExp "fig:"){width="\FCW"} ![**Location of the noise floor ($1-F$) for Models 1,2,3 (from top to bottom).** For each of the three Models, we plot the average infidelity as $N\to\infty$ (to ensure that systematic errors dominate) vs. $E$ (the magnitude of systematic error). Data points are results of simulation, and lines are lines of best fit. \[fig33\] ](OneNoisePerExpDeterministic "fig:"){width="\FCW"} We then performed several simulations of Model 1, in which we varied $E$ (the magnitude of systematic error). For each value of $E$, we fit a line to the $1-F$ vs. $N$ curve *over the same range of $N$ that we observed in our experiment*. This is shown in Figure \[fig22\]. A simulation with $E=0.15$ degrees yielded results almost indistinguishable from our experimental data. In the experimentally observed region, the line of best fit has a slope of $-0.895 \pm 0.023$, $$1-F \propto N^{-0.895 \pm 0.023},$$ which matches our experimental data very well. Finally, we investigated the value of the noise floor for Models 1-3 (Figure \[fig33\]). Each plot in Figure \[fig33\] shows $1-F$ (average infidelity) vs. $E$ (magnitude of systematic errors) on a log-log plot, for both standard and adaptive tomography. We examined sufficiently high $N$ to guarantee that systematic errors dominate. 1. 
For Model 1, the line of best fit (on a log-log plot) for standard tomography has a slope of $1.03 \pm 0.01$ and the line of best fit for adaptive tomography has a slope of $2.00 \pm 0.01$. 2. For Model 2, the slopes of the lines of best fit are $1.01 \pm 0.02$ and $1.91\pm 0.02$. 3. For Model 3, the slopes of the lines of best fit are $1.18\pm0.01$ and $1.99\pm0.01$. We conclude that in all three models of systematic error that we considered here, it’s fair to say that average infidelity scales *linearly* with $E$ for standard tomography, and *quadratically* with $E$ for adaptive tomography. Adaptive tomography is substantially more robust to systematic errors than standard tomography – not just by a constant factor, but qualitatively so. **Conclusion:** We have shown that for three reasonable models of systematic errors, the average infidelity of adaptive tomography scales with $E^2$ and the average infidelity of static tomography scales with $E$, where $E$ is the magnitude of these errors. Infidelity is very sensitive to spectral errors (i.e., changes in the eigenvalues of the estimated density matrix), but not to unitary errors (changes in the eigenvectors). The primary result of systematic errors in the measurement basis – i.e., measuring the wrong basis by an angle $E$ – is a unitary error of $O(E)$ in the estimate. As we have shown in the main text, adaptive tomography measures the eigenvalues of the density matrix to much higher precision than static tomography. Furthermore, adaptive tomography (because it specifically seeks to measure the diagonal basis of $\rho$) still obtains an accurate estimate for the eigenvalues even in the presence of systematic error. Even if we get the basis wrong by an angle of $O(E)$, this only affects the measurement probabilities (and therefore the estimated spectrum) by $O(E^2)$. [^1]: Remarkably, for large $N$, local measurements can discriminate almost as well as joint measurements on all $N$ samples. If $D_Q$ and $D_C$ are the optimal error exponents for joint and local measurements (respectively), then $(1-F)/2 \leq D_C \leq D_Q \leq 1-F$ [@CalsamigliaPRA08]. [^2]: Because $\rho$ lies on the state-set’s boundary, the gradient of $F$ need not vanish in order for ${\hat{\rho}}=\rho$ to be a local maximum. [^3]: The “$O$” notation is necessary here because some of the remaining $N-N_0$ copies may be measured in other bases that make up a complete measurement frame. [^4]: All quoted uncertainties herein are $1\sigma$, or 68% confidence intervals. Therefore, we don’t expect the “true” value to lie within the error bars more than 68% of the time. Most of the results given here agree with theoretical predictions to within $2\sigma$ (95% confidence intervals), a common criterion for consistency between data and theory. [^5]: Ironically, restricting the problem to pure states falsely trivializes it – the average *and* worst-case infidelity is $O(1/N)$ even for static tomography! The difficulty is not in estimating which pure state we have, but in distinguishing between small eigenvalues ($\lambda=0$ vs $\lambda=1/\sqrt{N}$).
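To make the concluding claim above concrete, here is a minimal numerical sketch (with assumed illustrative values of $\lambda$ and $E$, not part of the original analysis) checking that tilting a qubit measurement axis by an angle $E$ away from the diagonal basis of $\rho$ perturbs the outcome probability, and hence the inferred eigenvalue, only at second order in $E$:

```python
import numpy as np

# True qubit state: diagonal with eigenvalues (lam, 1 - lam); nearly pure.
lam = 0.999

# Ideal measurement along the z-axis of the Bloch sphere gives p = lam.
# A waveplate error tilts the measurement axis by an angle E, so the measured
# direction is n = (sin E, 0, cos E) while the state's Bloch vector is
# r = (0, 0, 2*lam - 1), giving p(E) = (1 + n.r) / 2.
for E in [1e-1, 1e-2, 1e-3]:
    p_tilted = 0.5 * (1.0 + (2.0 * lam - 1.0) * np.cos(E))
    print(f"E = {E:.0e}:  |p(E) - lam| = {abs(p_tilted - lam):.2e}"
          f"   (E^2 = {E**2:.0e})")
# The deviation tracks E^2 (up to a constant), so the estimated spectrum is
# biased only at O(E^2), even though the estimated eigenbasis is off by O(E).
```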
--- abstract: 'Equi-chordal and equi-isoclinic tight fusion frames (ECTFFs and EITFFs) are both types of optimal packings of subspaces in Euclidean spaces. In the special case where these subspaces are one-dimensional, ECTFFs and EITFFs both correspond to types of optimal packings of lines known as equiangular tight frames. In this brief note, we review some of the fundamental ideas and results concerning ECTFFs and EITFFs.' author: - | Matthew Fickus, John Jasper, Dustin G. Mixon, Cody E. Watson Department of Mathematics and Statistics, Air Force Institute of Technology\ Wright-Patterson Air Force Base, Ohio 45433\ Department of Mathematical Sciences, University of Cincinnati, Cincinnati, OH 45221 title: | A brief introduction to equi-chordal\ and equi-isoclinic tight fusion frames --- Introduction ============ In various settings, the following problem arises: given a $d$-dimensional Hilbert space ${\mathbb{H}}$, and positive integers $c$ and $n$ with $c\leq d$, how should we choose $n$ subspaces ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ of ${\mathbb{H}}$, each of dimension $c$, such that the minimum pairwise distance between these subspaces is as large as possible? That is, what are the optimal ways of packing $n$ points in the *Grassmannian space* that consists of all $c$-dimensional subspaces of ${\mathbb{H}}$? This is the central problem of a seminal paper by Conway, Hardin and Sloane [@ConwayHS96]. Of course, the answer here depends on how we define the distance between two such subspaces. One popular choice is to let ${\mathbf{P}}_j:{\mathbb{H}}\rightarrow{\mathbb{H}}$ be the orthogonal projection operator onto ${\mathcal{U}}_j$, and to define the *chordal distance* between ${\mathcal{U}}_j$ and ${\mathcal{U}}_{j'}$ to be $\frac1{\sqrt{2}}$ times the Frobenius (Hilbert-Schmidt) distance between their projections. In that setting, our goal is to maximize $$\label{equation.chordal distance} {\operatorname{dist}}_{{\mathrm{c}}}^2({\{{{\mathcal{U}}_j}\}}_{j=1}^{n}):=\tfrac1{2}\min_{j\neq j'}{\|{{\mathbf{P}}_j-{\mathbf{P}}_{j'}}\|}_{{\mathrm{Fro}}}^2.$$ Much previous work on this problem focuses on the special case where $c=1$. In this case, letting ${\boldsymbol{\varphi}}_j$ be any unit vector in the $1$-dimensional subspace ${\mathcal{U}}_j$, we have ${\mathbf{P}}_j={\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^{*}$ and, as detailed in a later section, the chordal distance becomes $${\operatorname{dist}}_{{\mathrm{c}}}^2({\{{{\mathcal{U}}_j}\}}_{j=1}^{n}) =\tfrac1{2}\min_{j\neq j'}{\|{{\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^{*}-{\boldsymbol{\varphi}}_{j'}^{}{\boldsymbol{\varphi}}_{j'}^{*}}\|}_{{\mathrm{Fro}}}^2 =1-\max_{j\neq j'}{|{{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}}|}^2.$$ That is, our goal in this case is to choose unit vectors ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ in ${\mathbb{H}}$ of minimal *coherence* $\max_{j\neq j'}{|{{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}}|}$. 
Here, the *Welch bound* [@Welch74] states that for any positive integers $n\geq d$, any $n$ unit vectors ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ in ${\mathbb{H}}$ satisfy $$\label{equation.Welch bound} \max_{j\neq j'}{|{{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}}|}^2 \geq\tfrac{(n-d)}{d(n-1)}.$$ Moreover, it is well-known [@StrohmerH03] that ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ achieves equality in this bound if and only if it is an *equiangular tight frame* (ETF) for ${\mathbb{H}}$, namely when the value of ${|{{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}}|}$ is constant over all $j\neq j'$ (equiangularity) and there exists $\alpha>0$ such that $\sum_{j=1}^{n}{|{{\langle{{\boldsymbol{\varphi}}_j},{{\mathbf{x}}}\rangle}}|}^2=\alpha{\|{{\mathbf{x}}}\|}^2$ for all ${\mathbf{x}}\in{\mathbb{H}}$ (tightness). In this short paper, we review some known results about these concepts, including how the Welch bound generalizes in the $c>1$ case to the *simplex bound* of Conway, Hardin and Sloane [@ConwayHS96], which itself arises from a classical result by Rankin [@Rankin55]. Nearly all of these ideas are taken from Conway, Hardin and Sloane [@ConwayHS96], or from more recent investigations into these same ideas, notably an article by Dhillon, Heath, Strohmer and Tropp [@DhillonHST08], as well as one by Kutyniok, Pezeshki, Calderbank and Liu [@KutyniokPCL09]. As we shall see, in order to achieve equality in the simplex bound, a sequence ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ of $c$-dimensional subspaces of ${\mathbb{H}}$ is necessarily a *tight fusion frame* [@CasazzaK04], that is, there necessarily exists $\alpha>0$ such that $\sum_{j=1}^{n}{\mathbf{P}}_j=\alpha{\mathbf{I}}$. In particular, ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ achieves equality in the simplex bound if and only if it is an *equi-chordal tight fusion frame* (ECTFF) for ${\mathbb{H}}$ [@KutyniokPCL09]. Refining that analysis leads to *equi-isoclinic* subspaces [@LemmensS73b], and in particular, to *equi-isoclinic tight fusion frames* (EITFFs). In the special case when $c=1$, both ECTFFs and EITFFs reduce to ETFs. In the next section, we introduce the notation and basic concepts that we will need. In Section 3, we review some classical results of Rankin [@Rankin55] concerning optimal packings of points on real unit spheres. Conway, Hardin and Sloane used these results to prove their simplex bound and *orthoplex bound*, and we review their approach in the fourth section. In Section 5, we provide a streamlined proof of the simplex bound, and use it to explain why ECTFFs and EITFFs are the optimal solutions to two variants of the subspace packing problem. In the final section, we interpret ECTFFs and EITFFs in terms of the *principal angles* between two subspaces. Preliminaries ============= Let ${\mathbb{H}}$ be a $d$-dimensional Hilbert space over a field ${\mathbb{F}}$ which is either ${\mathbb{R}}$ or ${\mathbb{C}}$. The *synthesis operator* of a finite sequence of vectors ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ in ${\mathbb{H}}$ is the operator ${\boldsymbol{\Phi}}:{\mathbb{F}}^n\rightarrow{\mathbb{H}}$, ${\boldsymbol{\Phi}}{\mathbf{y}}:=\sum_{j=1}^{n}{\mathbf{y}}(j){\boldsymbol{\varphi}}_j$. Its adjoint is the *analysis operator* ${\boldsymbol{\Phi}}^*:{\mathbb{H}}\rightarrow{\mathbb{F}}^n$, $({\boldsymbol{\Phi}}^*{\mathbf{x}})(j)={\langle{{\boldsymbol{\varphi}}_j},{{\mathbf{x}}}\rangle}$ where the inner product here is taken to be conjugate-linear in its first argument. 
In the special case where ${\mathbb{H}}={\mathbb{F}}^d$, which here is always assumed to be equipped with the standard (complex) dot product, ${\boldsymbol{\Phi}}$ is the $d\times n$ matrix whose $j$th column is ${\boldsymbol{\varphi}}_j$, and ${\boldsymbol{\Phi}}^*$ is its conjugate-transpose. Composing these two operators gives the corresponding *frame operator* ${\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*:{\mathbb{H}}\rightarrow{\mathbb{H}}$, ${\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*{\mathbf{x}}=\sum_{j=1}^{n}{\langle{{\boldsymbol{\varphi}}_j},{{\mathbf{x}}}\rangle}{\boldsymbol{\varphi}}_j$, and *Gram matrix* ${\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}:{\mathbb{F}}^n\rightarrow{\mathbb{F}}^n$, an $n\times n$ matrix whose $(j,j')$th entry is $({\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}})(j,j')={\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}$ for all $j,j'$. A sequence of unit vectors ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ in ${\mathbb{H}}$ is *equiangular* if there exists $\beta\geq0$ such that ${|{{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}}|}^2=\beta$ for all $j\neq j'$; to clarify, under this definition, it is not the vectors themselves that are necessarily equiangular, but rather, the lines they span. Meanwhile, ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is a tight frame for ${\mathbb{H}}$ if there exists $\alpha>0$ such that $\alpha{\|{{\mathbf{x}}}\|}^2=\sum_{j=1}^{n}{|{{\langle{{\boldsymbol{\varphi}}_j},{{\mathbf{x}}}\rangle}}|}^2$ for all ${\mathbf{x}}\in{\mathbb{H}}$. This requires that ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ spans ${\mathbb{H}}$. In particular, we need $n\geq\dim({\mathbb{H}})$. By the polarization identity, ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is tight with tight frame constant $\alpha>0$ precisely when $$\alpha{\mathbf{I}}={\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^* =\sum_{j=1}^{n}{\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^*.$$ Here, ${\boldsymbol{\varphi}}_j^*:{\mathbb{H}}\rightarrow{\mathbb{F}}$ is the linear functional ${\boldsymbol{\varphi}}_j^*{\mathbf{x}}:={\langle{{\boldsymbol{\varphi}}_j},{{\mathbf{x}}}\rangle}$ that arises as the adjoint of viewing ${\boldsymbol{\varphi}}_j$ as an operator ${\boldsymbol{\varphi}}_j:{\mathbb{F}}\rightarrow{\mathbb{H}}$, ${\boldsymbol{\varphi}}_j(c):=c{\boldsymbol{\varphi}}_j$. In the special case where ${\boldsymbol{\varphi}}_j$ is unit norm, the operator ${\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^*$ is the orthogonal projection operator onto the line ${\operatorname{span}}{\{{{\boldsymbol{\varphi}}_j}\}}$. That is, a *unit norm tight frame* corresponds to a collection of rank-one orthogonal projection operators which sum to a scalar multiple of the identity operator. In this case, the tight frame constant $\alpha$ is necessarily the *redundancy* $\frac nd$ of the frame since the diagonal entries of the Gram matrix ${\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}$ are all $1$ and so $\alpha d={\operatorname{Tr}}(\alpha{\mathbf{I}})={\operatorname{Tr}}({\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*)={\operatorname{Tr}}({\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}})=n$. When ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is both equiangular and a tight frame for ${\mathbb{H}}$ we say it is an *equiangular tight frame* (ETF) for ${\mathbb{H}}$. As detailed in a later section, being an ETF is equivalent to achieving equality in the Welch bound [@Welch74]. 
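As a quick numerical illustration of these definitions (a sketch using, as an assumed example, the classical three-vector configuration in $\mathbb{R}^2$, not an example taken from this note), the following checks that three unit vectors spaced $120^\circ$ apart form a tight frame whose coherence meets the Welch bound:

```python
import numpy as np

d, n = 2, 3
# Synthesis operator: columns are three unit vectors spaced 120 degrees apart.
angles = 2 * np.pi * np.arange(n) / n
Phi = np.vstack([np.cos(angles), np.sin(angles)])   # d x n

gram = Phi.T @ Phi                                   # Gram matrix (n x n)
frame_op = Phi @ Phi.T                               # frame operator (d x d)

coherence_sq = max(abs(gram[j, k]) ** 2
                   for j in range(n) for k in range(n) if j != k)
welch_sq = (n - d) / (d * (n - 1))

print("frame operator:\n", frame_op)                 # (n/d) * I = 1.5 * I
print("max |<phi_j, phi_k>|^2 =", coherence_sq)      # 0.25
print("Welch bound            =", welch_sq)          # 0.25
```

Because the frame operator is a multiple of the identity and the squared coherence equals the Welch bound, this small example is an ETF.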
The theory of unit norm tight frames naturally generalizes to subspaces ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ of ${\mathbb{H}}$ of dimension $c>1$. Here, for each $j=1,\dotsc,n$, let ${\boldsymbol{\Phi}}_j:{\mathbb{F}}^c\rightarrow{\mathbb{H}}$ be the synthesis operator for an orthonormal basis ${\{{{\boldsymbol{\varphi}}_{j,k}}\}}_{k=1}^{c}$ of ${\mathcal{U}}_j$. For example, when ${\mathbb{H}}={\mathbb{F}}^d$, ${\boldsymbol{\Phi}}_j$ is a $d\times c$ matrix whose $k$th column is ${\boldsymbol{\varphi}}_{j,k}$. The fact that ${\{{{\boldsymbol{\varphi}}_{j,k}}\}}_{k=1}^{c}$ is an orthonormal basis for ${\mathcal{U}}_j$ implies that ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_j^{}={\mathbf{I}}$ and that ${\mathbf{P}}_j={\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^{*}$ is the orthogonal projection operator onto ${\mathcal{U}}_j$. The Gram matrix ${\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}$ of the $nc$ concatenated vectors ${\{{{\boldsymbol{\varphi}}_{j,k}}\}}_{j=1}^{n}\,_{k=1}^{c}$ is called a *fusion Gram matrix* of ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$, and can be regarded as an $n\times n$ array of $c\times c$ submatrices. Specifically, for any $j,j'=1,\dotsc,n$, the $(j,j')$th block of ${\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}$ is the *cross-Gramian* ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}$ whose $(k,k')$th entry is $({\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{})(k,k')={\langle{{\boldsymbol{\varphi}}_{j,k}},{{\boldsymbol{\varphi}}_{j',k'}}\rangle}$. Meanwhile, the frame operator ${\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*$ of ${\{{{\boldsymbol{\varphi}}_{j,k}}\}}_{j=1}^{n}\,_{k=1}^{c}$ is called the *fusion frame operator* of ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$, and is the sum of the corresponding rank-$c$ orthogonal projection operators ${\{{{\mathbf{P}}_j}\}}_{j=1}^{n}$, $$\label{equation.fusion frame operator} {\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^* =\sum_{j=1}^{n}\sum_{k=1}^{c}{\boldsymbol{\varphi}}_{j,k}^{}{\boldsymbol{\varphi}}_{j,k}^{*} =\sum_{j=1}^{n}{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^{*} =\sum_{j=1}^{n}{\mathbf{P}}_j.$$ Note here that the fusion Gram matrix of a fusion frame is not unique, as it depends on the particular choice of orthonormal basis for each ${\mathcal{U}}_j$. (To be clear, the diagonal blocks of any fusion Gram matrix are all ${\mathbf{I}}$ regardless.) However, the fusion frame operator is unique, as the orthogonal projection operator onto ${\mathcal{U}}_j$ is invariant with respect to such choices. More precisely, each ${\boldsymbol{\Phi}}_j$ is unique up to right-multiplication by unitary $c\times c$ matrices, meaning ${\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*$ is unique while ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}$ is only unique up to right- and left-multiplication by unitaries. Because of this, it is more natural to generalize the notion of a tight frame to this “fusion” setting than it is to generalize the notion of an equiangular frame. Indeed, as noted in the introduction, ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is a *tight fusion frame* for ${\mathbb{H}}$ if the corresponding fusion frame operator is $\alpha{\mathbf{I}}$ for some $\alpha>0$; in this case taking the trace of this equation gives $\alpha d={\operatorname{Tr}}(\alpha{\mathbf{I}})=\sum_{j=1}^{n}{\operatorname{Tr}}({\mathbf{P}}_j)=nc$ and so $\alpha$ is necessarily $\tfrac{nc}d$. 
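The trace computation above is easy to check numerically. A minimal sketch (assuming randomly drawn subspaces purely for illustration) builds the orthogonal projections from orthonormal bases and verifies that the fusion frame operator always has trace $nc$, whether or not it happens to equal $\frac{nc}{d}\mathbf{I}$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, n = 6, 2, 5

# Orthonormal basis (d x c synthesis operator) for each random subspace U_j.
bases = [np.linalg.qr(rng.standard_normal((d, c)))[0] for _ in range(n)]
projections = [Phi_j @ Phi_j.T for Phi_j in bases]     # rank-c projections P_j

fusion_frame_op = sum(projections)                     # sum_j P_j
print("trace =", np.trace(fusion_frame_op), " (should be n*c =", n * c, ")")

# For a *tight* fusion frame this operator would equal (n*c/d) * I;
# generic random subspaces will not satisfy that.
print("distance from (nc/d) I :",
      np.linalg.norm(fusion_frame_op - (n * c / d) * np.eye(d)))
```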
To generalize equiangularity, we want some way to take the “modulus” of the off-diagonal cross-Gramians ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}$ that is invariant with respect to left- and right-multiplication by unitaries. As we shall see, there is more than one option here: taking the Frobenius norms of these matrices leads to the notion of *equi-chordal* subspaces, while taking the induced $2$-norms of these matrices leads to *equi-isoclinic* subspaces. Optimal packings on the sphere ============================== Conway, Hardin and Sloane [@ConwayHS96] give two distinct sufficient conditions for a sequence ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ of $c$-dimensional subspaces of a $d$-dimensional Hilbert space ${\mathbb{H}}$ to form an optimal packing with respect to the chordal distance. These results are often referred to as the *simplex bound* and *orthoplex bound*. The traditional proofs of these results depend on Rankin’s earlier work concerning optimal packings on real spheres [@Rankin55], that is, ways to arrange $n$ unit vectors in ${\mathbb{R}}^d$ whose minimum pairwise distance is as large as possible. In particular, the traditional proof of the simplex bound depends on the following concept: A sequence of unit vectors ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$, $n\geq 2$, in a Hilbert space ${\mathbb{H}}$ is called a *regular simplex* if ${\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}=-\frac1{n-1}$ for all $j\neq j'$. That is, ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is a regular simplex if and only if its Gram matrix is $\tfrac{n}{n-1}{\mathbf{I}}-\tfrac1{n-1}{\boldsymbol{1}}{\boldsymbol{1}}^{*}$. The eigenvalues of such a Gram matrix are $0$ and $\frac{n}{n-1}$ with eigenspaces ${\operatorname{span}}{\{{{\boldsymbol{1}}}\}}$ and ${\operatorname{span}}{\{{{\boldsymbol{1}}}\}}^{\perp}$, respectively. In particular, the null space of ${\boldsymbol{\Phi}}$ is $\ker({\boldsymbol{\Phi}})=\ker({\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}})={\operatorname{span}}{\{{{\boldsymbol{1}}}\}}$, meaning $\sum_{j=1}^{n}{\boldsymbol{\varphi}}_j={\boldsymbol{0}}$. Moreover, the dimension of ${\operatorname{span}}{\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is ${\operatorname{rank}}({\boldsymbol{\Phi}})=n-1$. Regular simplices are optimal packings on real unit spheres. For example, in ${\mathbb{R}}^3$, an optimal packing of two unit vectors consists of a pair of antipodal unit vectors, whereas an optimal packing of three unit vectors consists of three equally-spaced vectors in a great circle, and an optimal packing of four unit vectors forms a tetrahedron, namely regular simplices with $n=2$, $3$ and $4$ vectors, respectively. To prove this formally, note that for any finite sequence ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ of unit norm vectors in a real Hilbert space, $$0 \leq{\biggl\|{\sum_{j=1}^{n}{\boldsymbol{\varphi}}_j}\biggr\|}^2 =\sum_{j=1}^{n}\sum_{j'=1}^{n}{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle} =n+\sum_{j=1}^{n}\sum_{\substack{j'=1\\j'\neq j}}^{n}{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle} \leq n+n(n-1)\max_{j\neq j'}{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}.$$ That is, for any such vectors we have $-\frac1{n-1}\leq\max_{j\neq j'}{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}$. Moreover, both of the above inequalities hold with equality if and only if ${\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}=-\frac1{n-1}$ for all $j\neq j'$, that is, if and only if ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is a regular simplex. 
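The following small numerical sketch (with an assumed $n=5$, not part of the original note) makes the preceding facts concrete: centering and normalizing the standard basis vectors of $\mathbb{R}^n$ yields a regular simplex, whose pairwise inner products, zero sum, and rank can be checked directly.

```python
import numpy as np

n = 5
# Center the standard basis of R^n and normalize: the resulting n unit vectors
# span an (n-1)-dimensional subspace and form a regular simplex.
V = np.eye(n) - np.ones((n, n)) / n
V /= np.linalg.norm(V, axis=0)            # columns are the simplex vectors

gram = V.T @ V
off_diag = gram[~np.eye(n, dtype=bool)]
print("pairwise inner products:", np.unique(np.round(off_diag, 12)))  # -1/(n-1)
print("-1/(n-1)               :", -1.0 / (n - 1))
print("sum of vectors (norm)  :", np.linalg.norm(V.sum(axis=1)))      # ~0
print("rank                   :", np.linalg.matrix_rank(V))           # n-1
```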
To relate this to optimal packings, note that for any unit vectors ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ in a real Hilbert space, $$\label{equation.packing in terms of coherence} \min_{j\neq j'}{\|{{\boldsymbol{\varphi}}_j-{\boldsymbol{\varphi}}_{j'}}\|}^2 =\min_{j\neq j'}2(1-{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}) =2(1-\max_{j\neq j'}{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}).$$ We summarize these facts in the following result: \[lemma.Rankin simplex bound\] For any finite sequence ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ of unit norm vectors in a real Hilbert space, $$-\tfrac1{n-1} \leq\max_{j\neq j'}{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle} =1-\tfrac12\min_{j\neq j'}{\|{{\boldsymbol{\varphi}}_j-{\boldsymbol{\varphi}}_{j'}}\|}^2,$$ where equality holds if and only if ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is a regular simplex. We need $n\leq d+1$ in order to achieve equality here: if ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is a regular simplex that lies in ${\mathbb{H}}$ then $n-1={\operatorname{rank}}({\boldsymbol{\Phi}})\leq\dim({\mathbb{H}})=d$. For example, the optimal packing of $n=5$ unit vectors in ${\mathbb{R}}^3$ cannot be a regular simplex. In fact, as we now discuss, when $n>d+1$ one can prove a bound that is stronger than that given in Lemma \[lemma.Rankin simplex bound\]. In particular, when $n>d+1$, we have $\max_{j\neq j'}{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}\geq0$ for any unit vectors ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ in a real $d$-dimensional Hilbert space ${\mathbb{H}}$. For an elegant proof of this fact [@Chapman10], note that $\dim(\ker({\boldsymbol{\Phi}}))\geq 2$ when $n\geq d+2$, implying there exist nontrivial, nonnegative vectors ${\mathbf{y}}_1,{\mathbf{y}}_2\in\ker({\boldsymbol{\Phi}})$ with disjoint support; thus, $$0 ={\langle{{\boldsymbol{0}}},{{\boldsymbol{0}}}\rangle} ={\langle{{\boldsymbol{\Phi}}{\mathbf{y}}_1},{{\boldsymbol{\Phi}}{\mathbf{y}}_2}\rangle} =\sum_{j=1}^n\sum_{\substack{j'=1\\j'\neq j}}^n{\mathbf{y}}_1(j){\mathbf{y}}_2(j'){\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle},$$ where ${\mathbf{y}}_1(j){\mathbf{y}}_2(j')$ is nonnegative for all $j,j'$, and is strictly positive for at least one pair $j\neq j'$; as such we cannot have ${\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}<0$ for all $j\neq j'$. In summary, when combined with the identity above, we have the following result: \[lemma.Rankin orthoplex bound\] For any positive integers $n$ and $d$ with $n\geq d+2$, and any finite sequence ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ of unit norm vectors in a real $d$-dimensional Hilbert space, $$0\leq\max_{j\neq j'}{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle} =1-\tfrac12\min_{j\neq j'}{\|{{\boldsymbol{\varphi}}_j-{\boldsymbol{\varphi}}_{j'}}\|}^2.$$ As noted by Rankin [@Rankin55], equality can be achieved here with $2d$ vectors by choosing ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^n$ to be an orthonormal basis along with its antipodes, i.e., ${\{{\pm\delta_j}\}}_{j=1}^{d}$; such a sequence of vectors is known as an *orthoplex*. The simplex and orthoplex bounds ================================ To obtain the simplex and orthoplex bounds of Conway, Hardin and Sloane, we apply Lemmas \[lemma.Rankin simplex bound\] and \[lemma.Rankin orthoplex bound\] to the normalized traceless components of orthogonal projection operators. 
To be precise, let ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ be $c$-dimensional subspaces of ${\mathbb{F}}^d$ where ${\mathbb{F}}$ is either ${\mathbb{R}}$ or ${\mathbb{C}}$, and for each $j=1,\dotsc,n$, let ${\mathbf{P}}_j$ be the $d\times d$ matrix which is the orthogonal projection operator onto ${\mathcal{U}}_j$. The operators ${\{{{\mathbf{P}}_j}\}}_{j=1}^{n}$ lie in the real Hilbert space of all $d\times d$ self-adjoint matrices; here, the inner product is the Frobenius (Hilbert-Schmidt) inner product ${\langle{{\mathbf{A}}},{{\mathbf{B}}}\rangle}_{{\mathrm{Fro}}}:={\operatorname{Tr}}({\mathbf{A}}^*{\mathbf{B}})$, and this space has dimension $\frac{d(d+1)}{2}$ or $d^2$ depending on whether ${\mathbb{F}}$ is ${\mathbb{R}}$ or ${\mathbb{C}}$, respectively. A $d\times d$ self-adjoint matrix is *traceless* if its trace is zero, namely if it lies in the orthogonal complement of ${\mathbf{I}}$. Since ${\operatorname{Tr}}({\mathbf{P}}_j)=\dim({\mathcal{U}}_j)=c$ for all $j$, the traceless component of any ${\mathbf{P}}_j$ is $${\mathbf{P}}_j-\tfrac{{\langle{{\mathbf{I}}},{{\mathbf{P}}_j}\rangle}_{{\mathrm{Fro}}}}{{\langle{{\mathbf{I}}},{{\mathbf{I}}}\rangle}_{{\mathrm{Fro}}}}{\mathbf{I}}={\mathbf{P}}_j-\tfrac{{\operatorname{Tr}}({\mathbf{P}}_j)}{{\operatorname{Tr}}({\mathbf{I}})}{\mathbf{I}}={\mathbf{P}}_j-\tfrac{c}{d}{\mathbf{I}}.$$ To continue, note $$\label{equation.deriving inner product of traceless} {\langle{{\mathbf{P}}_j-\tfrac{c}{d}{\mathbf{I}}},{{\mathbf{P}}_{j'}-\tfrac{c}{d}{\mathbf{I}}}\rangle}_{{\mathrm{Fro}}} ={\operatorname{Tr}}[({\mathbf{P}}_j-\tfrac{c}{d}{\mathbf{I}})({\mathbf{P}}_{j'}-\tfrac{c}{d}{\mathbf{I}})] ={\operatorname{Tr}}({\mathbf{P}}_j{\mathbf{P}}_{j'})-2\tfrac{c^2}{d}+\tfrac{c^2}{d} ={\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}}-\tfrac{c^2}{d}$$ for all $j,j'$. In particular, ${\|{{\mathbf{P}}_j-\tfrac{c}{d}{\mathbf{I}}}\|}_{{\mathrm{Fro}}}^2=c-\tfrac{c^2}{d}=\tfrac{c(d-c)}{d}$ for all $j$, meaning $${\{{{\mathbf{Q}}_j}\}}_{j=1}^{n}, \quad {\mathbf{Q}}_j:={\bigl[{\tfrac{d}{c(d-c)}}\bigr]}^{\frac12}({\mathbf{P}}_j-\tfrac{c}{d}{\mathbf{I}}),$$ is a normalized sequence of vectors in a real Hilbert space of dimension $\frac{d(d+1)}{2}-1$ or $d^2-1$, depending on whether ${\mathbb{F}}$ is ${\mathbb{R}}$ or ${\mathbb{C}}$, respectively. Moreover, by the identity above, $${\langle{{\mathbf{Q}}_j},{{\mathbf{Q}}_{j'}}\rangle}_{{\mathrm{Fro}}} =\tfrac{d}{c(d-c)}({\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}}-\tfrac{c^2}{d}),$$ for all $j,j'$. 
As such, applying Lemma \[lemma.Rankin simplex bound\] to ${\{{{\mathbf{Q}}_j}\}}_{j=1}^{n}$ and recalling the definition of the chordal distance then gives $$\label{equation.proving simplex from Rankin} -\tfrac1{n-1} \leq\tfrac{d}{c(d-c)}(\max_{j\neq j'}{\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}}-\tfrac{c^2}{d}) =1-\tfrac12\tfrac{d}{c(d-c)}\min_{j\neq j'}{\|{{\mathbf{P}}_j-{\mathbf{P}}_{j'}}\|}_{{\mathrm{Fro}}}^2 =1-\tfrac{d}{c(d-c)}{\operatorname{dist}}_{{\mathrm{c}}}^2({\{{{\mathcal{U}}_j}\}}_{j=1}^{n}).$$ Rearranging this expression gives the simplex bound of Conway, Hardin and Sloane [@ConwayHS96]: $$\label{equation.simplex bound} {\operatorname{dist}}_{{\mathrm{c}}}^2({\{{{\mathcal{U}}_j}\}}_{j=1}^{n}) \leq \tfrac{c(d-c)}{d}\tfrac{n}{n-1}.$$ If we instead solve for $\max_{j\neq j'}{\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}}$ above, we obtain a lower bound on that quantity which is equivalent to the simplex bound: $$\label{equation.Welch-simplex bound} \max_{j\neq j'}{\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}} \geq\tfrac{c^2}{d}-\tfrac1{n-1}\tfrac{c(d-c)}{d} =\tfrac{c}{d(n-1)}[(n-1)c-(d-c)] =\tfrac{c(nc-d)}{d(n-1)}.$$ Note that by Lemma \[lemma.Rankin simplex bound\], equality holds in these bounds precisely when ${\{{{\mathbf{Q}}_j}\}}_{j=1}^{n}$ forms a simplex in the space of all (traceless) self-adjoint operators, namely when ${\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}}=\tfrac{c(nc-d)}{d(n-1)}$ for all $j\neq j'$. Recall this can only happen if $${\boldsymbol{0}}=\sum_{j=1}^{n}{\mathbf{Q}}_j ={\bigl[{\tfrac{d}{c(d-c)}}\bigr]}^{\frac12}\sum_{j=1}^{n}({\mathbf{P}}_j-\tfrac{c}{d}{\mathbf{I}}) ={\bigl[{\tfrac{d}{c(d-c)}}\bigr]}^{\frac12}{\biggl({\sum_{j=1}^{n}{\mathbf{P}}_j-\tfrac{nc}{d}{\mathbf{I}}}\biggr)},$$ namely only if ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is a tight fusion frame for ${\mathbb{F}}^d$. An important fact seemingly overlooked by Conway, Hardin and Sloane is that the simplex bound, when equivalently reexpressed as above, is a generalization of the Welch bound. To see this, note that when $c=1$, then for each $j=1,\dotsc,n$ we have ${\mathbf{P}}_j={\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^{*}$ where ${\boldsymbol{\varphi}}_j$ is a unit vector in the $1$-dimensional subspace ${\mathcal{U}}_j$. Moreover, by cycling a trace (and realizing that the trace of a scalar is itself), we have $${\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}} ={\operatorname{Tr}}({\mathbf{P}}_j{\mathbf{P}}_{j'}) ={\operatorname{Tr}}({\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^{*}{\boldsymbol{\varphi}}_{j'}^{}{\boldsymbol{\varphi}}_{j'}^{*}) ={\operatorname{Tr}}({\boldsymbol{\varphi}}_{j'}^{*}{\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^{*}{\boldsymbol{\varphi}}_{j'}^{}) ={\operatorname{Tr}}({\langle{{\boldsymbol{\varphi}}_{j'}},{{\boldsymbol{\varphi}}_{j}}\rangle}{\langle{{\boldsymbol{\varphi}}_{j}},{{\boldsymbol{\varphi}}_{j'}}\rangle}) ={|{{\langle{{\boldsymbol{\varphi}}_{j}},{{\boldsymbol{\varphi}}_{j'}}\rangle}}|}^2$$ for all $j,j'$. As such, in the $c=1$ case, this bound reduces to the Welch bound. This realization inspired the discussion given in the next section; there, we obtain a more direct proof of this bound by generalizing a modern proof of the Welch bound. To conclude this section, we note that when ${\mathbb{F}}={\mathbb{R}}$ and $n>\frac{d(d+1)}{2}$, the $n$ vectors ${\{{{\mathbf{Q}}_j}\}}_{j=1}^{n}$ cannot form a simplex in the $[\frac{d(d+1)}{2}-1]$-dimensional space of $d\times d$ real symmetric traceless operators. 
In the special case where $c=1$, this fact is closely related to the *Gerzon bound* [@LemmensS73], which states that the maximum number of equiangular lines in ${\mathbb{R}}^d$ is $\frac{d(d+1)}{2}$. In this regime, we can instead apply Lemma \[lemma.Rankin orthoplex bound\] to ${\{{{\mathbf{Q}}_j}\}}_{j=1}^{n}$ to obtain the following alternative to the simplex bound: $$0 \leq\tfrac{d}{c(d-c)}(\max_{j\neq j'}{\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}}-\tfrac{c^2}{d}) =1-\tfrac{d}{c(d-c)}{\operatorname{dist}}_{{\mathrm{c}}}^2({\{{{\mathcal{U}}_j}\}}_{j=1}^{n}).$$ In the case where ${\mathbb{F}}={\mathbb{C}}$, these same bounds apply when $n>d^2$. When rearranged, this yields Conway, Hardin and Sloane’s orthoplex bound [@ConwayHS96]: ${\operatorname{dist}}_{{\mathrm{c}}}^2({\{{{\mathcal{U}}_j}\}}_{j=1}^{n})\leq\tfrac{c(d-c)}{d}$. Equivalently, $\max_{j\neq j'}{\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}}\geq\tfrac{c^2}{d}$. In the special case where $c=1$ and ${\mathbf{P}}_j={\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^{*}$ for all $j$, this becomes $$\max_{j\neq j'}{|{{\langle{{\boldsymbol{\varphi}}_{j}},{{\boldsymbol{\varphi}}_{j'}}\rangle}}|}^2\geq\tfrac1d,$$ a bound met by *mutually unbiased bases* as well as by other interesting constructions, such as the union of a standard basis and a harmonic ETF arising from a Singer difference set [@BodmannH16]. Equi-chordal and equi-isoclinic tight fusion frames =================================================== In this section, we give an alternative derivation of the lower bound on $\max_{j\neq j'}{\langle{{\mathbf{P}}_j},{{\mathbf{P}}_{j'}}\rangle}_{{\mathrm{Fro}}}$, an inequality that is equivalent to Conway, Hardin and Sloane’s simplex bound. This is not a new proof *per se*: it essentially combines the main ideas of the previous section with those of the proof of Lemma \[lemma.Rankin simplex bound\], while eliminating some unnecessary technicalities. Recall from Section 2 that a sequence ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ of $c$-dimensional subspaces of ${\mathbb{F}}^d$ forms a tight fusion frame for ${\mathbb{F}}^d$ if there exists $\alpha>0$ such that $\alpha{\mathbf{I}}=\sum_{j=1}^{n}{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^{*}$ where each ${\boldsymbol{\Phi}}_j$ is a $d\times c$ synthesis operator of an orthonormal basis for ${\mathcal{U}}_j$. Further recall that in this case, $\alpha$ is necessarily $\frac{nc}{d}$. 
As such, for any sequence ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ of $c$-dimensional subspaces of ${\mathbb{F}}^d$, $$\begin{aligned} 0 &\leq{\|{{\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*-\tfrac{nc}{d}{\mathbf{I}}}\|}_{{\mathrm{Fro}}}^2\\ &={\operatorname{Tr}}[({\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*-\tfrac{nc}{d}{\mathbf{I}})^2]\\ &={\operatorname{Tr}}[({\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*)^2]-2\tfrac{nc}{d}{\operatorname{Tr}}({\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*)+\tfrac{n^2c^2}{d^2}{\operatorname{Tr}}({\mathbf{I}})\\ &={\operatorname{Tr}}[({\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}})^2]-2\tfrac{nc}{d}{\operatorname{Tr}}({\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}})+\tfrac{n^2c^2}{d}\\ &={\|{{\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}}\|}_{{\mathrm{Fro}}}^2-\tfrac{n^2c^2}{d}.\end{aligned}$$ To continue simplifying, we express the Frobenius norm (entrywise $2$-norm) of the fusion Gram matrix in terms of the Frobenius norms of the cross-Gramians ${\{{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\}}_{j,j'=1}^n$, recalling that ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_j={\mathbf{I}}$ for all $j=1,\dotsc,n$: $$\label{equation.direct derivation of generalized Welch} 0 \leq{\|{{\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*-\tfrac{nc}{d}{\mathbf{I}}}\|}_{{\mathrm{Fro}}}^2 =\sum_{j=1}^{n}\sum_{j'=1}^{n}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2-\tfrac{n^2c^2}{d} \leq n(n-1)\max_{j\neq j'}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2+nc-\tfrac{n^2c^2}{d}.$$ Rearranging this inequality yields the desired lower bound, namely the inequality that is equivalent to the simplex bound. Moreover, equality in this bound only occurs when both inequalities hold with equality, namely when ${\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*=\tfrac{nc}{d}{\mathbf{I}}$ and ${\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2$ is constant over all $j\neq j'$. Recall our first property here means ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is a tight fusion frame for ${\mathbb{F}}^d$. Meanwhile, our second property is equivalent to ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ being *equi-chordal*, namely that the chordal distance between any two distinct subspaces is the same value: $$\begin{aligned} {\operatorname{dist}}_{{\mathrm{c}}}^2({\mathcal{U}}_j,{\mathcal{U}}_{j'}) \nonumber &=\tfrac12{\|{{\mathbf{P}}_j-{\mathbf{P}}_{j'}}\|}_{{\mathrm{Fro}}}^2\\ \nonumber &=\tfrac12{\operatorname{Tr}}[({\mathbf{P}}_j-{\mathbf{P}}_{j'})^2]\\ \nonumber &=\tfrac12[{\operatorname{Tr}}({\mathbf{P}}_j)+{\operatorname{Tr}}({\mathbf{P}}_{j'})-2{\operatorname{Tr}}({\mathbf{P}}_j{\mathbf{P}}_{j'})]\\ \nonumber &=c-{\operatorname{Tr}}({\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}{\boldsymbol{\Phi}}_{j'}^*)\\ \nonumber &=c-{\operatorname{Tr}}({\boldsymbol{\Phi}}_{j'}^*{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{})\\ \label{equation.chordal distance between two subspaces} &=c-{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2.\end{aligned}$$ That is, equality is achieved precisely when ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is equi-chordal and is a tight fusion frame. We summarize these facts in the following result: \[theorem.equi-chordal\] Let ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ be a sequence of $c$-dimensional subspaces of ${\mathbb{F}}^d$. 
For each $j=1,\dotsc,n$, let ${\boldsymbol{\Phi}}_j$ be the $d\times c$ synthesis operator of an orthonormal basis for ${\mathcal{U}}_j$, and let ${\mathbf{P}}_j={\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*$ be the corresponding orthogonal projection operator. Then $$\label{equation.equi-chordal Welch bound} \max_{j\neq j'}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2 \geq\tfrac{c(nc-d)}{d(n-1)},$$ where equality holds if and only if ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is an equi-chordal tight fusion frame (ECTFF) for ${\mathbb{F}}^d$, namely if and only if both 1. ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is a tight fusion frame, namely there exists some $\alpha>0$ such that $\displaystyle\sum_{j=1}^{n}{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^{*}=\sum_{j=1}^{n}{\mathbf{P}}_j=\alpha{\mathbf{I}}$; 2. ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is equi-chordal, namely there exists some $\beta\geq0$ such that ${\operatorname{Tr}}({\mathbf{P}}_j{\mathbf{P}}_{j'})={\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2=\beta$ for all $j\neq j'$, or equivalently, that the squared chordal distance $\frac12{\|{{\mathbf{P}}_j-{\mathbf{P}}_{j'}}\|}^2$ is constant over all $j\neq j'$. In this case, $\alpha$ and $\beta$ are necessarily $\frac{nc}{d}$ and $\tfrac{c(nc-d)}{d(n-1)}$, respectively. We now further refine these ideas to obtain another bound that can only be achieved by *equi-isoclinic* subspaces. Recall that the Frobenius norm of a matrix is the $2$-norm of its singular values, while its induced $2$-norm is the $\infty$-norm of its singular values, yielding the following bound: writing ${\mathbf{A}}\in{\mathbb{C}}^{c\times c}$ as ${\mathbf{A}}={\mathbf{U}}{\boldsymbol{\Sigma}}{\mathbf{V}}^*$ where ${\mathbf{U}}$ and ${\mathbf{V}}$ are unitary, $${\|{{\mathbf{A}}}\|}_{{\mathrm{Fro}}}^2 ={\|{{\mathbf{U}}{\boldsymbol{\Sigma}}{\mathbf{V}}^*}\|}_{{\mathrm{Fro}}}^2 ={\|{{\boldsymbol{\Sigma}}}\|}_{{\mathrm{Fro}}}^2 =\sum_{k=1}^{c}\sigma_k^2 \leq c\max{\{{\sigma_k^2}\}}_{k=1}^{c} =c{\|{{\mathbf{A}}}\|}_2^2.$$ Moreover, equality here is only achieved when the singular values ${\{{\sigma_k}\}}_{k=1}^{c}$ are constant, namely when there exists some $\sigma\geq0$ such that ${\boldsymbol{\Sigma}}=\sigma{\mathbf{I}}$. As the singular values of ${\mathbf{A}}\in{\mathbb{C}}^{c\times c}$ are the square roots of the eigenvalues of ${\mathbf{A}}^*{\mathbf{A}}$ and ${\mathbf{A}}{\mathbf{A}}^*$, this occurs precisely when ${\mathbf{A}}^*{\mathbf{A}}=\sigma^2{\mathbf{I}}$, or equivalently, when ${\mathbf{A}}{\mathbf{A}}^*=\sigma^2{\mathbf{I}}$. That is, for any ${\mathbf{A}}\in{\mathbb{C}}^{c\times c}$, ${\|{{\mathbf{A}}}\|}_{{\mathrm{Fro}}}^2\leq c{\|{{\mathbf{A}}}\|}_2^2$, and equality is achieved if and only if ${\mathbf{A}}$ is a scalar multiple of a unitary matrix. Note that in this case, the value $\sigma$ is uniquely determined by the Frobenius norm of ${\mathbf{A}}$, namely $\sigma^2=\frac1{c}{\|{{\mathbf{A}}}\|}_{{\mathrm{Fro}}}^2$. Using these ideas, we can continue  as $$\label{equation.deriving EITFFs} \tfrac{c(nc-d)}{d(n-1)} \leq\max_{j\neq j'}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2 \leq c\max_{j\neq j'}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{2}^2,$$ obtaining the lower bound $\max_{j\neq j'}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{2}^2 \geq\tfrac{nc-d}{d(n-1)}$. 
Here, we have equality precisely when we have equality in the previous bound and ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}$ has constant singular values for any $j\neq j'$. That is, equality holds precisely when ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is an ECTFF and there exists $\sigma\geq0$ such that ${\boldsymbol{\Phi}}_{j'}^*{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}=\sigma^2{\mathbf{I}}$ for all $j\neq j'$; note the value of $\sigma$ here is independent of $j,j'$, satisfying $\sigma^2=\frac1{c}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2=\frac1c\beta=\tfrac{nc-d}{d(n-1)}$. As we now discuss, sequences of subspaces with this special property are themselves a subject of interest. In particular, a sequence of $c$-dimensional subspaces ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ of ${\mathbb{F}}^d$ is called *equi-isoclinic* if there exists $\sigma\geq0$ such that ${\boldsymbol{\Phi}}_{j'}^*{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}=\sigma^2{\mathbf{I}}$ for all $j\neq j'$ [@LemmensS73b]. Note that conjugating this expression by ${\boldsymbol{\Phi}}_{j'}$ gives $$\label{equation.equi-isoclinic projections} {\mathbf{P}}_{j'}{\mathbf{P}}_{j}{\mathbf{P}}_{j'} ={\boldsymbol{\Phi}}_{j'}^{}{\boldsymbol{\Phi}}_{j'}^*{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}{\boldsymbol{\Phi}}_{j'}^{*} =\sigma^2{\boldsymbol{\Phi}}_{j'}^{}{\boldsymbol{\Phi}}_{j'}^{*} =\sigma^2{\mathbf{P}}_{j'}$$ for all $j\neq j'$. Conversely, as we now explain, if there exists some $\sigma\geq0$ such that the subspaces ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ satisfy ${\mathbf{P}}_{j'}{\mathbf{P}}_{j}{\mathbf{P}}_{j'}=\sigma^2{\mathbf{P}}_{j'}$ for all $j\neq j'$, then they are equi-isoclinic. Indeed, this relation implies that ${\mathbf{P}}_{j'}({\mathbf{P}}_{j}{\mathbf{P}}_{j'}-\sigma^2{\mathbf{I}})={\boldsymbol{0}}$, meaning the range of ${\mathbf{P}}_{j}{\mathbf{P}}_{j'}-\sigma^2{\mathbf{I}}$ lies in $\ker({\mathbf{P}}_{j'})=\ker({\boldsymbol{\Phi}}_{j'}^{}{\boldsymbol{\Phi}}_{j'}^*)=\ker({\boldsymbol{\Phi}}_{j'}^*)$. That is, ${\boldsymbol{0}}={\boldsymbol{\Phi}}_{j'}^*({\mathbf{P}}_{j}^{}{\mathbf{P}}_{j'}^{}-\sigma^2{\mathbf{I}}) =({\boldsymbol{\Phi}}_{j'}^*{\mathbf{P}}_{j}^{}{\boldsymbol{\Phi}}_{j'}^{}-\sigma^2{\mathbf{I}}){\boldsymbol{\Phi}}_{j'}^*$. Since ${\boldsymbol{\Phi}}_{j'}^*{\boldsymbol{\Phi}}_{j'}^{}={\mathbf{I}}$, multiplying this equation on the right by ${\boldsymbol{\Phi}}_{j'}$ gives ${\boldsymbol{\Phi}}_{j'}^*{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{} ={\boldsymbol{\Phi}}_{j'}^*{\mathbf{P}}_{j}^{}{\boldsymbol{\Phi}}_{j'}^{} =\sigma^2{\mathbf{I}}$. We summarize these facts as follows: Let ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ be a sequence of $c$-dimensional subspaces of ${\mathbb{F}}^d$. For each $j=1,\dotsc,n$, let ${\boldsymbol{\Phi}}_j$ be the $d\times c$ synthesis operator of an orthonormal basis for ${\mathcal{U}}_j$, and let ${\mathbf{P}}_j={\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*$ be the corresponding orthogonal projection operator. Then $$\label{equation.equi-isoclinic Welch bound} \max_{j\neq j'}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{2}^2 \geq\tfrac{nc-d}{d(n-1)},$$ where equality holds if and only if ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is an equi-isoclinic tight fusion frame (EITFF) for ${\mathbb{F}}^d$, namely if and only if both 1. 
${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is a tight fusion frame, namely there exists some $\alpha>0$ such that $\displaystyle\sum_{j=1}^{n}{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^{*}=\sum_{j=1}^{n}{\mathbf{P}}_j=\alpha{\mathbf{I}}$; 2. ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is equi-isoclinic, namely there exists some $\sigma^2\geq0$ such that ${\boldsymbol{\Phi}}_{j'}^*{\boldsymbol{\Phi}}_j^{}{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}=\sigma^2{\mathbf{I}}$ for all $j\neq j'$, or equivalently, that ${\mathbf{P}}_{j'}{\mathbf{P}}_{j}{\mathbf{P}}_{j'}=\sigma^2{\mathbf{P}}_{j'}$ for all $j\neq j'$. In this case, ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ is necessarily an ECTFF for ${\mathbb{F}}^d$, see Theorem \[theorem.equi-chordal\], and $\alpha$ and $\sigma^2$ are necessarily $\frac{nc}{d}$ and $\tfrac{nc-d}{d(n-1)}$, respectively. In the special case where $c=1$, ECTFFs are equivalent to EITFFs since the induced $2$-norm of a $1\times 1$ matrix equals its Frobenius norm. In fact, in this case, both ECTFFs and EITFFs correspond to ETFs: letting ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ be unit vectors chosen from the $1$-dimensional subspaces ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ we have ${\mathbf{P}}_j={\boldsymbol{\varphi}}_j^{}{\boldsymbol{\varphi}}_j^{*}$ and ${\operatorname{Tr}}({\mathbf{P}}_j{\mathbf{P}}_{j'})={\|{{\boldsymbol{\varphi}}_j^*{\boldsymbol{\varphi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2={|{{\langle{{\boldsymbol{\varphi}}_{j}},{{\boldsymbol{\varphi}}_{j'}}\rangle}}|}^2$. Principal angles ================ We have seen that ECTFFs give optimal packings of subspaces with respect to chordal distance. Less obvious is what distance (if any) EITFFs are optimal packings with respect to. To understand this better, we now discuss *principal angles*. Here, as before, let ${\{{{\boldsymbol{\Phi}}_j}\}}_{j=1}^{n}$ be a sequence of $d\times c$ synthesis operators for orthonormal bases of a given sequence ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$ of $c$-dimensional subspaces of ${\mathbb{F}}^d$. For each $j$ we have ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_j^{}={\mathbf{I}}$ and so ${\|{{\boldsymbol{\Phi}}_j}\|}_2\leq 1$. As such, for any $j\neq j'$, $${\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_2 \leq{\|{{\boldsymbol{\Phi}}_j}\|}_2{\|{{\boldsymbol{\Phi}}_{j'}^{}}\|}_2 \leq1.$$ This means that for any $j\neq j'$, the singular values ${\{{\sigma_{j,j',k}}\}}_{k=1}^{c}$ of ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}$ are at most $1$. As such, for any $j\neq j'$, there exists an increasing (non-decreasing) sequence of angles ${\{{\theta_{j,j',k}}\}}_{k=1}^{c}$ in $[0,\frac{\pi}2]$ such that $\sigma_{j,j',k}=\cos(\theta_{j,j',k})$ for all $k=1,\dotsc,c$. For any $j\neq j'$, these angles are known as the *principal angles* between ${\mathcal{U}}_j$ and ${\mathcal{U}}_{j'}$. These angles are invariant with respect to the particular choice of orthonormal bases for ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$, since changing bases is equivalent to right-multiplying ${\{{{\boldsymbol{\Phi}}_j}\}}_{j=1}^{n}$ by $c\times c$ unitary matrices, which only affects the unitary terms in the singular value decomposition of ${\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}$, not its singular values. One can express the chordal distance between any two subspaces in terms of their principal angles. 
In particular, for any $j\neq j'$, $${\operatorname{dist}}_{{\mathrm{c}}}^2({\mathcal{U}}_j,{\mathcal{U}}_{j'}) =\tfrac12{\|{{\mathbf{P}}_j-{\mathbf{P}}_{j'}}\|}_{{\mathrm{Fro}}}^2 =c-{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_{{\mathrm{Fro}}}^2 =c-\sum_{k=1}^{c}\cos^2(\theta_{j,j',k}) =\sum_{k=1}^{c}\sin^2(\theta_{j,j',k}).$$ This “chordal” notion of the distance between two subspaces has an advantage over other, more classical notions of distance, such as the *geodesic distance* $(\sum_{k=1}^{c}\theta_{j,j',k}^2)^{\frac12}$: ECTFFs give optimal packings with respect to the chordal distance, whereas for any $c>1$, we are not aware of any practically-verifiable conditions that suffice to guarantee a given arrangement of subspaces is optimal with respect to geodesic distance. Whereas ECTFFs are optimal packings with respect to the chordal distance, EITFFs instead achieve equality in the bound: $$\tfrac{nc-d}{d(n-1)} \leq\max_{j\neq j'}{\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_2^2 =\max_{j\neq j'}\max{\{{\cos^2(\theta_{j,j',k})}\}}_{k=1}^{c} =\max_{j\neq j'}\cos^2(\theta_{j,j',1}) =1-\min_{j\neq j'}\sin^2(\theta_{j,j',1}).$$ That is, they ensure that $\min_{j\neq j'}\sin^2(\theta_{j,j',1})$ is as large as possible, where for any $j\neq j'$, $\theta_{j,j',1}$ is the smallest principal angle between ${\mathcal{U}}_j$ and ${\mathcal{U}}_{j'}$; Dhillon, Heath, Strohmer and Tropp refer to $\sin(\theta_{j,j',1})$ as the *spectral distance* between ${\mathcal{U}}_j$ and ${\mathcal{U}}_{j'}$ [@DhillonHST08]. Put more simply, EITFFs maximize the minimum principal angle of ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$, where this minimum is taken over all pairs of subspaces as well as all principal angles between them. When viewed from the perspective of principal angles, EITFFs seem extremely special: we need a collection of subspaces with such a high degree of symmetry that their orthogonal projection operators sum to a scalar multiple of the identity and such that any principal angle between any pair of subspaces is equal to any (other) principal angle between any (other) pair of subspaces. EITFFs are so special, in fact, that one may reasonably doubt that nontrivial examples of them exist. Nevertheless, they do: if ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$ is any ETF for ${\mathbb{F}}^e$ and $c$ is any positive integer, then letting ${\mathbf{I}}$ be the $c\times c$ identity matrix, and letting ${\boldsymbol{\Phi}}_j={\boldsymbol{\varphi}}_j\otimes{\mathbf{I}}$ for all $j=1,\dotsc,n$, we have that ${\{{{\mathcal{U}}_j}\}}_{j=1}^{n}$, ${\mathcal{U}}_j:=\operatorname{range}({\boldsymbol{\Phi}}_j)$, is an EITFF for ${\mathbb{F}}^{d}$ where $d=ce$. 
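Before the algebraic verification that follows, here is a small numerical sketch of this Kronecker-product construction (assuming, purely for illustration, the three-vector regular-simplex ETF in $\mathbb{R}^2$ and $c=2$; it computes principal angles via the SVD of the cross-Gramians):

```python
import numpy as np

# Assumed example: the 3-vector ETF in R^2 (unit vectors 120 degrees apart).
e, n, c = 2, 3, 2
angles = 2 * np.pi * np.arange(n) / n
phi = np.vstack([np.cos(angles), np.sin(angles)])      # e x n ETF
d = c * e

# Kronecker construction: Phi_j = phi_j (x) I_c is a d x c isometry.
Phis = [np.kron(phi[:, [j]], np.eye(c)) for j in range(n)]
P = [Phi_j @ Phi_j.T for Phi_j in Phis]

print("fusion frame operator == (nc/d) I :",
      np.allclose(sum(P), (n * c / d) * np.eye(d)))

# Principal angles: arccos of the singular values of Phi_j^T Phi_k.
for j in range(n):
    for k in range(j + 1, n):
        s = np.linalg.svd(Phis[j].T @ Phis[k], compute_uv=False)
        print(f"pair ({j},{k}): singular values {np.round(s, 6)},",
              "angles (deg):", np.round(np.degrees(np.arccos(s)), 3))
# Every pair gives identical singular values (1/2 here), so all principal
# angles between all pairs of subspaces are equal: the subspaces form an EITFF,
# and sigma^2 = 1/4 = (nc - d) / (d (n - 1)) as expected.
```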
To see this, note that letting ${\boldsymbol{\Phi}}$ denote the $e\times n$ synthesis operator of the ETF ${\{{{\boldsymbol{\varphi}}_j}\}}_{j=1}^{n}$, the fusion frame operator of ${\{{{\boldsymbol{\Phi}}_j}\}}_{j=1}^{n}$ is $$({\boldsymbol{\Phi}}\otimes{\mathbf{I}})({\boldsymbol{\Phi}}\otimes{\mathbf{I}})^* =({\boldsymbol{\Phi}}\otimes{\mathbf{I}})({\boldsymbol{\Phi}}^*\otimes{\mathbf{I}}) ={\boldsymbol{\Phi}}{\boldsymbol{\Phi}}^*\otimes{\mathbf{I}}=\tfrac ne({\mathbf{I}}\otimes{\mathbf{I}}) =\tfrac{nc}{d}{\mathbf{I}},$$ while its fusion Gram matrix is $({\boldsymbol{\Phi}}\otimes{\mathbf{I}})^*({\boldsymbol{\Phi}}\otimes{\mathbf{I}})={\boldsymbol{\Phi}}^*{\boldsymbol{\Phi}}\otimes{\mathbf{I}}$, meaning that for any $j\neq j'$, $${\|{{\boldsymbol{\Phi}}_j^*{\boldsymbol{\Phi}}_{j'}^{}}\|}_2^2 ={\|{({\boldsymbol{\varphi}}_j\otimes{\mathbf{I}})^*({\boldsymbol{\varphi}}_{j'}\otimes{\mathbf{I}})}\|}_2^2 ={\|{{\langle{{\boldsymbol{\varphi}}_j},{{\boldsymbol{\varphi}}_{j'}}\rangle}\otimes{\mathbf{I}}}\|}_2^2 =\tfrac{n-e}{e(n-1)} =\tfrac{nc-ce}{ce(n-1)} =\tfrac{nc-d}{d(n-1)}.$$ This trick allows one to use the growing list [@FickusM15] of known ETF constructions to produce (infinite families of) nontrivial EITFFs, all of which are also ECTFFs. To our knowledge, it is an open question whether every EITFF of $c$-dimensional subspaces of ${\mathbb{F}}^d$ has $n$ and $d$ parameters of the form $d=ce$ where there exists an $n$-vector ETF for ${\mathbb{F}}^e$. More generally, it is an open question whether the dimension $c$ of the subspaces in an EITFF for ${\mathbb{F}}^d$ necessarily divides the ambient dimension $d$. Acknowledgments {#acknowledgments .unnumbered} =============== This work was partially supported by NSF DMS 1321779, AFOSR F4FGA05076J002 and an AFOSR Young Investigator Research Program award. The views expressed in this article are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the U.S. Government. [WW]{} B. G. Bodmann, J. Haas, Achieving the orthoplex bound and constructing weighted complex projective 2-designs with Singer sets, Linear Algebra Appl. 511 (2016) 54–71. P. G. Casazza, G. Kutyniok, Frames of subspaces, Contemp. Math. 345 (2004) 87–114. R. Chapman, Largest number of vectors with pairwise negative dot product, https://mathoverflow.net/q/31440. J. H. Conway, R. H. Hardin, N. J. A. Sloane, Packing lines, planes, etc.: packings in Grassmannian spaces, Experiment. Math. 5 (1996) 139–159. I. S. Dhillon, J. R. Heath, T. Strohmer, J. A. Tropp, Constructing packings in Grassmannian manifolds via alternating projection, Exp. Math. 17 (2008) 9–35. M. Fickus, D. G. Mixon, Tables of the existence of equiangular tight frames, arXiv:1504.00253. G. Kutyniok, A. Pezeshki, R. Calderbank, T. Liu, Robust dimension reduction, fusion frames, and Grassmannian packings, Appl. Comput. Harmon. Anal. 26 (2009) 64–76. P. W. H. Lemmens, J. J. Seidel, Equi-isoclinic subspaces of Euclidean spaces, Indag. Math. 76 (1973) 98–107. P. W. H. Lemmens, J. J. Seidel, Equiangular lines, J. Algebra 24 (1973) 494–512. R. A. Rankin, The closest packing of spherical caps in $n$ dimensions, Glasg. Math. J. 2 (1955) 139–144. T. Strohmer, R. W. Heath, Grassmannian frames with applications to coding and communication, Appl. Comput. Harmon. Anal. 14 (2003) 257–275. L. R. Welch, Lower bounds on the maximum cross correlation of signals, IEEE Trans. Inform. Theory 20 (1974) 397-–399.
--- abstract: 'Using spectroscopic radial velocities with the APOGEE instrument and *Gaia* distance estimates, we demonstrate that Kepler-503b, currently considered a validated [*Kepler*]{} planet, is in fact a brown-dwarf/low-mass star in a nearly circular 7.2-day orbit around a subgiant star. Using a mass estimate for the primary star derived from stellar models, we derive a companion mass and radius of $0.075\pm0.003{\ensuremath{\, \mathrm{M_\odot}}}$ ($78.6\pm3.1$ [$\, \mathrm{M_{Jup}}$]{}) and $0.099^{+0.006}_{-0.004}{\ensuremath{\, \mathrm{R_\odot}}}$ ($0.96^{+0.06}_{-0.04}$ [$\, \mathrm{R_{Jup}}$]{}), respectively. Assuming the system is coeval, the evolutionary state of the primary indicates the age is $\sim6.7$ Gyr. Kepler-503b sits right at the hydrogen burning mass limit, straddling the boundary between brown dwarfs and very low-mass stars. More precise radial velocities and secondary eclipse spectroscopy with James Webb Space Telescope will provide improved measurements of the physical parameters and age of this important system to better constrain and understand the physics of these objects and their spectra. This system emphasizes the value of radial velocity observations to distinguish a genuine planet from astrophysical false positives, and is the first result from the SDSS-IV monitoring of Kepler planet candidates with the multi-object APOGEE instrument.' author: - 'Caleb I. Cañas' - 'Chad F. Bender' - Suvrath Mahadevan - 'Scott W. Fleming' - 'Thomas G. Beatty' - 'Kevin R. Covey' - Nathan De Lee - 'Fred R. Hearty' - 'D. A. García-Hernández' - 'Steven R. Majewski' - 'Donald P. Schneider' - 'Keivan G. Stassun' - 'Robert F. Wilson' title: 'Kepler-503b: An Object at the Hydrogen Burning Mass Limit Orbiting a Subgiant Star' --- Introduction ============ The NASA [*Kepler*]{} space mission, in its search for transiting Earth analogues, provided nearly continuous observations of $\sim 200,000$ stars with a photometric precision of a few parts per million [@Borucki2010; @Koch2010]. The final [*Kepler*]{} data release (DR25) lists more than $8,000$ objects of interest (KOIs), or targets showing a transit which may be caused by an exoplanet [@Thompson2017]. Vetting these KOIs has revealed over $2,000$ eclipsing binaries with precise photometric data [@Kirk2016]. While high-resolution imaging [@Furlan2017] and statistical methods [@Morton2016] are useful for constraining the nature of a KOI, dynamical observations can provide an unambiguous classification for a given system. Eclipsing binaries are important astrophysical systems because simultaneous modeling of spectroscopic and photometric observations yields precise dynamical masses and stellar radii [@Torres2010]. Precisely measured stellar parameters are valuable for calibrating and refining stellar evolution models [e.g., @Fernandez2009; @Torres2014] and even play a role in the cosmic distance scale. Furthermore, determining precise properties of exoplanets requires an understanding of their host stars’ parameters, particularly the masses and radii. The detection of false positive KOIs has greater implications for the planet-hosting stellar population. The presence of eclipsing binaries quantifies the false positive rate and can reveal any dependencies on parameters, such as the location within the [*Kepler*]{} field and properties of the host star or planetary candidate. 
In this paper, we provide tight constraints on the age, radii, masses, and other properties of the low mass-ratio ($M_{2}/M_{1}=q\sim 0.07$) eclipsing binary system Kepler-503 (KIC 3642741, 2MASS J19223275+3842276, Kp = 14.75, H = 13.14). The paper is structured as follows: Section \[sec:observations\] presents the observational data, Section \[sec:datared\] discusses our data processing, and Section \[sec:models\] describes our analysis. A discussion of our results is presented in Section \[sec:discussion\]. Observations {#sec:observations} ============ Kepler-503 was observed from Apache Point Observatory (APO) between 30 April 2015 and 7 April 2017 as part of the APO Galaxy Evolution Experiment (APOGEE) KOI program [@Fleming2015; @Majewski2017; @Zasowski2017] within SDSS-IV [@Blanton2017]. We obtained nineteen spectra of Kepler-503, using the high-resolution ($R\sim22500$), near-infrared ($1.514-1.696$ [$\, \mathrm{\mu m}$]{}), multi-object APOGEE spectrograph [@Wilson2010; @Wilson2012], mounted on the Sloan 2.5-meter telescope [@Gunn2006]. The observations are summarized in Table \[tab:table1\], which lists our derived radial velocities, their uncertainties (corresponding to a $1-\sigma$ error), and the signal-to-noise ratio (SNR) per pixel for each epoch. One observation with a SNR below $10$ was not used for analysis. [cccc]{} 2456932.708196 & -52.86 & 0.21 & 22\ 2457142.860499 & -52.11 & 0.2 & 24\ 2457199.743627 & -46.26 & 0.35 & 12\ 2457264.658553 & -43.3 & 0.24 & 17\ 2457265.795423 & -50.02 & 0.17 & 29\ 2457266.783228 & -52.86 & 0.18 & 25\ 2457294.705133 & -49.98 & 0.17 & 30\ 2457319.648315 & -45.49 & 0.25 & 19\ 2457472.004760 & -45.25 & 0.26 & 16\ 2457498.977450 & -52.66 & 0.39 & 12\ 2457527.949126 & -53.05 & 0.15 & 33\ 2457554.814754 & -42.62 & 0.26 & 16\ 2457562.871095 & -47.25 & 0.26 & 16\ 2457643.740360 & -52.13 & 0.35 & 12\ 2457672.675784 & -52.15 & 0.17 & 27\ 2457701.553540 & -51.51 & 0.49 & 12\ 2457831.992227 & -50.48 & 0.23 & 17\ 2457850.990495 & \* & \* & 6\ Kepler-503 was observed for the entirety of the [*Kepler*]{} mission, is listed as a planetary candidate in DR25 and was statistically validated as an exoplanet by [@Morton2016]. The final data release lists a shallow, $0.37\%$ transit signal with an orbital period of $7.258450123$ days and an estimated physical radius of $5.53^{+1.59}_{-0.53}$ [$\, \mathrm{R_{\oplus}}$]{}. The stellar parameters in the DR25 stellar properties catalog [@Mathur2017] were derived with photometric priors and inferred from stellar models. The DR25 parameters for Kepler-503 suggest a solar-like host star with an effective temperature of $5638^{+154}_{-171}$ K, a mass of $1.006^{+0.090}_{-0.120}$ [$\, \mathrm{M_{\odot}}$]{}, and a radius of $0.920^{+0.264}_{-0.088}$ [$\, \mathrm{R_{\odot}}$]{}. Data Processing {#sec:datared} =============== Radial Velocities {#sec:reduc} ----------------- The APOGEE data reduction pipeline [@Nidever2015] performs sky subtraction, telluric and barycentric correction, and wavelength and flux calibration for each observation of a target. We focused our analysis on these individual spectra for dynamical characterization. While the APOGEE pipeline provides radial velocity measurements, we performed additional post-processing on the spectrum to remove residual telluric lines prior to analysis. Radial velocities of Kepler-503 were derived using the cross-correlation method with uncertainties calculated via the maximum-likelihood approach presented by [@Zucker2003]. 
With this method, we account for uncertainty contributions from the spectral bandwidth, sharpness of the correlation peak, and the spectral line SNR but cannot exclude systematic uncertainties due to the instrument or poor template selection. Cross-correlation searches for the Doppler shift of an observed spectrum that maximizes the correlation with a template that adequately represents the observed spectrum without a Doppler shift. The best-fitting spectrum should have the highest correlation and it is common practice for this to be used as the template [e.g., @Latham2002]. We identified the best-fitting synthetic spectrum in the H-band from a grid of BT-Settl synthetic spectra [@Allard2012] by cross-correlating the APOGEE epoch with the highest SNR against a grid spanning surface effective temperature ($5100\le T_{e}\le6100$, in intervals of 100 K), surface gravity ($3.5\le\log g\le4.5$, in intervals of 0.5 dex), metallicity ($-0.5\le\text{[M/H]}\le0.5$, in intervals of 0.5 dex), and rotational broadening ($2 \le v\sin i\le 50$, in intervals of 2 [$\, \mathrm{km s^{-1}}$]{}). The synthetic spectrum with the largest correlation was used for the final cross-correlation to derive the reported radial velocities in Table \[tab:table1\]. The properties of the best model are listed in Table \[tab:table2\] and were not used as priors for fitting the system. Photometry ---------- We used the [*Kepler*]{} pre-search data conditioned time-series light curves [@Smith2012; @Stumpe2012; @Stumpe2014] available at the Mikulski Archive for Space Telescopes (MAST). We assumed the transit signal was superimposed on the stellar variability and could be modeled using a Gaussian process. We used the [celerite]{} package and assumed a quasi-periodic covariance function for the Gaussian process, following the procedure in [@Foreman-Mackey2017]. No additional processing was performed on the light curve. Results {#sec:models} ======= We jointly modeled Kepler-503’s radial velocities and light curve using the [EXOFASTv2]{} analysis package [@Eastman2017]. The light curve and Keplerian radial velocity models follow the parametrization described by [@Eastman2013]. We adopted a quadratic limb darkening law for the transit. The priors for the modeling included (i) 2MASS $JHK$ magnitudes [@Skrutskie2006], (ii) SDSS $ugriz$ magnitudes [@Alam2015], (iii) $UBV$ magnitudes [@Everett2012], (iv) WISE magnitudes [@Wright2010], (v) surface gravity, temperature and metallicity from the APOGEE Stellar Parameter and Chemical Abundances Pipeline [ASPCAP, @GarciaPerez2016], (vi) the maximum visual extinction from estimates of Galactic dust extinction by [@Schlafly2011], (vii) the photometric measurements from the second data release of the *Gaia* survey [@GaiaCollaboration2018], and (viii) the distance estimate from [@Bailer-Jones2018]. We validated the performance of our [EXOFASTv2]{} implementation by analyzing the corresponding data products for KOI-189, and obtained very similar parameters to those published by [@Diaz2014] using the [PASTIS]{} planet-validation software. We repeated the analysis for Kepler-503 using the parallax from *Gaia* to determine if there were any significant effects from the nonlinearity of the parallax transformation or any asymmetry in the parallax posterior distribution. The results are consistent to within their $1-\sigma$ uncertainties and herein we present values from the analysis using the distance prior. 
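For readers unfamiliar with the Keplerian component of such a joint fit, the following minimal sketch (not the [EXOFASTv2]{} code, and not part of the original paper) evaluates a single-Keplerian radial-velocity curve of the standard form $v(t)=\gamma+K[\cos(\nu+\omega)+e\cos\omega]$, using representative fitted values from Table \[tab:table2\] below; the time sampling and the fixed number of Newton iterations are arbitrary illustrative choices.

```python
# Minimal sketch (not EXOFASTv2 itself) of the Keplerian radial-velocity model
# component of such a joint fit. Parameter values below are representative of
# the fit summarized in Table 2.
import numpy as np

def keplerian_rv(t, P, Tp, e, omega_deg, K, gamma):
    """Radial velocity [km/s] of the primary at times t [days, BJD]."""
    omega = np.radians(omega_deg)
    M = 2.0 * np.pi * (t - Tp) / P                 # mean anomaly
    E = M.copy()
    for _ in range(50):                            # Newton iteration for Kepler's equation
        E -= (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
    nu = 2.0 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                          np.sqrt(1 - e) * np.cos(E / 2))
    return gamma + K * (np.cos(nu + omega) + e * np.cos(omega))

t = 2454970.27 + np.linspace(0.0, 7.2584481, 200)
rv = keplerian_rv(t, P=7.2584481, Tp=2454970.27, e=0.025,
                  omega_deg=33.0, K=7.17, gamma=-45.93)
print(rv.min(), rv.max())   # roughly gamma -/+ K, i.e. about -53 to -39 km/s
```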
Figure \[fig:f1\] presents the result of the fit and Table \[tab:table2\] provides a summary of the stellar priors and the inferred systemic parameters along with their confidence intervals. The minimum companion mass is $\sim 0.075$ [$\, \mathrm{M_{\odot}}$]{}, a value incompatible with the statistical validation and estimated parameters from DR25. [llc]{}    Effective Temperature & $T_{e}$ (K)& $6000$\    Surface Gravity & $\log(g_1)$ (cgs)& $4.0$\    Metallicity & \[M/H\]& $0.0$\    Rotational Velocity & $v\sin i$ (km [$\, \mathrm{s^{-1}}$]{})& $2.0$\    Effective Temperature$^\ddagger$ & $T_{e}$ (K)& $5690 \pm 150$\    Surface Gravity$^\ddagger$ & $\log(g_1)$ (cgs)& $4.0$\    Metallicity$^\ddagger$ & \[Fe/H\]& $0.17\pm 0.05$\    Maximum Visual Extinction & $A_{V,max}$ & $0.43$\    Distance& (pc)& $1628 \pm 55$\    Mass & $M_{1}$ ([$\, \mathrm{M_{\odot}}$]{})& $1.154^{+0.047}_{-0.042}$\    Radius& $R_{1}$ ([$\, \mathrm{R_{\odot}}$]{})& $1.764^{+0.080}_{-0.068}$\    Density & $\rho_1$ (g [$\, \mathrm{cm^{-3}}$]{})& $0.297^{+0.038}_{-0.037}$\    Surface Gravity & $\log(g_1)$ (cgs)& $4.008 \pm 0.038$\    Effective Temperature& $T_{e}$ (K)& $5670^{+100}_{-110}$\    Metallicity& \[Fe/H\]& $0.169^{+0.046}_{-0.045}$\    Age& (Gyr)& $6.7^{+1.0}_{-0.9}$\    Parallax& (mas)& $0.617^{+0.020}_{-0.019}$\    Mass& $M_{2}$ ([$\, \mathrm{M_{\odot}}$]{})& $0.075\pm0.003$\    Radius& $R_{2}$ ([$\, \mathrm{R_{\odot}}$]{})& $0.099_{-0.004}^{+0.006}$\    Density& $\rho_{2}$ (g [$\, \mathrm{cm^{-3}}$]{})& $108\pm17$\    Surface Gravity& $\log(g_{2})$ (cgs)& $5.320^{+0.045}_{-0.050}$\    Equilibrium Temperature& $T_{eq}$ (K)& $1296^{+22}_{-23}$\    Orbital Period& $P$ (days) & $7.2584481 \pm 0.0000023$\    Time of Periastron& $T_{P}$ (BJD$_{\text{TDB}}$)& $2454970.27^{+0.63}_{-1.1}$\    Semi-major Axis& $a$ (AU) & $0.0786\pm0.0010$\    Orbital Eccentricity& $e$ & $0.025^{+0.014}_{-0.012}$\    Argument of Periastron& $\omega$ (degrees)& $33^{+32}_{-54}$\    Semi-amplitude Velocity& $K$ (km [$\, \mathrm{s^{-1}}$]{})& $7.17\pm0.18$\    Mass Ratio& $q$ & $0.0648^{+0.0019}_{-0.0020}$\    Systemic Velocity& $\gamma$ (km [$\, \mathrm{s^{-1}}$]{})& $-45.93\pm0.10$\    Radial Velocity Jitter& $\sigma_{RV}$ (m [$\, \mathrm{s^{-1}}$]{})& $168^{+96}_{-92}$\    Time of Mid-transit& $T_C$ (BJD~TDB~)& $2454971.34494 \pm 0.00026$\    Radius Ratio& $R_{2}/R_{1}$ & $0.05619^{+0.00069}_{-0.00060}$\    Scaled Semi-major Axis& $a/R_{1}$ & $9.59^{+0.39}_{-0.42}$\    e$\cos\omega$& & $0.017^{+0.009}_{-0.010}$\    e$\sin\omega$& & $0.010^{+0.020}_{-0.015}$\    Linear Limb-darkening Coefficient& $u_1$& $0.419 \pm 0.023$\    Quadratic Limb-darkening Coefficient& $u_2$& $0.238^{+0.044}_{-0.045}$\    Orbital Inclination& $i$ (degrees)& $88.04^{+1.0}_{-0.76}$\    Impact Parameter& $b$& $0.32^{+0.11}_{-0.17}$\ The stellar parameters from [EXOFASTv2]{} are derived using stellar models; the values suggest Kepler-503 is an evolved star beyond the terminal age main sequence. [@Seager2003] proposed a diagnostic for a transiting system using transit parameters to obtain an estimate of the primary stellar density ($\rho_{1}$). With both photometry and velocimetry, one can determine that the observational data are consistent with the selected stellar models [e.g., @vonBoetticher2017]. 
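As a rough consistency check of this kind (not carried out in this exact form in the paper), the transit-based density can be evaluated directly: for a circular orbit the diagnostic reduces to $\rho_{1}\simeq\frac{3\pi}{GP^{2}}\left(a/R_{1}\right)^{3}$, and plugging in the fitted period and scaled semi-major axis from Table \[tab:table2\] lands close to the model-derived density.

```python
# Rough check (not from the paper) of the Seager & Mallen-Ornelas (2003) density
# diagnostic for a circular orbit: rho_1 ~ (3*pi / (G*P^2)) * (a/R_1)^3.
import numpy as np

G = 6.674e-8                  # cgs
P = 7.2584481 * 86400.0       # orbital period [s]
a_over_R1 = 9.59              # scaled semi-major axis from the joint fit

rho_1 = 3.0 * np.pi / (G * P**2) * a_over_R1**3
print(rho_1)                  # ~0.32 g/cm^3, consistent with the fitted 0.297 +/- 0.038
```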
To determine if the deviation from the DR25 stellar parameters was justified, we used the MESA Isochrones & Stellar Tracks [MIST, @Dotter2016; @Choi2016] to model the primary star and iteratively refine its parameters until the density derived from the photometry and velocimetry was within the uncertainty of the density from the models. The result of the stellar modeling is shown in Figure \[fig:f2\] and demonstrates that the stellar parameters are comparable to those derived with [EXOFASTv2]{}. The data demonstrate that the primary star, Kepler-503, is not a solar analogue, but instead a slightly evolved subgiant. When compared to stellar evolution models, the posterior distributions for the primary star surface effective temperature and luminosity are located in the subgiant branch, which is in agreement with a slightly evolved system. Accordingly, these stellar parameters suggest that the purported exoplanet is actually an object near the hydrogen burning limit of $\sim0.075$ [$\, \mathrm{M_{\odot}}$]{} [e.g., @Chabrier2005]. Discussion {#sec:discussion} ========== Constraint on the Companion Age ------------------------------- Kepler-503, and particularly the secondary component, is of great astrophysical interest because the age of a star is a fundamental parameter that is often poorly constrained. This situation forces age estimates to rely on proxies, such as magnetic activity, element depletion, rotation [gyrochronology, e.g., @Soderblom2010], or asteroseismology [e.g., @Pinsonneault2018], which are difficult to measure for objects like Kepler-503b. This measurement is further complicated in low-mass stars because their spindown timescales can exceed the age of the Galaxy [@West2008], and their fully convective nature can deplete lithium after only a few hundred million years [@Stauffer1998]. Objects near the bottom of the stellar mass function are of interest because they define the transition from bona fide stars to planets. The simplest way to determine the age of such an object is to associate it with another star or group for which the age is better constrained. Here, we assume Kepler-503 is a coeval system. The subgiant nature of the primary star places a strong constraint on the age of the companion because the placement of the subgiant branch on the Hertzsprung-Russell diagram is allowed for only a limited age range, according to stellar evolution models [e.g., @Soderblom2010]. The modeling suggests the age of the Kepler-503 system is $6.7^{+1.0}_{-0.9}$ Gyr, making it considerably older than the Solar System. Constraints on the Companion Temperature ---------------------------------------- The lack of a secondary eclipse in the photometry provides additional information about Kepler-503b. The absence of an eclipse, despite the low eccentricity and near-edge-on inclination of the orbit, places a constraint on the companion’s effective surface temperature. The estimated eclipse depth is a function of both the radius ratio, $R_{2}/R_{1}$, and the surface brightness ratio, $B_{2}/B_{1}$. Using the parameters derived in this paper, the equilibrium temperature of Kepler-503b is $\sim1296$ K. For such an object, the estimated eclipse depths for *Spitzer*’s 3.6 and 4.5 [$\, \mathrm{\mu m}$]{} band-passes are $\sim150$ and $\sim220$ ppm, respectively. Even if we were to assume a higher surface effective temperature that is appropriate for a very low-mass star ($\sim 2300$ K), the estimated eclipse depths for Kepler-503 are at the sensitivity limits of these *Spitzer* channels.
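The quoted numbers follow from simple blackbody estimates. The sketch below is not from the paper; it assumes zero albedo, efficient heat redistribution, and blackbody spectra for both components, and it uses the fitted values from Table \[tab:table2\].

```python
# Back-of-envelope sketch (not from the paper): equilibrium temperature and
# blackbody secondary-eclipse depths from the fitted parameters in Table 2.
import numpy as np

T1, R1 = 5670.0, 1.764          # primary Teff [K], radius [R_sun]
a = 0.0786 * 215.032            # semi-major axis in R_sun (1 AU = 215.032 R_sun)
k = 0.05619                     # radius ratio R2/R1

T_eq = T1 * np.sqrt(R1 / (2.0 * a))
print(T_eq)                     # ~1296 K

def planck(lam_um, T):
    """Planck spectral radiance (arbitrary units) at wavelength lam_um [microns]."""
    x = 14387.77 / (lam_um * T)          # hc/(lambda k_B T), lambda in microns
    return 1.0 / (lam_um**5 * np.expm1(x))

for lam in (3.6, 4.5):
    depth = k**2 * planck(lam, T_eq) / planck(lam, T1)
    print(lam, depth * 1e6)     # roughly 150 and 220 ppm, matching the text
```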
For [*Kepler*]{}, the estimated eclipse depth at 600 nm is $< 1$ ppm and is dwarfed by the variance in the light curve. Future photometric and spectroscopic observations with the James Webb Space Telescope are required to further constrain the flux ratio of the system to better characterize Kepler-503b. Constraints on Evolutionary Models ---------------------------------- The study of objects similar to Kepler-503b is critical for the empirical calibration of the mass-radius relation near the hydrogen burning mass limit ($\sim 0.075$ [$\, \mathrm{M_{\odot}}$]{}). Figure \[fig:f3\] shows the posterior distribution of Kepler-503b on a mass-radius diagram for brown dwarfs and low-mass stars. Its mass and radius are comparable to those of the brown dwarf KOI-189b [@Diaz2014]. Kepler-503b is one of the few objects in the regime where evolutionary tracks converge, and can thus help refine said models. The properties of Kepler-503b appear consistent with evolutionary tracks for solar metallicity low-mass stars and brown dwarfs. From the L- and T-dwarf models by [@Saumon2008], the 5 and 10 Gyr evolutionary tracks are the best matches for this object. One caveat is that, while the derived metallicity for the host star suggests this is a slightly metal-rich system, the cloud-based models exist only for solar metallicities. Given the recent interest in very low-mass stars as targets for exoplanet searches [e.g., @Gillon2017], it is essential that the super-solar metallicity regime be properly characterized. Future searches for planets around ultra-cool dwarfs will require precise masses and radii of the host star to properly characterize any detected exoplanets. Summary {#sec:summ} ======= This paper reveals a low mass-ratio eclipsing binary system in a nearly circular orbit that was erroneously classified as a transiting exoplanet. Analysis of the photometric and spectroscopic data shows Kepler-503 is an old system with a companion at the hydrogen burning mass limit. This misclassification is largely due to the stellar parameters previously adopted (i.e., a solar-like host). The stellar classification from DR25 used a prior which is known to underestimate the number of sub-giants due to Malmquist bias [e.g., @Bastien2014]. [@Mathur2017] acknowledge that some systematic biases persist in the DR25 stellar properties catalog, resulting in misclassified systems such as Kepler-503. This study is one example of the systems observed in the ongoing APOGEE KOI program, with the ultimate goal of (i) refining the false positive rate of [*Kepler*]{} exoplanet candidates, (ii) revealing any dependencies with stellar or candidate parameters, and (iii) understanding binarity and its effect on the planet host population. The recent second data release from the *Gaia* survey will be helpful for future validation of KOIs by providing a parallax that helps to constrain the properties of the host star. CIC, CFB, and SM acknowledge support from NSF award AST 1517592. ND, SRM, and RFW would like to acknowledge support from NSF Grant Nos. 1616684 and 1616636. DAGH acknowledges support provided by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant AYA-2017-88254-P. Some of the data presented in this paper were obtained from MAST. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts.
2MASS is a joint project of the University of Massachusetts and IPAC at Caltech, funded by NASA and the NSF. Funding for the [*Kepler*]{} mission is provided by the NASA Science Mission directorate. The NASA Exoplanet Archive is operated by Caltech, under contract with NASA under the Exoplanet Exploration Program. Funding for SDSS-IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is [www.sdss.org](www.sdss.org). SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofísica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatário Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. , S., [Albareti]{}, F. D., [Allende Prieto]{}, C., [et al.]{} 2015, [, 219, 12](http://dx.doi.org/10.1088/0067-0049/219/1/12) , F., [Homeier]{}, D., & [Freytag]{}, B. 2012, [, 370, 2765](http://dx.doi.org/10.1098/rsta.2011.0269) , C. A. L., [Rybizki]{}, J., [Fouesneau]{}, M., [Mantelet]{}, G., & [Andrae]{}, R. 2018, , [[arXiv:1804.10121 \[astro-ph.SR\]]{}](http://arxiv.org/abs/1804.10121) , I., [Chabrier]{}, G., [Allard]{}, F., & [Hauschildt]{}, P. H. 1998, , 337, 403 , I., [Homeier]{}, D., [Allard]{}, F., & [Chabrier]{}, G. 2015, [, 577, A42](http://dx.doi.org/10.1051/0004-6361/201425481) , F. A., [Stassun]{}, K. G., & [Pepper]{}, J. 2014, [, 788, L9](http://dx.doi.org/10.1088/2041-8205/788/1/L9) , M. R., [Bershady]{}, M. A., [Abolfathi]{}, B., [et al.]{} 2017, [, 154, 28](http://dx.doi.org/10.3847/1538-3881/aa7567) , W. J., [Koch]{}, D., [Basri]{}, G., [et al.]{} 2010, [, 327, 977](http://dx.doi.org/10.1126/science.1185402) , G., [Baraffe]{}, I., [Allard]{}, F., & [Hauschildt]{}, P. H. 2005, , [[astro-ph/0509798]{}](http://arxiv.org/abs/astro-ph/0509798) , J., [Dotter]{}, A., [Conroy]{}, C., [et al.]{} 2016, [, 823, 102](http://dx.doi.org/10.3847/0004-637X/823/2/102) , R. F., [Montagnier]{}, G., [Leconte]{}, J., [et al.]{} 2014, [, 572, A109](http://dx.doi.org/10.1051/0004-6361/201424406) , A. 2016, [, 222, 8](http://dx.doi.org/10.3847/0067-0049/222/1/8) , J. 2017, [EXOFASTv2: Generalized publication-quality exoplanet modeling code]{}, Astrophysics Source Code Library, [[ascl:1710.003]{}](http://arxiv.org/abs/1710.003) , J., [Gaudi]{}, B. S., & [Agol]{}, E. 
2013, [, 125, 83](http://dx.doi.org/10.1086/669497) , M. E., [Howell]{}, S. B., & [Kinemuchi]{}, K. 2012, [, 124, 316](http://dx.doi.org/10.1086/665529) , J. M., [Latham]{}, D. W., [Torres]{}, G., [et al.]{} 2009, [, 701, 764](http://dx.doi.org/10.1088/0004-637X/701/1/764) , S. W., [Mahadevan]{}, S., [Deshpande]{}, R., [et al.]{} 2015, [, 149, 143](http://dx.doi.org/10.1088/0004-6256/149/4/143) , D., [Agol]{}, E., [Ambikasaran]{}, S., & [Angus]{}, R. 2017, [, 154, 220](http://dx.doi.org/10.3847/1538-3881/aa9332) , E., [Ciardi]{}, D. R., [Everett]{}, M. E., [et al.]{} 2017, [, 153, 71](http://dx.doi.org/10.3847/1538-3881/153/2/71) , [Brown]{}, A. G. A., [Vallenari]{}, A., [et al.]{} 2018, , [[arXiv:1804.09365]{}](http://arxiv.org/abs/1804.09365) , A. E., [Allende Prieto]{}, C., [Holtzman]{}, J. A., [et al.]{} 2016, [, 151, 144](http://dx.doi.org/10.3847/0004-6256/151/6/144) , M., [Triaud]{}, A. H. M. J., [Demory]{}, B.-O., [et al.]{} 2017, [, 542, 456](http://dx.doi.org/10.1038/nature21360) , J. E., [Siegmund]{}, W. A., [Mannery]{}, E. J., [et al.]{} 2006, [, 131, 2332](http://dx.doi.org/10.1086/500975) , B., [Conroy]{}, K., [Pr[š]{}a]{}, A., [et al.]{} 2016, [, 151, 68](http://dx.doi.org/10.3847/0004-6256/151/3/68) , D. G., [Borucki]{}, W. J., [Basri]{}, G., [et al.]{} 2010, [, 713, L79](http://dx.doi.org/10.1088/2041-8205/713/2/L79) , D. W., [Stefanik]{}, R. P., [Torres]{}, G., [et al.]{} 2002, [, 124, 1144](http://dx.doi.org/10.1086/341384) , S. R., [Schiavon]{}, R. P., [Frinchaboy]{}, P. M., [et al.]{} 2017, [, 154, 94](http://dx.doi.org/10.3847/1538-3881/aa784d) , S., [Huber]{}, D., [Batalha]{}, N. M., [et al.]{} 2017, [, 229, 30](http://dx.doi.org/10.3847/1538-4365/229/2/30) , T. D., [Bryson]{}, S. T., [Coughlin]{}, J. L., [et al.]{} 2016, [, 822, 86](http://dx.doi.org/10.3847/0004-637X/822/2/86) , D. L., [Holtzman]{}, J. A., [Allende Prieto]{}, C., [et al.]{} 2015, [, 150, 173](http://dx.doi.org/10.1088/0004-6256/150/6/173) , M. H., [Elsworth]{}, Y. P., [Tayar]{}, J., [et al.]{} 2018, , [[arXiv:1804.09983 \[astro-ph.SR\]]{}](http://arxiv.org/abs/1804.09983) , D., & [Marley]{}, M. S. 2008, [, 689, 1327](http://dx.doi.org/10.1086/592734) , E. F., & [Finkbeiner]{}, D. P. 2011, [, 737, 103](http://dx.doi.org/10.1088/0004-637X/737/2/103) , S., & [Mall[é]{}n-Ornelas]{}, G. 2003, [, 585, 1038](http://dx.doi.org/10.1086/346105) , M. F., [Cutri]{}, R. M., [Stiening]{}, R., [et al.]{} 2006, [, 131, 1163](http://dx.doi.org/10.1086/498708) , J. C., [Stumpe]{}, M. C., [Van Cleve]{}, J. E., [et al.]{} 2012, [, 124, 1000](http://dx.doi.org/10.1086/667697) , D. R. 2010, [, 48, 581](http://dx.doi.org/10.1146/annurev-astro-081309-130806) , J. R., [Schultz]{}, G., & [Kirkpatrick]{}, J. D. 1998, [, 499, L199](http://dx.doi.org/10.1086/311379) , M. C., [Smith]{}, J. C., [Catanzarite]{}, J. H., [et al.]{} 2014, [, 126, 100](http://dx.doi.org/10.1086/674989) , M. C., [Smith]{}, J. C., [Van Cleve]{}, J. E., [et al.]{} 2012, [, 124, 985](http://dx.doi.org/10.1086/667698) Thompson, S. E., Coughlin, J. L., Hoffman, K., [et al.]{} 2017, Planetary Candidates Observed by Kepler. VIII. A Fully Automated Catalog With Measured Completeness and Reliability Based on Data Release 25, [[arXiv:1710.06758]{}](http://arxiv.org/abs/arXiv:1710.06758) , G., [Andersen]{}, J., & [Gim[é]{}nez]{}, A. 2010, [, 18, 67](http://dx.doi.org/10.1007/s00159-009-0025-1) , G., [Sandberg Lacy]{}, C. H., [Pavlovski]{}, K., [et al.]{} 2014, [, 797, 31](http://dx.doi.org/10.1088/0004-637X/797/1/31) , A., [Triaud]{}, A. H. M. 
J., [Queloz]{}, D., [et al.]{} 2017, [, 604, L6](http://dx.doi.org/10.1051/0004-6361/201731107) , A. A., [Hawley]{}, S. L., [Bochanski]{}, J. J., [et al.]{} 2008, [, 135, 785](http://dx.doi.org/10.1088/0004-6256/135/3/785) , J. C., [Hearty]{}, F., [Skrutskie]{}, M. F., [et al.]{} 2010, [in , Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III](http://dx.doi.org/10.1117/12.856708), 77351C , J. C., [Hearty]{}, F., [Skrutskie]{}, M. F., [et al.]{} 2012, [in , Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV](http://dx.doi.org/10.1117/12.927140), 84460H , E. L., [Eisenhardt]{}, P. R. M., [Mainzer]{}, A. K., [et al.]{} 2010, [, 140, 1868](http://dx.doi.org/10.1088/0004-6256/140/6/1868) , G., [Cohen]{}, R. E., [Chojnowski]{}, S. D., [et al.]{} 2017, [, 154, 198](http://dx.doi.org/10.3847/1538-3881/aa8df9) , S. 2003, [, 342, 1291](http://dx.doi.org/10.1046/j.1365-8711.2003.06633.x)
--- abstract: 'In this short note we summarize the main results of our paper \[hep-ph/0510055\] and reply to a recent comment \[hep-ph/0511174\] on that paper.' author: - 'H. Walliser and H. Weigel' title: | Reply to Cohen’s comment on the rotation–vibration coupling\ in chiral soliton models --- In a recent comment [@cohen] Cohen criticized our conclusion in ref. [@ww] that the rigid rotator approach (RRA) to generate baryon states with non–zero strangeness in chiral soliton models is suitable to estimate excitation energies and decay properties of exotic baryons such as the $\Theta^+$ pentaquark. Starting point for this criticism is the so–called bound state approach (BSA) to chiral soliton models. The BSA describes baryons with non–zero strangeness as compound objects of the soliton and kaon modes that are treated as harmonic vibrations about the soliton. It is well established that the BSA becomes exact in the limit that the number of colors, $N_C$ approaches infinity. Cohen’s criticism is based on the (correct) observation that the excitation energy of the mode needed to build the $\Theta^+$ pentaquark does not vanish even in the combined limit of large $N_C$ and $m_K\to m_\pi$. Hence rotational and vibrational modes do not decouple for pentaquark baryons. Cohen then argues that this prevents the introduction of collective coordinates to describe these modes as rigid rotations and that the RRA would be inadequate to compute physical properties of exotic baryons in large $N_C$ (see also refs. \[2-7\] in ref. [@cohen])[^1]. Conversely, the correct conclusion from this observation is that these non–vanishing rotation–vibration couplings must be taken into account. This is exactly what we did in ref. [@ww]. We found that the correction to the RRA estimate of the $\Theta^+$ excitation energy due to the vibrational modes is indeed small. For this and other reasons we concluded that the $\Theta^+$ may well be considered as a collective excitation of the soliton. Here we will back up this conclusion by briefly recapitulating the central results of ref. [@ww]. In the RRA the $SU(3)$ Euler angles $\vec{\alpha}$ that parameterize the orientation of the soliton in flavor space are introduced as collective coordinates and quantized canonically. In the flavor symmetric case the RRA then predicts the excitation energy and wave–function of the exotic $\Theta^+$ to be $$\omega_\Theta = E_\Theta - E_N = \frac{N_C+3}{4\Theta_K}\,, \qquad |\Theta^+\rangle \sim D^{(0,\frac{N_C+3}{2})}_{(2,0,0),(1,\frac{1}{2},-J_3)}(\vec{\alpha})\,.$$ \[eqR1\] Form and numerical value of the kaonic moment of inertia, $\Theta_K$, depend on the considered model. The baryon wave–functions are Wigner $D$-functions of the Euler angles, characterized by the left and right quantum numbers $(Y,T,T_3)$ and $(Y_R,J,-J_3)$, respectively, and the SU(3) representation $\mult$ with $(p,q)=(0,\frac{N_C+3}{2})$ for arbitrary $N_C$. Flavor symmetry breaking can straightforwardly be included and the exact eigenstates of the Hamiltonian (4.15)[^2] for the collective coordinates are obtained as linear combinations of states from different SU(3) representations. To study the rotation–vibration coupling small amplitude fluctuations must be introduced in addition to the collective rotations. In ref. [@ww] we have utilized Dirac’s quantization procedure under constraints to quantize these additional fluctuations in the subspace that is orthogonal to the rigid rotations parameterized by the collective coordinates. This then defines the rotation–vibration approach (RVA).
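For orientation, the large-$N_C$ behaviour that underlies this dispute can be made explicit numerically. The sketch below is not part of the original reply; it assumes the flavor-symmetric expression reconstructed in eq. (\[eqR1\]) together with the standard large-$N_C$ counting $\Theta_K\propto N_C$, and the proportionality constant $\theta_0$ is a purely hypothetical value chosen for illustration.

```python
# Illustrative sketch (assumptions: the reconstructed Eq. (R1),
# omega_Theta = (N_C + 3)/(4 Theta_K), and a kaonic moment of inertia growing
# linearly with N_C, Theta_K = N_C * theta_0). The excitation energy tends to a
# finite O(N_C^0) limit instead of vanishing, which is the behaviour at the
# root of Cohen's objection discussed in the text.
theta_0 = 0.6  # GeV^-1 per color; hypothetical value for illustration only

for N_C in (3, 9, 27, 81, 243):
    Theta_K = N_C * theta_0
    omega_Theta = (N_C + 3) / (4.0 * Theta_K)    # GeV
    print(N_C, round(1000 * omega_Theta, 1))     # tends to 1/(4*theta_0) in MeV
```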
An important feature of the RVA is that it generates a contribution in the Hamiltonian, $H_{\rm int}$ that is linear in these fluctuations. Since $H_{\rm int}$ also contains collective coordinate operators it gives rise to a Yukawa coupling between the nucleon and its collective excitations. Actually, rotation–vibration coupling has been frequently considered in former soliton calculations, both in SU(2) and in SU(3) (see [@ww] for references). Since the RVA contains collective rotations and orthogonal fluctuations but the BSA contains fluctuations only, the large $N_C$ correspondence is such that the fluctuations in the two approaches are equal in the subspace orthogonal to the rotations. In the rotational subspace the BSA fluctuations must thus correspond to the collective rotations of the RRA. In section III and IV of ref. [@ww] we therefore have carefully compared the BSA and RRA in the rotational subspace. Projecting the BSA equation (3.5) onto its rotational subspace immediately leads to the criticized eqs. (3.10) and (3.11) for the mass differences $\omega_\Lambda = E_\Lambda - E_N$ and $\omega_\Theta = E_\Theta - E_N$. In Fig. 3 of ref. [@ww] we compared these mass differences to the excitation energies predicted by the RRA for arbitrary $N_C$ and $m_K=495{\rm MeV}$. Their equality for large $N_C$ and arbitrary $m_K \not= m_\pi$ unambiguously confirms the above described scenario for the correspondence between the BSA fluctuations and the collective excitations. The central equations of the RVA are the integro–differential eqs. (5.10) for $m_K = m_\pi$ and (7.4) for $m_K \not= m_\pi$. As a matter of fact, these equations are fundamental to the RVA and everything else directly follows thereof. We have solved these two equations numerically in order to obtain the phase shifts. For completeness we show these phase shifts here in an extra figure although they may be easily extracted from Figs. 2, 5 and 6 of ref. [@ww]. For $N_C=3$ we notice a sharp and pronounced resonance with almost a full $\pi$ jump in the phase shifts. In the RVA the transition matrix element $\langle N| H_{\rm int} |\Theta^+\rangle$ between the nucleon and the $\Theta^+$ is essential. This matrix element can be expressed as a sum of terms that are products of two factors, (i) a spatial integral over the wave–functions of the fluctuations and the soliton profile and (ii) a collective coordinate matrix element involving the Wigner $D$–functions of the nucleon and the $\Theta^+$, [*cf.*]{} eq. (\[eqR1\]). Of course, we have taken configuration mixing into account in the physical case of $m_K\ne m_\pi$. Although our results do not rely on separating background and resonance phase shifts it is instructive to do so. For simplicity we consider the SU(3) symmetric case (5.10) and switch off the $\Lambda$ pole contribution. In the $\Theta^+$ resonance region that contribution is unimportant and in large $N_C$ it vanishes anyhow if $m_K=m_\pi$ [@ww]. Using standard scattering theory techniques we then find the *exact* and *unambiguous* relation $$\delta(k) = \overline{\delta}(k) + \arctan\!\left[\frac{\Gamma_\Theta(\omega_k)/2}{\omega_\Theta + \Delta_\Theta(\omega_k) - \omega_k}\right].$$ \[phase\] The $N_C$ independent background phase shift $\overline{\delta}(k)$ is obtained from (5.10) for vanishing Yukawa coupling. Eq. (\[phase\]) corroborates that the RRA excitation energy $\omega_\Theta$ is absolutely essential to reproduce the correct phase shift within the RVA.
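To make the structure of eq. (\[phase\]) concrete, the following sketch (not from the paper) evaluates a resonance phase of this standard form on top of a slowly varying background. The resonance position, width, and background slope used here are hypothetical numbers chosen only to display the nearly full $\pi$ jump described above.

```python
# Illustrative sketch (not from the paper): a resonance phase of the standard
# form assumed in Eq. (phase) on top of a weak, slowly varying background.
# All numbers below are hypothetical and for illustration only.
import numpy as np

omega_R = 800.0       # MeV, hypothetical resonance position (omega_Theta + Delta_Theta)
Gamma = 40.0          # MeV, hypothetical width

omega = np.linspace(600.0, 1000.0, 9)
background = -0.0005 * (omega - 600.0)              # weak repulsive background [rad]
delta = background + np.arctan2(0.5 * Gamma, omega_R - omega)

for w, d in zip(omega, delta):
    print(int(w), round(np.degrees(d), 1))          # rises by roughly pi across the resonance
```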
The width, $\Gamma_\Theta(\omega_k)$ is proportional to the square of the transition matrix element $\langle N| H_{\rm int} |\Theta^+\rangle$ between the nucleon and the $\Theta^+$. The unique resonance contribution arises solely due to the Yukawa coupling. It emerges in the standard shape parameterized by the width $\Gamma_\Theta$ and the pole shift $\Delta_\Theta$ that are listed in eqs. (6.5) and (6.6). The collective RRA quantities, eq. (\[eqR1\]) inevitably enter the computation of $\omega_\Theta$, $\Gamma_\Theta$ and $\Delta_\Theta$, therewith emphasizing the collective nature of the $\Theta^+$. Furthermore these collective coordinate matrix elements induce a strong $N_C$ dependence in the resonance contribution. In the flavor symmetric case $\langle N| H_{\rm int} |\Theta^+\rangle$ contains only a single SU(3) structure. This is in sharp contrast to the approaches of refs. [@Diak] that attempt to describe the (potentially) small width of the $\Theta^+$ from cancellations between contributions from different SU(3) structures. Moreover, the SU(3) structure in $H_{\rm int}$ is not related to the transition operator for the decay $\Delta\to \pi N$. For $N_C=3$ we have calculated a small pole shift $\Delta_\Theta=-14{\rm MeV}$. This small number has to be contrasted with the RRA excitation energy $\omega_\Theta = 792{\rm MeV}$. Obviously, the coupling to the continuum yields a negligible correction to the RRA prediction for the excitation energy of the $\Theta^+$. This additionally indicates its collective nature. We have already noted that the BSA is exact for $N_C\to\infty$. Indeed we have verified that in this limit $\delta(k)$ is identical to the BSA phase shift. Nevertheless for $N_C\to\infty$ the separation in eq. (\[phase\]) still holds and we observe a broad resonance hidden by repulsive background phase shifts ([*cf.*]{} Fig. 2 in ref. [@ww] for the individual contributions). Eq. (\[phase\]) also applies to the $\Delta$ decay in the SU(2) version of the model, where nobody doubts the validity of the RRA. Apart from the different transition operator, the collective $\Theta^+$ quantities, eq. (\[eqR1\]), are simply replaced by those of the $\Delta$ in the two flavor model $$\omega_\Delta = E_\Delta - E_N = \frac{3}{2\Theta_\pi}\,, \qquad |\Delta\rangle \sim D^{T=J=\frac{3}{2}}_{T_3,-J_3}(\vec{\alpha})\,,$$ where $\Theta_\pi$ is the pionic moment of inertia. A small pole shift $\Delta_\Delta$ due to the coupling to the continuum appears also there [@Hayashi]. In the large $N_C$ limit width and pole shift become sizable for the $\Theta^+$ (cf. Fig. 1) but vanish for the $\Delta$ in SU(2). For the $\Delta$ this reflects the above mentioned decoupling of rotational and vibrational modes and the fact that the $\Delta$ excitation becomes purely collective in that limit. In the real world, $N_C=3$, the situation is just reversed, namely width and pole shift for the $\Theta^+$ are smaller than the corresponding quantities for the $\Delta$, implying that the collective portion in the total wave function is even higher for the $\Theta^+$ than the $\Delta$. In any case, we may safely conclude that both excitations, the $\Delta$ *and* the $\Theta^+$ can reliably be described as collective excitations of the soliton. Finally we briefly comment on the $1/N_C$ expansion. Admittedly there is an inconsistency which we frankly discussed in chapter V of [@ww]. Namely, we have selected the leading $N_C$ Yukawa couplings only, but treated them to all orders in $N_C$ while we omitted subleading terms.
This is completely sufficient to investigate the relation between the BSA (which does not have subleading terms to begin with) and the RVA. Because the leading terms taken into account introduce already an extreme (for $N_C=3$ diverging) $1/N_C$ dependence ([*cf.*]{} section VI.B of ref. [@ww]) serious doubts concerning the applicability of $1/N_C$ expansion methods in the context of exotic baryons are in place. There is no reason to expect the subleading rotation–vibration couplings to be small. Eventually all possible terms would have to be taken into account. This would lead to a tremendous computational effort including many Yukawa coupling terms into a complicated coupled channel calculation [@Bernd] (the inclusion of processes like $K N \longrightarrow K \pi N$ would stand at the very end of our wish list). Improvements in that direction probably have to wait for a clarification of the experimental situation concerning the status of exotic states. To summarize, we fully reject the criticism raised in the comment, ref. [@cohen], in all points. Moreover, from the presented argumentation it is obvious that the exotic $\Theta^+$, alike the non-exotic $\Delta$, is predominantly a collective soliton excitation. Thus we have to reiterate the conclusion drawn in ref. [@ww] that the rigid rotator approach is indeed appropriate in predicting pentaquark masses and properties in chiral soliton models, in sharp disagreement to the statements made in ref. [@cohen] (and refs. \[2-7\] therein) put forward to discredit this approach to chiral soliton models in flavor SU(3). [60]{} T. D. Cohen, arXiv:hep-ph/0511174:  [*“Comment on the Walliser-Weigel approach to exotic baryons in chiral soliton models”.*]{} H. Walliser and H. Weigel, arXiv:hep-ph/0510055:  [*“Bound state versus collective coordinate approaches in chiral soliton models and the width of the $\Theta^+$ pentaquark”.*]{} D. Diakonov, V. Petrov and M. V. Polyakov, Z. Phys. [**A359**]{}, 305 (1997),\ M. Prasza[ł]{}owicz, Phys. Lett. [**B583**]{}, 96 (2004). B. Schwesinger, H. Weigel, G. Holzwarth and A. Hayashi, Phys. Rep. [**173**]{}, 173 (1989). B. Schwesinger, Nucl. Phys. [**A537**]{}, 253 (1992). [^1]: This (wrong) argument would also invalidate the RRA for non–exotic $\mathbf{8}$ and $\mathbf{10}$ baryons in the full calculation, where a sizable symmetry breaking must be included. Note [*e.g.*]{} that the excitation energy of the $\Omega(1670)$ is also order $N_C^0$ but even larger than that of the $\Theta^+$. [^2]: The equations in this note are labeled (R1), (R2) and (R3), all other numbers refer to formulas in ref. [@ww].
--- abstract: 'The paper is devoted to the investigation of some properties of $[L]$-homotopy groups. It is proved, in particular, that for any finite $CW$-complex $L$ satisfying the double inequality $[S^n] < [L] \le [S^{n+1}]$, ${\pi_n^{[L]}(S^n)}= {\mathbb Z}$. Here $[L]$ denotes the extension type of the complex $L$ and ${\pi_n^{[L]}(X)}$ denotes the $n$-th $[L]$-homotopy group of $X$.' address: 'Department of Mathematics and Statistics, University of Saskatchewan, McLean Hall, 106 Wiggins Road, Saskatoon, SK, S7N 5E6, Canada' author: - 'A. V. Karasev' title: 'On $[L]$-homotopy groups' --- Introduction ============ A new approach to dimension theory, based on the notions of extension types of complexes and extension dimension, leads to the appearance of $[L]$-homotopy theory, which, in turn, allows one to introduce $[L]$-homotopy groups (see [@ch]). Perhaps the most natural problem related to $[L]$-homotopy groups is the problem of their computation. It is necessary to point out that $[L]$-homotopy groups may differ from the usual homotopy groups even for complexes. More specifically, the problem of computation can be stated as follows: describe the $[L]$-homotopy groups of a space $X$ in terms of the usual homotopy groups of $X$ and homotopy properties of the complex $L$. The first step in this direction is apparently the computation of the $n$-th $[L]$-homotopy group of $S^n$ for a complex whose extension type lies between the extension types of $S^n$ and $S^{n+1}$. In what follows we, in particular, perform this step. Preliminaries ============= Following [@ch], we introduce the notions of [*extension types of complexes, extension dimension, $[L]$-homotopy, $[L]$-homotopy groups*]{} and other related notions. We also state Dranishnikov’s theorem characterizing extension properties of complexes [@dr]. All spaces are Polish, all complexes are countable finitely-dominated $CW$ complexes. For spaces $X$ and $L$, the notation $L \in {\mathop{\rm AE}\nolimits}(X)$ means that every map $f:A \to L$, defined on a closed subspace $A$ of $X$, admits an extension $\bar f$ over $X$. Let $L$ and $K$ be complexes. We say (see [@ch]) that $L \le K$ if, for each space $X$, $L \in {\mathop{\rm AE}\nolimits}(X)$ implies $K \in {\mathop{\rm AE}\nolimits}(X)$. Equivalence classes of complexes with respect to this relation are called [*extension types*]{}. By $[L]$ we denote the extension type of $L$. ([@ch]). The extension dimension of a space $X$ is the extension type ${\mathop{\rm ed}\nolimits}(X)$ such that ${\mathop{\rm ed}\nolimits}(X) = \min \{ [L] : L \in {\mathop{\rm AE}\nolimits}(X) \}$. Observe that if $[L] \le [S^n]$ and ${\mathop{\rm ed}\nolimits}(X) \le [L]$, then ${\mathop{\rm dim}\nolimits}X \le n$. Now we can give the following definition ([@ch]). We say that a space $X$ is [*an absolute (neighbourhood) extensor modulo*]{} $L$ (briefly, $X$ is ${\mathop{\rm A(N)E}\nolimits}([L])$) and write $X\in {\mathop{\rm A(N)E}\nolimits}([L])$ if $X\in {\mathop{\rm A(N)E}\nolimits}(Y)$ for each space $Y$ with ${\mathop{\rm ed}\nolimits}(Y) \le [L]$. The definitions of $[L]$-homotopy and $[L]$-homotopy equivalence [@ch] are essential for our considerations: Two maps $f_0$, $f_1: X \to Y$ are said to be $[L]$-homotopic (notation: $f_0 \stackrel{[L]}{\simeq} f_1$) if for any map $h: Z \to X \times [0,1]$, where $Z$ is a space with ${\mathop{\rm ed}\nolimits}(Z) \le [L]$, the composition $(f_0 \oplus f_1) h|_{h^{-1}(X \times \{ 0, 1 \} )} : h^{-1} (X \times \{ 0,1 \}) \to Y$ admits an extension $H: Z \to Y$.
A map $f:X \to Y$ is said to be an $[L]$-homotopy equivalence if there is a map $g: Y \to X$ such that the compositions $gf$ and $fg$ are $[L]$-homotopic to ${\mathop{\rm id}\nolimits}_X$ and ${\mathop{\rm id}\nolimits}_Y$ respectively. Let us observe (see [@ch]) that ${\mathop{\rm ANE}\nolimits}([L])$-spaces have the following $[L]$-homotopy extension property. \[homext\] Let $[L]$ be a finitely dominated complex and $X$ be a Polish ${\mathop{\rm ANE}\nolimits}([L])$-space. Suppose that $A$ is closed in a space $B$ with ${\mathop{\rm ed}\nolimits}(B) \le [L]$. If maps $f, g : A \to X$ are $[L]$-homotopic and $f$ admits an extension $F : B \to X$ then $g$ also admits an extension $G : B \to X$, and it may be assumed that $F$ is $[L]$-homotopic to $G$. To provide an important example of $[L]$-homotopy equivalence we need to introduce the class of approximately $[L]$-soft maps. [@ch] A map $f: X \to Y$ is said to be approximately $[L]$-soft if for each space $Z$ with ${\mathop{\rm ed}\nolimits}(Z) \le [L]$, for each closed subset $A\subset Z$, for each open cover ${{\EuScript U}}\in {\mathop{\rm cov}\nolimits}(Y)$, and for any two maps $g: A \to X$ and $h: Z \to Y$ such that $fg = h| _A$ there is a map $k: Z \to X$ satisfying the condition $k| _A = g$ and such that the composition $fk$ is ${{\EuScript U}}$-close to $h$. [@ch] \[hmteq\] Let $f: X \to Y$ be a map between ${\mathop{\rm ANE}\nolimits}([L])$-compacta and ${\mathop{\rm ed}\nolimits}(Y) \le [L]$. If $f$ is approximately $[L]$-soft then $f$ is an $[L]$-homotopy equivalence. In order to define $[L]$-homotopy groups it is necessary to consider an [*$n$-th $[L]$-sphere ${S^n_{[L]}}$*]{} [@ch], namely, an $[L]$-dimensional ${\mathop{\rm ANE}\nolimits}([L])$-compactum admitting an approximately $[L]$-soft map onto $S^n$. It can be shown that all possible choices of an $[L]$-sphere ${S^n_{[L]}}$ are $[L]$-homotopy equivalent. This remark, coupled with the following proposition, allows us to consider, for every finite complex $L$, every $n \ge 1$ and any space $X$, the set ${\pi_n^{[L]}(X)}= [{S^n_{[L]}}, X] _{[L]}$ endowed with a natural group structure (see [@ch] for details). [@ch] Let $L$ be a finitely dominated complex and $X$ be a finite polyhedron or a compact Hilbert cube manifold. Then there exists an $[L]$-universal ${\mathop{\rm ANE}\nolimits}([L])$ compactum $\mu ^{[L]} _X$ with ${\mathop{\rm ed}\nolimits}(\mu ^{[L]} _X ) = [L]$ and an $[L]$-invertible and approximately $[L]$-soft map $f^{[L]}_X : \mu ^{[L]}_X \to X$. The following theorem is essential for our consideration. \[D1\] Let $L$ be a simply-connected $CW$-complex and $X$ be a finite-dimensional compactum. Then $L \in {\mathop{\rm AE}\nolimits}(X)$ iff ${\mathop{\rm c-dim}\nolimits}_{H_i(L)} X \le i$ for any $i$. From the proof of Theorem \[D1\] one can conclude that the following theorem also holds: \[D2\] Let $L$ be a $CW$-complex (not necessarily simply-connected). Then for any finite-dimensional compactum $X$, $L \in {\mathop{\rm AE}\nolimits}(X)$ implies that ${\mathop{\rm c-dim}\nolimits}_{H_i(L)} X \le i$ for any $i$. Cohomological properties of $L$ =============================== In this section we will investigate some cohomological properties of complexes $L$ satisfying the condition $[L] \le [S^n]$ for some $n$. To establish these properties let us first formulate the following. \[Sp\] [@sp] Let $(X,A)$ be a topological pair such that $H_q (X,A)$ is finitely generated for any $q$.
Then free submodules of $H^q (X,A)$ and $H_q (X,A)$ are isomorphic and torsion submodules of $H^q (X,A)$ and $H_{q-1} (X,A)$ are isomorphic. Now we use Theorem \[D2\] to obtain the following lemma. \[torh\] Let $L$ be a finite $CW$ complex such that $[L] \le [S^{n+1}]$ and $n$ is minimal with this property. Then for any $q \le n$, $H_q (L)$ is a torsion group. Suppose that there exists $q \le n$ such that $H_q (L) = {\mathbb Z}\oplus G$. To get a contradiction let us show that $[L] \le [S^q]$. Consider $X$ such that $L \in {\mathop{\rm AE}\nolimits}(X)$. Observe that $X$ is finite-dimensional since $[L] \le [S^{n+1}]$ by our assumption. Denote $H = H_q (L)$. By Theorem \[D2\] we have ${\mathop{\rm c-dim}\nolimits}_H X \le q$. Hence, for any closed subset $A \subseteq X$ we have $H^{q+1} (X,A;H) = \{ 0 \}$. On the other hand, the universal coefficient formula implies that $H^{q+1} (X,A;H) \approx H^{q+1} (X,A) \otimes H \oplus {\mathop{\rm Tor}\nolimits}(H^{q+2} (X, A),H)$. Hence, $H^{q+1} (X,A) \otimes H = \{ 0 \}$. Observe, however, that by our assumption we have $H^{q+1} (X,A) \otimes H = H^{q+1} (X,A) \otimes ( {\mathbb Z}\oplus G) = H^{q+1} (X,A) \oplus (H^{q+1} (X,A) \otimes G)$. Therefore, $H^{q+1} (X,A) = \{ 0 \}$. From the last fact we conclude that ${\mathop{\rm c-dim}\nolimits}X \le q$ and therefore, since $X$ is finite-dimensional, ${\mathop{\rm dim}\nolimits}X \le q$, which implies $S^q \in {\mathop{\rm AE}\nolimits}(X)$. From this lemma and Proposition \[Sp\] we obtain \[torch\] Under the same assumptions, $H^{q} (L)$ is a torsion group for any $q \le n$. The following fact is essential for the construction of compacta with specific properties, which we carry out below. \[acycl\] Let $L$ be as in the previous lemma. For any $m$ there exists $p \ge m$ such that $H^q(L; {\mathbb Z _{p}} ) = \{ 0 \}$ for any $q \le n$. From Corollary \[torch\] we can conclude that $H^{q} (L) = \bigoplus\limits_{i=1}^{l_q} {\mathbb Z _{m_{qi}}}$ for any $q \le n$. Additionally, let ${\mathop{\rm Tor}\nolimits}H^{n+1}(L) = \bigoplus\limits_{i=1}^{l_{n+1}} {\mathbb Z _{m_{(n+1)i}}}$. For any $m$ consider $p \ge m$ such that $(p,m_{ki})=1$ for every $k = 1 \ldots n+1$ and $i = 1 \ldots l_k$. The universal coefficient formula implies that $H^{q} (L; {\mathbb Z _{p}} ) = \{ 0 \}$ for every $q \le n$. Finally, let us prove the following. \[ext\] Let $X$ be a metrizable compactum and $A$ be a closed subset of $X$. Consider a map $f: A \to S^n$. If there exists an extension $\bar f : X \to S^n$ then for any $k$ we have ${\delta ^*_{X,A}} (f^* ( {\zeta})) = 0$ in the group $H^{n+1} (X,A; {\mathbb Z _{k}})$, where ${\zeta}$ is a generator of $H^n (S^n; {\mathbb Z _{k}})$. Let $\bar f$ be an extension of $f$. Commutativity of the following diagram implies the assertion of the lemma: $$\begin{CD} H^n (A; {\mathbb Z _{k}}) @>{\delta ^*_{X,A}}>> H^{n+1} (X,A; {\mathbb Z _{k}})\\ @AA{\bar{f^*} = f^*}A @AA{\bar{f^*}}A\\ H^n (S^n; {\mathbb Z _{k}}) @>{\delta ^*_{S^n,S^n}}>> H^{n+1} (S^n, S^n; {\mathbb Z _{k}}) = \{ 0 \} \end{CD}$$ Some properties of \[L\]-homotopy groups ======================================== In this section we will investigate some properties of $[L]$-homotopy groups. From this point and up to the end of the text we consider a finite complex $L$ such that $[S^n] < [L] \le [S^{n+1}]$ for some fixed $n$. \[grup\] Let us observe that for such complexes ${S^n_{[L]}}$ is $[L]$-homotopy equivalent to $S^{n}$ (see Proposition \[hmteq\]).
Therefore for any $X$, ${\pi_n^{[L]}(X)}$ is isomorphic to $G = \pi _n (X)/N([L])$ where $N([L])$ denotes the relation of $[L]$-homotopy equivalence between elements of $\pi_n (X)$. From this observation one can easily obtain the following fact. \[grp\] For $\pi_n^{[L]}(S^n)$ there are three variants: $\pi_n^{[L]} (S^n) = {\mathbb Z}$, $\pi_n^{[L]} (S^n) = {\mathbb Z _{m}}$ for some integer $m$, or this group is trivial. Let us characterize the hypothetical equality ${\pi_n^{[L]}(S^n)}= {\mathbb Z _{m}}$ in terms of extensions of maps. \[hext\] If ${\pi_n^{[L]}(S^n)}= {\mathbb Z _{m}}$ then for any $X$ such that ${\mathop{\rm ed}\nolimits}(X) \le [L]$, for any closed subset $A$ of $X$ and for any map $f:A \to S^n$, there exists an extension $\bar h : X \to S^n$ of the composition $h = z_m f$, where $z_m : S^n \to S^n$ is a map having degree $m$. Suppose that ${\pi_n^{[L]}(S^n)}= {\mathbb Z _{m}}$. Then from Remark \[grup\] and since $ [z_m] = m[ {\mathop{\rm id}\nolimits}_{S^n}] = [*]$ (where $[f]$ denotes the homotopy class of $f$) we conclude that $z_m : S^n \to S^n$ is $[L]$-homotopic to a constant map. Let us show that $h = z_m f : A \to S^n$ is also $[L]$-homotopic to a constant map. This fact will prove our statement. Indeed, by our assumption ${\mathop{\rm ed}\nolimits}(X) \le [L]$ and $S^n \in ANE$ and therefore we can apply Proposition \[homext\]. Consider $Z$ such that ${\mathop{\rm ed}\nolimits}(Z) \le [L]$ and a map $H : Z \to A \times I$, where $I = [0,1]$. Pick a point $s \in S^n$. Let $f_0 = z_m f$ and $f_1 \equiv s$ be a constant map, considered as $f_i : A \times \{ i \} \to S^n$, $i = 0,1$. Define $F: A \times I \to S^n \times I$ as follows: $F (a,t) = (f(a),t)$ for each $a \in A$ and $t \in I$. Let $f_0' \equiv z_m$ and $f_1' \equiv s$ considered as $f_i' : S^n \times \{ i \} \to S^n$, $i = 0,1$. Consider the composition $G = F H : Z \to S^n \times I$. By our assumption $f_0'$ is $[L]$-homotopic to $f_1'$. Therefore a map $g: G^{-1} (S^n \times \{ 0 \} \bigcup S^n \times \{ 1 \}) \to S^n$, defined as $g| _{G^{-1} (S^n \times \{ i \}) } = f_i' G$ for $i = 0,1$, can be extended over $Z$. On the other hand, we have $G^{-1} (S^n \times \{ i \}) \equiv H^{-1} (A \times \{ i \})$ and $g| _{G^{-1} (S^n \times \{ i \}) } = f_i' f H = f_i$ for $i = 0,1$. This remark completes the proof. Now consider a special case of a complex having the form $S^n < L = K_s \vee K \le S^{n+1}$, where $K_s$ is the complex obtained by attaching an $(n+1)$-dimensional cell to $S^n$ using a map of degree $s$. Let $[{\alpha}] \in \pi _n (X)$ be an element of order $s$. Then ${\alpha}$ is $[L]$-homotopic to a constant map. Observe that, similarly to the proof of Proposition \[hext\], it is enough to show that for every $Z$ with ${\mathop{\rm ed}\nolimits}(Z) \le [L]$, for every closed subspace $A$ of $Z$ and for any map $f : A \to S^n$ the composition ${\alpha}f : A \to X$ can be extended over $Z$. Let $g : S^n \to K_s ^{(n)}$ be an embedding (by $M ^{(n)}$ we denote the $n$-dimensional skeleton of the complex $M$) and $r : L \to K_s$ be a retraction. Since ${\mathop{\rm ed}\nolimits}(Z) \le [L]$, the composition $gf$ has an extension $F : Z \to L$. Let $F' = rF$ and ${\alpha}'$ be the map ${\alpha}$ considered as a map ${\alpha}' : K_s ^{(n)} \to X$. Observe that ${\alpha}' F'$ is the required extension of ${\alpha}f$. Computation of ${\pi_n^{[L]}(S^n)}$ =================================== In this section we will prove that ${\pi_n^{[L]}(S^n)}= {\mathbb Z}$. Suppose the opposite, i.e.
${\pi_n^{[L]}(S^n)}= {\mathbb Z _{m}}$ (we use Proposition \[grp\]; the same arguments can be used to prove that ${\pi_n^{[L]}(S^n)}$ is non-trivial). To get a contradiction we need to obtain a compactum with special extension properties. We will use a construction from [@drp]. Let us recall the following definition. [@drp] An inverse sequence $S = \{ X_i, p ^{i+1}_i : i \in \omega \}$ consisting of metrizable compacta is said to be $L$-resolvable if for any $i$, any closed subspace $A \subseteq X_i$, and any map $f:A \to L$ there exists $k \ge i$ such that the composition $fp ^k _i : (p ^k _i)^{-1}A \to L$ can be extended over $X_k$. The following lemma (see [@drp]) expresses an important property of $[L]$-resolvable inverse sequences. Suppose that $L$ is a countable complex and that $X$ is a compactum such that $X = \lim S$ where $S = { (X_i, \lambda _i), q_i^{i+1}}$ is an $L$-resolvable inverse system of compact polyhedra $X_i$ with triangulations $\lambda _i$ such that ${\mathop{\rm mesh}\nolimits}\{ \lambda _i \} \to 0$. Then $L \in {\mathop{\rm AE}\nolimits}(X)$. Let us recall that in [@drp] an inverse sequence $S = \{ (X_i, \tau _i), p^{i+1}_i \}$ was constructed such that $X_i$ is a compact polyhedron with a fixed triangulation $\tau _i$, $X_0 = S^{n+1}$, ${\mathop{\rm mesh}\nolimits}\tau _i \to 0$, $S$ is $[L]$-resolvable and for any $x \in X_i$ we have $(p^{i+1}_i)^{-1}x \simeq L$ or $*$. It is easy to see that using the same construction one can obtain an inverse sequence $S = \{ (X_i, \tau _i), p^{i+1}_i \}$ having the same properties with the exception of $X_0 = D^{n+1}$, where $D^{n+1}$ is the $(n+1)$-dimensional disk. Let $X = \lim S$. Observe that ${\mathop{\rm ed}\nolimits}(X) \le [L]$. Let $p_0 : X \to D^{n+1}$ be a limit projection. Pick $p \ge m+1$ which Lemma \[acycl\] provides us with. By the Vietoris-Begle theorem (see [@sp]) and our choice of $p$, for every $i$ and every $X'_i \subseteq X_i$ the homomorphism $(p^{i+1}_i)^* : H^{k} (X'_i; {\mathbb Z _{p}} ) \to H^{k} ((p^{i+1}_i)^{-1}X'_i ; {\mathbb Z _{p}} )$ is an isomorphism for $k \le n$ and a monomorphism for $k = n+1$. Therefore for each $D' \subseteq X_0 = D^{n+1}$ the homomorphism $p_0^* : H^{k} (D'; {\mathbb Z _{p}} ) \to H^{k} ((p_0)^{-1} D' ; {\mathbb Z _{p}}) $ is an isomorphism for $k \le n$ and a monomorphism for $k = n+1$. In particular, $H^{n} (X ; {\mathbb Z _{p}}) = \{ 0 \}$ since $X_0 = D^{n+1}$ has trivial cohomology groups. Let $A = (p_0)^{-1} S^n$ and ${\zeta}\in H^n(S^n; {\mathbb Z _{p}}) \approx {\mathbb Z _{p}}$ be a generator. Since $p_0^* : H^n (S^n; {\mathbb Z _{p}} ) \to H^n (A; {\mathbb Z _{p}} )$ is an isomorphism, $p_0^* ({\zeta})$ is a generator of $H^n (A; {\mathbb Z _{p}}) \approx {\mathbb Z _{p}} $. In particular, $p_0^* ({\zeta})$ is an element of order $p$. From the exact sequence of the pair $(X,A)$ $$\begin{CD} \ldots\to H^n (X; {\mathbb Z _{p}}) = \{ 0 \} @>i_{X,A}>> H^n (A; {\mathbb Z _{p}}) @>{\delta ^*_{X,A}}>> H^{n+1} (X,A; {\mathbb Z _{p}}) \to \ldots \\ \end{CD}$$ we conclude that ${\delta ^*_{X,A}}$ is a monomorphism and hence ${\delta ^*_{X,A}}(p_0^*({\zeta})) \in H^{n+1} (X,A; {\mathbb Z _{p}})$ is an element of order $p$. Consider now the composition $h = z_m \, p_0|_A : A \to S^n$. By our assumption this map can be extended over $X$ (see Proposition \[hext\]). This fact, coupled with Lemma \[ext\], implies that ${\delta ^*_{X,A}}(h^*({\zeta})) = {0}$ in $H^{n+1} (X,A; {\mathbb Z _{p}})$. But ${\delta ^*_{X,A}}(h^*({\zeta})) = m{\delta ^*_{X,A}}(p_0^*({\zeta}))$.
Since ${\delta ^*_{X,A}}(p_0^*({\zeta}))$ is an element of order $p$ and $p \ge m+1$, the element $m{\delta ^*_{X,A}}(p_0^*({\zeta}))$ is non-zero. We arrive at a contradiction, which proves the following theorem. Let $L$ be a complex such that $[S^n] < [L] \le [S^{n+1}]$. Then ${\pi_n^{[L]}(S^n)}\approx {\mathbb Z}$. The author is grateful to A. C. Chigogidze for useful discussions. [99]{} A. Chigogidze, [*Infinite dimensional topology and shape theory*]{}, to appear in: “Handbook of Geometric Topology” edited by R. Daverman and R. B. Sher, North Holland, Amsterdam, 1999. A. N. Dranishnikov, [*Extension of mappings into $CW$-complexes*]{}, Math. USSR Sbornik [**74**]{} (1993), 47-56. A. N. Dranishnikov and D. Repovš, [*Cohomological dimension with respect to perfect groups*]{}, Topology Appl. [**74**]{} (1996), 123-140. E. H. Spanier, [*Algebraic topology*]{}, McGraw-Hill, New York, 1966.
--- abstract: | Compressible Euler-Poisson equations are the standard self-gravitating models for stellar dynamics in classical astrophysics. In this article, we construct periodic solutions to the isothermal ($\gamma=1$) Euler-Poisson equations in $R^{2}$ with possible applications to the formation of plate, spiral galaxies and the evolution of gas-rich, disk-like galaxies. The results complement Yuen’s solutions without rotation (M.W. Yuen, *Analytical Blowup Solutions to the 2-dimensional Isothermal Euler-Poisson Equations of Gaseous Stars*, J. Math. Anal. Appl. **341 (**2008**),** 445–456.). Here, the periodic rotation prevents the blowup phenomena that occur in solutions without rotation. Based on our results, the corresponding $3$D rotational results for Goldreich and Weber’s solutions are conjectured.   MSC2010: 35B10 35B40 35Q85 76U05 85A05 85A15 Key Words: Euler-Poisson Equations, Periodic Solutions, Rotational Solutions, Galaxies Formation, Galaxies Evolution, Gaseous Stars author: - | M<span style="font-variant:small-caps;">an Kam Kwong[^1]</span>\ *Department of Applied Mathematics,*\ *The Hong Kong Polytechnic University,*\ *Hung Hom, Kowloon, Hong Kong* - | M<span style="font-variant:small-caps;">anwai Yuen[^2]</span>\ *Department of Mathematics and Information Technology,*\ *The Hong Kong Institute of Education,*\ *10 Lo Ping Road, Tai Po, New Territories, Hong Kong* date: 'Revised 29-May-2014' title: '**Periodic Solutions of 2D Isothermal Euler-Poisson Equations with Possible Applications to Spiral and Disk-like Galaxies**' --- Introduction ============ The evolution of self-gravitating galaxies or gaseous stars in astrophysics can be described by the compressible Euler-Poisson equations: $$\left\{ \begin{array} [c]{rl}{\normalsize \rho}_{t}{\normalsize +\nabla\cdot(\rho\vec{u})} & {\normalsize =}{\normalsize 0}\\ \rho(\vec{u}_{t}+(\vec{u}\cdot\nabla)\vec{u}){\normalsize +\nabla P} & {\normalsize =}{\normalsize -\rho\nabla\Phi}\\ {\normalsize \Delta\Phi(t,\vec{x})} & {\normalsize =\alpha(N)}{\normalsize \rho}\text{,}\end{array} \right. \label{Euler-Poisson2DRotation}$$ where $\alpha(N)$ is a constant related to the unit ball in $R^{N}$, such that $\alpha(1)=2$, $\alpha(2)=2\pi$, and for $N\geq3$$$\alpha(N)=N(N-2)V(N)=N(N-2)\frac{\pi^{N/2}}{\Gamma(N/2+1)}\text{,}$$ where $V(N)$ is the volume of the unit ball in $R^{N}$ and $\Gamma$ is the Gamma function. The unknown functions $\rho=\rho(t,\vec{x})$ and $\vec{u}=\vec{u}(t,\vec{x})=(u_{1},u_{2},....,u_{N})\in\mathbf{R}^{N}$ are the density and the velocity, respectively. The $\gamma$-law is usually imposed on the pressure term:$$P={\normalsize P}\left( \rho\right) {\normalsize =K\rho}^{\gamma}$$ with the constant $\gamma\geq1$. In addition, the ideal fluid is called isothermal if $\gamma=1$. The Poisson equation (\[Euler-Poisson2DRotation\])$_{3}$ can be solved as$${\normalsize \Phi(t,\vec{x})=}\int_{R^{N}}Green(\vec{x}-\vec{y})\rho(t,\vec {y}){\normalsize d\vec{y}}\text{,}$$ with the Green’s function$$Green(\vec{x})=\left\{ \begin{array} [c]{ll}\log|\vec{x}| & \text{for }N=2\\ \frac{-1}{|\vec{x}|^{N-2}} & \text{for }N\geq3. \end{array} \right.$$ For $N=3$, the Euler-Poisson equations (\[Euler-Poisson2DRotation\]) are the classical models in stellar dynamics given in [@BT], [@C], [@KW] and [@Longair]. Some results on local existence of the system can be found in [@M], [@B], and [@G]. 
If we seek solutions with radial symmetry, the Poisson equation (\[Euler-Poisson2DRotation\])$_{3}$ is transformed to$${\normalsize r^{N-1}\Phi}_{rr}\left( {\normalsize t,r}\right) +\left( N-1\right) r^{N-2}\Phi_{r}\left( {\normalsize t,r}\right) {\normalsize =}\alpha\left( N\right) {\normalsize \rho r^{N-1}}$$$$\Phi_{r}=\frac{\alpha\left( N\right) }{r^{N-1}}\int_{0}^{r}\rho (t,s)s^{N-1}ds.$$ In particular, radially symmetric solutions without rotation can be expressed as$$\rho(t,\vec{x})=\rho(t,r)\text{, }\vec{u}(t,\vec{x})=\frac{\vec{x}}{r}V(t,r)$$ with the radius $r:=\left( \sum_{i=1}^{N}x_{i}^{2}\right) ^{1/2}$. In 1980, Goldreich and Weber first constructed analytical blowup (collapsing) solutions of the $3$D Euler-Poisson equations for $\gamma=4/3$ for the non-rotating gas spheres [@GW]. In 1992, Makino [@M1] provided a rigorous proof of the existence of these kinds of blowup solutions. In 2003, Deng, Xiang and Yang [@DXY] generalized the solutions to higher dimensions, $R^{N}$($N\geq3$). In 2008, Yuen constructed the corresponding solutions (without compact support) in $R^{2}$ with $\gamma=1$ [@YuenJMAA2008a]. In summary, the family of the analytical solutions is as follows:for $N\geq3$ and $\gamma=(2N-2)/N$, in [@DXY] $$\left\{ \begin{array} [c]{c}\rho(t,r)=\left\{ \begin{array} [c]{c}\dfrac{1}{a(t)^{N}}f(\frac{r}{a(t)})^{N/(N-2)}\text{ for }r<a(t)S_{\mu}\\ 0\text{ for }a(t)S_{\mu}\leq r \end{array} \right. \text{, }V{\normalsize (t,r)=}\dfrac{\dot{a}(t)}{a(t)}{\normalsize r}\\ \ddot{a}(t){\normalsize =}\dfrac{-\lambda}{a(t)^{N-1}},\text{ }{\normalsize a(0)=a}_{0}>0{\normalsize ,}\text{ }\dot{a}(0){\normalsize =a}_{1}\\ \ddot{f}(s){\normalsize +}\dfrac{N-1}{s}\dot{f}(s){\normalsize +}\dfrac {\alpha(N)}{(2N-2)K}f{\normalsize (s)}^{N/(N-2)}{\normalsize =}\frac {{\normalsize N(N-2)\lambda}}{{\normalsize (2N-2)K}}{\normalsize ,}\text{ }f(0)=\alpha>0,\text{ }\dot{f}(0)=0\text{,}\end{array} \right. \label{3-dgamma=4over3}$$ where the finite $S_{\mu}$ is the first zero of $f(s)$ andfor $N=2$ and $\gamma=1$, in [@YuenJMAA2008a]$$\left\{ \begin{array} [c]{c}\rho(t,r)=\dfrac{1}{a(t)^{2}}e^{f(\frac{r}{a(t)})}\text{, }V{\normalsize (t,r)=}\dfrac{\dot{a}(t)}{a(t)}{\normalsize r}\\ \ddot{a}(t){\normalsize =}\dfrac{-\lambda}{a(t)},\text{ }{\normalsize a(0)=a}_{0}>0{\normalsize ,}\text{ }\dot{a}(0){\normalsize =a}_{1}\\ \ddot{f}(s){\normalsize +}\dfrac{1}{s}\dot{f}(s){\normalsize +\dfrac{2\pi}{K}e}^{f(s)}{\normalsize =\frac{2\lambda}{K},}\text{ }f(0)=\alpha,\text{ }\dot{f}(0)=0. \end{array} \right. \label{solution 3 Yuen}$$ Similar solutions exist for other similar systems, see, for example, [@YuenNonlinearity2009] and [@YuenCQG2009]. All the above known solutions are without rotation. 
For the 2D Euler equations with $\gamma=2$, $$\left\{ \begin{array} [c]{rl}{\normalsize \rho}_{t}{\normalsize +\nabla\cdot(\rho\vec{u})} & {\normalsize =}{\normalsize 0}\\ \rho(\vec{u}_{t}+(\vec{u}\cdot\nabla)\vec{u}){\normalsize +\nabla P} & {\normalsize =0,}\end{array} \right.$$ Zhang and Zheng [@ZZ] in 1995 constructed the following explicitly spiral solutions:$$\rho=\frac{r^{2}}{8Kt^{2}}\text{, }u_{1}=\frac{1}{2t}(x+y)\text{, }u_{2}=\frac{1}{2t}(x-y)$$ in $r\leq2t\sqrt{\dot{P}_{0}}$, and$$\left\{ \begin{array} [c]{c}\rho=\rho_{0},\\ u_{1}=(2t\dot{P}_{0}\cos\theta+\sqrt{2\dot{P}_{0}}\sqrt{r^{2}-2t^{2}\dot {P}_{0}}\sin\theta)/r,\\ u_{2}=(2t\dot{P}_{0}\sin\theta-\sqrt{2\dot{P}_{0}}\sqrt{r^{2}-2t^{2}\dot {P}_{0}}\cos\theta)/r \end{array} \right.$$ in $r>2t\sqrt{\dot{P}_{0}}$, where $\rho_{0}>0$ is an arbitrary parameter, $\dot{P}_{0}=\dot{P}(\rho_{0})$, $x=r\cos\theta$ and $y=r\sin\theta.$ In this article, we combine the above results to construct solutions with rotation for the $2$D isothermal Euler-Poisson equations. Our main contribution is in applying the isothermal pressure term to balance the potential force term to generate novel solutions. \[2-dRotation\]For the isothermal ($\gamma=1$) Euler-Poisson equations (\[Euler-Poisson2DRotation\]) in $R^{2}$, there exists a family of global solutions with rotation in radial symmetry,$$\left\{ \begin{array} [c]{c}\rho(t,\vec{x})=\rho(t,r)=\frac{1}{a(t)^{2}}e^{f(\frac{r}{a(t)})}\text{, }{\normalsize u}_{1}{\normalsize =}\frac{\overset{\cdot}{a}(t)}{a(t)}x-\frac{\xi}{a(t)^{2}}y\text{, }u_{2}=\frac{\xi}{a(t)^{2}}x+\frac {\overset{\cdot}{a}(t)}{a(t)}y,\\ \ddot{a}(t)=\frac{-\lambda}{a(t)}+\frac{\xi^{2}}{a(t)^{3}}\text{, }a(0)=a_{0}>0\text{, }\dot{a}(0)=a_{1}\\ \overset{\cdot\cdot}{f}(s){\normalsize +}\frac{1}{s}\overset{\cdot}{f}(s){\normalsize +\frac{2\pi}{K}e}^{f(s)}{\normalsize =}\frac{2\lambda}{K}{\normalsize ,}\text{ }f(0)=\alpha,\text{ }\overset{\cdot}{f}(0)=0\text{,}\end{array} \right. \label{2-DIsothermalRotation}$$ with arbitrary constants $\xi\neq0,$ $a_{0}$, $a_{1}$ and $\alpha$. - With $\lambda>0$, solutions (\[2-DIsothermalRotation\]) are non-trivially time-periodic, except for the case $a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$and $a_{1}=0$; if $a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$ and $a_{1}=0$, solutions (\[2-DIsothermalRotation\]) are steady. - With $\lambda\leq0$, solutions (\[2-DIsothermalRotation\]) are global in time. Here, $2$D rotational solutions (\[2-DIsothermalRotation\]) of the Euler-Poisson equations (\[Euler-Poisson2DRotation\]) may be reference examples for modeling the formation of plate and spiral galaxies or gaseous stars in the non-relativistic content, because most of the matter is gas at the early stage of their evolution. Readers can refer to [@ZZ] for the detail description of astrophysical situations. In addition, solutions (\[2-DIsothermalRotation\]) may also be applied to the development of gas-rich and disk-like (dwarf) galaxies [@BT]. **Remark.** *By taking $\xi=0$ for solutions (\[2-DIsothermalRotation\]) in Theorem \[2-dRotation\], we obtain Yuen’s non-rotational solutions (\[solution 3 Yuen\]), which blow up in a finite time $T$ if $\lambda>0$. However, the rotational (when $\xi\neq0$) term in (\[2-DIsothermalRotation\]) prevents the blowup phenomena.* Periodic and Spiral Solutions ============================= Our main work is to design the relevant functions with rotation to fit the 2D mass equation (\[Euler-Poisson2DRotation\])$_{1}$. 
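Before doing so, two elementary observations on the solutions (\[2-DIsothermalRotation\]) may serve as orientation. First, the velocity field splits into a radial part and a rigid rotation, $$\vec{u}=\frac{\dot{a}(t)}{a(t)}\begin{pmatrix} x\\ y \end{pmatrix} +\frac{\xi}{a(t)^{2}}\begin{pmatrix} -y\\ x \end{pmatrix}\text{.}$$ Second, the steady case in Theorem \[2-dRotation\] can be checked directly: for $a(t)\equiv a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$ and $a_{1}=0$, the Emden equation in (\[2-DIsothermalRotation\]) is satisfied because $$\frac{-\lambda}{a_{0}}+\frac{\xi^{2}}{a_{0}^{3}}=-\frac{\lambda^{3/2}}{\left\vert \xi\right\vert }+\frac{\lambda^{3/2}}{\left\vert \xi\right\vert }=0=\ddot{a}(t)\text{,}$$ that is, the rotational term exactly balances the self-gravitational one in this case.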
\[lem:generalsolutionformasseqrotation2d\]For the 2D equation of conservation of mass $$\rho_{t}+\nabla\cdot\left( \rho\vec{u}\right) =0, \label{massequationspherical2Drotation}$$ there exist the following solutions:$$\rho(t,\vec{x})=\rho(t,r)=\frac{f\left( \frac{r}{a(t)}\right) }{a(t)^{2}},\text{ }{\normalsize u}_{1}{\normalsize =}\frac{\overset{\cdot}{a}(t)}{a(t)}x-\frac{G(t,r)}{r}y\text{, }u_{2}=\frac{G(t,r)}{r}x+\frac{\overset {\cdot}{a}(t)}{a(t)}y \label{generalsolutionformassequation2Drotation}$$ with arbitrary $C^{1}$ functions $f(s)\geq0$ and $G(t,r)$ and $a(t)>0\in C^{1}.$ We plug the following functional form $$\rho(t,\vec{x})=\rho(t,r)=\frac{f\left( \frac{r}{a(t)}\right) }{a(t)^{2}},\text{ }{\normalsize u}_{1}{\normalsize =}\frac{F(t,r)}{r}x-\frac{G(t,r)}{r}y\text{, }u_{2}=\frac{G(t,r)}{r}x+\frac{F(t,r)}{r}y$$ with arbitrary $C^{1}$ functions $f(s)\geq0$, $F(t,r)$, $G(t,r)$ and $a(t)>0\in C^{1}$, into the 2D mass equation (\[massequationspherical2Drotation\]) to have $$\rho_{t}+\nabla\cdot\left( \rho\vec{u}\right)$$$$=\rho_{t}+\frac{\partial}{\partial x}\left( \rho\frac{Fx}{r}-\rho\frac{Gy}{r}\right) +\frac{\partial}{\partial y}\left( \rho\frac{Fy}{r}+\rho\frac {Gx}{r}\right)$$$$\begin{aligned} & =\rho_{t}+\left( \frac{\partial}{\partial x}\rho\right) \frac{Fx}{r}+\rho\left( \frac{\partial}{\partial x}\frac{Fx}{r}\right) -\left( \frac{\partial}{\partial x}\rho\right) \frac{Gy}{r}-\rho\left( \frac{\partial}{\partial x}\frac{Gy}{r}\right) \nonumber\\ & +\left( \frac{\partial}{\partial y}\rho\right) \frac{Fy}{r}+\rho\left( \frac{\partial}{\partial y}\frac{Fy}{r}\right) +\left( \frac{\partial }{\partial y}\rho\right) \frac{Gx}{r}+\rho\left( \frac{\partial}{\partial y}\frac{Gx}{r}\right)\end{aligned}$$$$\begin{aligned} & =\rho_{t}+\rho_{r}\frac{x}{r}\frac{Fx}{r}+\rho\left( F_{r}\frac{x}{r}\right) \frac{x}{r}+\rho\frac{F}{r}-\rho Fx\frac{x}{r^{3}}\nonumber\\ & -\rho_{r}\frac{x}{r}\frac{Gy}{r}-\rho G_{r}\frac{x}{r}\frac{y}{r}+\rho Gy\frac{x}{r^{3}}+\rho_{r}\frac{y}{r}\frac{Fy}{r}+\rho\left( F_{r}\frac{y}{r}\right) \frac{y}{r}\nonumber\\ & +\rho\frac{F}{r}-\rho Fy\frac{y}{r^{3}}+\rho_{r}\frac{y}{r}\frac{Gx}{r}+\rho\left( G_{r}\frac{y}{r}\right) \frac{x}{r}-\rho Gx\frac{y}{r^{3}}$$$$\begin{aligned} & =\rho_{t}+\rho_{r}\frac{x}{r}\frac{Fx}{r}+\rho\left( F_{r}\frac{x}{r}\right) \frac{x}{r}+\rho\frac{F}{r}-\rho Fx\frac{x}{r^{3}}\nonumber\\ & +\rho_{r}\frac{y}{r}\frac{Fy}{r}+\rho\left( F_{r}\frac{y}{r}\right) \frac{y}{r}+\rho\frac{F}{r}-\rho Fy\frac{y}{r^{3}}$$$$=\rho_{t}+\rho_{r}F+\rho F_{r}+\rho F\frac{1}{r}. 
\label{tomassradial}$$ Then we take the self-similar structure for the density function$$\rho(t,\vec{x})=\rho(t,r)=\frac{f\left( \frac{r}{a(t)}\right) }{a(t)^{2}}\text{,}$$ and $F(t,r)=\frac{\dot{a}(t)}{a(t)}r$ for the velocity $\vec{u}$ to balance equation (\[tomassradial\]):$$=\frac{\partial}{\partial t}\frac{f\left( \frac{r}{a(t)}\right) }{a(t)^{2}}+\left( \frac{\partial}{\partial r}\frac{f\left( \frac{r}{a(t)}\right) }{a(t)^{2}}\right) \frac{\dot{a}(t)r}{a(t)}+\frac{f\left( \frac{r}{a(t)}\right) }{a(t)^{2}}\frac{\dot{a}(t)}{a(t)}+\frac{f\left( \frac {r}{a(t)}\right) }{a(t)^{2}}\frac{\dot{a}(t)}{a(t)}$$$$\begin{aligned} & =\frac{-2\overset{\cdot}{a}(t)f\left( \frac{r}{a(t)}\right) }{a(t)^{3}}-\frac{\overset{\cdot}{a}(t)r\overset{\cdot}{f}\left( \frac{r}{a(t)}\right) }{a(t)^{4}}\nonumber\\ & +\frac{\overset{\cdot}{f}\left( \frac{r}{a(t)}\right) }{a(t)^{3}}\frac{\overset{\cdot}{a}(t)r}{a(t)}+\frac{f\left( \frac{r}{a(t)}\right) }{a(t)^{2}}\frac{\overset{\cdot}{a}(t)}{a(t)}+\frac{f\left( \frac{r}{a(t)}\right) }{a(t)^{2}}\frac{\overset{\cdot}{a}(t)}{a(t)}$$$$=0.$$ The proof is completed. The following Lemma is required to show the cyclic phenomena of the rotational solutions (\[2-DIsothermalRotation\]). \[lemma22–2drotation\]With $\xi\neq0$, for the Emden equation$$\ddot{a}(t)=\frac{-\lambda}{a(t)}+\frac{\xi^{2}}{a(t)^{3}},\text{ }a(0)=a_{0}>0,\text{ }\dot{a}(0)=a_{1}\text{,} \label{eq124-2drotation}$$ - with $\lambda>0$, the solution is non-trivially periodic, except for the case with $a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$ and $a_{1}=0$; - with $\lambda\leq0$, the solution is global. The proof is standard and similar to Lemma 3 in [@YuenCQG2009] for the Euler-Poisson equations with a negative cosmological constant.(I) For equation (\[eq124-2drotation\]), we could multiply $\dot{a}(t)$ and integrate it in the following manner:$$\frac{\dot{a}(t)^{2}}{2}+\lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}=\theta$$ with the constant $\theta=\frac{a_{1}^{2}}{2}+\lambda\ln a_{0}+\frac{\xi^{2}}{2a_{0}^{2}}$.Then, we could define the kinetic energy as $$F_{kin}=\frac{\dot{a}(t)^{2}}{2}$$ and the potential energy as$$F_{pot}=\lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}.$$ Here, the total energy is conserved such that$$\frac{d}{dt}(F_{kin}+F_{pot})=0.$$ The potential energy function has only one global minimum at $\overset{-}{a}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$ for $a(t)\in (0,+\infty)$. Therefore, by the classical energy method (in Section 4.3 of [@LS]), the solution for equation (\[eq124-2drotation\]) has a closed trajectory. The time for traveling the closed orbit is$$T=2\int_{t_{1}}^{t_{2}}dt=2\int_{a_{\min}}^{a_{\max}}\frac{da(t)}{\sqrt{2\left[ \theta-\left( \lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}\right) \right] }}\text{,}\label{hk12DRotation}$$ where $a(t_{1})=a_{\min}=\underset{t\geq0}{\inf}(a(t))$ and $a(t_{2})=a_{\max }=\underset{t\geq0}{\sup}(a(t))$ with some constants $t_{1}$, $t_{2\text{ }}$such that $t_{2}\geq t_{1}\geq0$.We let $H(t)=\theta-\left( \lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}\right) $, $H_{0}=\left\vert \theta-\left( \lambda\ln(a_{\min}+\epsilon)+\frac{\xi^{2}}{2(a_{\min }+\epsilon)^{2}}\right) \right\vert $ , and $H_{1}=\left\vert \theta-\left( \lambda\ln(a_{\max}-\epsilon)+\frac{\xi^{2}}{2(a_{\max}-\epsilon)^{2}}\right) \right\vert $. 
Except for the case with $a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$ and $a_{1}=0$, the time in equation (\[hk12DRotation\]) can be estimated by$$\begin{aligned} T & =\int_{a_{\min}}^{a_{\min}+\epsilon}\frac{2da(t)}{\sqrt{2\left[ \theta-\left( \lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}\right) \right] }}+\int_{a_{\min}+\epsilon}^{a_{\max}-\epsilon}\frac{2da(t)}{\sqrt{2\left[ \theta-\left( \lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}\right) \right] }}\nonumber\\ & +\int_{a_{\max}-\epsilon}^{a_{\max}}\frac{2da(t)}{\sqrt{2\left[ \theta-\left( \lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}\right) \right] }}$$ with a sufficient small constant $\epsilon>0$,$$\begin{aligned} & \leq\underset{a_{\min}\leq a(t)\leq a_{\min}+\epsilon}{\sup}\left\vert \frac{1}{-\frac{\lambda}{a(t)}+\frac{\xi^{2}}{a(t)^{3}}}\right\vert \int _{0}^{H_{0}}\frac{\sqrt{2}dH(t)}{\sqrt{H(t)}}+\int_{a_{\min}+\epsilon }^{a_{\max}-\epsilon}\frac{2da(t)}{\sqrt{2\left[ \theta-\left( \lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}\right) \right] }}\nonumber\\ & +\underset{a_{\max}-\epsilon\leq a(t)\leq a_{\max}}{\sup}\left\vert \frac{1}{-\frac{\lambda}{a(t)}+\frac{\xi^{2}}{a(t)^{3}}}\right\vert \int _{0}^{H_{1}}\frac{\sqrt{2}dH(t)}{\sqrt{H(t)}}$$$$\begin{aligned} & =\underset{a_{\min}\leq a\leq a_{\min}+\epsilon}{\sup}\left\vert \frac {1}{-\frac{\lambda}{a(t)}+\frac{\xi^{2}}{a(t)^{3}}}\right\vert 2\sqrt{2}\sqrt{H_{0}}+\int_{a_{\min}+\epsilon}^{a_{\max}-\epsilon}\frac{2da(t)}{\sqrt{2\left[ \theta-\left( \lambda\ln a(t)+\frac{\xi^{2}}{2a(t)^{2}}\right) \right] }}\nonumber\\ & +\underset{a_{\max}-\epsilon\leq a(t)\leq a_{\max}}{\sup}\left\vert \frac{1}{-\frac{\lambda}{a(t)}+\frac{\xi^{2}}{a(t)^{3}}}\right\vert 2\sqrt {2}\sqrt{H_{1}}$$$$<\infty.$$ Therefore, we have (a) the solutions to the Emden equation (\[eq124-2drotation\]) are non-trivially periodic except for the case with $a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$ and $a_{1}=0$.Figure 2 below shows a particular solution for the Emden equation:$$\left\{ \begin{array} [c]{c}\ddot{a}(t)=\frac{-1}{a(t)}+\frac{1}{a(t)^{3}}\\ a(0)=1,\text{ }\dot{a}(0)=1. \end{array} \right. \label{grapha(t)}$$ It is clear to see (b) if $a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$ and $a_{1}=0$, the solutions (\[eq124-2drotation\]) are steady. By applying the similar analysis, we can show that(II) with $\lambda\leq0$, the solutions are global.The proof is completed. After obtaining the above two lemmas, we can construct the periodic and spiral solutions with rotation to the $2$D isothermal Euler-Poisson system (\[Euler-Poisson2DRotation\]) as follows. \[Proof of Theorem \[2-dRotation\]\]The procedure of the proof is similar to the proof for the non-rotational fluids [@YuenJMAA2008a]. It is clear that our functions (\[2-DIsothermalRotation\]) satisfy Lemma \[lem:generalsolutionformasseqrotation2d\] for the mass equation (\[Euler-Poisson2DRotation\])$_{1}$. 
For the first momentum equation (\[Euler-Poisson2DRotation\])$_{2,1}$, we get$$\rho\left[ \frac{\partial u_{1}}{\partial t}+u_{1}\frac{\partial u_{1}}{\partial x}+u_{2}\frac{\partial u_{1}}{\partial y}\right] +\frac{\partial }{\partial x}P+\rho\frac{\partial}{\partial x}\Phi$$$$=\rho\left[ \frac{\partial u_{1}}{\partial t}+u_{1}\frac{\partial u_{1}}{\partial x}+u_{2}\frac{\partial u_{1}}{\partial y}\right] +\frac{\partial }{\partial x}\frac{Ke^{f(\frac{r}{a(t)})}}{a(t)^{2}}+\rho\frac{\partial }{\partial x}\Phi\text{.}$$ By defining the variable $s=\frac{r}{a(t)}$ with $\frac{\partial}{\partial x}=\frac{\partial}{\partial r}\frac{\partial r}{\partial x}=\frac{x}{r}\frac{\partial}{\partial r}$, we have$$=\rho\left[ \frac{\partial u_{1}}{\partial t}+u_{1}\frac{\partial u_{1}}{\partial x}+u_{2}\frac{\partial u_{1}}{\partial y}\right] +\frac{Ke^{f(s)}}{a(t)^{2}}\frac{x}{r}\frac{\partial}{a(t)\partial\left( \frac{r}{a(t)}\right) }f(s)+\frac{x}{r}\rho\frac{\partial}{\partial r}\Phi$$$$=\rho\left[ \frac{\partial u_{1}}{\partial t}+u_{1}\frac{\partial u_{1}}{\partial x}+u_{2}\frac{\partial u_{1}}{\partial y}+\frac{K}{a(t)}\frac{x}{r}\frac{\partial}{\partial s}f(s)+\frac{x}{r}\frac{2\pi}{r}{\displaystyle\int\limits_{0}^{r}} \frac{e^{f(\frac{\eta}{a(t)})}}{a(t)^{2}}\eta d\eta\right]$$ $$=\rho\left[ \begin{array} [c]{c}\frac{\partial}{\partial t}\left( \frac{\dot{a}(t)}{a(t)}x-\frac{\xi }{a(t)^{2}}y\right) +\left( \frac{\dot{a}(t)}{a(t)}x-\frac{\xi}{a(t)^{2}}y\right) \frac{\partial}{\partial x}\left( \frac{\dot{a}(t)}{a(t)}x-\frac{\xi}{a(t)^{2}}y\right) \\ +\left( \frac{\xi}{a(t)^{2}}x+\frac{\dot{a}(t)}{a(t)}y\right) \frac {\partial}{\partial y}\left( \frac{\dot{a}(t)}{a(t)}x-\frac{\xi}{a(t)^{2}}y\right) +\frac{x}{a(t)r}\left( K\dot{f}(s)+\frac{2\pi}{\frac{r}{a(t)}}{\displaystyle\int\limits_{0}^{r}} \frac{e^{f(\frac{\eta}{a(t)})}}{a(t)^{2}}\eta d\eta\right) \end{array} \right]$$$$=\rho\left[ \begin{array} [c]{c}\left( \frac{\ddot{a}(t)}{a(t)}-\frac{\dot{a}(t)^{2}}{a(t)^{2}}\right) x+\frac{2\xi\dot{a}(t)}{a(t)^{3}}y+\left( \frac{\dot{a}(t)}{a(t)}x-\frac{\xi }{a(t)^{2}}y\right) \frac{\dot{a}(t)}{a(t)}\\ -\left( \frac{\xi}{a(t)^{2}}x+\frac{\dot{a}(t)}{a(t)}y\right) \frac{\xi }{a(t)^{2}}+\frac{x}{a(t)r}\left( K\dot{f}(s)+\frac{2\pi}{\frac{r}{a(t)}}{\displaystyle\int\limits_{0}^{r}} e^{f(\frac{\eta}{a(t)})}\left( \frac{\eta}{a(t)}\right) d\left( \frac{\eta }{a(t)}\right) \right) \end{array} \right]$$$$=\frac{x\rho}{a(t)r}\left[ \left( \ddot{a}(t)-\frac{\xi^{2}}{a(t)^{3}}\right) r+K\dot{f}(s)+\frac{2\pi}{s}{\displaystyle\int\limits_{0}^{s}} e^{f(\tau)}\tau d\tau\right]$$$$=\frac{x\rho}{a(t)r}\left[ -\lambda s+K\dot{f}(s)+\frac{2\pi}{s}{\displaystyle\int\limits_{0}^{s}} e^{f(\tau)}\tau d\tau\right] \label{eq581}$$ with the Emden equation $$\left\{ \begin{array} [c]{c}\ddot{a}(t)=\frac{-\lambda}{a(t)}+\frac{\xi^{2}}{a(t)^{3}}\\ a(0)=a_{0}>0\text{, }\dot{a}(0)=a_{1}\text{,}\end{array} \right. 
\label{Endemeqeq2-Drotation}$$ with an arbitrary constant $\xi\neq0$.Similarly, we obtain the corresponding result for the second momentum equation (\[Euler-Poisson2DRotation\])$_{2,2}$ in the following manner with $\frac{\partial}{\partial y}=\frac{\partial}{\partial r}\frac{\partial r}{\partial y}=\frac{y}{r}\frac{\partial}{\partial r}$:$$\rho\left[ \frac{\partial u_{2}}{\partial t}+u_{1}\frac{\partial u_{2}}{\partial x}+u_{2}\frac{\partial u_{2}}{\partial y}\right] +\frac{\partial }{\partial y}P+\rho\frac{\partial}{\partial y}\Phi$$$$=\rho\left[ \frac{\partial u_{2}}{\partial t}+u_{1}\frac{\partial u_{2}}{\partial x}+u_{2}\frac{\partial u_{2}}{\partial y}\right] +\frac{Ke^{f(s)}}{a(t)^{2}}\frac{y}{r}\frac{\partial}{a(t)\partial\left( \frac{r}{a(t)}\right) }f(s)+\frac{y}{r}\rho\frac{\partial}{\partial r}\Phi$$$$=\rho\left[ \frac{\partial u_{2}}{\partial t}+u_{1}\frac{\partial u_{2}}{\partial x}+u_{2}\frac{\partial u_{2}}{\partial y}+\frac{K}{a(t)}\frac{y}{r}\frac{\partial}{\partial s}f(s)+\frac{y}{r}\frac{2\pi}{r}{\displaystyle\int\limits_{0}^{r}} \frac{e^{f(\frac{\eta}{a(t)})}}{a(t)^{2}}\eta d\eta\right]$$$$=\rho\left[ \begin{array} [c]{c}\frac{\partial}{\partial t}\left( \frac{\xi}{a(t)^{2}}x+\frac{\overset{\cdot }{a}(t)}{a(t)}y\right) +\left( \frac{\dot{a}(t)}{a(t)}x-\frac{\xi}{a(t)^{2}}y\right) \frac{\partial}{\partial x}\left( \frac{\xi}{a(t)^{2}}x+\frac{\overset{\cdot}{a}(t)}{a(t)}y\right) \\ +\left( \frac{\xi}{a(t)^{2}}x+\frac{\dot{a}(t)}{a(t)}y\right) \frac {\partial}{\partial y}\left( \frac{\xi}{a(t)^{2}}x+\frac{\overset{\cdot}{a}(t)}{a(t)}y\right) +\frac{y}{a(t)r}\left( K\dot{f}(s)+\frac{2\pi}{\frac{r}{a(t)}}{\displaystyle\int\limits_{0}^{r}} \frac{e^{f(\frac{\eta}{a(t)})}}{a(t)^{2}}\eta d\eta\right) \end{array} \right]$$$$=\rho\left[ \begin{array} [c]{c}-\frac{2\xi\dot{a}(t)}{a(t)^{3}}x+\left( \frac{\ddot{a}(t)}{a(t)}-\frac {\dot{a}(t)^{2}}{a(t)^{2}}\right) y+\left( \frac{\dot{a}(t)}{a(t)}x-\frac{\xi}{a(t)^{2}}y\right) \frac{\xi}{a(t)^{2}}\\ +\left( \frac{\xi}{a(t)^{2}}x+\frac{\dot{a}(t)}{a(t)}y\right) \frac {\overset{\cdot}{a}(t)}{a(t)}+\frac{y}{a(t)r}\left( K\dot{f}(s)+\frac{2\pi }{\frac{r}{a(t)}}{\displaystyle\int\limits_{0}^{r}} e^{f(\frac{\eta}{a(t)})}\left( \frac{\eta}{a(t)}\right) d\left( \frac{\eta }{a(t)}\right) \right) \end{array} \right]$$$$=\frac{y\rho}{a(t)r}\left[ \left( \ddot{a}(t)-\frac{\xi^{2}}{a(t)^{3}}\right) r+K\dot{f}(s)+\frac{2\pi}{s}{\displaystyle\int\limits_{0}^{s}} e^{f(\tau)}\tau d\tau\right]$$$$=\frac{y\rho}{a(t)r}\left[ -\lambda s+K\dot{f}(s)+\frac{2\pi}{s}{\displaystyle\int\limits_{0}^{s}} e^{f(\tau)}\tau d\tau\right] \text{.}\label{eq58}$$ To make equations (\[eq581\]) and (\[eq58\]) equal zero, we may require the Liouville equation from differential geometry:$$\left\{ \begin{array} [c]{c}\ddot{f}(s)+\frac{\dot{f}(s)}{s}+\frac{2\pi}{K}e^{f(s)}=\frac{2\lambda}{K}\\ f(0)=\alpha\text{, }\dot{f}(0)=0. \end{array} \right. \label{Liouville2-DRotation}$$ We note that the global existence of the initial value problem of the Liouville equation (\[Liouville2-DRotation\]) has been shown by Lemma 10 in [@YuenJMAA2008a]. Thus, we confirm that functions (\[2-DIsothermalRotation\]) are a family of classical solutions for the isothermal ($\gamma=1$) Euler-Poisson equations (\[Euler-Poisson2DRotation\]) in $R^{2}$. 
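For completeness, we indicate why the Liouville equation (\[Liouville2-DRotation\]) makes the brackets in (\[eq581\]) and (\[eq58\]) vanish. Setting $$Q(s):=-\lambda s+K\dot{f}(s)+\frac{2\pi}{s}{\displaystyle\int\limits_{0}^{s}} e^{f(\tau)}\tau d\tau\text{,}$$ a direct differentiation together with (\[Liouville2-DRotation\]) gives $$\frac{d}{ds}\left( sQ(s)\right) =s\left( -2\lambda+K\ddot{f}(s)+\frac{K\dot{f}(s)}{s}+2\pi e^{f(s)}\right) =0\text{,}$$ and since $sQ(s)\rightarrow0$ as $s\rightarrow0^{+}$ (using $\dot{f}(0)=0$), it follows that $Q(s)\equiv0$ for $s>0$.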
With Lemma \[lemma22–2drotation\], it is clear that(I) With $\lambda>0$,(a) solutions (\[2-DIsothermalRotation\]) are non-trivially time-periodic, except for the case $a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$and $a_{1}=0$;(b) if $a_{0}=\frac{\left\vert \xi\right\vert }{\sqrt{\lambda}}$ and $a_{1}=0$, solutions (\[2-DIsothermalRotation\]) are steady.(II) With $\lambda\leq0$, solutions (\[2-DIsothermalRotation\]) are global in time.Therefore all of the rotational solutions (\[2-DIsothermalRotation\]) with $\xi\neq0$, are global in time.We complete the proof. Conclusion and Discussion ========================= Our results confirm that there exists a class of periodic solutions which can be found by choosing a sufficiently small constant $a_{0}<<1$ in solutions (\[2-DIsothermalRotation\]), in the Euler-Poisson equations (\[Euler-Poisson2DRotation\]) in $R^{2}$, even without a negative cosmological constant [@YuenCQG2009]. Here, the periodic rotation prevents the blowup phenomena that occur in solutions without rotation [@YuenJMAA2008a]. It is open to show the existences of solutions and their stabilities for the small perturbation of these solutions (\[2-DIsothermalRotation\]). Numerical simulation and mathematical proofs for the perturbational solutions are suggested for understanding their evolution. As our solutions in this paper works for the 2D case, the corresponding rotational solutions in $R^{3}$ are conjectured. We conjecture that the corresponding rotational solutions to Goldreich and Weber’s solutions (\[3-dgamma=4over3\]) for the 3D Euler-Poisson equations with $\gamma=4/3$ [@GW] exist, such as the ones for the Euler equations [@Yuen3DexactEuler]. Further research is expected to shed more light on the possibilities. Acknowledgement =============== The authors thank the reviewers and the editors for their helpful comments for improving the quality of this article. [99]{} M. Bezard, *Existence locale de solutions pour les equations d’Euler-Poisson. (French) \[Local Existence of Solutions for Euler-Poisson Equations\],* Japan J. Indust. Appl. Math. **10** (1993), 431–450. J. Binney and S. Tremaine, Galactic Dynamics, 2nd Edition, Princeton Univ. Press, 2008. S. Chandrasekhar, An Introduction to the Study of Stellar Structure, Univ. of Chicago Press, 1939. Y.B. Deng, J.L. Xiang and T. Yang, *Blowup Phenomena of Solutions to Euler-Poisson Equations*, J. Math. Anal. Appl. **286** (2003), 295–306. P. Gamblin, *Solution reguliere a temps petit pour l’equation d’Euler-Poisson. (French) \[Small-time Regular Solution for the Euler-Poisson Equation\]* Comm. Partial Differential Equations **18** (1993), 731–745. P. Goldreich and S. Weber, *Homologously Collapsing Stellar Cores*, Astrophys, J. **238** (1980), 991–997. R. Kippenhahn and A. Weigert, Stellar Sturture and Evolution, Springer-Verlag, 1990. W.D. Lakin and D.A. Sanchez, Topics in Ordinary Differential Equations, Dover Pub. Inc., New York, 1982. M.S. Longair, Galaxy Formation, 2nd Edition, Springer-Verlag, 2008. T. Makino, *On a Local Existence Theorem for the Evolution Equation of Gaseous Stars*, Patterns and Waves, 459–479, Stud. Math. Appl., **18**, North-Holland, Amsterdam, 1986. T. Makino, *Blowing up Solutions of the Euler-Poisson Equation for the Evolution of the Gaseous Stars*, Transport Theory and Statistical Physics **21** (1992), 615–624. M.W. Yuen, *Analytical Blowup Solutions to the 2-dimensional Isothermal Euler-Poisson Equations of Gaseous Stars*, J. Math. Anal. Appl. **341 (**2008**),** 445–456. M.W. 
Yuen, *Analytical Blowup Solutions to the Pressureless Navier-Stokes-Poisson Equations with Density-dependent Viscosity in* $R^{N}$, Nonlinearity **22** (2009), 2261–2268. M.W. Yuen, *Analytically Periodic Solutions to the 3-dimensional Euler-Poisson Equations of Gaseous Stars with a Negative Constant,* Class. Quantum Grav. **26** (2009), 235011, 8pp. M.W. Yuen, *Exact, Rotational, Infinite Energy, Blowup Solutions to the 3-dimensional Euler Equations*, Phys. Lett. A **375** (2011), 3107–3113. T. Zhang and Y.X. Zheng, *Exact Spiral Solutions of the Two-dimensional Euler Equations*, Discrete Contin. Dynam. Systems **3** (1997), 117–133. [^1]: E-mail address: [email protected] [^2]: Corresponding author and E-mail address: [email protected]
--- abstract: 'We consider the asymptotic evolution of a relativistic spin-$\frac{1}{2}$ particle. i.e. a particle whose wavefunction satisfies the Dirac equation with external static potential. We prove that the probability for the particle crossing a (detector) surface converges to the probability, that the direction of the momentum of the particle lies within the solid angle defined by the (detector) surface, as the distance of the surface goes to infinity. This generalizes earlier non relativistic results, known as flux across surfaces theorems, to the relativistic regime.' author: - | D. Dürr, P. Pickl\ Fakultät für Mathematik\ Universität München, Theresienstr. 39, 80333 München, Germany date: March 2002 title: 'Flux-Across-Surfaces Theorem for a Dirac Particle' --- Introduction ============ In scattering experiments the scattered particles are measured at a macroscopic distance, but the computations of scattering cross sections are based on the distribution of the wavefunction in momentum space. Therefore a relationship between the crossing probability through a far distant detector surface and the shape of the wavefunction in momentum space is needed. This relationship is given by the flux-across-surfaces theorem, which - as a problem in mathematical physics - has been formulated by Combes, Newton and Shtokhamer [@CNS], see also [@DDGZ2; @SCAT]. For scattering states (material on scattering states for the Dirac equation is in [@thaller]) the theorem asserts that the probability of crossing a far distant surface (physical interaction with the detector is neglected) subtended by a solid angle is equal to the probability that the scattered particle will, in the distant future, have a momentum, whose direction lies in that same solid angle. Moreover, the probability, that the particle will cross the detector within a certain area is given by the integral of the flux over that area and time. This has been proven for Schrödinger evolutions in great generality, see for instance [@JMP; @Amrein; @AmreinPearson; @thesis; @Panati; @AntPan]. We consider here wavefunctions $\psi_{t}\in L^{2}(\mathbb{R}^{3})\bigotimes\mathbb{C}^{4}$ which satisfy the Dirac-equation (conveniently setting $c=\hbar=1$) $$\label{dirac} i\frac{\partial\psi_{t}}{\partial t}=-i\sum_{l=1}^{3}\alpha_{l}\partial_{l}\psi_{t}+A\hspace{-0.2cm}/\psi_{t}+\beta m\psi_{t}\equiv H\psi_{t}$$ where $$\begin{aligned} \label{alphas} \alpha_{l}= \begin{pmatrix} _{0} & _{\sigma_{l}}\\ _{\sigma_{l}} & _{0} \end{pmatrix}; \beta= \begin{pmatrix} _{\mathbf{1}} & _{0}\\ _{0} & _{-\mathbf{1}} \end{pmatrix}; l=1,2,3\end{aligned}$$ $\sigma_{l}$ being the Pauli matrices: $$\begin{aligned} \sigma_{1}=\begin{pmatrix} _{1} & _{0} \\ _{0} & _{-1} \end{pmatrix}; \sigma_{2}=\begin{pmatrix} _{0} & _{1} \\ _{1} & _{0} \end{pmatrix}; \sigma_{3}=\begin{pmatrix} _{0} & _{-i} \\ _{i} & _{0}\; \end{pmatrix}\end{aligned}$$ $\mathbf{1}$ the $2\times2$-unit matrix and $A\hspace{-0.2cm}/$ the 4-potential in the form $$A\hspace{-0.2cm}/:=A_{0}+\mathbf{A}\cdot\boldsymbol{\alpha}$$ with $\boldsymbol{\alpha}:=(\alpha_1,\alpha_2,\alpha_3)$ . In the following we will always denote solutions of the Dirac equation by $\psi_{t}$ and by $\psi_{0}$ the “time zero” wavefunction. 
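Recall that the matrices in (\[alphas\]) obey the Dirac algebra, $$\alpha_{l}\alpha_{j}+\alpha_{j}\alpha_{l}=2\delta_{lj}\hspace{1cm}\alpha_{l}\beta+\beta\alpha_{l}=0\hspace{1cm}\beta^{2}=1$$ (as $4\times4$ matrices), so that for the free Dirac operator $H_{0}=-i\sum_{l=1}^{3}\alpha_{l}\partial_{l}+\beta m$ one has $H_{0}^{2}=-\Delta+m^{2}$. This is the relation behind the relativistic dispersion $E_{k}=\sqrt{k^{2}+m^{2}}$, which will appear repeatedly below.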
$A\hspace{-0.2cm}/$ is an external static four-potential, which satisfies condition A (see (\[potcond\])); this condition concerns smoothness and is for the sake of simplicity taken stronger than needed: $$\label{potcond} \hspace{-2.1cm}\text{\bf{Condition A}} \hspace{2cm}A\hspace{-0.2cm}/(\mathbf{x})\in C^{\infty}\hspace{1cm}\exists M,\xi>0:\hspace{0.5cm}\mid A\hspace{-0.2cm}/(\mathbf{x})\mid\leq M\langle x\rangle^{-(4+\xi)}\vspace{0.5cm}\;.$$ The norm $\mid\cdot\mid$ is defined as: $$\mid B\mid:=\sup_{\parallel\varphi\parallel_{s}=1}\parallel B\varphi\parallel_{s}$$ where $$\parallel\varphi\parallel_{s}:=\langle\varphi,\varphi\rangle^{\frac{1}{2}}$$ with the inner product in spin space $$\langle\cdot ,\cdot \rangle: \mathbb{C} ^{4}\bigotimes \mathbb{C} ^{4}\rightarrow \mathbb{C}\hspace{1cm}\langle\varphi,\chi\rangle:= \sum_{l=1}^{4}\overline{\varphi_{l}}\chi_{l}\,\,.$$ Often we have spinors depending on $\mathbf{x}$; in that case we write $\parallel\varphi\parallel_{s}(\mathbf{x})$. The continuity equation involving the quantum flux of a relativistic spin $\frac{1}{2}$ particle reads $$\label{cont} \frac{\partial}{\partial t}\langle\psi_{t},\psi_{t}\rangle + \nabla\cdot\mathbf{j} = 0 ,$$ where the 4-flux is defined for any $\varphi\in L^{2}(\mathbb{R}^{3})\bigotimes\mathbb{C}^{4}$ by $$\label{relflux} \underline{j} = \begin{pmatrix} _{j_{0}} \\ _{\mathbf{j}} \end{pmatrix}=\langle\varphi,\underline{\alpha}\varphi\rangle\;,$$ with $\underline{\alpha}=\begin{pmatrix} _{1} \\ _{\boldsymbol{\alpha}} \end{pmatrix}.$ For notational convenience we sometimes omit the dependence on $\mathbf{x}.$ Furthermore we have the usual $L^2$-norm on the space of $4$-spinors given by $$\parallel\varphi\parallel=\left(\int \parallel\varphi\parallel_{s}^2d^3x\right)^{\frac{1}{2}}\,\,.$$ We introduce the Fourier transform of $\varphi(\mathbf{x})$ as the representation in the generalized basis (\[basis\]) of the free Hamiltonian, i.e. $$\label{Four} \widehat{\varphi}_s(\mathbf{k})= \int(2\pi)^{-\frac{3}{2}} \langle\varphi_{\mathbf{k}}^s(\mathbf{x}),\varphi(\mathbf{x})\rangle d^{3}x\hspace{1cm} \widehat{\varphi}(\mathbf{k}):=\sum_{s=1}^{2}s_{\mathbf{k}}^{s}\widehat{\varphi}_s(\mathbf{k})\,.$$ We denote by $x$ the Euclidean length of $\mathbf{x}$. We assume that asymptotic completeness holds, i.e. that the wave operators exist on the spectral subspace ${\cal H}_{ac}$ of the continuous positive spectrum (“scattering states”) of the Dirac Hamiltonian: Let $\psi_{\text{out}}$ denote the wavefunction of the free asymptotic of a scattering state $\psi$; then $$\lim_{t\rightarrow\infty}\parallel e^{-iH_{0}t}\psi_{\text{out}}-e^{-iHt}\psi\parallel=0\,\,.$$ $\psi_{\text{out}}$ is given by the wave operator: $$\Omega_{+}=\lim_{t\rightarrow\infty}e^{iHt}e^{-iH_{0}t}\hspace{1cm}\psi=\Omega_{+}\psi_{\text{out}}\,\,.$$ The existence of the wave operators and asymptotic completeness has been proven for short range potentials. See e.g. Thaller [@thaller]. We remark (see Lemma \[properties\] (d), (\[her\])) that the Fourier transform $\widehat{\psi}_{out,s}(\mathbf{k})$ of $\psi_{\text{out}}$ equals the generalized Fourier transform $\psi^{\ddagger}_s$ of $\psi$ in the generalized eigen-basis of the Dirac Hamiltonian with potential. In general, we do not have much information about scattering states. 
One can prove the flux-across-surfaces theorem with conditions merely on the “out”-states, where the corresponding properties of the scattering states are hidden in the mapping properties of the wave operators, or, better, in the smoothness properties of the generalized eigenfunctions. On the other hand, one would like to be sure that such conditions are not too restrictive on the set of scattering states. We introduce the set $\cal{G}$ of functions $\widehat{\psi}_{\text{out}}$ for which the flux across surfaces can naturally be proven: $$\label{definitionG} f(\mathbf{k})\in\cal{G}\Longleftrightarrow\bigg{\{} \begin{tabular}{llll} $\exists M\in\mathbb{R}:$ & $\parallel\partial_{k}^{j}f(\mathbf{k})\parallel_{s}\leq M\langle k\rangle^{-n}$ & for $j=0,1,2$ and all $n\in\mathbb{N}$ \\ $\forall\mathbf{k}\neq0:$ & $\parallel k^{\mid\gamma\mid-1} D_{\mathbf{k}}^{\gamma}f(\mathbf{k})\parallel_{s}\leq M\langle k\rangle^{-n}$ & for all $n\in\mathbb{N}$ \\ \end{tabular}$$ where $\gamma=(\gamma_{1};\gamma_{2};\gamma_{3})$ is a multi-index with $\mid\gamma\mid\leq2$, $D_{\mathbf{k}}^{\gamma}:=\partial_{k_{1}}^{\gamma_{1}}\partial_{k_{2}}^{\gamma_{2}}\partial_{k_{3}}^{\gamma_{3}}$. This set maps under the wave operator to a dense set in the set of scattering states. After the theorem we shall give, under more restrictive conditions, more detailed information on the set of scattering states for which the theorem holds. The paper is organized as follows: In the next section we shall state the theorem. We shall also give its formulation in covariant form, but we shall prove the theorem using the rest frame of the detector and the potential. The following sections contain the proof of the theorem. We first prove the statement for the free case ($A\hspace{-0.2cm}/=0$) and then for the case of nonzero potential. Both are done in section 3. The proof relies almost entirely on the stationary phase method, which we need to adapt to our purposes. The main lemma is Lemma \[statphas\], whose lengthy technical proof is put in Appendix \[appendix1\]. The difficulty we have to face, and which makes this paper not a simple generalization of the results in the Schrödinger situation, is that the time evolution generated by the Dirac Hamiltonian is not of a “nice” form for the stationary phase method to be easily applied to. The Schrödinger case is easier. On the other hand, the expression for the flux needs no differentiability of the wavefunction, and one might be led to believe that describing scattering in the relativistic regime is simpler, in particular that less restrictive theorems should result. One may even get the idea that asymptotic completeness and the flux-across-surfaces theorem become more or less equivalent statements in the relativistic regime. But we are far from that. Nevertheless, the fact that we require smoothness and good decay of the potential may well be due to our method of proof. We also need information about the generalized eigenfunctions of the Dirac Hamiltonian with external potential, see Lemma \[properties\], whose proof is also put into Appendix \[appendix3\]. The appendix, which in fact is almost half of the paper, contains other tedious technical details. ***Acknowledgement:*** Very helpful discussions with Stefan Teufel are gratefully acknowledged. We would also like to thank the referee for a very critical reading of the manuscript, which led to a clear improvement of the paper. 
The Theorem =========== The flux-across-surfaces theorem deals with the flux $\mathbf j$ integrated over a spherical surface at a far distance and asserts that 1. the absolute value of the flux and the flux itself yield the same asymptotics, allowing one to interpret the flux integral as the crossing probability [@DDGZ2; @Daumer], 2. the crossing probability equals the probability for the momentum to lie within the cone defined by the surface. \[frflsu\] Let $\psi$ be a scattering state with outgoing free asymptotic $\psi_{\text{out}}$, whose Fourier transform $\widehat{\psi}_{\text{out}}$ lies in $\mathcal{G}$ (cf. (\[definitionG\])). Let $R^{2}d\Omega$ be the surface element at distance $R$ with solid angle differential $d\Omega$ and let $\mathbf{n}$ denote the outward normal of the surface element. Furthermore let $S$ be a subset of the unit sphere. Then for all $t_{i}\in\mathbb{R}$: $$\begin{aligned} \label{limes} \lim_{R\rightarrow\infty}\int_{S}\int_{t_{i}}^{\infty}j(\mathbf{R},t)dt R^{2}d\Omega & = &\lim_{R\rightarrow\infty}\int_{S}\int_{t_{i}}^{\infty}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dt R^{2}d\Omega \nonumber\\& = & \int_{S}\int_{0}^{\infty}\langle\widehat{\psi}_{\text{out}}(\mathbf{k}), \widehat{\psi}_{\text{out}}(\mathbf{k})\rangle k^{2}dkd\Omega\ \, .\end{aligned}$$ Observing that $\parallel\widehat{\psi}_{\text{out}}\parallel(\mathbf{k})$ does not depend on time, we can choose a coordinate system $t^{\prime}=t-t_{i}$, so that we may for definiteness always put $t_{i}=0$ in (\[limes\]). The conditions on $\psi_{\text{out}}$ can be translated into more detailed conditions on the scattering states under more restrictive conditions on the potential: Let $$\begin{aligned} \label{potcond2} \hspace{-1.8cm}\text{\bf Condition B}\hspace{1.5cm}\mid\partial_{x}^{n}A\hspace{-0.2cm}/(\mathbf{x})\mid\in L^{2}(\mathbb{R}^3)\;\forall n\in\{0,1,2,...\}\hspace{1cm}\exists M:\ |A\hspace{-0.2cm}/(\mathbf{x})|\leq M\langle x\rangle^{-6}\nonumber\,\,.\end{aligned}$$ Then (for the proof see Appendix \[appendix4\]) \[equiv\] $$\label{equivalence} \widehat{\psi}_{\text{out}}(\mathbf{k})\in\mathcal{G}\Leftrightarrow\psi(\mathbf{x})\in\widehat{\mathcal{G}}$$ where $\widehat{\mathcal{G}}$ is the space of functions $\psi(\mathbf{x})\in\mathcal{H}_{ac}$ with $x^{j}\nabla\hspace{-0.25cm}/\hspace{0.1cm}^{n}\psi(\mathbf{x})\in L^{2}$ for all $j=0,1,2$ and $n\in\mathbb{N}_{0}$, where $\nabla\hspace{-0.25cm}/:=-i\sum_{l=1}^{3}\alpha_{l}\partial_{l}$. Covariant form of the theorem ----------------------------- As we deal with a relativistic regime, it might be of interest to have also a covariant formulation of the theorem. As $\langle\widehat{\psi}_{out},\widehat{\psi}_{out}\rangle$ is not invariant under Lorentz transformations, we use $$\widehat{\psi}^{LI}_{\text{out}}(\underline{k})=(k^{2}+m^{2})^{\frac{1}{4}}\widehat{\psi}_{\text{out}}(\underline{k})\;,$$ for which it is known that $\langle\widehat{\psi}_{out}^{LI},\widehat{\psi}_{out}^{LI}\rangle$ is a Lorentz scalar (see for instance [@haag]). Then the flux-across-surfaces theorem reads in a general and covariant way: \[relfast\] Let the conditions of Theorem \[frflsu\] be satisfied. Let $$\underline{x}\diamond\underline{y}:=x_{0}y_{0}-\sum_{j=1}^{3}x_{j}y_{j}$$ be the Minkowski scalar product. 
Then for any subspace $Z\subseteq\{\underline{x}\mid \underline{x}\diamond\underline{x}=m^{2}\}\subset\mathbb{R}^{4}$ and any smooth scalar function $\eta(\underline{x})$ bounded away from zero: $$\begin{aligned} \label{relativistic} \lim_{\lambda\rightarrow\infty}\int_{\widetilde{Z}(\lambda)} \underline{j}(\mathbf{x})\diamond\underline{n}\widetilde{d\sigma} =\int_{Z}\langle\widehat{\psi}^{LI}_{\text{out}}(\underline{k}), \widehat{\psi}^{LI}_{\text{out}}(\underline{k})\rangle d\sigma \; .\end{aligned}$$ where $$\widetilde{Z}(\lambda):=\{\underline{y}\mid\exists \underline{x}\in Z: \underline{y}=\lambda\eta(\underline{x})\underline{x}\}\subset\mathbb{R}^{4}$$ and $d\sigma$ is the invariant measure on $Z$, $\widetilde{d\sigma}$ the invariant measure on $\widetilde{Z}$ and $\underline{n}$ is the vector orthogonal on $\widetilde{Z}$ with Lorentz length one. This formulation may perhaps not be directly guessed, but once one understands its basics like (\[jscat0\]), this formulation becomes clear: The arbitrariness of the scalar function $\eta$ follows directly from (\[jscat0\]), observing that $$\lim_{\lambda\to\infty}\psi(\lambda \underline{k})=\lim_{\lambda\to\infty}\psi(\lambda \eta(\underline{k})\underline{k}).$$ Physically this is related to the fact, that (on big scales) it is possible to “catch” any part of the wave-function in different ways (for example by using a detector which is “close” and catches the wavefunction at an “early” time or one uses a far detector at a later time-interval). Let us explain how (\[limes\]) follows from (\[relativistic\]). We choose a set $Z$ whose projection on the $t=0$-subspace is a cone with angular distribution S: $$Z=\{\underline{k}\mid\frac{\mathbf{k}}{k}\in S\}\cap\{\underline{k}\diamond\underline{k}=m^{2}\}\;.$$ The invariant measure on the mass hyperboloid $d\sigma=\frac{d^{3}k}{\sqrt{k^{2}+m^{2}}}$ we get for the right hand side of (\[relativistic\]) $$\label{rechteseite} \int_{Z}\langle\widehat{\psi}^{LI}_{\text{out}}(\underline{k}), \widehat{\psi}^{LI}_{\text{out}}(\underline{k})\rangle d\sigma=\int_{S}\int_{0}^{\infty}\langle\widehat{\psi}_{\text{out}}(\mathbf{k}), \widehat{\psi}_{\text{out}}(\mathbf{k})\rangle k^{2}dkd\Omega\;.$$ For the left hand side of (\[relativistic\]) we take: $$\eta(\underline{x}):=\frac{1}{x}\hspace{1cm}x\neq0\;.$$ As both integrands in (\[relativistic\]) are bounded, a small neighborhood of $\mathbf{x}=0$ can be neglected. For constant $\lambda$, $\widetilde{Z}$ represents a radial surface with arbitrary time $t\geq0$. So we have: $$\label{linkeseite} \lim_{\lambda\rightarrow\infty}\int_{\widetilde{Z}(\lambda)}\underline{j}(\mathbf{x})\diamond\underline{n}\widetilde{d\sigma}=\lim_{R\rightarrow\infty}\int_{S}\int_{t_{i}}^{\infty}j(\mathbf{R},t)dt R^{2}d\Omega\;.$$ The proof ========= Scattering into cones heuristics -------------------------------- The flux-across-surfaces theorem is based on an asymptotic connection between the shape of the wavefunction in momentum space and in ordinary space. This is often referred to as the scattering into cones theorem, which has been proven for non-relativistic particles by Dollard [@dollard]. For that one chooses a certain parameterization of $\mathbb{R}^{4}$ and evaluates the wavefunction, as the parameter of the parameterizations goes to infinity. In the non-relativistic case, it is easiest to choose time as the parameter of the parameterization. 
In the relativistic case it is simplest to have lorentz-invariant three-dimensional subspaces of the time-like part of $\mathbb{R}^{4}$ as leaves of the parametrization[^1]. This can easily be done, by choosing a lorentz-vector as argument of $\psi$, i.e. a vector $\underline{x}$ with $\underline{x}\diamond\underline{x}=x_{0}^{2}-\mathbf{x}\cdot\mathbf{x}=\lambda m^{2}$. Set $\psi(\lambda\underline{k})=\psi(\mathbf{x}=\lambda\mathbf{k},t=\lambda\sqrt{k^{2}+m^{2}})$. We denote the two different eigenstates of momentum $\mathbf{k}$ of the free Hamiltonian with positive energy by $\varphi_{\mathbf{k}}^{s}$, whereas the s labels the two different spins our electron may have. In the standard representation these eigenstates can be written as: $$\label{basis} \varphi_{\mathbf{k}}^{s}=e^{i\mathbf{k}\cdot\mathbf{x}}s^{s}_{\mathbf{k}},$$ where the $s_{\mathbf{k}}^{s}$ are: $$\begin{aligned} s^{1}_{\mathbf{k}}=(2E_{k}\widehat{E}_{k})^{-\frac{1}{2}}\begin{pmatrix} _{\widehat{E}_{k}} \nonumber \\ _{0} \nonumber \\ _{k_{1}} \nonumber \\ _{k^{+}} \end{pmatrix}\hspace{1cm} s^{2}_{\mathbf{k}}=(2E_{k}\widehat{E}_{k})^{-\frac{1}{2}}\begin{pmatrix} _{0} \nonumber \\ _{\widehat{E}_{k}} \nonumber \\ _{k^{-}} \nonumber \\ _{-k_{1}} \end{pmatrix} \,,\end{aligned}$$ where $$k^{\pm}=k_{2}\pm ik_{3}\hspace{1cm}\widehat{E}_{k}=E_{k}+m\hspace{1cm}E_{k}=\sqrt{k^{2}+m^{2}}\,.$$ (For a detailed calculation of these spinors see ([@thaller])) The asymptotics result from a stationary phase analysis: $$\begin{aligned} \psi(\lambda\underline{k}) & = & U(t=\lambda\sqrt{k^{2}+m^{2}})\psi(\lambda\mathbf{k},0)\nonumber \\&=&\sum_{s=1}^{2}e^{-iH\lambda\sqrt{k^{2}+m^{2}}}\int(2\pi)^{-\frac{3}{2}} \varphi_{\mathbf{k^{\prime}}}^{s}(\lambda\mathbf{k})\widehat{\psi}_{s}(\mathbf{k^{\prime}})d^{3}k^{\prime}\nonumber \\&=&\sum_{s=1}^{2} e^{-iH\lambda\sqrt{k^{2}+m^{2}}}\int(2\pi)^{-\frac{3}{2}} e^{i\mathbf{k^{\prime}}\cdot\lambda \mathbf{k}}s^{s}_{\mathbf{k^{\prime}}}\widehat{\psi}_{s}(\mathbf{k^{\prime}})d^{3}k^{\prime}\nonumber \,.\end{aligned}$$ For convenience we define: $$\widehat{\psi}(\mathbf{k^{\prime}})=\sum_{s=1}^{2}s^{s}_{\mathbf{k^{\prime}}}\widehat{\psi}_{s}(\mathbf{k^{\prime}})\,.$$ This leads to: $$\begin{aligned} \label{ps} \psi(\lambda\underline{k}) & = & e^{-iH\lambda\sqrt{k^{2}+m^{2}}}\int(2\pi)^{-\frac{3}{2}} e^{i\mathbf{k^{\prime}}\cdot\lambda \mathbf{k}}\widehat{\psi}(\mathbf{k^{\prime}})d^{3}k^{\prime}\nonumber\\ &=& \int(2\pi)^{-\frac{3}{2}} e^{-i\lambda(\sqrt{k^{\prime 2}+m^{2}}\sqrt{k^{2}+m^{2}}-\mathbf{k^{\prime}}\cdot\mathbf{k})}\widehat{\psi}(\mathbf{k^{\prime}})d^{3}k^{\prime}\;.\end{aligned}$$ In view of the stationary phase method, in the limit $\lambda\rightarrow\infty $ only a small neighborhood of the stationary point of the phase function $$h(\mathbf{k}^{\prime}):=(\sqrt{k^{\prime 2}+m^{2}}\sqrt{k^{2}+m^{2}}-\mathbf{k^{\prime}}\cdot\mathbf{k})$$ will be relevant for the integral. The stationary point is given by: $$\label{point} \nabla_{\mathbf{k}^{\prime}}h(\mathbf{k}_{stat})=0 \Rightarrow \mathbf{k}_{stat}=\mathbf{k}$$ Without loss of generality we can set $k_{2}=k_{3}=0$. 
Near the stationary point the phase is to second order: $$-i\lambda(\sqrt{k^{\prime 2}+m^{2}}\sqrt{k^{2}+m^{2}}-\mathbf{k^{\prime}}\cdot\mathbf{k})\approx-i\lambda(m^{2}+\frac{m^{2}}{2(k^{2}+m^{2})}(k_{1}^{\prime}-k)^{2}+\frac{1}{2}(k_{2}^{\prime2}+k_{3}^{\prime2}))$$ This in equation(\[ps\]) leads to $$\begin{aligned} \psi(\lambda\underline{k}) & \approx & \int(2\pi)^{-\frac{3}{2}} e^{-i\lambda(m^{2}+\frac{m^{2}}{2(k^{2}+m^{2})}(k_{1}^{\prime}-k)^{2}+\frac{1}{2}(k_{2}^{\prime2}+k_{3}^{\prime2}))}\widehat{\psi}(\mathbf{k^{\prime}})d^{3}k^{\prime}\;,\end{aligned}$$ and replacing $\widehat{\psi}(\mathbf{k}^{\prime})$ by $\widehat{\psi}(\mathbf{k})$ we obtain by integrating the gaussian $$\psi(\lambda\underline{k})\approx\frac{e^{-i\lambda m^{2}}}{(i\lambda)^{\frac{3}{2}}}\widehat{\psi}(\mathbf{k})\sqrt{\frac{k^{2}}{m^{2}}+1}$$ We shall state now the stationary phase result in a some what more general setting, to cover also applications to the potential case considered later: The stationary phase -------------------- \[statphas\] Let $\widetilde{\chi}$ be in $\cal{G}$ (see(\[definitionG\])) and let the “phase function” $g$ be $$g(\mathbf{k}^{\prime})=\sqrt{k^{\prime2}+m^{2}}+a\mid k^{\prime}\mid-\mathbf{y}\cdot\mathbf{k}^{\prime}.$$ Let $\mathbf{k}_{stat}$ be the stationary point of the phase-function: $$\nabla g(\mathbf{k}_{stat})=0\, .$$ Then there exist $C_{1},C_{2},C_{3}\in\mathbb{R}$ so, that for all $\chi$ with $\parallel\partial_{k}^{j}\chi\parallel_{s}\leq\parallel\partial_{k}^{j}\widetilde{\chi}\parallel_{s}$ for $j=0,1,2$, $\mathbf{y}\in\mathbb{R}^{3}$ and $a\geq 0$ $$\label{hoerm} \parallel\int e^{-i\mu g(\mathbf{k}^{\prime})}\chi(\mathbf{k}^{\prime})d^{3}k^{\prime}-C_{1} \mu^{-\frac{3}{2}}\chi(\mathbf{k}_{stat}) \parallel_{s}< C_{2}\mu^{-2}+C_{3} \frac{k_{stat}^{\frac{1}{2}}}{\mu}\chi(\mathbf{k}_{stat})\, .$$ For phase functions without stationary point $C_{1}=C_{3}=0$. Moreover the $C_{j}$ are uniformly bounded for all $\chi$, $a$ and $\mathbf{y}$. For $a=0$ we can choose $C_{1}=(-2\pi i)^{\frac{3}{2}}e^{-i\mu g(\mathbf{k}_{stat})}\frac{(k_{stat}^{2}+m^{2})^{\frac{5}{4}}}{m}$ and $C_{3}=0 $ . One may be disturbed about the nature of the inequality (\[hoerm\]) when $C_{3}\neq0$. The point here is that our estimate is uniform in $k_{stat}$ and then later we shall use (\[hoerm\]) for $C\neq0$ such that $k_{stat}$ will be of order $\mu^{-1}$, so for $C_{3}\neq0$ the last term will be part of the leading term in our estimation. This statement is a slight adaptation to our situation of a theorem of Hörmander [@hoer], and its proof in the appendix \[appendix1\]. 
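As a quick consistency check of the constants, note that for $a=0$ the leading term of the lemma reproduces the heuristic Gaussian computation above: choosing $\mu=\lambda\sqrt{k^{2}+m^{2}}$, $\mathbf{y}=\frac{\mathbf{k}}{\sqrt{k^{2}+m^{2}}}$ and $\chi=(2\pi)^{-\frac{3}{2}}\widehat{\psi}$, one finds $\mathbf{k}_{stat}=\mathbf{k}$ and $\mu g(\mathbf{k}_{stat})=\lambda m^{2}$, so that (with the same branch of the square root as above) $$C_{1}\mu^{-\frac{3}{2}}\chi(\mathbf{k}_{stat})=\frac{e^{-i\lambda m^{2}}}{(i\lambda)^{\frac{3}{2}}}\frac{\sqrt{k^{2}+m^{2}}}{m}\,\widehat{\psi}(\mathbf{k})=\frac{e^{-i\lambda m^{2}}}{(i\lambda)^{\frac{3}{2}}}\widehat{\psi}(\mathbf{k})\sqrt{\frac{k^{2}}{m^{2}}+1}\;.$$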
Scattering into cones for a free particle ----------------------------------------- Applying Lemma \[statphas\] to (\[ps\]) we choose: $$\mu=\lambda\sqrt{k^{2}+m^{2}} ;\; a=0 ;\; \mathbf{y}=\frac{\mathbf{k}}{\sqrt{k^{2}+m^{2}}} ;\; \chi(\mathbf{k}^{\prime})=(2\pi)^{-\frac{3}{2}}\widehat{\psi}(\mathbf{k}^{\prime})$$ and calculate the stationary point $k_{stat}$: $$\begin{aligned} \frac{\mathbf{k_{stat}}}{\sqrt{k_{stat}^{2}+m^{2}}}-\mathbf{y}&=&0 \\k_{stat}^{2}&=&y^{2}(k_{stat}^{2}+m^{2}) \\\\k_{stat}&=&\frac{ym}{\sqrt{1-y^{2}}}\end{aligned}$$ obtaining \[scintocones\] (“Scattering into cones”) There exists a constant $C<\infty $ so that for all $ \mathbf{k}\in{\mbox{${\rm I\!R}$}}^{3} $ $$\parallel\psi(\lambda\underline{k})-\frac{e^{-i\lambda m^{2}}}{(i\lambda)^{\frac{3}{2}}}\widehat{\psi}(\mathbf{k})\sqrt{\frac{k^{2}}{m^{2}}+1}\parallel_{s} \leq C\lambda^{-2}\;.$$ Note, that this implies $$\begin{aligned} \label{scatlim} \lim_{\lambda\rightarrow\infty}\sup_{\mathbf{k}}(\parallel\sqrt{\lambda}^{3}\psi(\lambda\underline{k})\parallel_{s}-\parallel \widehat{\psi}(\mathbf{k})\sqrt{\frac{k^{2}}{m^{2}}+1}\parallel_{s} )=0\;.\end{aligned}$$ For the flux-across-surfaces theorem we need the asymptotics of the relativistic quantum flux (\[relflux\]) of the particle. Since all the $\alpha_{l}$ are bounded matrices and $\widehat{\psi}\in {\cal G}$, we obtain from (\[relflux\]) and (\[scatlim\]) for the flux: $$\begin{aligned} \label{jscat0} \lim_{\lambda\rightarrow\infty}\sup_{\mathbf{k}}\mid\lambda^{3}j_{\hspace{0.01cm}l}(\lambda\underline{k})-\langle\widehat{\psi}(\mathbf{k}), \alpha_{l}\widehat{\psi}(\mathbf{k})\rangle(\frac{k^{2}}{m^{2}}+1)\mid=0\;.\end{aligned}$$ Next observe (see the appendix \[appendix2\]), that: $$\label{ridofalpha} \langle\widehat{\psi}(\mathbf{k}),\mathbf{\alpha}\widehat{\psi}(\mathbf{k})\rangle=\frac{\mathbf{k}}{\sqrt{k^{2}+m^{2}}}\langle\widehat{\psi}(\mathbf{k}),\widehat{\psi}(\mathbf{k})\rangle\;.$$ Thus we get the uniform bound: $$\begin{aligned} \label{jscat} \forall\varepsilon>0\hspace{1cm}\exists\lambda\in\mathbb{R}:\hspace{2.1cm}\nonumber\\ \sup_{\mathbf{k}}\mid\lambda^{3}\mathbf{j}(\lambda\underline{k})-\langle\widehat{\psi}(\mathbf{k}),\widehat{\psi}(\mathbf{k})\rangle \frac{\mathbf{k}}{m^{2}}\sqrt{k^{2}+m^{2}}\mid<\varepsilon\;.\end{aligned}$$ Observe, that after a long time of propagation, the flux at $\mathbf{x}=\lambda\mathbf{k}$ will always be parallel to $\mathbf{k}$. So in the limit $t\rightarrow\infty$ it will always point away from the origin of the coordinate system. Flux across surfaces for a free particle ---------------------------------------- Theorem \[frflsu\] reads in this case $$\begin{aligned} \label{limes2} \lim_{R\rightarrow\infty}\int_{S}\int_{0}^{\infty}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dt R^{2}d\Omega-\int_{S}\int_{0}^{\infty}\langle\widehat{\psi}(\mathbf{k}),\widehat{\psi}(\mathbf{k})\rangle k^{2}dkd\Omega=0\\end{aligned}$$ and $$\begin{aligned} \label{limes3} \lim_{R\rightarrow\infty}\int_{S}\int_{0}^{\infty}j(\mathbf{R},t)dt R^{2}d\Omega-\int_{S}\int_{0}^{\infty}\langle\widehat{\psi}(\mathbf{k}),\widehat{\psi}(\mathbf{k})\rangle k^{2}dkd\Omega=0\;.\end{aligned}$$ In the following, we will prove (\[limes2\]) by inserting the longtime asymptotic (\[jscat\]) for $\mathbf{j}$ and showing, that the integral of the error we get by this approximation tends to zero in the limit $R\rightarrow\infty$. Now, the long time asymptotic of $\mathbf{j}$ is parallel to the normal $\mathbf{n}$ of the radial surface. 
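To illustrate (\[ridofalpha\]) in the simplest special case (the general statement is proven in appendix \[appendix2\]), take $\mathbf{k}=(k,0,0)$ and $\widehat{\psi}(\mathbf{k})=\widehat{\psi}_{1}(\mathbf{k})s^{1}_{\mathbf{k}}$. With the matrices (\[alphas\]) one computes $$\alpha_{1}s^{1}_{\mathbf{k}}=(2E_{k}\widehat{E}_{k})^{-\frac{1}{2}}\begin{pmatrix} _{k} \\ _{0} \\ _{\widehat{E}_{k}} \\ _{0} \end{pmatrix}\hspace{1cm}\text{and hence}\hspace{1cm}\langle s^{1}_{\mathbf{k}},\alpha_{1}s^{1}_{\mathbf{k}}\rangle=\frac{2k\widehat{E}_{k}}{2E_{k}\widehat{E}_{k}}=\frac{k}{\sqrt{k^{2}+m^{2}}}\;,$$ which is the first component of (\[ridofalpha\]) for this wavefunction; the components of $\langle\widehat{\psi},\boldsymbol{\alpha}\widehat{\psi}\rangle$ orthogonal to $\mathbf{k}$ vanish in the same way.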
Therefore the longtime asymptotic of $j$ is equal to the longtime asymptotic of $\mathbf{j}\cdot\mathbf{n}$. More detailed, one sees that using the approximation (\[jscat\]) for $\mathbf{j}$ in (\[limes2\]) and (\[limes3\]), the bound on the error terms in (\[limes2\]) and (\[limes3\]) arising from (\[jscat\]) are equal. So the proof of (\[limes3\]) is essentially the same as for (\[limes2\]) and we shall concentrate only on showing (\[limes2\]). The left side of (\[limes2\]) includes an integral over t, whereas the right hand side is integrated over k. We therefore substitute for t in the first term, to get integration over k, too. Since $\lambda$ plays the role of a time parameter it is natural to substitute: $$\mathbf{k}=\frac{R\mathbf{n}}{\lambda}$$ with $$\lambda=\frac{\sqrt{t^{2}-R^{2}}}{m}\;.$$ But this substitution is only possible in the time-like region ($t\geq R$). So we first handle the integral starting at $t=R$, later we deal with the space-like part of the integral. Then, substituting t by k, we obtain $$\begin{aligned} \label{subst1} \int_{S}\int_{R}^{\infty}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dt R^{2}d\Omega & = & \int_{S}\int_{0}^{\infty}\mathbf{j}(\mathbf{R},\frac{R}{k}\sqrt{k^{2}+m^{2}})\cdot\mathbf{n}\frac{m^{2}}{\sqrt{k^{2}+m^{2}}}\frac{R^{3}}{k^{2}}dk d\Omega \nonumber \\ & = & \int_{S}\int_{0}^{\infty}\mathbf{j}(\lambda(k) k,\lambda(k)\sqrt{k^{2}+m^{2}})\cdot\mathbf{n}\frac{m^{2}}{\sqrt{k^{2}+m^{2}}}k\lambda(k)^{3}dkd\Omega\nonumber\;.\end{aligned}$$ The integrand is now in the form that we can replace it by the asymptotic in (\[jscat\]). It turns out however, that the error in the integrand will be $\sim\frac{k}{\sqrt{k^{2}+m^{2}}}$ which is not integrable, therefore the replacement is not straight forward. We separate large momenta $k>X$ and small momenta $k<X$. In the following we choose $X>m$. Given X and $R_{0}=\lambda_{0}X$ $$k\leq X\Leftrightarrow\frac{R_{0}}{k}=\lambda(k)\geq\lambda_{0}=\frac{R_{0}}{X}.$$ Then by (\[jscat\]) for small momenta ($k\leq X\Leftrightarrow t\geq R\sqrt{1+\frac{m^{2}}{X^{2}}}$): $$\forall\varepsilon>0\hspace{1cm}\exists R_{0}\in\mathrm{R}\hspace{1cm}\forall R\geq R_{0}\:$$ $$\begin{aligned} \label{insert} \lefteqn{\hspace{-1cm} \mid\int_{S}\int_{R\sqrt{1+\frac{m^{2}}{X^{2}}}}^{\infty}\big(\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dt R^{2}-\int_{0}^{X}\langle\widehat{\psi}(\mathbf{k}),\widehat{\psi}(\mathbf{k})\rangle k^{2}\big)dkd\Omega\mid}\\ &=& \mid\int_{S}\int_{0}^{X}\mathbf{j}(\lambda k,\lambda\sqrt{k^{2}+m^{2}})\cdot\mathbf{n}\frac{m^{2}}{\sqrt{k^{2}+m^{2}}}k\lambda^{3}-\langle\widehat{\psi}(\mathbf{k}),\widehat{\psi}(\mathbf{k})\rangle k^{2}dkd\Omega\mid\nonumber \\ &\leq&\int_{S}\int_{0}^{X}\frac{km^{2}\varepsilon}{\sqrt{k^{2}+m^{2}}} dkd\Omega=:\chi(X)\varepsilon\nonumber\end{aligned}$$ where $$\chi(X):=4\pi\int_{0}^{X}\frac{km^{2}}{\sqrt{k^{2}+m^{2}}}dk\;.$$ Given $X$ we can take $\varepsilon$ arbitrarily small, choosing $R_{0}$ large enough, so that the r.h.s. of (\[insert\]) goes to zero. 
Thus $$\label{in} \lim_{X\rightarrow\infty}\lim_{R\rightarrow\infty}\mid\int_{S}\int_{R\sqrt{1+\frac{m^{2}}{X^{2}}}}^{\infty}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dt R^{2}d\Omega-\int_{S}\int_{0}^{X}\langle\widehat{\psi}(\mathbf{k}),\widehat{\psi}(\mathbf{k})\rangle k^{2}dkd\Omega\mid=0\;.$$ For the large momenta note that by virtue of $\widehat{\psi}\in\mathcal{G}$: $$\label{bigX1} \lim_{X\rightarrow\infty}\int_{S}\int_{X}^{\infty}\langle\widehat{\psi}(\mathbf{k}),\widehat{\psi}(\mathbf{k})\rangle k^{2}dkd\Omega=0$$ and all it remains to show is that $$\label{bigXX} \lim_{X\rightarrow\infty}\lim_{R\rightarrow\infty}\int_{S}\int_{0}^{R\sqrt{1+\frac{m^{2}}{X^{2}}}}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dt R^{2}d\Omega=0$$ where we also included the time integration outside the light cone, which we excluded in the substitution. We first estimate the part of the integral (\[bigXX\]) that lies in the space-like region (more precisely: $t\in [0,R]$) then we estimate the time-like part near the light cone ( $t\in [R,R\sqrt{1+\frac{m^{2}}{X^{2}}}]$). That is, we first show that $$\label{othertimeb} \lim_{R\rightarrow\infty} \int\int_{0}^{R}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dtR^{2}d\Omega=0\;.$$ That this holds is physically related to the fact, that a particle moves slower than light, so for big time and space scales the main part of the wavefunction will be inside the light cone. This follows from a straightforward application of the stationary phase method, outside of the stationary point. Two partial integrations lead to: $$\begin{aligned} \parallel\psi(\mathbf{x},\eta x)\parallel_{s}&=&\parallel\int (2\pi)^{-\frac{3}{2}}e^{-ix(\sqrt{k^{2}+m^{2}}\eta-k_{1}) }\widehat{\psi}(\mathbf{k})d^{3}k\parallel_{s} \\&=&\parallel\int (2\pi)^{-\frac{3}{2}}e^{-ixg }\widehat{\psi}(\mathbf{k})d^{3}k\parallel_{s} \\&\leq&\frac{1}{x^{2}}\int\parallel(2\pi)^{-\frac{3}{2}} (\frac{\widehat{\psi}^{\prime\prime}}{g^{\prime2}}-\frac{3\widehat{\psi}^{\prime}g^{\prime\prime}}{g^{\prime 3}}+\frac{3\widehat{\psi} g^{\prime\prime2}}{g^{\prime4}}-\frac{\widehat{\psi} g^{\prime\prime\prime}}{g^{\prime3}})\parallel_{s}d^{3}k\end{aligned}$$ where $$g:=(\sqrt{k^{2}+m^{2}}\eta-k_{1})\hspace{1cm}f^{\prime}:=\partial_{k_{1}}f\;.$$ Since $$-g^{\prime}=1-\frac{k_{1}\eta}{\sqrt{k^{2}+m^{2}}}\geq 1-\frac{\mid k_{1}\mid}{\sqrt{k^{2}+m^{2}}}>0$$ it follows: $$\label{spacelike} \parallel\psi(\mathbf{x},\eta x)\parallel_{s} \leq(2\pi)^{-\frac{3}{2}}\frac{C_{2}}{x^{2}}$$ uniform in $\eta\leq 1$. Hence $$\begin{aligned} \lefteqn{\hspace{-1cm}\lim_{R\rightarrow\infty}\int\int_{0}^{R}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dtR^{2}d\Omega}\\&\leq& 4\pi\lim_{R\rightarrow\infty}\int_{0}^{R}\parallel\psi(\mathbf{x},t)\parallel_{s}^{2} dtR^{2}\leq\frac{1}{2\pi^{2}} C_{2}^{2}\lim_{R\rightarrow\infty}R^{3}\frac{1}{R^{4}}=0\end{aligned}$$ It is left to prove that the second part of the integral in (\[bigXX\]) goes to zero, i.e. 
that $$\lim_{X\rightarrow\infty}\lim_{R\rightarrow\infty}\int_{S}\int_{R}^{R\sqrt{1+\frac{m^{2}}{X^{2}}}}\mathbf{j}(R,t)\cdot\mathbf{n}dtR^{2}d\Omega=0\;.$$ The scalar norm of $\psi(\mathbf{x},t)$ is: $$\begin{aligned} \label{norm} \parallel\psi(\mathbf{x},t)\parallel_{s}&=&\parallel\int (2\pi)^{-\frac{3}{2}}e^{-i\sqrt{k^{2}+m^{2}}t+i\mathbf{k}\cdot \mathbf{x}}\widehat{\psi}d^{3}k\parallel_{s}\\&=&\parallel\int (2\pi)^{-\frac{3}{2}}e^{-i(\sqrt{k^{2}+m^{2}}-\mathbf{k}\cdot \mathbf{r})t}\widehat{\psi}d^{3}k\parallel_{s}\;.\end{aligned}$$ Applying Lemma \[statphas\] with $$\mu=t;\; a=0 ;\; \mathbf{y}=\mathbf{r} ;\; \chi(\mathbf{k}^{\prime})=(2\pi)^{-\frac{3}{2}}\widehat{\psi}(\mathbf{k}^{\prime})$$ we have by (\[hoerm\]) that: $$\begin{aligned} \parallel\int e^{-iE_{k}t+i\mathbf{k}\cdot\mathbf{x}}\widehat{\psi}(\mathbf{k})d^{3}k-C_{1}t^{-\frac{3}{2}}\widehat{\psi}(k_{stat})\parallel_{s}<C_{2}t^{-2}\;.\end{aligned}$$ As $\widehat{\psi}$ is bounded, we have: $$\begin{aligned} \label{wehave} \exists M\in{\mbox{${\rm I\!R}$}}:\forall t>R \hspace{1cm}\parallel\psi(\mathbf{x},t)\parallel_{s}=\parallel\int e^{-iE_{k}t+i\mathbf{k}\cdot\mathbf{x}}\widehat{\psi}(\mathbf{k})d^{3}k\parallel_{s}\leq Mt^{-\frac{3}{2}}\;.\end{aligned}$$ So $$\mid\int_{S}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}R^{2}d\Omega\mid\leq4\pi\frac{MR^{2}}{t^{3}}\;.$$ So we can write: $$\begin{aligned} \lefteqn{\hspace{-1cm}\mid\int_{S}\int_{R}^{R\sqrt{1+\frac{m^{2}}{X^{2}}}}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dt R^{2}d\Omega\mid}\\&\leq&2\pi MR^{2}\big(R^{-2}-R^{-2}(1+\frac{m^{2}}{X^{2}})^{-1}\big)=2\pi M\big(1-(1+\frac{m^{2}}{X^{2}})^{-1}\big)\;.\end{aligned}$$ This term goes to zero as $X\rightarrow\infty$. The flux-across-surfaces theorem with potential ----------------------------------------------- ### Generalized Eigenfunctions for the Dirac equation with potential For the proof of the free flux-across-surfaces theorem we used the $\varphi_{\mathbf{k}}^{s}$ as a basis of the Hilbert space. In the potential case we adopt a new basis for doing calculations. Like in the free case, we again get four linearly independent eigenfunctions for each $\mathbf{k}$; two of them have positive energy-eigenvalue $E_{k}^{eig}=E_{k}=\sqrt{k^{2}+m^{2}}$, and two of them have negative energy-eigenvalue $E_{k}^{eig}=-E_{k}$.
We denote by $\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$ the eigenfunctions with $s\in\{1,2\}$: $$\label{dgmp} E_{k}\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})=(H_{0}+A\hspace{-0.2cm}/)\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})\;.$$ The corresponding Lippmann-Schwinger equation reads: $$\label{EH} \widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})=\varphi_{\mathbf{k}}^{s}(\mathbf{x})+(E_{k}-H_{0})^{-1}A\hspace{-0.2cm}/\widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x})\;.$$ We replace the formal expression $(E_{k}-H_{0})^{-1}$ by the integral kernel $G^{+}_{k}$: $$\label{Green} (E_{k}-H_{0})G^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}})=\delta(\mathbf{x}-\mathbf{x^{\prime}})\;.$$ The explicit form for $G^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}})$ can be found in [@thaller]: $$\label{kernel} G^{+}_{k}(\mathbf{x})=\frac{1}{4\pi}e^{ikx}\left(-x^{-1}(E_{k}+\sum_{j=1}^{3}\alpha_{j}k\frac{x_{j}}{x}+\beta m) +x^{-2}\sum_{j=1}^{3}\alpha_{j}\frac{x_{j}}{x}\right)=:\frac{e^{ikx}}{x}S_{k}^{+}(\mathbf{x})\;.$$ Thus: $$\label{LSE} \widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})=\varphi_{\mathbf{k}}^{s}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})G^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}}) \widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x^{\prime}})d^{3}x^{\prime}\;.$$ For $S^{+}_{\mathbf{k}}$, defined in (\[kernel\]), we have: $$\begin{aligned} \mid\partial_{k}^{j}S^{+}_{\mathbf{k}}\mid&=&\mid\frac{1}{4\pi}\partial_{k}^{j}(-E_{k}-\sum_{j=1}^{3}\alpha_{j}k\frac{x_{j}}{x}-\beta m+x^{-1}\sum_{j=1}^{3}\alpha_{j}\frac{x_{j}}{x})\mid\\ &=&\mid\frac{1}{4\pi}\partial_{k}^{j}(E_{k}+\sum_{j=1}^{3}\alpha_{j}(k\frac{x_{j}}{x}-\frac{x_{j}}{x^{2}})+\beta m)\mid\end{aligned}$$ for $j=0,1,2$. Choosing $x\geq 1$ we have $$\frac{x_{j}}{x}\leq1 \hspace{1cm}\frac{x_{j}}{x^{2}}\leq1$$ and it follows that $$\mid\partial_{k}^{j}S^{+}_{\mathbf{k}}\mid\leq\mid\frac{1}{4\pi}\partial_{k}^{j}(E_{k}+\sum_{j=1}^{3}\alpha_{j}(k+1)+\beta m)\mid\;.$$ Thus with $$\label{sgreen} \widetilde{S}_{\mathbf{k}}^{+}:=\frac{1}{4\pi}(E_{k}+\sum_{j=1}^{3}\alpha_{j}(k+1)+\beta m)$$ we have: $$\label{sbound} \mid\partial_{k}^{j}S^{+}_{\mathbf{k}}\mid\leq\mid\partial_{k}^{j}\widetilde{S}_{\mathbf{k}}^{+}\mid$$ for $j=0,1,2$, $x\geq 1$. For the next steps we need some properties of the generalized eigenfunctions. We summarize these properties in the following Lemma, which is proven in Appendix \[appendix3\]: \[properties\] Let $A\hspace{-0.2cm}/$ satisfy Condition A (\[potcond\]). Then there exist unique solutions $\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$ of (\[LSE\]) for all $\mathbf{k}\in \mathbb{R}^{3}$, such that: (a) : For any $\mathbf{k}\in\mathbb{R}^{3}$, $s=1,2$ the functions $\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$ are Hölder continuous of degree 1 in $\mathbf{x}$ (b) : Any $\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$ which is a solution of (\[LSE\]) automatically satisfies (\[dgmp\]).
(c) : The functions $$\label{zeta} \zeta^{s}_{\mathbf{k}}(\mathbf{x}):=\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})-\varphi^{s}_{\mathbf{k}}(\mathbf{x})$$ are infinitely often continuously differentiable with respect to $k$, furthermore we have for $j\in\mathbb{N}$ and any multi-index $\gamma$ with $\mid\gamma\mid\leq2$ : $$\begin{aligned} i)&&\sup_{\mathbf{x}\in\mathbb{R}^{3}}\parallel x \zeta^{s}_{\mathbf{k}}(\mathbf{x})\parallel_{s}<\infty \\ii)&&\sup_{\mathbf{x}\in\mathbb{R}^{3}}\parallel\partial_{k}^{j}\frac{\zeta^{s}_{\mathbf{k}}(\mathbf{x})}{\mid x+1\mid^{j-1}}\parallel_{s}<\infty \\iii)&&\sup_{\mathbf{x}\in\mathbb{R}^{3}}\parallel k^{\mid\gamma\mid-1}D_{\mathbf{k}}^{\gamma}\frac{\zeta^{s}_{\mathbf{k}}(\mathbf{x})}{\mid x+1\mid^{j-1}}\parallel_{s}<\infty\;.\end{aligned}$$ (d) : The $\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$ form a basis of the space of scattering states, i.e. for scattering states $\psi(\mathbf{x},t)$: $$\label{hin} \psi(\mathbf{x},t)=\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t}\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k$$ $$\label{her} \widehat{\psi}_{out,s}(\mathbf{k})=\int(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi(\mathbf{x})\rangle d^{3}x$$ where $\widehat{\psi}_{out,s}(\mathbf{k})$ is the fourier transform of $\psi_{\text{out}}=\Omega_{+}\psi$. ### Flux-across-surfaces for the Dirac-equation with potential We prove now Theorem \[frflsu\]. As in the free case only the equality of the second and third integral is shown. From the nature of the estimates in the proof it will become evident, that essentially by the same argument as in the free case, the first equality can be established, and we do not say anything more to that. We again split our flux integral into two parts, one inside the light-cone (from $R$ to $\infty$) and one outside the light-cone (from $0$ to $R$), where the main contribution comes from the times $t>R$, i.e. 
we prove that $$\begin{aligned} \label{newsplit} i)&&\lim_{R\rightarrow\infty}\mid\int_{S}\int_{R}^{\infty}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dt R^{2}d\Omega-\int_{S}\int_{0}^{\infty}\langle\widehat{\psi}_{\text{out}}(\mathbf{k}),\widehat{\psi}_{\text{out}}(\mathbf{k})\rangle k^{2}dkd\Omega\mid=0 \nonumber \\ ii)&&\lim_{R\rightarrow\infty} \int\int_{0}^{R}\mathbf{j}(\mathbf{R},t)\cdot\mathbf{n}dtR^{2}d\Omega=0\end{aligned}$$ We start with i): According to (\[hin\]) $$\psi(\mathbf{x},t)=\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t}\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k\;.$$ Setting $$\widehat{\psi}_{\text{out}}(\mathbf{k^{\prime}})=\sum_{s=1}^{2}s^{s}_{\mathbf{k^{\prime}}}\widehat{\psi}_{out,s}(\mathbf{k^{\prime}})$$ and using (\[LSE\]) with (\[zeta\]) we get: $$\begin{aligned} \label{sum} \psi(\mathbf{x},t) & = & \int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t}e^{i\mathbf{k}\cdot\mathbf{x}}\widehat{\psi}_{\text{out}}(\mathbf{k})d^{3}k\nonumber\\ & & -\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t}\int \frac{e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}S^{+}_{k}(\mathbf{x}-\mathbf{x}^{\prime})A\hspace{-0.2cm}/(\mathbf{x}^{\prime})e^{i\mathbf{k}\cdot\mathbf{x}^{\prime}}d^{3}x^{\prime}\widehat{\psi}_{\text{out}}(\mathbf{k})d^{3}k \nonumber\\ & & -\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t}\int \frac{e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}S^{+}_{k}(\mathbf{x}-\mathbf{x}^{\prime})A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\zeta^{s}_{\mathbf{k}}(\mathbf{x}^{\prime})d^{3}x^{\prime}\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k \nonumber\\ &=:& S_{0}-S_{1}-S_{2}\;.\end{aligned}$$ $S_{0}$ is the propagation of the free outgoing state. The free Flux-Across-Surfaces-Theorem yields therefore: $$\lim_{R\rightarrow\infty}\mid\int_{S}\int_{R}^{\infty}\langle S_{0},\mathbf{\alpha}S_{0}\rangle\cdot\mathbf{n}dt R^{2}d\Omega-\int_{S}\int_{0}^{\infty}\langle\widehat{\psi}_{\text{out}}(\mathbf{k}),\widehat{\psi}_{\text{out}}(\mathbf{k})\rangle k^{2}dkd\Omega\mid=0\;.$$ Hence for (\[newsplit\])(i) it remains to show, that (using \[relflux\]): $$\begin{aligned} \lefteqn{\hspace{-1cm} \lim_{R\rightarrow\infty}\int_{S}\int_{R}^{\infty}(\mathbf{j}(R,t)-\langle S_{0},\mathbf{\boldsymbol{\alpha}}S_{0}\rangle)\cdot\mathbf{n}dt R^{2}d\Omega} \\&=&\lim_{R\rightarrow\infty}\int_{S}\int_{R}^{\infty}(\langle \sum_{j=0}^{2}S_{j},\mathbf{\boldsymbol{\alpha}}\sum_{j=0}^{2}S_{j}\rangle-\langle S_{0},\mathbf{\boldsymbol{\alpha}}S_{0}\rangle)\cdot\mathbf{n}dt R^{2}d\Omega \\&=&\lim_{R\rightarrow\infty}\int_{S}\int_{R}^{\infty}(\langle \psi,\mathbf{\boldsymbol{\alpha}}\sum_{j=1}^{2}S_{j}\rangle+\langle \sum_{j=1}^{2}S_{j},\mathbf{\boldsymbol{\alpha}}\psi\rangle)\cdot\mathbf{n}dt R^{2}d\Omega=0\;.\end{aligned}$$ By Schwartz-inequality we need only show: $$\label{summesa} \lim_{R\rightarrow\infty}\int_{S}\int_{R}^{\infty} \parallel\psi\parallel_{s} \sum_{j=1}^{2}\parallel S_{j}\parallel_{s}dt R^{2}d\Omega=0\;.$$ We first want to estimate $\parallel S_{1}\parallel_{s}$. 
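Before doing so, we record an elementary computation that fixes the strategy (added as a reading aid): since $$\int_{R}^{\infty}t^{-3}dt\;R^{2}=\frac{1}{2}\;,$$ it suffices to establish bounds of the form $\parallel S_{j}\parallel_{s}\leq M_{j}t^{-\frac{3}{2}}P_{j}(\mathbf{x})$ with $P_{j}(\mathbf{x})\rightarrow0$ as $x\rightarrow\infty$, together with $\parallel\psi\parallel_{s}\leq Mt^{-\frac{3}{2}}$; this is exactly how (\[summesa\]) will be concluded at the end of this subsection.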
Recalling (\[sum\]) we get by Fubinis theorem: $$\begin{aligned} S_{1}&=&\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t}\int \frac{e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}S^{+}_{k}(\mathbf{x}-\mathbf{x}^{\prime})A\hspace{-0.2cm}/(\mathbf{x}^{\prime})e^{i\mathbf{k}\cdot\mathbf{x}^{\prime}}d^{3}x^{\prime}\widehat{\psi}_{\text{out}}(\mathbf{k})d^{3}k \\&=&\int\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t} \frac{e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}S^{+}_{k}(\mathbf{x}-\mathbf{x}^{\prime})A\hspace{-0.2cm}/(\mathbf{x}^{\prime})e^{i\mathbf{k}\cdot\mathbf{x}^{\prime}}d^{3}x^{\prime}\widehat{\psi}_{\text{out}}(\mathbf{k})d^{3}k \\&=:&\int(2\pi)^{-\frac{3}{2}} \frac{1}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}\widetilde{S}_{1}(\mathbf{x},\mathbf{x}^{\prime})A\hspace{-0.2cm}/(\mathbf{x}^{\prime})d^{3}x^{\prime}\end{aligned}$$ where $$\label{innen} \widetilde{S}_{1}(\mathbf{x},\mathbf{x}^{\prime})=\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t} e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}S^{+}_{k}(\mathbf{x}-\mathbf{x}^{\prime})e^{i\mathbf{k}\cdot\mathbf{x}^{\prime}}d^{3}x^{\prime}\widehat{\psi}_{\text{out}}(\mathbf{k})d^{3}k\;.$$ Next we use Lemma \[statphas\], setting: $$\mu=t;\; a=t^{-1}\mid\mathbf{x}-\mathbf{x}^{\prime}\mid ;\; \mathbf{y}=t^{-1}\mathbf{x}^{\prime} ;\; k^{\prime}=k;\; \chi(\mathbf{k}^{\prime})=(2\pi)^{-\frac{3}{2}}S^{+}_{\mathbf{k}}(\mathbf{x}-\mathbf{x}^{\prime})\widehat{\psi}(\mathbf{k}^{\prime})\;.$$ With regard to (\[sbound\]), the function $$\widetilde{\chi}(\mathbf{k})=(2\pi)^{-\frac{3}{2}}\widetilde{S}_{\mathbf{k}}^{+}\widehat{\psi}(\mathbf{k}^{\prime})$$ satisfies the properties we need in (\[hoerm\]). Furthermore we observe that for the stationary point: $$\begin{aligned} \frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}}+a-y&=&0\nonumber\\ k_{stat}&=& \sqrt{k_{stat}^{2}+m^{2}}(y-a)\;.\end{aligned}$$ So we can estimate $k_{stat}$ by: $$\begin{aligned} \label{kstat} k_{stat}&=&\sqrt{k_{stat}^{2}+m^{2}}t^{-1}(x^{\prime}-\mid \mathbf{x}-\mathbf{x}^{\prime}\mid) \leq \sqrt{k_{stat}^{2}+m^{2}}xt^{-1}\;.\end{aligned}$$ Hence by (\[hoerm\]) we obtain for (\[innen\]) that there exists $M_{1}<\infty$, bounding in particular $\sqrt{k_{stat}^{2}+m^{2}}\widehat{\chi}(\mathbf{k}_{stat)}$, which is bounded by the choice of $\widehat{\psi}_{\text{out}}\in\mathcal{G}$ and incorporating also the constants $C_{1}$ and $C_{2}$, uniformly in $\mathbf{y}$ and $a$ so that: $$\begin{aligned} \label{seins} \parallel S_{1}\parallel_{s}&\leq&\parallel M_{1}t^{-\frac{3}{2}}(1+x^{\frac{1}{2}})\int \frac{1}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid} A\hspace{-0.2cm}/(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s} \nonumber\\&=&M_{1}t^{-\frac{3}{2}}P_{1}(\mathbf{x})\rightarrow_{x\rightarrow\infty}0\;.\end{aligned}$$ That the function $P_{1}$ goes to zero in the limit $x\rightarrow\infty$ may be seen as follows:For any function $f(\mathbf{x})\in L^{1}$ with $\limsup_{x\rightarrow\infty}\mid x^{3}f(\mathbf{x})\mid<\infty$ we have: $$\begin{aligned} \label{decayofint} &&\lim_{x\rightarrow\infty}x\mid\int\frac{1}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}f(\mathbf{x}^{\prime})d^{3}x^{\prime}\mid\nonumber\\&\leq& \lim_{x\rightarrow\infty}x\int\mid\frac{1}{x^{\prime}}f(\mathbf{x}-\mathbf{x}^{\prime})\mid d^{3}x^{\prime} \nonumber\\&=&\lim_{x\rightarrow\infty}x\big(\int_{B(0,\frac{x}{2})}\mid\frac{1}{x^{\prime}}f(\mathbf{x}-\mathbf{x}^{\prime})\mid d^{3}x^{\prime}+\int_{\mathbb{R}^{3}\backslash 
B(0,\frac{x}{2})}\mid\frac{1}{x^{\prime}}f(\mathbf{x}-\mathbf{x}^{\prime})\mid d^{3}x^{\prime}\big) \nonumber\\&\leq&\lim_{x\rightarrow\infty}x\big( \sup_{\widetilde{x}\geq\frac{x}{2}}\{\mid f(\mathbf{\widetilde{x}})\mid\}\int_{B(0,\frac{x}{2})}\frac{1}{x^{\prime}} d^{3}x^{\prime}+\frac{2}{x}\int_{\mathbb{R}^{3}\backslash B(0,\frac{x}{2})}\mid f(\mathbf{x}-\mathbf{x}^{\prime})\mid d^{3}x^{\prime}\big) \nonumber\\&\leq&\lim_{x\rightarrow\infty}\frac{1}{8}x^{3}\sup_{\widetilde{x}\geq\frac{x}{2}}\{\mid f(\mathbf{\widetilde{x}})\mid\} +\lim_{x\rightarrow\infty}2\int_{\mathbb{R}^{3}\backslash B(0,\frac{x}{2})}\mid f(\mathbf{x}-\mathbf{x}^{\prime})\mid d^{3}x^{\prime}<\infty\end{aligned}$$ where $B(\mathbf{a},r)$ means the ball with center $\mathbf{a}$ and radius r. Next we estimate $\parallel S_{2}\parallel_{s}$. According to (\[sum\]) we can write it down as: $$S_{2}=\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{ 2}+m^{2}}t}\int \frac{e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}S^{+}_{k}(\mathbf{x}-\mathbf{x}^{\prime})(x^{\prime}+1)A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\frac{\zeta^{s}_{\mathbf{k}}(\mathbf{x}^{\prime})}{x^{\prime}+1}d^{3}x^{\prime}\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k\;.$$ Therefore we again use Lemma \[statphas\], setting: $$\mu=t ;\; a=t^{-1}(\mid\mathbf{x}-\mathbf{x}^{\prime}\mid) ;\; \mathbf{y}=0 ;\; k^{\prime}=k;\; \chi(\mathbf{k}^{\prime})=(2\pi)^{-\frac{3}{2}}\sum_{s=1}^{2}\frac{\zeta^{s}_{\mathbf{k}}(\mathbf{x}^{\prime} )}{x^{\prime}+1}S^{+}_{\mathbf{k}}(\mathbf{x}-\mathbf{x}^{\prime})\widehat{\psi}_{out,s}(\mathbf{k}^{\prime})\;.$$ With regard to (\[sbound\]) and Lemma \[properties\](c) there exists a $M_{2}<\infty$, so that the function $$\widetilde{\chi}=(2\pi)^{-\frac{3}{2}}M_{2}\widetilde{S}_{\mathbf{k}}^{+}\widehat{\psi}(\mathbf{k}^{\prime})$$ satisfies the properties we need in (\[hoerm\]). Since our phase function has no stationary point we get with (\[hoerm\]): $$\begin{aligned} \label{szwei} \parallel S_{2}\parallel_{s}\leq M_{2}t^{-2}\mid\int \frac{1}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid} (x^{\prime}+1)A\hspace{-0.2cm}/(\mathbf{x}^{\prime})d^{3}x^{\prime}\mid \nonumber=M_{2}t^{-2}P_{2}(\mathbf{x})_{\overrightarrow{x\rightarrow\infty}}0\;.\end{aligned}$$ Choosing $(x^{\prime}+1)A\hspace{-0.2cm}/(\mathbf{x}^{\prime})$ for $f$ in the most left side of (\[decayofint\]), one can see, that $xP_{2}(\mathbf{x})$ is bounded, so $P_{2}$ goes to zero in the limit $x\rightarrow\infty$. Since $S_{0}$ is the analogue of the freely evolving wavefunction, we have by Corollary \[scintocones\]: $$\label{snull} \parallel S_{0}\parallel_{s}\leq M_{0}t^{-\frac{3}{2}}\;.$$ We use the estimates (\[snull\]), (\[seins\]) and (\[szwei\]) in the right side of (\[summesa\]) and get, defining $M:=M_{0}+M_{1}+M_{2}$: $$\begin{aligned} \lefteqn{\hspace{-1cm}\lim_{R\rightarrow\infty}\mid\int_{S}\int_{R}^{\infty} (\parallel\psi\parallel_{s}\parallel\sum_{j=1}^{2}S_{j}\parallel_{s})dt R^{2}d\Omega\mid}\\&\leq&\lim_{R\rightarrow\infty}\int_{R}^{\infty}M^{2}(P_{1}(\mathbf{R})+P_{2}(\mathbf{R}))t^{-3}dt R^{2}\leq\lim_{R\rightarrow\infty}3M^{2}(P_{1}(\mathbf{R})+P_{2}(\mathbf{R}))=0\end{aligned}$$ and (\[summesa\]) is proved. Like in the free case, (\[newsplit\] ii) follows directly from an analogous argument which used equation (\[spacelike\]), thus we prove (\[spacelike\]) for the case at hand. 
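For clarity we state the bound that is to be established (the exact analogue of (\[spacelike\]) for the scattering state at hand): there is a constant $C<\infty$ such that $$\parallel\psi(\mathbf{x},\eta x)\parallel_{s}\leq\frac{C}{x^{2}}$$ uniformly in $0\leq\eta\leq 1$; exactly as in the free case this implies (\[newsplit\] ii).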
Since in (\[newsplit\] ii) we need only estimates of the wavefunction for times $t\leq x$ we have in view of (\[sum\]), setting $t=\eta x$ with $0\leq\eta\leq 1$ and using Fubinis Theorem: $$\begin{aligned} \psi(\mathbf{x},\eta x)&=&\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{2}+m^{2}}\eta x+i\mathbf{k}\cdot\mathbf{x}}\widehat{\psi}_{\text{out}}(\mathbf{k})d^{3}k\\&&-\int\int e^{-i\sqrt{k^{2}+m^{2}}\eta x+ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid+i\mathbf{k}\cdot\mathbf{x}^{\prime}}\frac{A\hspace{-0.2cm}/(\mathbf{x}^{\prime})S^{+}_{\mathbf{k}}(\mathbf{x}-\mathbf{x}^{\prime})\widehat{\psi}_{\text{out}}(\mathbf{k})}{(2\pi)^{\frac{3}{2}}\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}d^{3}kd^{3}x^{\prime} \\&&-\sum_{s=1}^{2}\int\int e^{-i\sqrt{k^{2}+m^{2}}\eta x+ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}\frac{A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\zeta^{s}_{\mathbf{k}}(\mathbf{x}^{\prime})S^{+}_{\mathbf{k}}(\mathbf{x}-\mathbf{x}^{\prime})\widehat{\psi}_{out,s}(\mathbf{k})}{(2\pi)^{\frac{3}{2}}\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}d^{3}kd^{3}x^{\prime} \\&=:&S_{0}-S_{1}-S_{2}\;.\end{aligned}$$ For $S_{0}$ we have (\[spacelike\]), for the other summands we define: $$\begin{aligned} \widetilde{S}_{1}&:=&\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{2}+m^{2}}\eta x+ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid+i\mathbf{k}\cdot\mathbf{x}^{\prime}}S^{+}_{\mathbf{k}}(\mathbf{x}-\mathbf{x}^{\prime})\widehat{\psi}_{\text{out}}(\mathbf{k})d^{3}k \\\widetilde{S}_{2}&:=&\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} e^{-i\sqrt{k^{2}+m^{2}}\eta x+ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid+i\mathbf{k}\cdot\mathbf{x}^{\prime}}e^{-i\mathbf{k}\cdot\mathbf{x}^{\prime}}\zeta^{s}_{\mathbf{k}}(\mathbf{x}^{\prime})S^{+}_{\mathbf{k}}(\mathbf{x}-\mathbf{x}^{\prime})\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k\;.\end{aligned}$$ So we have for $S_{j}$, j=1;2: $$S_{j}=\int\widetilde{S}_{j}\frac{A\hspace{-0.2cm}/(\mathbf{x}^{\prime})}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}d^{3}x^{\prime}\;.$$ We can estimate the $\widetilde{S}_{j}$ by two partial integrations. One can easily see, that the phase functions of $\widetilde{S}_{j}$ have no stationary point. 
This leads to: $$\begin{aligned} \parallel\widetilde{S}_{j}\parallel_{s}&=&\parallel\int(2\pi)^{-\frac{3}{2}} e^{-ixg(\mathbf{k})}\chi_{j}(\mathbf{x},\mathbf{x}^{\prime},\mathbf{k})d^{3}k\parallel_{s} \\&=&\frac{1}{x^{2}}\parallel\int(2\pi)^{-\frac{3}{2}} e^{-ixg(\mathbf{k})}\partial_{k_{1}}(\frac{1}{g^{\prime}}\partial_{k_{1}}\frac{\chi_{j}}{g^{\prime}})d^{3}k\parallel_{s} \\&=&\frac{1}{x^{2}}\parallel\int(2\pi)^{-\frac{3}{2}} (\frac{\chi_{j}^{\prime\prime}}{g^{\prime2}}-\frac{3\chi_{j}^{\prime}g^{\prime\prime}}{g^{\prime 3}}+\frac{3\chi_{j} g^{\prime\prime2}}{g^{\prime4}})d^{3}k\parallel_{s}\end{aligned}$$ where $$\begin{aligned} g(\mathbf{k})&:=&\sqrt{k^{2}+m^{2}}\eta -k\frac{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}{x}-\mathbf{k}\cdot\frac{\mathbf{x}^{\prime}}{x}\\ \chi_{1}(\mathbf{x},\mathbf{x}^{\prime},\mathbf{k})&:=&S^{+}_{\mathbf{k}}(\mathbf{x}-\mathbf{x}^{\prime})\widehat{\psi}_{\text{out}}(\mathbf{k}) \\\chi_{2}(\mathbf{x},\mathbf{x}^{\prime},\mathbf{k})&:=&\sum_{s=1}^{2}e^{-i\mathbf{k}\cdot\mathbf{x}^{\prime}}\zeta^{s}_{\mathbf{k}}(\mathbf{x}^{\prime})S^{+}_{\mathbf{k}}(\mathbf{x}-\mathbf{x}^{\prime})\widehat{\psi}_{out,s}(\mathbf{k}) \\g^{\prime}&:=&\partial_{k_{1}}g\;.\end{aligned}$$ Since $$\begin{aligned} \mid g^{\prime}\mid&=&\frac{x^{\prime}}{x}+\frac{k_{1}\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}{kx}-\frac{k_{1}\eta}{\sqrt{k^{2}+m^{2}}}\geq\frac{k_{1}}{k}(\frac{x^{\prime}}{x}+\frac{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}{x}-\frac{k}{\sqrt{k^{2}+m^{2}}}) \\&\geq&\frac{k_{1}}{k}(1-\frac{k}{\sqrt{k^{2}+m^{2}}})>0\;.\end{aligned}$$ $g^{\prime\prime}$ is bounded and due to Lemma \[properties\] the $\chi_{j}$ are bounded, we can find $C_{2}<\infty$ with: $$\sum_{j=1}^{2}\widetilde{S}_{j}\leq \frac{C_{2}}{x^{2}}\;.$$ So $x^{2}\sum_{j=1}^{2}S_{j}$ is bounded (see \[decayofint\]) and the analogue of (\[spacelike\]) is proved. Appendix ======== Proof of Lemma(\[statphas\]) {#appendix1} ---------------------------- We consider for a family of phase functions g, which we should think of being indexed by $a\geq0,\mathbf{y}$: $$g(\mathbf{k})=\sqrt{k^{2}+m^{2}}+a\mid k\mid-\mathbf{y}\cdot\mathbf{k}$$ the integral $$I:=\int e^{-i\mu g(\mathbf{k})}\chi(\mathbf{k})d^{3}k$$ where $\chi\in\mathcal{G}$ (see\[definitionG\]). We shall find its asymptotic behavior as a function of $\mu$. In major parts we will recall the proof of theorem 7.7.5 in the book of Hörmander [@hoer], which unfortunately is formulated for compactly supported $\chi$ and which moreover does not give uniformity over the family, i.e. uniformity in $a,\mathbf{y}$ which we need. The compactness can easily be handled but for the uniformity we must invoke the special form of the family of phase functions $g$ and we shall give the argument here. The stationary points of the phase functions are given by: $$\begin{aligned} \label{kstat2} g^{\prime}({\mathbf{k}}_{stat})&=&\frac{\mathbf{k}_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}} +a\frac{\mathbf{k}_{stat}}{k_{stat}}-\mathbf{y}=0 \nonumber\\ k^{2}_{stat}&=&(k_{stat}^{2}+m^{2})(y-a)^{2} \nonumber\\ k_{stat}&=&\frac{m(y-a)}{\sqrt{1-(y-a)^{2}}} \nonumber\\\mathbf{k}_{stat}&\parallel&\mathbf{y}\;.\end{aligned}$$ Since $k_{stat}$ is a function of $a$ and $\mathbf{y}$, we sometimes use the phrase: uniform in $k_{stat}$ to express uniformity in $a$ and $\mathbf{y}$. (I) For $y\geq a+1$ there is no stationary point and for $y=a$ the stationary point is at $k_{stat}=0$. First we handle the family where $y \in [a+\frac{1}{2};a+1[$. 
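As an orientation for what follows (immediate from (\[kstat2\]) and added only to motivate the case distinction): for fixed $a$ the stationary point $$k_{stat}=\frac{m(y-a)}{\sqrt{1-(y-a)^{2}}}$$ is an increasing function of $y-a$ on $[0,1)$ and diverges as $y-a\rightarrow1$; it is this possible growth of $k_{stat}$ which makes uniformity in $a$ and $\mathbf{y}$ delicate in the regime treated first.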
These phase-functions do exactly have one stationary point bounded away from zero: $$\label{ungleichungk} k_{stat}=\frac{m(y-a)}{\sqrt{1-(y-a)^{2}}}\geq\frac{m}{\sqrt{3}}$$ Later we will handle phase functions, where the stationary point is close to zero and phase-functions without stationary point. We choose a coordinate system, where the $k_{1}$-direction is parallel to $\mathbf{y}$. So the stationary points will have the coordinates $(k_{stat},0,0)$. To estimate the integral, we separate from the integral the contribution coming from near the stationary point. This part of integral includes the leading term. Therefore we define a smooth function $\rho_{k_{stat}}$ which is one near the stationary points and zero away from the stationary point. (We shall omit further on for ease of notation the index $k_{stat}$). More precisely we define the compact set Q by: $$\mathbf{k}\in Q\Leftrightarrow k_{1}\in[\frac{k_{stat}}{2},2k_{stat}] \wedge k_{2},k_{3}\in[-1,1]$$ and choose $$\label{rho} \rho(\mathbf{k}):=1 \hspace{1cm}\forall \mathbf{k}\in Q$$ falling quickly off to zero outside of Q, lets say $$\label{qepsilon} \rho(\mathbf{k}):=0 \hspace{1cm}\forall k\notin Q_{\varepsilon}$$ where $Q_{\varepsilon}$ is some $\varepsilon$-neighborhood of $Q$ for some $\varepsilon>0$. With the help of $\rho$ we can split $\chi=\chi_{1}+\chi_{2}$ by defining: $$\label{chi} \begin{tabular}{ll} $\chi_{1}:=\rho\chi\nonumber$ & \hspace{1cm}$\chi_{2}:=(1-\rho)\chi$ \\ $I_{1}:=\int e^{-i\mu g(\mathbf{k})}\chi_{1}(\mathbf{k})d^{3}k\nonumber$ & \hspace{1cm}$I_{2}:=\int e^{-i\mu g(\mathbf{k})}\chi_{2}(\mathbf{k})d^{3}k\;.$ \\ \end{tabular}$$ This split has the following advantages: The compactly supported $\chi_{1}$ includes the stationary point, so $I_{1}$ can be estimated the same way as in Hörmanders theorem, but with focus on the uniformity of the estimates. $\chi_{2}$ is zero near the stationary point, so $I_{2}$ can be easily estimated by partial integrations. $\rho$ has been defined in such a way, that we may estimate the terms we get by the partial integrations uniform in $k_{stat}.$ We start with $I_{1}$. We move the stationary point to the center of our coordinate system setting $\mathbf{k}^{\prime}:=\mathbf{k}-\mathbf{k}_{stat}$, i.e. $g(\mathbf{k})$ becomes $\widetilde{g}(\mathbf{k}^{\prime})=g(\mathbf{k}^{\prime}+\mathbf{k}_{stat})$. Slightly abusing notation we simply write $g(\mathbf{k}^{\prime})$ for $\widetilde{g}$. By Taylor’s formula we obtain a function $f$: $$\label{taylorg} g(\mathbf{k}^{\prime})=g(\mathbf{k}^{\prime}=0)+\sum_{\mid\gamma\mid=2}\frac{D_{\mathbf{k}^{\prime}}^{\gamma}g(\mathbf{k}^{\prime} =0)\mathbf{k}^{\prime\gamma}}{\gamma!}+f(\mathbf{k}^{\prime})\;,$$ where $\frac{f(\mathbf{k}^{\prime})}{k^{\prime 3}}$ bounded. 
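Note (a one-line check): there is no first-order term in this expansion, since $\mathbf{k}^{\prime}=0$ corresponds to the stationary point, so that $$\nabla_{\mathbf{k}^{\prime}}g(\mathbf{k}^{\prime}=0)=\nabla_{\mathbf{k}}g(\mathbf{k}_{stat})=0$$ by (\[kstat2\]); $f$ is simply the third-order Taylor remainder of $g$.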
Computing the second-order terms of $g(\mathbf{k}^{\prime})$ we find that only diagonal terms survive at $(k_{stat},0,0)$ and $$\begin{aligned} \label{partialjj} \partial_{k^{\prime}_{j}}^{2}g(\mathbf{k}^{\prime}=0) &=&\partial_{k_{j}}^{2}g(\mathbf{k}=\mathbf{k}_{stat}) \nonumber\\&=&(\partial_{k_{j}}(\frac{k_{j}}{\sqrt{k^{2}+m^{2}}} +a\frac{k_{j}}{k}-y_{j}))\mid_{\mathbf{k}=\mathbf{k}_{stat}} \nonumber\\&=&(\frac{k^{2}-k_{j}^{2}+m^{2}}{\sqrt{k^{2}+m^{2}}^{3}} +a\frac{k^{2}-k_{j}^{2}}{k^{3}})\mid_{\mathbf{k}=\mathbf{k}_{stat}}\end{aligned}$$ so that $$\begin{aligned} \label{d2g} \partial_{k^{\prime}_{j}}^{2}g(\mathbf{k}^{\prime}=0) &=&\frac{k_{stat}^{2}+m^{2}}{\sqrt{k_{stat}^{2}+m^{2}}^{3}} +a\frac{1}{k_{stat}}\,\,\mbox{for}\, j=2,3 \nonumber\\ \partial_{k^{\prime}_{1}}^{2}g(\mathbf{k}^{\prime}=0) &=&\frac{m^{2}}{\sqrt{k_{stat}^{2}+m^{2}}^{3}}\;.\end{aligned}$$ We define: $$\label{triangle} g_{2}(\vartheta,\theta):= \frac{\sum_{j=1}^{3}\partial_{k^{\prime}_{j}}^{2}g(\mathbf{k}^{\prime}=0 )k^{\prime2}_{j}}{k^{\prime2}}\;.$$ By this definition, $g_{2}$ depends only on the angular, not on the radial, coordinate of $\mathbf{k}^{\prime}$. Using (\[triangle\]) in (\[taylorg\]), we may write $$\label{gkstrich} g(\mathbf{k}^{\prime})=g(0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) +f(\mathbf{k'})\;.$$ Furthermore for $s\in[0,1]$ set: $$\label{gs} g_{s}:=g(0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) +sf(\mathbf{k'})$$ and $$I(s)=\int e^{-i\mu g_{s}(\mathbf{k}^{\prime})}\chi_{1}(\mathbf{k}^{\prime})d^{3}k^{\prime}\;.$$ Note that $g=g_{1}$, $I_{1}=I(1)$. By Taylor’s formula there exists $\xi\leq 1$ so that: $$\label{xx} I_{1}=I(1)=I(0)+\partial_{s}I(s)\mid_{\xi}\;.$$ We begin with $I(0)$, introducing spherical coordinates. With slight abuse of notation (leaving the notation for the functions unchanged): $$I(0)=\int e^{-i\mu(g(0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta)) }\chi_{1}(k^{\prime},\vartheta,\theta)k^{\prime2}dk^{\prime}d\Omega\;.$$ Writing $\chi_{1}=\chi(k^{\prime}=0)+\widetilde{\chi}$ the integral splits into: $$\begin{aligned} \label{xxx} I(0)&=&\int e^{-i\mu(g(0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) ) }\chi(k^{\prime}=0)k^{\prime2}dk^{\prime}d\Omega\nonumber\\&+&\int e^{-i\mu(g(0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta)) }\widetilde{\chi}(k^{\prime},\vartheta,\theta)k^{\prime2}dk^{\prime}d\Omega=:I_{1}^{1}+I_{1}^{2}\;.\end{aligned}$$ The integral $I_{1}^{1}$ is a Gaussian integral, which includes the leading term: $$\begin{aligned} \label{I11} I_{1}^{1}&=&\int e^{-i\mu(g(0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) ) }\chi(k^{\prime}=0)k^{\prime2}dk^{\prime}d\Omega \nonumber\\&=&\int e^{-i\mu\sum_{j=1}^{3}\frac{1}{2}\partial^{2}_{k^{\prime}_{j}}g(\mathbf{k}^{\prime}=0)k_{j}^{\prime2}}e^{-i\mu g(0) }\chi(\mathbf{k}^{\prime}=0)d^{3}k^{\prime} \nonumber\\&=&(2\pi)^{\frac{3}{2}}\mu^{-\frac{3}{2}}e^{-i\mu g(0) }(\prod_{j=1}^{3}\partial^{2}_{k^{\prime}_{j}}g(\mathbf{k}^{\prime}=0))^{-\frac{1}{2}}\chi(\mathbf{k}_{stat})\;.\end{aligned}$$ For $a=0$ the $\partial^{2}_{k^{\prime}_{j}}g(\mathbf{k}^{\prime}=0)$ terms can be easily calculated.
We get: $$\begin{aligned} \partial^{2}_{k^{\prime}_{j}}g(\mathbf{k}^{\prime}=0)&=&\partial^{2}_{k^{\prime}_{j}}g(\mathbf{k}=\mathbf{k}_{stat}) \\&=&\partial_{k^{\prime}_{j}}\frac{k_{j}}{\sqrt{k^{2}+m^{2}}}\mid_{\mathbf{k}=\mathbf{k}_{stat}}=\frac{k^{2}+m^{2}-k_{j}^{2}}{\sqrt{k^{2}+m^{2}}^{3}}\mid_{\mathbf{k}=\mathbf{k}_{stat}}\;.\end{aligned}$$ So we get: $$\prod_{j=1}^{3}\partial^{2}_{k^{\prime}_{j}}g(\mathbf{k}^{\prime}=0)=\frac{m^{2}(k_{stat}^{2}+m^{2})^{2}}{\sqrt{k_{stat}^{2}+m^{2}}^{9}}=\frac{m^{2}}{\sqrt{k_{stat}^{2}+m^{2}}^{5}}\;.$$ $I_{1}^{1}$ is the leading term of our integral. For $a=0$ we get the desired value for $C_{1}$ (\[statphas\]). For $I_{1}^{2}$ put: $$\phi(k^{\prime},\vartheta,\theta):=\widetilde{\chi}(k^{\prime},\vartheta,\theta)k^{\prime-1}$$ which is bounded and smooth. $$\label{ieinszwei} I_{1}^{2}=\int e^{-i\mu(g(0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) ) }\phi(k^{\prime},\vartheta,\theta)k^{\prime3}dk^{\prime}d\Omega\;.$$ One partial integration leads to: $$\begin{aligned} \parallel I_{1}^{2}\parallel_{s}&=&\mu^{-1}\parallel\int e^{-i\mu\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) }\partial_{k^{\prime}}\frac{\phi(k^{\prime},\vartheta,\theta)k^{\prime3}}{k^{\prime}g_{2}(\vartheta,\theta) }dk^{\prime}d\Omega\parallel_{s}\\&=&\mu^{-1}\parallel\int e^{-i\mu\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) }\frac{\partial_{k^{\prime}}\phi(k^{\prime},\vartheta,\theta)k^{\prime2}+2\phi(k^{\prime},\vartheta,\theta)k^{\prime}}{g_{2}(\vartheta,\theta) }dk^{\prime}d\Omega\parallel_{s}\;.\end{aligned}$$ So another partial integration is possible: $$\begin{aligned} \label{i12} \parallel I_{1}^{2}\parallel_{s}&=&\mu^{-2}\parallel\int e^{-i\mu\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) }\partial_{k^{\prime}}(\frac{\partial_{k^{\prime}}\phi(k^{\prime},\vartheta,\theta)k^{\prime2}+2\phi(k^{\prime},\vartheta,\theta)k^{\prime}}{ k^{\prime}(g_{2}(\vartheta,\theta))^{2}})dk^{\prime}d\Omega\parallel_{s}\nonumber\\&=&\mu^{-2}\parallel\int e^{-i\mu\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) }\partial_{k^{\prime}}(\frac{\partial_{k^{\prime}}\phi(k^{\prime},\vartheta,\theta)k^{\prime}+2\phi(k^{\prime},\vartheta,\theta)}{ (g_{2}(\vartheta,\theta))^{2}})dk^{\prime}d\Omega\parallel_{s} \nonumber\\&\leq&\mu^{-2}\parallel\int \partial_{k^{\prime}}(\frac{\partial_{k^{\prime}}\phi(k^{\prime},\vartheta,\theta)k^{\prime}+2\phi(k^{\prime},\vartheta,\theta)}{ (g_{2}(\vartheta,\theta))^{2}})dk^{\prime}d\Omega\parallel_{s}\;.\end{aligned}$$ With our definition of Q, the support of the integrand increases and $g_{2}(\vartheta,\theta)$ decreases polynomially with $k_{stat}$ (see (\[d2g\]) and (\[triangle\])). While the support moves away from the center of our coordinate system. 
But $\widetilde{\chi}=\chi-\chi(\mathbf{k}_{stat})$ and its derivatives decay faster in $k_{stat}$ than any power, so we get a constant C uniform in $k_{stat}$ with: $$I_{1}^{2}\leq \mu^{-2}C\;.$$ For $I_{1}$ it is left to estimate $\partial_{s}I(s)\mid_{\xi}$: $$\label{partialsi} \partial_{s}I(s)\mid_{\xi}=\int -i\mu f(k^{\prime},\vartheta,\theta) e^{-i\mu g_{\xi}(k^{\prime},\vartheta,\theta)}\chi_{1}(k^{\prime},\vartheta,\theta)k^{\prime2}dk^{\prime}d\Omega\;.$$ By Taylor’s formula we can define: $$\begin{aligned} \label{g-tilde} \widetilde{f}(k^{\prime},\vartheta,\theta):=f(k^{\prime},\vartheta,\theta)k^{\prime-3} \nonumber\hspace{1cm}\widetilde{g}(k^{\prime},\vartheta,\theta):=k^{\prime-1}\partial_{k^{\prime}}g_{\xi}(k^{\prime},\vartheta,\theta)\end{aligned}$$ and thus: $$\label{50a} \partial_{s}I(s)\mid_{\xi}=\int -i\mu \widetilde{f}(k^{\prime},\vartheta,\theta) e^{-i\mu g_{\xi}(k^{\prime},\vartheta,\theta)}\chi_{1}(k^{\prime},\vartheta,\theta)k^{\prime5}dk^{\prime}d\Omega\;.$$ On $Q_{\varepsilon}$ (see below (\[qepsilon\]), g is infinitely often differentiable. So these functions are well defined and bounded on $Q_{\varepsilon}$. To estimate the integral by partial integrations we have to assure, that $g_{\xi}$ has only one stationary point, which is $k_{stat}=0$ as one easily sees from (\[next\]). By (\[gs\]): $$\label{next} g_{\xi}=g(k^{\prime}=0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta) +\xi f(\mathbf{k'})=\xi g+(1-\xi)\left(g(k^{\prime}=0)+\frac{1}{2}k^{\prime2}g_{2}(\vartheta,\theta)\right)\;.$$ Looking at $$\partial_{k^{\prime}}^{2}g_{\xi}=\xi\partial_{k^{\prime}}^{2}g+(1-\xi)\partial_{k^{\prime}}^{2}\frac{1}{2}k^{\prime2}g_{2}$$ we observe, that $$\begin{aligned} \partial_{k^{\prime}}^{2}g&=&\partial_{k^{\prime}}^{2}(\sqrt{k^{\prime2}-2k^{\prime}k_{stat}\cos(\vartheta)+k_{stat}^{2}+m^{2}}+a\sqrt{k^{\prime2}-2k^{\prime}k_{stat}\cos(\vartheta)+k_{stat}^{2}}-\mathbf{y}\cdot\mathbf{k}^{\prime}) \\&=&\partial_{k^{\prime}}(\frac{k^{\prime}-k_{stat}\cos(\vartheta)}{\sqrt{k^{\prime2}-2k^{\prime}k_{stat}\cos(\vartheta)+k_{stat}^{2}+m^{2}}}+a\frac{k^{\prime}-k_{stat}\cos(\vartheta)}{\sqrt{k^{\prime2}-2k^{\prime}k_{stat}\cos(\vartheta)+k_{stat}^{2}}}) \\&=&\frac{(1-\cos(\vartheta)^{2})k_{stat}^{2}+m^{2}}{\sqrt{k^{\prime2}-2k^{\prime}k_{stat}\cos(\vartheta)+k_{stat}^{2}+m^{2}}^{3}}+a\frac{(1-\cos(\vartheta)^{2})k_{stat}^{2}}{\sqrt{k^{\prime2}-2k^{\prime}k_{stat}\cos(\vartheta)+k_{stat}^{2}}^{3}}>0\;.\end{aligned}$$ And for $\mathbf{k}\in Q_{\varepsilon}$, $k_{1}$ is positive, so the angular component $\vartheta\in]-\frac{\pi}{2},\frac{\pi}{2}[$, we also have, that on $Q_{\varepsilon}$ also $g_{2}$ is positive. Since $\xi\in[0;1]$ it follows, that $\partial_{k^{\prime}}^{2}g_{\xi}$ is positive, so $\partial_{k^{\prime}}g_{\xi}$ is strictly monotonous on $Q_{\varepsilon}$ and has only one stationary point. Recalling the definition of $\widetilde{g}$ (see \[g-tilde\]) we see, that $\widetilde{g}$ is bounded away from zero. Now we can estimate the integral (\[50a\]). 
By partial integration $$\begin{aligned} \partial_{s}I(s)\mid_{\xi}&=&\int e^{-i\mu g_{\xi}(k^{\prime},\vartheta,\theta)} \partial_{k^{\prime}}\frac{\widetilde{f}(k^{\prime},\vartheta,\theta) \chi_{1}(k^{\prime},\vartheta,\theta)k^{\prime4}}{\widetilde{g}(k^{\prime},\vartheta,\theta)}dk^{\prime}d\Omega\\&=& \int e^{-i\mu g_{\xi}(k^{\prime},\vartheta,\theta)}( \partial_{k^{\prime}}\frac{\widetilde{f}(k^{\prime},\vartheta,\theta) \chi_{1}(k^{\prime},\vartheta,\theta)}{\widetilde{g}(k^{\prime},\vartheta,\theta)}k^{\prime4}+4\frac{\widetilde{f}(k^{\prime},\vartheta,\theta) \chi_{1}(k^{\prime},\vartheta,\theta)}{\widetilde{g}(k^{\prime},\vartheta,\theta)})k^{\prime3}dk^{\prime}d\Omega\;.\end{aligned}$$ Setting $$\label{psitilde} \widetilde{\psi}(k^{\prime},\vartheta,\theta):=\partial_{k^{\prime}}\frac{\widetilde{f}(k^{\prime},\vartheta,\theta) \chi_{1}(k^{\prime},\vartheta,\theta)}{\widetilde{g}(k^{\prime},\vartheta,\theta)}k^{\prime}+4\frac{\widetilde{f}(k^{\prime},\vartheta,\theta) \chi_{1}(k^{\prime},\vartheta,\theta)}{\widetilde{g}(k^{\prime},\vartheta,\theta)}\;.$$ Hence $$\partial_{s}I(s)\mid_{\xi}=\int e^{-i\mu g_{\xi}(k^{\prime},\vartheta,\theta)}\widetilde{\psi}(k^{\prime},\vartheta,\theta)k^{\prime3}dk^{\prime}d\Omega\;.$$ This term is similar to (\[ieinszwei\]). The only differences are, that we have $\widetilde{\psi}$ instead of $\phi$ and $g_{\xi}$ instead of $g_{0}$. So with the same estimate as in (\[ieinszwei\]) we get: $$\label{dsi} \parallel\partial_{s}I(s)\mid_{\xi}\parallel_{s}\leq\mu^{-2}\parallel\int \partial_{k^{\prime}}\frac{\partial_{k^{\prime}}\widetilde{\psi}(k^{\prime},\vartheta,\theta)k^{\prime}+2\widetilde{\psi}(k^{\prime},\vartheta,\theta)}{ (\widetilde{g}(\mathbf{k}^{\prime},\vartheta,\theta))^{2}}dk^{\prime}d\Omega\parallel_{s}\;.$$ This term again has uniform bound in $k_{stat}$, as its support moves away from the center of the coordinate system. So we get a constant C uniform in $k_{stat}$ with: $$\parallel\partial_{s}I(s)\mid_{\xi}\parallel_{s}\leq \mu^{-2}C\;.$$ Now we estimate $I(2)$ (\[chi\]). As this integral includes no stationary point, two partial integrations are possible without any problem, but we have to assure, that we can estimate the factors we get by these partial integrations uniform in $k_{stat}$. To be able to find an uniform estimate, we estimate the areas of $\chi$ separately. So we again split our integral: $$\begin{aligned} I_{2}&=&\int_{k_{1}<\frac{k_{stat}}{2}} e^{-i\mu g(\mathbf{k})}\chi_{2}(\mathbf{k})d^{3}k+\int_{k_{1}>2k_{stat}} e^{-i\mu g(\mathbf{k})}\chi_{2}(\mathbf{k})d^{3}k\\&&+\int_{k_{1}\in B;\mid k_{2}\mid>1} e^{-i\mu g(\mathbf{k})}\chi_{2}(\mathbf{k})d^{3}k+\int_{k_{1}\in B;\mid k_{2}\mid<1;\mid k_{3}\mid>1} e^{-i\mu g(\mathbf{k})}\chi_{2}(\mathbf{k})d^{3}k\\&=:&I_{2}^{1}+I_{2}^{2}+I_{2}^{3}+I_{2}^{4}\end{aligned}$$ where $B:=[\frac{k_{stat}}{2};2k_{stat}]$. The integrals $I_{2}^{1}$ and $I_{2}^{2}$ we estimate by two partial integrations under the $k_{1}$-integral. 
This leads to: $$\begin{aligned} \label{kurz} \parallel I_{2}^{1}\parallel_{s}&\leq&\mu^{-2}\int_{k_{1}<\frac{k_{stat}}{2}}\parallel\partial_{k_{1}}(\frac{1}{\dot{g}(\mathbf{k})}\partial_{k_{1}}\frac{\chi_{2}(\mathbf{k})}{\dot{g}(\mathbf{k})})\parallel_{s} d^{3}k \nonumber\\&=&\mu^{-2}\int_{k_{1}<\frac{k_{stat}}{2}}\parallel3\frac{\ddot{\chi_{2}}}{\dot{g}^{2}}+3\frac{\chi_{2}\ddot{g}^{2}}{\dot{g}^{4}}-3\frac{\dot{\chi_{2}}\ddot{g}}{\dot{g}^{3}}\parallel_{s} d^{3}k\nonumber\\ \parallel I_{2}^{2}\parallel_{s}&\leq&\mu^{-2}\int_{k_{1}>2k_{stat}}\parallel\partial_{k}(\frac{1}{g^{\prime}(\mathbf{k})}\partial_{k}\frac{\chi_{2}(\mathbf{k})}{g^{\prime}(\mathbf{k})})\parallel_{s} d^{3}k \nonumber\\&=&\mu^{-2}\int_{k_{1}>2k_{stat}}\parallel3\frac{\chi_{2}^{\prime\prime}}{g^{\prime2}}+3\frac{\chi_{2}g^{\prime\prime2}}{g^{\prime4}}-3\frac{\chi_{2}^{\prime}g^{\prime\prime}}{g^{\prime3}}\parallel_{s} d^{3}k\end{aligned}$$ where $\dot{g}(\mathbf{k}):=\partial_{k_{1}}g(\mathbf{k})$; $g^{\prime}(\mathbf{k}):=\partial_{k}g(\mathbf{k})$ At first sight these estimates do not seem to be uniform in a and $\mathbf{y}$. In fact $$\ddot{g}(\mathbf{k})=\frac{m^{2}}{\sqrt{k^{2}+m^{2}}^{3}}+a\frac{k_{2}^{2}+k_{3}^{2}}{k^{3}}$$ and $$g^{\prime\prime}(\mathbf{k})=\frac{m^{2}}{\sqrt{k^{2}+m^{2}}^{3}}$$ are bounded on the area of integration. So it is left to show, that we can find functions $h_{j}$ with j=1;2, which do not depend on a and $\mathbf{y}$ and which is bounded away from zero on ${\mbox{${\rm I\!R}$}}^{3}\backslash Q$ with $$\begin{aligned} h_{1}(\mathbf{k})\leq g^{\prime}(\mathbf{k}) \hspace{1cm}h_{2}(\mathbf{k})\leq\dot{g}(\mathbf{k})\end{aligned}$$ for all $a$, $\mathbf{y}$, $\mathbf{k}$. For this we estimate $\dot{g}$ for $k_{1}\leq \frac{k_{stat}}{2}$. As $\ddot{g}>0$, it follows, that (see (\[xxx\])) $$\mid \dot{g}(\mathbf{k})\mid=y-\frac{k_{1}}{\sqrt{k^{2}+m^{2}}}-a\frac{k_{1}}{k}\geq\frac{1}{2}\;.$$ For $k_{1}<0$ and by virtue $y\geq a+\frac{1}{2}\geq\frac{1}{2}$. For $k_{1}>0$ we estimate, using that $y-a-\frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}}=0$: $$\begin{aligned} \mid \dot{g}(\mathbf{k})\mid&=&y-\frac{k_{1}}{\sqrt{k^{2}+m^{2}}}-a\frac{k_{1}}{k} \geq y-a-\frac{k_{1}}{\sqrt{k^{2}+m^{2}}} \\&\geq&y-a-\frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}}+\frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}}-\frac{k_{1}}{\sqrt{k^{2}+m^{2}}} \\&=&\frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}}-\frac{k_{1}}{\sqrt{k^{2}+m^{2}}} \\&\geq&\frac{k_{stat}\sqrt{k^{2}+m^{2}}-k_{1}\sqrt{k_{stat}^{2}+m^{2}}}{\sqrt{k^{2}+m^{2}}\sqrt{k_{stat}^{2}+m^{2}}} \\&=&\frac{k_{stat}^{2}(k^{2}+m^{2})-k_{1}^{2}(k_{stat}^{2}+m^{2})}{\left(k_{stat}\sqrt{k^{2}+m^{2}}+k_{1}\sqrt{k_{stat}^{2}+m^{2}}\right)\sqrt{k_{1}^{2}+m^{2}}\sqrt{k_{stat}^{2}+m^{2}}}\;.\end{aligned}$$ Recalling $k\in[0;\frac{k_{stat}}{2}]$ $$\begin{aligned} \mid \dot{g}(\mathbf{k})\mid&\geq&\frac{\frac{3}{4}k_{stat}^{2}m^{2}}{\left(k_{stat}\sqrt{k^{2}+m^{2}}+k_{1}\sqrt{k_{stat}^{2}+m^{2}}\right)\sqrt{k_{1}^{2}+m^{2}}\sqrt{k_{stat}^{2}+m^{2}}} \\&=&\frac{3m^{2}}{4\left(\sqrt{k^{2}+m^{2}}+k_{1}\sqrt{1+(\frac{m}{k_{stat}})^{2}}\right)\sqrt{k_{1}^{2}+m^{2}}\sqrt{1+(\frac{m}{k_{stat}})^{2}}}\;.\end{aligned}$$ As $k_{stat}\geq\frac{m}{\sqrt{3}}$ (see \[ungleichungk\]) it follows: $$\mid \dot{g}(\mathbf{k})\mid\geq\frac{3m^{2}}{8\left(\sqrt{k^{2}+m^{2}}+2k_{1}\right)\sqrt{k_{1}^{2}+m^{2}}}=:h_{1}\;.$$ For $k_{1}\geq 2k_{stat}$, $g^{\prime}$ is positive. 
Therefore similar as before: $$\begin{aligned} \mid g^{\prime}(\mathbf{k})\mid&=&\frac{k}{\sqrt{k^{2}+m^{2}}}+a-y\cos(\vartheta) \\&\geq&\frac{k}{\sqrt{k^{2}+m^{2}}}-\frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}}+\frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}}+a-y \\&=&\frac{k}{\sqrt{k^{2}+m^{2}}}-\frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}} =\frac{k\sqrt{k_{stat}^{2}+m^{2}}-k_{stat}\sqrt{k^{2}+m^{2}}}{\sqrt{k^{2}+m^{2}}\sqrt{k_{stat}^{2}+m^{2}}} \\&=&\frac{k^{2}(k_{stat}^{2}+m^{2})-k_{stat}^{2}(k^{2}+m^{2})}{(k^{2}+m^{2})(k_{stat}^{2}+m^{2})} \geq\frac{\frac{1}{4}k^{2}m^{2}}{(k^{2}+m^{2})^{2}}=:h_{2}(\mathbf{k})\;.\end{aligned}$$ Note, that $h_{1}$ and $h_{2}$ do not depend on $a$ and $\mathbf{y}$. We can use this estimate in (\[kurz\]). As $g^{\prime\prime}$ and $\ddot{g}$ have uniform bounds in $a$ and $\mathbf{y}$ we get uniform estimates for $I_{2}^{1}$ and $I_{2}^{2}$: $$\begin{aligned} \parallel I_{2}^{1}\parallel_{s}&\leq&\mu^{-2}\int_{k_{1}<\frac{k_{stat}}{2}}\parallel3\frac{\ddot{\chi_{2}}}{h_{1}^{2}}+3\frac{\chi_{2}\ddot{g}^{2}}{h_{1}^{4}}+3\frac{\dot{\chi_{2}}\ddot{g}}{h_{1}^{3}}\parallel_{s} d^{3}k\\&\leq&\mu^{-2}\int_{\mathbb{R}^{3}}\parallel3\frac{\ddot{\widetilde{\chi}_{2}}}{h_{1}^{2}}+3\frac{\widetilde{\chi}_{2}\ddot{g}^{2}}{h_{1}^{4}}+3\frac{\dot{\widetilde{\chi}_{2}}\ddot{g}}{h_{1}^{3}}\parallel_{s} d^{3}k\\ \parallel I_{2}^{2}\parallel_{s}&\leq&\mu^{-2}\int_{k_{1}>2k_{stat}}\parallel3\frac{\chi_{2}^{\prime\prime}}{h_{2}^{2}}+3\frac{\chi_{2}g^{\prime\prime2}}{h_{2}^{4}}+3\frac{\chi_{2}^{\prime}g^{\prime\prime}}{h_{2}^{3}}\parallel_{s} d^{3}k\\ &\leq&\mu^{-2}\int_{k_{1}\geq\frac{m}{\sqrt{3}}}\parallel3\frac{\widetilde{\chi}_{2}^{\prime\prime}}{h_{2}^{2}}+3\frac{\widetilde{\chi}_{2}g^{\prime\prime2}}{h_{2}^{4}}+3\frac{\widetilde{\chi}_{2}^{\prime}g^{\prime\prime}}{h_{2}^{3}}\parallel_{s} d^{3}k\;.\end{aligned}$$ Hence $$\parallel I_{2}^{1}\parallel_{s}+\parallel I_{2}^{2}\parallel_{s}\leq\mu^{-2}C$$ with a constant C uniform in $k_{stat}$. The integrals $I_{2}^{3}$ and $I_{2}^{4}$ can be estimated in a similar way, partial integration now be done with $k_{2}$ and $k_{3}$ $$\mid \partial_{k_{j}}g(\mathbf{k})\mid=\frac{1}{\sqrt{k^{2}+m^{2}}}+\frac{ak_{j}}{k^{2}}\leq\frac{1}{\sqrt{k^{2}+m^{2}}}+\frac{a}{k}\hspace{1cm}\text{ for } j=1;2$$ which is uniformly bounded away from zero on the area of integration. So we have a uniform constant C with: $$I_{2}\leq \mu^{-2}C\;,$$ and the lemma is proven for $y\in[a+\frac{1}{2},a+1]$. \(II) Next we prove the Lemma for $y<a+1/2$. We again have to assure, that all estimates are uniform in $a$ and $\mathbf{y}$. In the last section the main difficulty we had to solve was, that $g^{\prime}$ near the stationary point was increasing with $k_{stat}$ (recall that $\lim_{y\rightarrow a+1}k_{stat}=\infty$). So on the first view it seems to be simple to have uniform estimates for $y<a+1/2$ just by setting $Q=\mathbb{R}^{3}$. But we have to face a new problem, which is, that the stationary point may be very close to zero. This is problematical in the differentiation of $k$ appearing in our estimates. For $a=0$ this problem does not appear and the lemma is also proven for $y<\frac{1}{2}$ with $a=0$. As the divergence only appears for small $k_{stat}$ we can set $k_{stat}<\frac{1}{2}$ (For $k_{stat}\geq\frac{1}{2}$ the estimates can be done very closely to the ones of (I), setting $Q=\mathbb{R}^{3}$). We solve the problem by first “cutting out” the stationary point. 
We split our integral: $$I=\int_{B(0,\sqrt{k_{stat}})} e^{-i\mu g(\mathbf{k})}\chi(\mathbf{k})d^{3}k+\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})} e^{-i\mu g(\mathbf{k})}\chi(\mathbf{k})d^{3}k=:I_{1}+I_{2}\;.$$ As $k_{stat}<\frac{1}{2}$, the stationary point is inside the ball. We estimate $I_{1}$, writing it in spherical coordinates “centered” around the stationary point, by one partial integration: $$\begin{aligned} \parallel I_{1}\parallel_{s}&\leq&\parallel\int_{B(0,\sqrt{k_{stat}})}e^{-i\mu g(\mathbf{k}^{\prime})}\chi(\mathbf{k}^{\prime})k^{\prime2}dk^{\prime}d\Omega\parallel_{s} \\&\leq&\parallel\mu^{-1}\int_{B(0,\sqrt{k_{stat}})}(\frac{\chi^{\prime}k^{\prime2}}{g^{\prime}}+\frac{2\chi k^{\prime}}{g^{\prime}}+\frac{\chi g^{\prime\prime}k^{\prime2}}{g^{\prime2}})dk^{\prime}d\Omega\parallel_{s}\;.\end{aligned}$$ As $\chi\in\mathcal{G}$ all these terms are bounded, we have $$\parallel I_{1}\parallel_{s}\leq M\mu^{-1}\sqrt{k_{stat}}$$ We now estimate $I_{2}$. The first idea is to estimate this integral by two partial integrations. But the integrand still comes “very close” to the stationary point, where $(g^{\prime})^{-1}$ is not bounded. So this procedure will not yield uniform bound in $a$ and $\mathbf{y}$. The trick to get uniform bound is to redo the split (\[xx\]), (\[xxx\]) of (I) into the integral for $a+\frac{1}{2}<y<a+1$. $$I_{2}=I_{1}^{1}+I_{1}^{2}+\partial_{s}I(s)\mid_{s=\xi}$$ now using $k=0$ as the center for our Taylor-expansion. So we get: $$\begin{aligned} I_{1}^{1}&=&\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})} e^{i(g^{\prime}(0)k+\frac{1}{2}g^{\prime\prime}(0)k^{2})} \chi(0)d^{3}k \\I_{1}^{2}&=&\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}e^{i(g^{\prime}(0)k+\frac{1}{2}g^{\prime\prime}(0)k^{2})}(\chi(\mathbf{k})-\chi(0))k^{2}dkd\Omega \\\partial_{s}I(s)\mid_{s=\xi}&=&\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}\lambda k^{3}\widetilde{f}(\mathbf{k})e^{g_{\xi}}(\mathbf{k})\chi(k)k^{2}dkd\Omega\end{aligned}$$ where $$\begin{aligned} \label{newtaylor} g^{\prime}(0)&=&\frac{k}{\sqrt{k^{2}+m^{2}}}+a-y\cos(\vartheta)\mid_{k=0}=a-y\cos(\vartheta)\nonumber\\ g^{\prime\prime}(0)&=&\partial_{k}^{2}g=\frac{m^{2}}{\sqrt{k^{2}+m^{2}}^{3}}\mid_{k=0}=\frac{1}{m}\nonumber\\ \widetilde{f}(\mathbf{k})&=&(g(\mathbf{k})-g(0)-g^{\prime}(0)k-\frac{1}{2}g^{\prime\prime}(0)k^{2})k^{-3}\nonumber\\ g_{\xi}(\mathbf{k})&=&g(0)+g^{\prime}(0)k+\frac{1}{2}g^{\prime\prime}(0)k^{2}+\xi\widetilde{f}(\mathbf{k})\;.\end{aligned}$$ As by similar argument concerning (\[next\]) $g_{\xi}$ has only one stationary point $\widetilde{\mathbf{k}}_{stat}$. One can easily see, that $$g^{\prime}(0)+g^{\prime\prime}(0)k=a-y\cos(\vartheta)+\frac{k}{m}\geq a-y\cos(\vartheta)+\frac{k}{\sqrt{k^{2}+m^{2}}}=g^{\prime}(\mathbf{k})\;.$$ Furthermore we have, that: $$g^{\prime}_{\xi}(\mathbf{k})=(1-\xi)(g^{\prime}(0)+g^{\prime\prime}(0)k)+\xi g^{\prime}(\mathbf{k})\;.$$ It follows, that $$g^{\prime}(0)+g^{\prime\prime}(0)k\geq g^{\prime}_{\xi}\geq g^{\prime}\;.$$ Therefore at $\mathbf{k}=\widetilde{\mathbf{k}}_{stat}$ (where by definition $g^{\prime}_{\xi}(\widetilde{\mathbf{k}}_{stat})=0$) the $g^{\prime}$ has to be negative. It follows (recalling, that $g^{\prime}$ increases monotonously on the $k_{1}$-axis), that $$0\leq\widetilde{k}_{stat}\leq k_{stat}\;.$$ For the same reasons we have the zero point $\overline{\mathbf{k}}_{stat}$ of $g^{\prime}(0)+g^{\prime\prime}(0)k$ (i.e. 
$\overline{\mathbf{k}}_{stat}=-\frac{g^{\prime}(0)}{g^{\prime\prime}(0)})$: $$0\leq\overline{k}_{stat}\leq k_{stat}\;.$$ As the second derivative of $g_{\xi}^{\prime\prime}(\widetilde{k}_{stat})$ is not equal to zero, we can define a function $\widetilde{g}_{\xi}$ with: $$\label{gtilde} 0<M\leq\widetilde{g}_{\xi}:=\mid\mathbf{k}-\mathbf{\widetilde{k}}_{stat}\mid^{-1}g_{\xi}\;.$$ The integral $I_{1}^{1}$ includes the leading term. It can be estimated like (\[I11\]). The other terms can be estimated again by partial integrations. For that we define: $$\begin{aligned} \zeta_{1}:=(\chi(\mathbf{k})-\chi(0))k^{2}=:\widetilde{\zeta}_{1}k^{3} \hspace{1cm}\zeta_{2}:=\widetilde{f}(\mathbf{k})\chi(k)k^{5}=:\widetilde{\zeta}_{2}k^{5}\end{aligned}$$ where $\widetilde{\zeta}_{1,2}$ are bounded $C^{\infty}$-functions. We now make two partial integrations in $I_{1}^{2}$ and three partial integrations in $\partial_{s}I(s)$ to get the estimates $$\begin{aligned} \parallel I_{1}^{2}\parallel_{s}&\leq&\mu^{-2}\parallel\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}\partial_{k}\big(\frac{1}{g^{\prime}(0)+g^{\prime\prime}(0)k}\partial_{k}(\frac{\zeta_{1}}{g^{\prime}(0)+g^{\prime\prime}(0)k})\big)dkd\Omega\parallel_{s} \\&=&\mu^{-2}\parallel\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}\partial_{k}\big(\frac{\zeta_{1}^{\prime}}{(g^{\prime}(0)+g^{\prime\prime}(0)k)^{2}}-\frac{\zeta_{1}g^{\prime\prime}(0)}{(g^{\prime}(0)+g^{\prime\prime}(0)k)^{3}}\big)dkd\Omega\parallel_{s} \\\parallel\partial_{s}I(s)\mid_{s=\xi}\parallel_{s}&\leq&\mu^{-2}\parallel\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}\partial_{k}\big(\frac{1}{g_{\xi}^{\prime}}\partial_{k}(\frac{1}{g_{\xi}^{\prime}}\partial_{k}\frac{\zeta_{2}}{g_{\xi}^{\prime}})\big)dkd\Omega\parallel_{s} \\&=&\mu^{-2}\parallel\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}\partial_{k}\big(\frac{1}{g_{\xi}^{\prime}}\partial_{k}(\frac{\zeta_{2}^{\prime}}{g_{\xi}^{\prime2}}-\frac{\zeta_{2}g_{\xi}^{\prime\prime}}{g_{\xi}^{\prime3}})\big)dkd\Omega\parallel_{s} \\&=&\mu^{-2}\parallel\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}\partial_{k}\big(\frac{\zeta_{2}^{\prime\prime}}{g_{\xi}^{\prime3}}-3\frac{\zeta_{2}^{\prime}g_{\xi}^{\prime\prime}}{g_{\xi}^{\prime4}}-\frac{\zeta_{2}g_{\xi}^{\prime\prime\prime}}{g_{\xi}^{\prime3}}+3\frac{\zeta_{2}g_{\xi}^{\prime\prime2}}{g_{\xi}^{\prime5}}\big)dkd\Omega\parallel_{s}\;.\end{aligned}$$ So we can define functions $f_{j}$, $j=1;...;5$ which are bounded, with: $$\begin{aligned} \parallel I_{1}^{2}\parallel_{s}&\leq&\mu^{-2}\parallel\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}\partial_{k}(f_{1}q_{1}^{2}+f_{2}q_{1}^{3})dkd\Omega\parallel_{s} \\\parallel\partial_{s}I(s)\mid_{s=\xi}\parallel_{s}&\leq&\mu^{-2}\parallel\int_{{\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})}\partial_{k}(f_{3}q_{2}^{3}+f_{4}q_{2}^{4}+f_{5}q_{2}^{5})dkd\Omega\parallel_{s}\end{aligned}$$ where $$\begin{aligned} q_{1}:=\frac{k}{\mid\mathbf{k}-\overline{\mathbf{k}}_{stat}\mid} \hspace{1cm}q_{2}:=\frac{k}{\mid\mathbf{k}-\mathbf{\widetilde{k}}_{stat}\mid}\;.\end{aligned}$$ So it is only left to show, that $\partial_{k}q_{1}$ and $\partial_{k}q_{2}$ are bounded on ${\mbox{${\rm I\!R}$}}^{3}\setminus B(0,\sqrt{k_{stat}})$. 
But this is easy: $$\begin{aligned} \partial_{k}q_{1}=\partial_{k}\frac{1}{\sqrt{1-2\frac{k_{stat}\cos(\vartheta)}{k}+\frac{k_{stat}^{2}}{k^{2}}}} =\frac{1}{\sqrt{1-2\frac{k_{stat}\cos(\vartheta)}{k}+\frac{k_{stat}^{2}}{k^{2}}}^{3}} (\frac{k_{stat}^{2}}{k^{3}}-\frac{k_{stat}\cos(\vartheta)}{k^{2}})\end{aligned}$$ for $k\geq\sqrt{k_{stat}}$ this term has obviously uniform bound. The derivative of $q_{2}$ can be estimated in the same way. We only have to replace $\widetilde{k}_{stat}$ by $\overline{k}_{stat}$. \(III) For $y>a+1$ we have no stationary point any more. So two partial integrations are possible without any problem. We again choose $k_{1}$ parallel to $\mathbf{y}$ $$\begin{aligned} \parallel I_{2}\parallel_{s}&\leq&\mu^{-2}\int\parallel\partial_{k}(\frac{1}{g^{\prime}(\mathbf{k})}\partial_{k}\frac{\chi_{2}(\mathbf{k})}{g^{\prime}(\mathbf{k})})\parallel_{s} d^{3}k=\mu^{-2}\int\parallel\partial_{k}(\frac{\chi^{\prime}}{g^{\prime2}}-\frac{\chi g^{\prime\prime} }{g^{\prime3}})\parallel_{s} d^{3}k \\&=&\mu^{-2}\int\parallel\frac{\chi^{\prime\prime}}{g^{\prime2}}-2\frac{\chi^{\prime}g^{\prime\prime}}{g^{\prime 3}}-\frac{\chi^{\prime} g^{\prime\prime}}{g^{\prime3}}-\frac{\chi g^{\prime\prime\prime}}{g^{\prime3}}+3\frac{\chi g^{\prime\prime2}}{g^{\prime4}}\parallel_{s} d^{3}k\end{aligned}$$ ($f^{\prime}$ means $\partial_{k_{1}}f$). This integral still depends on $k_{stat}$. To get an estimate uniform in $k_{stat}$ we use: $$\mid g^{\prime}(\mathbf{k})\mid=y-\frac{k_{1}}{\sqrt{k^{2}+m^{2}}}-a\frac{k_{1}}{k}\geq 1-\frac{k_{1}}{\sqrt{k^{2}+m^{2}}}=:h(\mathbf{k})\;.$$ It follows: $$\parallel I_{2}\parallel_{s}\leq\mu^{-2}\int\parallel\frac{\chi^{\prime\prime}}{h^{2}}+3\frac{\chi^{\prime}g^{\prime\prime}}{h^{3}}+3\frac{\chi g^{\prime\prime2}}{h^{4}}+\frac{\chi g^{\prime\prime\prime}}{h^{3}}\parallel_{s} d^{3}k=:\mu^{-2}C\;.$$ Proof of equation(\[ridofalpha\]) {#appendix2} --------------------------------- For each $\mathbf{k}$ we have two eigenstates for electrons. These two eigenstates span the two dimensional spinor subspace for electrons. 
In the standard representation of the Dirac matrices these two spinors[^2] are: $$\begin{aligned} \label{spinors} s^{1}_{\mathbf{k}}=\begin{pmatrix} \widehat{E}_{k} \\ 0 \\ k_{1} \\ k^{+} \end{pmatrix}\hspace{1cm} s^{2}_{\mathbf{k}}=\begin{pmatrix} 0 \\ \widehat{E}_{k} \\ k^{-} \\ -k_{1} \end{pmatrix}\end{aligned}$$ where $$k^{\pm}=k_{2}\pm ik_{3}\hspace{1cm}\widehat{E}_{k}=E_{k}+m\hspace{1cm}E_{k}=\sqrt{k^{2}+m^{2}}\;.$$ If we now take any linear combination of these spinors $s_{\mathbf{k}}=a(\mathbf{k})s^{1}_{\mathbf{k}}+b(\mathbf{k})s^{2}_{\mathbf{k}}$ and compute for example $\langle s_{\mathbf{k}}^{*},\alpha_{1} s_{\mathbf{k}}\rangle$, we get (see (\[alphas\])): $$\begin{aligned} \langle s_{\mathbf{k}}^{*},\alpha_{1}s_{\mathbf{k}}\rangle &=& \big\langle a^{*}(\mathbf{k})s^{1*}_{\mathbf{k}}+b^{*}(\mathbf{k})s^{2*}_{\mathbf{k}},\;\alpha_{1}\big(a(\mathbf{k})s^{1}_{\mathbf{k}}+b(\mathbf{k})s^{2}_{\mathbf{k}}\big)\big\rangle \\ &=& \Big\langle a^{*}(\mathbf{k}) \begin{pmatrix} \widehat{E}_{k} \\ 0 \\ k_{1} \\ k^{-} \end{pmatrix} +b^{*}(\mathbf{k}) \begin{pmatrix} 0 \\ \widehat{E}_{k} \\ k^{+} \\ -k_{1} \end{pmatrix},\; a(\mathbf{k}) \begin{pmatrix} k_{1} \\ -k^{+} \\ \widehat{E}_{k} \\ 0 \end{pmatrix} +b(\mathbf{k}) \begin{pmatrix} k^{-} \\ k_{1} \\ 0 \\ -\widehat{E}_{k} \end{pmatrix}\Big\rangle \\ &=& \big(\mid a(\mathbf{k})\mid^{2}+\mid b(\mathbf{k})\mid^{2}\big)2\widehat{E}_{k}k_{1}\;.\end{aligned}$$ With the normalization factor $$\begin{aligned} \langle s_{\mathbf{k}}^{*},s_{\mathbf{k}}\rangle&=&(\mid a(\mathbf{k})\mid^{2}+\mid b(\mathbf{k})\mid^{2})(\widehat{E}_{k}^{2}+k^{2})=(\mid a(\mathbf{k})\mid^{2}+\mid b(\mathbf{k})\mid^{2})(E_{k}^{2}+2E_{k}m+m^{2}+ k^{2})\\ &=&(\mid a(\mathbf{k})\mid^{2}+\mid b(\mathbf{k})\mid^{2})(2E_{k}(E_{k}+m))=(\mid a(\mathbf{k})\mid^{2}+\mid b(\mathbf{k})\mid^{2})(2E_{k}\widehat{E}_{k})\end{aligned}$$ we get: $$\langle s_{ \mathbf{k}}^{*},\alpha_{1}s_{\mathbf{k}}\rangle=\frac{k_{1}}{\sqrt{k^{2}+m^{2}}}\langle s_{\mathbf{k}}^{*},s_{\mathbf{k}}\rangle\;.$$ Analogously we get: $$\langle s_{\mathbf{k}}^{*},\boldsymbol{\alpha}s_{\mathbf{k}}\rangle=\frac{\mathbf{k}}{\sqrt{k^{2}+m^{2}}}\langle s_{\mathbf{k}}^{*},s_{\mathbf{k}}\rangle\;.$$ By linearity (\[ridofalpha\]) follows. Proof of Lemma \[properties\] {#appendix3} ----------------------------- $\mathbf{(a)}$ To begin with, we consider the integral $$\label{form} I(\mathbf{x})=\int\frac{1}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{j}}f(\mathbf{x}^{\prime})d^{3}x^{\prime}$$ for bounded, integrable $\parallel f\parallel_{s}$ and $j=1;2$. For $j=1$ it has been proven by Ikebe [@ikebe] that $I$ is Hölder continuous. We extend this to $j=2$. Therefore we need to estimate $$I(\mathbf{x}+\mathbf{h})-I(\mathbf{x}-\mathbf{h})$$ for arbitrary $\mathbf{h}$ with $h\leq\frac{1}{4}$ (we do not need to consider $h>\frac{1}{4}$, as $I(\mathbf{x})$ is bounded).
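That $I(\mathbf{x})$ is indeed bounded, uniformly in $\mathbf{x}$ and for both $j=1$ and $j=2$, follows from a split that will be used repeatedly below (spelled out here for completeness): $$\parallel I(\mathbf{x})\parallel_{s}\leq\sup_{\mathbf{x}^{\prime}}\parallel f(\mathbf{x}^{\prime})\parallel_{s}\int_{B(\mathbf{x},1)}\frac{d^{3}x^{\prime}}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{j}}+\int_{\mathbb{R}^{3}\backslash B(\mathbf{x},1)}\parallel f(\mathbf{x}^{\prime})\parallel_{s}d^{3}x^{\prime}<\infty\;,$$ since $\mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{-j}$ is integrable over the unit ball and bounded by $1$ outside of it.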
We split the integral into: $$\begin{aligned} \label{formest} \lefteqn{\hspace{-1cm}I(\mathbf{x}+\mathbf{h})-I(\mathbf{x}-\mathbf{h})=}\\&=& \int_{B(\mathbf{x},\sqrt{h})}(\frac{1}{\mid\mathbf{x}+\mathbf{h}-\mathbf{x}^{\prime}\mid^{2}}-\frac{1}{\mid\mathbf{x}-\mathbf{h}-\mathbf{x}^{\prime}\mid^{2}})f(\mathbf{x}^{\prime})d^{3}x^{\prime}\nonumber\\&&+ \int_{B(\mathbf{x},1)\backslash B(\mathbf{x},\sqrt{h})}(\frac{1}{\mid\mathbf{x}+\mathbf{h}-\mathbf{x}^{\prime}\mid^{2}}-\frac{1}{\mid\mathbf{x}-\mathbf{h}-\mathbf{x}^{\prime}\mid^{2}})f(\mathbf{x}^{\prime})d^{3}x^{\prime}\nonumber\\&&+ \int_{\mathbb{R}^{3}\backslash B(\mathbf{x},1) }(\frac{1}{\mid\mathbf{x}+\mathbf{h}-\mathbf{x}^{\prime}\mid^{2}}-\frac{1}{\mid\mathbf{x}-\mathbf{h}-\mathbf{x}^{\prime}\mid^{2}})f(\mathbf{x}^{\prime})d^{3}x^{\prime}=:I_{1}+I_{2}+I_{3}\;.\end{aligned}$$ For $I_{1}$ we have: $$\parallel I_{1}\parallel_{s}\leq2\sup_{x\in\mathbb{R}^{3}}\{\parallel f(\mathbf{x})\parallel_{s}\}\int_{B(\mathbf{x},\sqrt{h})}\frac{1}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{2}}d^{3}x^{\prime}\;.$$ So we can find a constant $M<\infty$, so that $$\label{holder1/2a} \parallel I_{1}(\mathbf{x},\mathbf{h})\parallel_{s}\leq M\sqrt{h}\hspace{1cm}\forall\mathbf{h}\in\mathbb{R}^{3}\;.$$ For $I_{2}$ we have, using $\mid\sqrt{h}-h\mid\leq\frac{1}{2}\sqrt{h}$: $$\begin{aligned} \parallel I_{2}\parallel_{s}&=&\parallel\int_{B(\mathbf{x},1)\backslash B(0,\sqrt{h}) }(\frac{1}{\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}}-\frac{1}{\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}})f(\mathbf{x}-\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s} \\&\leq&\sup_{x\in\mathbb{R}^{3}}\{\parallel f(\mathbf{x})\parallel_{s}\}\int_{B(\mathbf{x},1)\backslash B(0,\sqrt{h})}\frac{\mid \mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}-\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}\mid}{\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}}d^{3}x^{\prime} \\&\leq&\sup_{x\in\mathbb{R}^{3}}\{\parallel f(\mathbf{x}\parallel_{s})\}\int_{B(\mathbf{x},1)\backslash B(0,\sqrt{h})}\frac{4hx^{\prime}}{\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}\mid\mathbf{x}-\mathbf{h}\mid^{2}}d^{3}x^{\prime} \\&\leq&\sup_{x\in\mathbb{R}^{3}}\{\parallel f(\mathbf{x})\parallel_{s}\}\int_{B(\mathbf{x},1)\backslash B(0,\sqrt{h})}\frac{8h}{x^{\prime3}}d^{3}x^{\prime}\;.\end{aligned}$$ So we can find a constant $M<\infty$, so that $$\label{holder1/2b} \parallel I_{2}(\mathbf{x},\mathbf{h})\parallel_{s}\leq M\sqrt{h}\hspace{1cm}\forall\mathbf{h}\in\mathbb{R}^{3}\;.$$ For $I_{3}$ we have, using similar reasoning as above: $$\begin{aligned} \parallel I_{3}\parallel_{s}\leq\int_{\mathbb{R}^{3}\backslash B(\mathbf{0},1) }\frac{8h}{x^{\prime3}}\parallel f(\mathbf{x}-\mathbf{x}^{\prime})\parallel_{s} d^{3}x^{\prime} \leq8h\int\parallel f(\mathbf{x}-\mathbf{x}^{\prime})\parallel_{s} d^{3}x^{\prime}\;.\end{aligned}$$ Since $f$ is absolutely integrable, we can find a constant $M<\infty$, so that $$\label{holder1/2c} \parallel I_{3}(\mathbf{x},\mathbf{h})\parallel_{s}\leq Mh\hspace{1cm}\forall\mathbf{h}\in\mathbb{R}^{3}\;.$$ We use this estimate on (\[LSE\]), observing, that $G_{\mathbf{k}}^{+}$ multiplied by $A\hspace{-0.2cm}/\widetilde{\varphi}_{\mathbf{k}}^{s}$ is essentially of the form of the integrals in (\[form\]).Therfore: $$\label{holder1/2} \parallel \widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}+\mathbf{h})-\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})\parallel_{s}\leq M\sqrt{h}\hspace{1cm}\forall\mathbf{h}\in\mathbb{R}^{3}\;.$$ Now we want to focus on integrals of the form (\[form\]) for j=2 
where $f(\mathbf{x})$ satisfies: $$\label{holder1/2f} \parallel f(\mathbf{x}+\mathbf{h})-f(\mathbf{x})\parallel_{s}\leq M\sqrt{h}\;.$$ We do a similar splitting as in (\[formest\]). Now we have for $I_{1}$, using (\[holder1/2f\]): $$\begin{aligned} \parallel I_{1}\parallel_{s}&\leq&\parallel\int_{B(\mathbf{x} ,\sqrt{h})}\frac{1}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{2}}(f(\mathbf{x}^{\prime}+\mathbf{h})-f(\mathbf{x}^{\prime}-\mathbf{h}))d^{3}x^{\prime}\parallel_{s} \\&\leq&\mid\int_{B(\mathbf{x},\sqrt{h})}\frac{1}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{2}}M\sqrt{h}d^{3}x^{\prime}\mid\;.\end{aligned}$$ Thus with an appropriate $\widetilde{M}<\infty$: $$\label{holdera} \parallel I_{1}(\mathbf{x},\mathbf{h})\parallel_{s}\leq Mh\hspace{1cm}\forall\mathbf{h}\in\mathbb{R}^{3}\;.$$ For $I^{2}$ we have: $$\begin{aligned} \parallel I_{2}^{2}\parallel_{s}&=&\parallel\int_{B(0,1)\backslash B(0,\sqrt{h})}(\frac{1}{\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}}-\frac{1}{\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}})f(\mathbf{x}-\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}\\&=&\parallel\int_{B(0,1)\backslash B(0,\sqrt{h})}\frac{\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}-\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}}{\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}}f(\mathbf{x}-\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}\;.\end{aligned}$$ Since the fraction under this integral is point-symmetric to zero, we can estimate the integral by: $$\begin{aligned} \parallel I_{2}^{2}\parallel_{s}&\leq&\parallel\int_{B(0,1)\backslash B(0,\sqrt{h})}\mid\frac{\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}-\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}}{\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}}\mid (f(\mathbf{x}-\mathbf{x}^{\prime})-f(\mathbf{x}+\mathbf{x}^{\prime}))d^{3}x^{\prime}\parallel_{s} \\&\leq&\parallel\int_{B(0,1)\backslash B(0,\sqrt{h})}\mid\frac{\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}-\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}}{\mid\mathbf{x}^{\prime}-\mathbf{h}\mid^{2}\mid\mathbf{x}^{\prime}+\mathbf{h}\mid^{2}}\mid M\sqrt{2x^{\prime}}d^{3}x^{\prime}\parallel_{s} \\&\leq&\parallel\int_{B(0,1)\backslash B(0,\sqrt{h})}4\mid\frac{2h}{x^{\prime3}}\mid M\sqrt{2x^{\prime}}d^{3}x^{\prime}\mid\leq\mid16\pi M\sqrt{2}\int^{1}_{\sqrt{h}}\mid\frac{2h}{x^{\prime-\frac{1}{2}}}\mid d^{3}x^{\prime}\parallel_{s}\;.\end{aligned}$$ So we can find a $\widetilde{M}<\infty$ with: $$\label{holderc} \parallel I_{2}^{2}(\mathbf{x},\mathbf{h})\parallel_{s}\leq \widetilde{M}h\hspace{1cm}\forall\mathbf{h}\in\mathbb{R}^{3}\;.$$ For $I_{3}$ we do the same estimations as before. Applying this to (\[LSE\]) we obtain the Hölder continuity of degree 1 for $\widetilde{\varphi}_{\mathbf{k}}^{s}$. $\mathbf{(b)}$ Assume that $\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$ satisfies (\[LSE\]) and is Hölder continuous of degree 1. 
Inserting $\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$ in the right hand side of (\[dgmp\]) leads to: $$\begin{aligned} H\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})&=&(H_{0}+A\hspace{-0.2cm}/(\mathbf{x}))\left(\varphi_{\mathbf{k}}^{s}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})G^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}}) \widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x^{\prime}})d^{3}x^{\prime}\right)\\&=&(E_{k}+A\hspace{-0.2cm}/(\mathbf{x}))\varphi_{\mathbf{k}}^{s}(\mathbf{x})-(H_{0}+A\hspace{-0.2cm}/(\mathbf{x}))\int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})G^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}}) \widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x^{\prime}})d^{3}x^{\prime}\;.\end{aligned}$$ For (\[dgmp\]) this term has to be equal to $E_{k}\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$. So we have to prove, that $$(H_{0}-E_{k})\int G^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}})A\hspace{-0.2cm}/(\mathbf{x^{\prime}}) \widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x^{\prime}})d^{3}x^{\prime}=A\hspace{-0.2cm}/(\mathbf{x})\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})\;.$$ In other words we have to prove, that with $f$ Hölder continuous of degree $1$: $$\label{povzner} (H_{0}-E_{k})\int G^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}})f(\mathbf{x}^{\prime})d^{3}x^{\prime}=f(\mathbf{x}^{\prime})\;.$$ $G^{+}_{k}$ can be written as [@thaller]: $$G^{+}_{k}(\mathbf{x})=(H_{0}+E_{k})\frac{e^{ikx}}{4\pi x}=:(H_{0}+E_{k})G_{k}^{KG}$$ with $$\label{GKG} (H_{0}-E_{k})(H_{0}+E_{k})G_{k}^{KG}=(\triangle-k^{2})G_{k}^{KG}=\delta\;.$$ So for (\[povzner\]) we need to show, that: $$\label{hauff} (H_{0}-E_{k})(H_{0}+E_{k})\int G^{KG}_{k}(\mathbf{x}-\mathbf{x^{\prime}})f(\mathbf{x}^{\prime})d^{3}x^{\prime}= (\triangle-k^{2})\int G^{KG}_{k}(\mathbf{x}-\mathbf{x^{\prime}})f(\mathbf{x}^{\prime})d^{3}x^{\prime}=f(\mathbf{x})\;.$$ We define for $\varepsilon>0$ the following function $G^{\varepsilon}_{k}$: $$\begin{aligned} \label{gepsilon} G_{k}^{\varepsilon}(\mathbf{x}):=G^{KG}_{\mathbf{k}}(\mathbf{x})\text{ for } x\geq\varepsilon\hspace{1cm} G^{\varepsilon}_{k}(\mathbf{x})=G^{KG}_{\mathbf{k}}(\mathbf{x})(1-e^{\frac{x}{\varepsilon-x}})\text{ for } x<\varepsilon\;.\end{aligned}$$ We denote $$\label{gkprime} G_{k}^{\prime}(\mathbf{x})=\nabla G^{KG}_{k}=\frac{ik\mathbf{x}e^{ikx}}{4\pi x^{2}}+\frac{\mathbf{x}e^{ikx}}{x^{3}}\;.$$ We split the right hand side of (\[hauff\]) into: $$\label{haufsplit} (\triangle-k^{2})\int (G^{KG}_{k}(\mathbf{x}-\mathbf{x^{\prime}})-G_{k}^{\varepsilon}(\mathbf{x}))f(\mathbf{x}^{\prime})d^{3}x^{\prime}+(\triangle-k^{2})\int G_{k}^{\varepsilon}(\mathbf{x})f(\mathbf{x}^{\prime})d^{3}x^{\prime}\;.$$ By definition of $G_{k}^{KG}$ (\[GKG\]) we have outside the Ball $B(0,\varepsilon)$: $$\label{outball} (\triangle-k^{2})G_{k}^{\varepsilon}(\mathbf{x})=(\triangle-k^{2})G_{k}^{KG}(\mathbf{x})=0\;.$$ So for the first summand we have: $$\begin{aligned} \lefteqn{\hspace{-1cm}\lim_{\varepsilon\rightarrow0}\parallel(\triangle-k^{2})\int (G^{KG}_{k}(\mathbf{x}-\mathbf{x^{\prime}})-G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x^{\prime}}))f(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}}\\ &=&\lim_{\varepsilon\rightarrow0}\parallel\triangle\int_{B(\mathbf{x},\varepsilon)} (G^{KG}_{k}(\mathbf{x}-\mathbf{x^{\prime}})-G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x^{\prime}}))f(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}\\ &=&\lim_{\varepsilon\rightarrow0}\parallel\nabla\int_{B(\mathbf{x},\varepsilon)} 
\nabla(G^{KG}_{k}(\mathbf{x}-\mathbf{x^{\prime}})-G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x^{\prime}}))f(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}\\ &=&\lim_{\varepsilon\rightarrow0}\parallel\nabla\int_{B(\mathbf{x},\varepsilon)} \big(\nabla_{x^{\prime}}(G^{KG}_{k}(\mathbf{x}-\mathbf{x^{\prime}})-G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x^{\prime}}))\big)f(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}\\ &=&\lim_{\varepsilon\rightarrow0}\parallel\nabla\int_{B(\mathbf{x},\varepsilon)} \big(\nabla_{x^{\prime}}(G^{KG}_{k}(\mathbf{x}^{\prime})-G_{k}^{\varepsilon}(\mathbf{x^{\prime}}))\big)f(\mathbf{x}-\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}\\ &\leq&\lim_{\varepsilon\rightarrow0}\int_{B(\mathbf{x},\varepsilon)}\parallel \big(\nabla_{x^{\prime}}(G^{KG}_{k}(\mathbf{x^{\prime}})-G_{k}^{\varepsilon}(\mathbf{x}^{\prime}))\big)\frac{f(\mathbf{x}-\mathbf{x^{\prime}})-f(\mathbf{x}+\mathbf{h}-\mathbf{x^{\prime}})}{h}\parallel_{s} d^{r}x^{\prime}\;.\end{aligned}$$ As f is Hölder continuous, the last term can be estimated by: $$\begin{aligned} \lefteqn{\hspace{-1cm}\lim_{\varepsilon\rightarrow0}\parallel(\triangle-k^{2})\int (G^{KG}_{k}(\mathbf{x}-\mathbf{x}^{\prime})-G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime}))f(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}}\\ &\leq&\lim_{\varepsilon\rightarrow0}\int_{B(0,\varepsilon)}\mid \nabla_{x^{\prime}}(G^{KG}_{k}(\mathbf{x}^{\prime})-G_{k}^{\varepsilon}(\mathbf{x}^{\prime}))M\mid d^{3}x^{\prime}\\ &\leq&\lim_{\varepsilon\rightarrow0}\int_{B(0,\varepsilon)}\mid \left(G^{\prime}_{k}(\mathbf{x}^{\prime})-G^{\prime}_{k}(\mathbf{x}^{\prime})(1-e^{\frac{x^{2}}{\varepsilon-x}})-G^{\varepsilon}_{k}(\mathbf{x}^{\prime})\frac{-\varepsilon}{(x-\varepsilon)^{2}}\right)M\mid d^{3}x^{\prime}=0\;.\end{aligned}$$ For the second summand, we use (\[outball\]) and the mean value theorem $$\begin{aligned} \lefteqn{\hspace{-1cm}\lim_{\varepsilon\rightarrow0}(\triangle-k^{2})\int G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})f(\mathbf{x}^{\prime})d^{3}x^{\prime}}\\ &=&\lim_{\varepsilon\rightarrow0}(\triangle-k^{2})\int_{B(\mathbf{x},\varepsilon)} G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})f(\mathbf{x}^{\prime})d^{3}x^{\prime}\\ &=&\lim_{\varepsilon\rightarrow0}\int_{B(\mathbf{x},\varepsilon)} (\triangle-k^{2}) G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})f(\mathbf{x}^{\prime})d^{3}x^{\prime}\\ &=&\lim_{\varepsilon\rightarrow0}\int_{B(\mathbf{x},\varepsilon)} (\triangle-k^{2}) (e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid})f(\mathbf{x}^{\prime})d^{3}x^{\prime}\\ &=&\lim_{\varepsilon\rightarrow0}\int_{B(\mathbf{x},\varepsilon)} (\triangle-k^{2}) (e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid})f(\mathbf{x}^{\prime})d^{3}x^{\prime}\\ &=&\lim_{\varepsilon\rightarrow0}\int_{B(\mathbf{x},\varepsilon)} (\triangle+2ik\nabla) \big(e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})\big)e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}f(\mathbf{x}^{\prime})d^{3}x^{\prime}\\ &=&\lim_{\varepsilon\rightarrow0}\int_{B(\mathbf{x},\varepsilon)} \triangle \big(e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})\big)e^{ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}f(\mathbf{x}^{\prime})d^{3}x^{\prime}\\ 
&=&\lim_{\varepsilon\rightarrow0}e^{ik\mid\mathbf{x}-\mathbf{x}_{\varepsilon}\mid}f(\mathbf{x}_{\varepsilon})\int_{B(\mathbf{x},\varepsilon)} \triangle \big(e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})\big)d^{3}x^{\prime}\\\end{aligned}$$ where $\mathbf{x}_{\varepsilon}\in B(\mathbf{x},\varepsilon)$ using the positivity of $$\triangle \big(e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})\big)=2\frac{1-e^{\frac{x}{\varepsilon-x}}}{4\pi x^{3}}+2\frac{\varepsilon e^{\frac{x}{\varepsilon-x}}}{(x-\varepsilon)^{2}4\pi x^{2}}+\frac{(\varepsilon^{2}+2x\varepsilon)e^{\frac{x}{\varepsilon-x}})}{(x-\varepsilon)^{4}4\pi x}\geq0\;.$$ Hence with Gauss’ theorem and (\[gkprime\]) $$\begin{aligned} \lefteqn{\hspace{-1cm}\lim_{\varepsilon\rightarrow0}(\triangle-k^{2})\int G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})f(\mathbf{x}^{\prime})d^{3}x^{\prime}}\\ &=&f(\mathbf{x})\lim_{\varepsilon\rightarrow0}\int_{B(\mathbf{x},\varepsilon)} \triangle \big(e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})\big)d^{3}x^{\prime}\\ &=&f(\mathbf{x})\lim_{\varepsilon\rightarrow0}\int_{\partial B(\mathbf{x},\varepsilon)} \nabla \big(e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{\varepsilon}(\mathbf{x}-\mathbf{x}^{\prime})\big)\cdot\mathbf{n}d\Omega\\ &=&f(\mathbf{x})\lim_{\varepsilon\rightarrow0}\int_{\partial B(\mathbf{x},\varepsilon)} \nabla \big( e^{-ik\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}G_{k}^{KG}(\mathbf{x}-\mathbf{x}^{\prime})\big)\cdot\mathbf{n}\mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{\prime2}d\Omega \\ &=&f(\mathbf{x})\lim_{\varepsilon\rightarrow0}\int_{\partial B(\mathbf{x},\varepsilon)} \frac{\mathbf{x}-\mathbf{x}^{\prime}}{4\pi \mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{3}}\cdot\mathbf{n}\mid\mathbf{x}-\mathbf{x}^{\prime}\mid^{2}d\Omega=f(\mathbf{x})\end{aligned}$$ and (b) is proved. We show now, that for any $\mathbf{k}\in \mathbb{R}^{3}$ there exists a unique solution $\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})$ of (\[LSE\]). Using the definition of the $\zeta_{\mathbf{k}}^{s}(\mathbf{x})$ (see \[zeta\]) in (\[LSE\]) yields: $$\label{zetalse} \zeta_{\mathbf{k}}^{s}(\mathbf{x})=v_{\mathbf{k}}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}$$ where $$\label{vk} v_{\mathbf{k}}(\mathbf{x}):=-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\varphi_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\;.$$ It suffices to prove, that (\[zetalse\]) has a unique solution for any $\mathbf{k}\in \mathbb{R}^{3}$. For the Schrödinger Greens-function, this has already been proven by Ikebe [@ikebe]. We want to proceed in the same way. Let $\mathcal{B}$ be the Banach space of all continuous functions tending uniformly to zero as $x\rightarrow\infty$. Due to (\[decayofint\]) $v(\mathbf{x})\in\mathcal{B}$. Ikebe uses the Riesz-Schauder theory of completely continuous operators in a Banach space [@riesz]: If T is a completely continuous operator in $\mathcal{B}$, then for any given $g\in\mathcal{B}$ the equation $$\label{zetalse2} f=g+Tf$$ has a unique solution in $\mathcal{B}$ if $\widetilde{f}=T\widetilde{f}$ implies that $\widetilde{f}=0$. 
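As a purely illustrative aside (not part of the proof), the content of this alternative can be seen in a finite-dimensional sketch: discretising an equation of the form $f=g+Tf$ with a compact integral operator $T$ by a Nyström rule reduces it to a linear system, which has a unique solution exactly when the homogeneous system admits only the trivial solution. The kernel and numbers below are arbitrary choices for illustration only:

```python
import numpy as np

# Toy illustration of the Riesz-Schauder (Fredholm) alternative:
# discretise f = g + T f with a compact integral operator T on [0, 1].
n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / n)                                   # quadrature weights

# an arbitrary smooth kernel, scaled so that 1 is not an eigenvalue of T
K = 0.3 * np.exp(-(x[:, None] - x[None, :])**2)
T = K * w[None, :]                                        # Nystroem discretisation of T

g = np.sin(2.0 * np.pi * x)                               # arbitrary right-hand side

# homogeneous equation f = T f has only the trivial solution
# iff (I - T) is invertible, i.e. 1 is not an eigenvalue of T
assert np.min(np.abs(np.linalg.eigvals(np.eye(n) - T))) > 1e-6

f = np.linalg.solve(np.eye(n) - T, g)                     # unique solution of f = g + T f
print(np.max(np.abs(f - (g + T @ f))))                    # residual at machine precision
```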
Defining the integral operator $T$ by: $$Tf(\mathbf{x}):=-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})f(\mathbf{x}^{\prime})d^{3}x^{\prime}$$ and using $v$ for $g$, (\[zetalse2\]) is equivalent to (\[zetalse\]). Note, that this operator is completely continuous by the proof of Lemma \[properties\](a) following a similar argumentation as in [@ikebe] Lemma 4.2. So it is left to show, that the integral equation $$\label{ftilde} \widetilde{f}(\mathbf{x})=-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\widetilde{f}(\mathbf{x}^{\prime})d^{3}x^{\prime}$$ has the unique solution $\widetilde{f}\equiv0$. Obviously $\widetilde{f}\equiv0$ is a solution of (\[ftilde\]). By virtue of (\[decayofint\]) any solution of (\[ftilde\]) has to be of order $x^{-1}$. Furthermore $\widetilde{f}$ satisfies $$\label{kleingordon} (-\Delta-k^{2}+A\hspace{-0.2cm}/)\widetilde{f}=0$$ which can be shown by direct calculation. Following Ikebe, $\widetilde{f}\equiv0$ is the only solution of (\[ftilde\]). $\mathbf{(c)}$ (c)i) follows directly from (\[decayofint\]). For (c)ii) we need to work more. We exemplarily prove (c)ii) for j=1,2. Heuristically deriving (\[zetalse\]) with respect to $k$ will yield $\partial_{k}\zeta$. We denote the function we get by this formal method by $\zeta^{\prime s}_{\mathbf{k}}$. Then $$\label{ableitung} \zeta^{\prime s}_{\mathbf{k}}(\mathbf{x})=\partial_{k}v_{\mathbf{k}}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta^{\prime s}_{\mathbf{k}}(\mathbf{x}^{\prime})d^{3}x^{\prime}\;.$$ We will now show, that this integral equation has a unique solution. We define $$\begin{aligned} \label{dkzeta} p(\mathbf{x})&:=&\partial_{k}v_{\mathbf{k}}(\mathbf{x})+\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\\ \overline{\zeta}_{\mathbf{k}}^{s}(\mathbf{x})&:=&\zeta^{\prime s}_{\mathbf{k}}(\mathbf{x})-p(\mathbf{x})\end{aligned}$$ so $\overline{\zeta}_{\mathbf{k}}^{s}$ satisfies: $$\overline{\zeta}_{\mathbf{k}}^{s}(\mathbf{x})=-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})p(\mathbf{x}^{\prime})d^{3}x^{\prime}-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\overline{\zeta}^{\prime s}_{\mathbf{k}}(\mathbf{x}^{\prime})d^{3}x^{\prime}\;.$$ Since $$v^{\prime}(\mathbf{x}):=-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})v^{\prime}(\mathbf{x}^{\prime})d^{3}x^{\prime}\in\mathcal{B}$$ this integral equation again has a unique solution, so does (\[dkzeta\]). We will now show, that $\zeta^{\prime}=\partial_{k}\zeta$. We define the integral of $\zeta^{\prime}$: $$\label{intzeta} \widetilde{\zeta}^{s}_{k,\vartheta,\varphi}(\mathbf{x}):=\zeta^{s}_{0}(\mathbf{x})+\int_{0}^{k}\zeta^{\prime s}_{k^{\prime},\vartheta,\varphi}(\mathbf{x})dk\mathbf{\prime}\;.$$ Obviously $\partial_{k}\widetilde{\zeta}_{\mathbf{k}}^{s}=\zeta_{\mathbf{k}}^{s\prime}$ and $\widetilde{\zeta}^{s}_{0}=\zeta_{0}^{s}$. 
Using (\[zetalse\]) and (\[dkzeta\]) in (\[intzeta\]) leads to: $$\begin{aligned} \widetilde{\zeta}_{\mathbf{k}}^{s}(\mathbf{x})&=&\zeta^{s}_{0}(\mathbf{x})+\int_{0}^{k}\zeta^{\prime s}_{k^{\prime},\vartheta,\varphi}(\mathbf{x})dk^{\prime} \\&=&v_{0}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{0}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{0}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime} +\int_{0}^{k}\partial_{k^{\prime}}v_{\mathbf{k}^{\prime}}(\mathbf{x})dk^{\prime} \\&&-\int\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k^{\prime}}G_{\mathbf{k}^{\prime}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}^{\prime}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}+\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}^{\prime}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta^{\prime s}_{\mathbf{k}^{\prime}}(\mathbf{x}^{\prime})d^{3}x^{\prime}dk^{\prime} \\&=&v_{\mathbf{k}}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{0}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\widetilde{\zeta}_{0}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime} \\&&-\int\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k^{\prime}}G_{\mathbf{k}^{\prime}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\widetilde{\zeta}_{\mathbf{k}^{\prime}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}+\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}^{\prime}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta^{\prime s}_{\mathbf{k}^{\prime}}(\mathbf{x}^{\prime})d^{3}x^{\prime}dk^{\prime} \\&&-\int\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k^{\prime}}\big(G_{\mathbf{k}^{\prime}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\widetilde{\zeta}_{\mathbf{k}^{\prime}}^{s}(\mathbf{x}^{\prime})\big)d^{3}x^{\prime}dk^{\prime} \\&=&v_{\mathbf{k}}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\widetilde{\zeta}_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\;.\end{aligned}$$ So $\widetilde{\zeta}_{\mathbf{k}}^{s}$ satisfies (\[zetalse\]). As the solution is unique, it follows, that $\widetilde{\zeta}_{\mathbf{k}}^{s}=\zeta_{\mathbf{k}}^{s}$, hence $$\partial_{k}\zeta_{\mathbf{k}}^{s}=\zeta_{\mathbf{k}}^{s\prime}\;.$$ By (\[decayofint\]) $\widetilde{\zeta}_{\mathbf{k}}^{s}$ and $p(\mathbf{x})$ have uniform bound, so $$\sup_{\mathbf{x}\in\mathbb{R}^{3}}\parallel\partial_{k}\zeta^{s}_{\mathbf{k}}(\mathbf{x})\parallel_{s}<\infty\;.$$ For the second derivative we have: $$\partial_{k}^{2}\frac{\zeta^{s}}{x+1}=\partial_{k}\frac{\overline{\zeta}^{s}}{x+1}+\partial_{k}\frac{p}{x+1}\;.$$ The proof of the existence and uniqueness of $\partial_{k}\frac{\overline{\zeta}^{s}}{x}$ is the same as for $\partial_{k}\zeta^{s}$, furthermore $\partial_{k}\overline{\zeta}^{s}$ is bounded uniformly in $\mathbf{x}$. 
For $\partial_{k}\frac{p}{x+1}$ we have: $$\begin{aligned} \label{partialp} \parallel\partial_{k}\frac{p}{x+1}\parallel_{s}&=&\parallel\frac{1}{x+1}\partial_{k}\big(\partial_{k}v_{\mathbf{k}}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\big)\parallel_{s}\nonumber\\ &=&\parallel\frac{1}{x+1}\big(\partial_{k}^{2}v_{\mathbf{k}}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k}^{2}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\nonumber\\ &&-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\partial_{k}\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\big)\parallel_{s}\;.\end{aligned}$$ Note, that $$\begin{aligned} \label{xxxx} \mid\partial_{k}^{2}G_{\mathbf{k}}^{+}(\mathbf{x})\mid=\mid x^{2}S_{\mathbf{k}}^{+}(\mathbf{x})+x\partial_{k}S_{\mathbf{k}}^{+}(\mathbf{x})+\partial_{k}^{2}S_{\mathbf{k}}^{+}(\mathbf{x})\mid\leq M(xk+\frac{k}{x^{2}})\;.\end{aligned}$$ Observing (\[xxxx\]) and (\[kernel\]) $\frac{\partial_{k}^{2}G_{\mathbf{k}}^{+}}{x+1}$ and $\partial_{k}\zeta^{s}_{\mathbf{k}}$ are bounded uniformly in $\mathbf{x}$, we have also, that $$\frac{1}{x+1}\big(\partial_{k}^{2}v_{\mathbf{k}}(\mathbf{x})-\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\partial_{k}\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\big)$$ is uniformly bounded in $\mathbf{x}$. For the other summand we have get: $$\begin{aligned} \lefteqn{\hspace{-1cm}\sup_{x,k\in\mathbb{R}^{3}}\parallel\frac{1}{x+1}\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\partial_{k}^{2}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}}\\ &\leq&\sup_{x\in\mathbb{R}^{3}}\parallel\frac{1}{x+1}\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}\\ &\leq&\sup_{x\in\mathbb{R}^{3}}\parallel\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})\frac{M(\mathbf{x}-\mathbf{x}^{\prime})}{(x+1)(x^{\prime}+1)}(x^{\prime}+1)\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}\\ &\leq&\sup_{x\in\mathbb{R}^{3}}\parallel\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})M(x^{\prime}+1)\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\parallel_{s}<\infty\end{aligned}$$ This proves (c)ii). $\mathbf{(c)iii)}$ The proof of (c)iii) is very similar to the proof of (c)ii). The only difference is, that we get new functions $p(\mathbf{x})$. $$p(\mathbf{x})=k^{\mid\gamma\mid-1}D_{\mathbf{k}}^{\gamma}v_{\mathbf{k}}(\mathbf{x}))+\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})k^{\mid\gamma\mid-1}D^{\gamma}_{\mathbf{k}}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}\;.$$ To have $p(\mathbf{x})$ in $\mathcal{B}$ one only has to assure, that $k^{\mid\gamma\mid-1}D_{\mathbf{k}}^{\gamma}k$ is bounded for $\mid\gamma\mid\leq2$, which follows by direct calculation. $\mathbf{(d)}$ For potentials satisfying Condition A (\[potcond\]) the scattering system $(H,H_{0})$ is asymptotically complete (see [@thaller]), i.e. 
for any scattering state $\psi$ there exists a free outgoing asymptotic $\psi_{\text{out}}$ with: $$\label{asymp} \lim_{t\rightarrow\infty}\parallel\psi(\mathbf{x},t)-\psi_{\text{out}}(\mathbf{x},t)\parallel=0\;.$$ We write this, using the Fourier transform $\widehat{\psi}_{\text{out}}^{s}$ of $\psi_{\text{out}}$: $$\lim_{t\rightarrow\infty}\parallel\psi(\mathbf{x},t)-\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}}\widehat{\psi}_{out,s}(\mathbf{k})\varphi^{s}_{\mathbf{k}}(\mathbf{x},t)d^{3}k\parallel=0\;.$$ We shall show that $$\label{zerfall} \lim_{t\rightarrow\infty}\parallel\int(2\pi)^{-\frac{3}{2}}\widehat{\psi}_{out,s}(\mathbf{k})\left(\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x},t)-\varphi^{s}_{\mathbf{k}}(\mathbf{x},t)\right)d^{3}k\parallel=0\;.$$ With that: $$\begin{aligned} \lefteqn{\hspace{-1cm}\lim_{t\rightarrow\infty}\parallel\psi(\mathbf{x},t)-\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}}\widehat{\psi}_{out,s}(\mathbf{k})\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x},t)d^{3}k\parallel} \\&=&\lim_{t\rightarrow\infty}\parallel e^{-iHt}(\psi(\mathbf{x})-\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}}\widehat{\psi}_{out,s}(\mathbf{k})\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})d^{3}k)\parallel \\&=&\parallel \psi(\mathbf{x})-\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} \widehat{\psi}_{out,s}(\mathbf{k})\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})d^{3}k\parallel=0\;.\end{aligned}$$ which establishes (\[hin\]). For (\[zerfall\]) we consider $$\begin{aligned} \label{s2} \lefteqn{\hspace{-1cm}\int(2\pi)^{-\frac{3}{2}}\widehat{\psi}_{out,s}(\mathbf{k})\left(\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x},t)-\varphi^{s}_{\mathbf{k}}(\mathbf{x},t)\right)d^{3}k\nonumber}\\ &=&\int(2\pi)^{-\frac{3}{2}}e^{iE_{k}t}\widehat{\psi}_{out,s}(\mathbf{k})\zeta_{\mathbf{k}}^{s}(\mathbf{x})d^{3}k \nonumber\\&=&\int(2\pi)^{-\frac{3}{2}}e^{iE_{k}t}\widehat{\psi}_{out,s}(\mathbf{k})v_{\mathbf{k}}^{s}(\mathbf{x})d^{3}k \nonumber\\&&-\int(2\pi)^{-\frac{3}{2}}e^{iE_{k}t}\widehat{\psi}_{out,s}(\mathbf{k})\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime)}G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})\zeta_{\mathbf{k}}^{s}(\mathbf{x}^{\prime})d^{3}x^{\prime}d^{3}k\nonumber\\&=:&\xi_{1}(\mathbf{x})+\xi_{2}(\mathbf{x})\;.\end{aligned}$$ For the k-integration of $\xi_{1}$ we introduce (\[vk\]) and (\[kernel\]) and then use Lemma \[statphas\], setting: $$\mu=t ;\; a=t^{-1}\mid\mathbf{x}-\mathbf{x}^{\prime}\mid ;\; \mathbf{y}=t^{-1}\mathbf{x}^{\prime} ;\; k^{\prime}=k;\; \chi(\mathbf{k}^{\prime})=(2\pi)^{-\frac{3}{2}}\widehat{\psi}_{out,s}(\mathbf{k}^{\prime})\;.$$ Furthermore we recall, that: $$\begin{aligned} \frac{k_{stat}}{\sqrt{k_{stat}^{2}+m^{2}}}+a-y&=&0\\ k_{stat}^{2}&=&(k_{stat}^{2}+m^{2})(y-a)^{2}\\ k_{stat}&=& m(y-a)\sqrt{k_{stat}^{2}+m^{2}}= m\frac{x^{\prime}}{t}\sqrt{k_{stat}^{2}+m^{2}}\;.\end{aligned}$$ For $\xi_{2}$ we set: $$\mu=t ;\; a=t^{-1}\mid\mathbf{x}-\mathbf{x}^{\prime}\mid ;\; \mathbf{y}=0 ;\; k^{\prime}=k;\; \chi(\mathbf{k}^{\prime})=(2\pi)^{-\frac{3}{2}}\zeta_{\mathbf{k}}^{s}(\mathbf{k}^{\prime})\widehat{\psi}_{out,s}(\mathbf{k}^{\prime})\;.$$ Hence by (\[hoerm\]) we obtain for (\[s2\]) that there exists $M<\infty$ uniform in $\mathbf{y}$ and $a$, such that: $$\begin{aligned} \label{summesj} \parallel\xi_{1}(\mathbf{x})+\xi_{2}(\mathbf{x})\parallel_{s}&\leq& Mt^{-\frac{3}{2}}\mid\int A\hspace{-0.2cm}/(\mathbf{x}^{\prime})G_{\mathbf{k}}^{+}(\mathbf{x}-\mathbf{x}^{\prime})(1+x^{\prime})d^{3}x^{\prime}\mid\nonumber\\&=:&Mt^{-\frac{3}{2}}G(\mathbf{x})\;.\end{aligned}$$ The integral $G(\mathbf{x})$ is bounded 
and goes to zero in the limit $x\rightarrow\infty$ (see \[decayofint\]). This we shall use in the following estimate. For (\[zerfall\]) we need to control $$\lim_{t\rightarrow\infty}\parallel\xi_{1}+\xi_{2}\parallel=\lim_{t\rightarrow\infty}\big(\int\parallel\xi_{1}+\xi_{2}\parallel_{s}^{2}d^{3}x\big)^{\frac{1}{2}}\;.$$ We split this integral into three parts, which are time dependent by introducing for all $\varepsilon>0$: $$\begin{aligned} \rho_{\varepsilon}(\mathbf{x})&=&\mathbb{I}_{B(0,\varepsilon t)}\,\,, \text{ the indicator function of the set }B(0,\varepsilon t) \\ \widetilde{\rho}_{\varepsilon}(\mathbf{x})&=&\mathbb{I}_{B(0,t)\backslash B(0,\varepsilon t)}\\ \rho_{\text{out}}(\mathbf{x})&=&\mathbb{I}_{\mathbb{R}^{3}\backslash B(0,t)}\end{aligned}$$ thus splitting our integral into: $$\begin{aligned} \label{tripart} \lim_{t\rightarrow\infty}\int\parallel\xi_{1}+\xi_{2}\parallel_{s}^{2}d^{3}x &=&\lim_{t\rightarrow\infty}\int\rho_{\varepsilon}(\mathbf{x})\parallel\xi_{1}+\xi_{2}\parallel_{s}^{2}d^{3}x \nonumber\\&&+\lim_{t\rightarrow\infty}\int\widetilde{\rho}_{\varepsilon}(\mathbf{x})\parallel\xi_{1}+\xi_{2}\parallel_{s}^{2}d^{3}x \nonumber\\&&+\lim_{t\rightarrow\infty}\int\rho_{\text{out}}(\mathbf{x})\parallel\xi_{1}+\xi_{2}\parallel_{s}^{2}d^{3}x \nonumber\\&=:&I_{1}+I_{2}+I_{3}\;.\end{aligned}$$ The last part of this integral is the part, that lies outside the light cone. For large times, all wavefunctions which are solutions of the free or full Dirac equation will lie inside the light cone. By virtue of (\[spacelike\]): $$\begin{aligned} \lim_{t\rightarrow\infty}\parallel\rho_{\text{out}}(\mathbf{x})\psi_{out,s}(\mathbf{x})\parallel&=&0 \\\lim_{t\rightarrow\infty}\parallel\rho_{\text{out}}(\mathbf{x})\int(2\pi)^{-\frac{3}{2}}\widehat{\psi}_{out,s}(\mathbf{k})\varphi_{\mathbf{k}}^{s}(\mathbf{x})d^{3}k\parallel&=&0\end{aligned}$$ or by (\[asymp\]): $$\lim_{t\rightarrow\infty}\parallel\rho_{\text{out}}(\mathbf{x})\int(2\pi)^{-\frac{3}{2}}\widehat{\psi}_{out,s}(\mathbf{k})\widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x})d^{3}k\parallel=0\;.$$ By (\[s2\]) it follows, that: $$I_{3}=\lim_{t\rightarrow\infty}\parallel\rho_{\text{out}}(\mathbf{x})\int\widehat{\psi}_{out,s}(\mathbf{k})\left(\widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x})-\varphi_{\mathbf{k}}^{s}(\mathbf{x})\right)d^{3}k\parallel=0\;.$$ Now we use (\[summesj\]) on: $$I_{1}\leq M^{2}\lim_{t\rightarrow\infty}(\sup_{x\leq\varepsilon t}\{G(x)\})^{2}t^{-3}\frac{4\pi}{3}(\varepsilon t)^{3}=C\varepsilon^{3}\;.$$ Since $\varepsilon$ is arbitrary, $I_{1}=0$. For $I_{2}$ we have: $$\begin{aligned} I_{2}&=&\lim_{t\rightarrow\infty}\mid\int\widetilde{\rho}_{\varepsilon}(\mathbf{x})\parallel\xi_{1}+\xi_{2}\parallel_{s}^{2}d^{3}x\mid \\&=&\lim_{t\rightarrow\infty}\mid M^{2}\int t^{-3}\widetilde{\rho}_{\varepsilon}(\mathbf{x})G(\mathbf{x})^{2}d^{3}x\mid \\&\leq&\lim_{t\rightarrow\infty}\sup_{x\geq\varepsilon t}\mid G(\mathbf{x})^{2}\mid=0\end{aligned}$$ and (\[hin\]) is proved. We first prove (\[her\]) for wavefunctions, where $\psi_{\text{out}}$ is in $L^{1}\cap L^{2}$. The general result can then be obtained by density arguments. 
Therefore we again use the unitarity of the time propagator: $$\begin{aligned} \int(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi(\mathbf{x})\rangle d^{3}x &=&\lim_{t\rightarrow\infty}\int(2\pi)^{-\frac{3}{2}}e^{iHt}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),e^{-iHt}\psi(\mathbf{x})\rangle d^{3}x\\&=&\lim_{t\rightarrow\infty}e^{iEt}\int(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi(\mathbf{x},t)\rangle d^{3}x\\ &=&\lim_{t\rightarrow\infty}e^{iEt}\int_{B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi(\mathbf{x},t)\rangle d^{3}x \\&&+\lim_{t\rightarrow\infty}e^{iEt}\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi(\mathbf{x},t)\rangle d^{3}x\;.\end{aligned}$$ By asymptotical completeness (\[asymp\]) we obtain therefore $$\begin{aligned} \int(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi(\mathbf{x})\rangle d^{3}x &=&\lim_{t\rightarrow\infty}e^{iEt}\int_{B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},t)\rangle d^{3}x \\&&+\lim_{t\rightarrow\infty}e^{iEt}\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},t)\rangle d^{3}x\;.\end{aligned}$$ By the free scattering into cones theorem, the first integral of the right hand side goes to zero because any freely evolving wavefunction leaves any bounded region. For the second integral we write for all $R>0$: $$\begin{aligned} \int(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi(\mathbf{x})\rangle d^{3}x &=&\lim_{R\rightarrow\infty}\lim_{t\rightarrow\infty}e^{iEt}\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\varphi^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},t)\rangle d^{3}x \\&&+\lim_{R\rightarrow\infty}\lim_{t\rightarrow\infty}e^{iEt}\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\zeta^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},t)\rangle d^{3}x\;.\end{aligned}$$ Using Lemma \[properties\](c)i), the second integral on the right hand side becomes: $$\begin{aligned} \lefteqn{\hspace{-1cm}\mid\lim_{R\rightarrow\infty}\lim_{t\rightarrow\infty}e^{iEt}\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\zeta^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},t)\rangle d^{3}x\mid} \\&\leq&\lim_{R\rightarrow\infty}\frac{M}{R}\parallel\lim_{t\rightarrow\infty}e^{iEt}\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\psi_{\text{out}}(\mathbf{x},t)d^{3}x\parallel \\&=&\lim_{R\rightarrow\infty}\frac{M}{R}\parallel\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\psi_{\text{out}}(\mathbf{x},0)d^{3}x\parallel=0\;.\end{aligned}$$ Therefore: $$\begin{aligned} \lefteqn{\hspace{-1cm}\int(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),\psi(\mathbf{x})\rangle d^{3}x}\\ &=&\lim_{R\rightarrow\infty}\lim_{t\rightarrow\infty}e^{iEt}\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\varphi^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},t)\rangle d^{3}x\\ &=&\lim_{R\rightarrow\infty}\lim_{t\rightarrow\infty}e^{iEt}\int_{\mathbb{R}^{3}\backslash B(0,R)}(2\pi)^{-\frac{3}{2}}\langle\varphi^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},t)\rangle d^{3}x\\ 
&=&\lim_{R\rightarrow\infty}\lim_{t\rightarrow\infty}e^{iEt}\int(2\pi)^{-\frac{3}{2}}\langle\varphi^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},t)\rangle d^{3}x \\ &=&\lim_{R\rightarrow\infty}\int(2\pi)^{-\frac{3}{2}}\langle\varphi^{s}_{\mathbf{k}}(\mathbf{x}),\psi_{\text{out}}(\mathbf{x},0)\rangle d^{3}x =\widehat{\psi}_{out,s}(\mathbf{k})\;.\end{aligned}$$ and (\[her\]) is proved. Proof of Lemma \[equiv\] {#appendix4} ------------------------ First we want to prove “$\Rightarrow$”: Let $\widehat{\psi}_{\text{out}}(\mathbf{k})\in\mathcal{G}$. According to (\[hin\]) we have for any $n\in\mathbb{N}_{0}$: $$\begin{aligned} H^{n}\psi(\mathbf{x})&=&\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} H^{n}\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k \\&=&\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} E_{k}^{n}\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k\;.\end{aligned}$$ Since $\widehat{\psi}_{\text{out}}(\mathbf{k})$ decays faster than any polynom, this term is bounded and in $L^{2}\bigotimes\mathbb{C}^{4}$ for all $n\in\mathbb{N}_{0}$. As the potential $A\hspace{-0.2cm}/\in C^{\infty}$, also $$(H-A\hspace{-0.2cm}/-\beta m)^{n}\psi(\mathbf{x})=\nabla\hspace{-0.25cm}/^{n}\psi(\mathbf{x})$$ is bounded and in $L^{2}\bigotimes\mathbb{C}^{4}$ for all $n\in\mathbb{N}_{0}$. Furthermore we have, using (\[LSE\]) in (\[hin\]): $$\begin{aligned} H^{n}\psi(\mathbf{x})&=&\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} \widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})E_{k}^{n}\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k \\&=&\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} \varphi^{s}_{\mathbf{k}}(\mathbf{x})E_{k}^{n}\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k \\&&-\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} \int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})G^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}}) \widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x^{\prime}})d^{3}x^{\prime}E_{k}^{n}\widehat{\psi}_{out,s}(\mathbf{k})d^{3}k=:I_{1}+I_{2}\;.\end{aligned}$$ $I_{1}$ is the Fourier transform of $E_{k}^{n}\widehat{\psi}_{out,s}(\mathbf{k})$. As $E_{k}^{n}\widehat{\psi}_{out,s}(\mathbf{k})\in\mathcal{G}$, $I_{1}$ lies in $\widehat{G}$. 
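The term $I_{2}$ below is controlled by trading powers of $x$ for derivatives of the integrand through partial integrations under the $k$-integral. The following numerical sketch illustrates this mechanism on a toy oscillatory integral with an arbitrary smooth, compactly supported amplitude (it only illustrates the gain in decay with respect to $x$ and is not the actual integrand):

```python
import numpy as np

# Smooth bump supported in (0, 1): plays the role of a generic amplitude chi(k).
def chi(k):
    out = np.zeros_like(k)
    inside = (k > 0.0) & (k < 1.0)
    out[inside] = np.exp(-1.0 / (k[inside] * (1.0 - k[inside])))
    return out

k = np.linspace(0.0, 1.0, 20001)
dk = k[1] - k[0]

for x in (10.0, 100.0, 1000.0):
    integral = np.sum(np.exp(1j * k * x) * chi(k)) * dk
    # two partial integrations give |integral| <= C / x**2, so x**2 * |integral|
    # stays bounded (here it even decays further, since chi is smooth)
    print(x, abs(integral) * x**2)
```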
Next we write for $I_{2}$: $$\begin{aligned} I_{2} &=&-\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} \int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})e^{ikx+ik(\mid\mathbf{x}-\mathbf{x^{\prime}}\mid-x)}\frac{S^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}})}{\mid\mathbf{x}-\mathbf{x^{\prime}}\mid} \widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x^{\prime}})d^{3}x^{\prime}E_{k}^{n}\widehat{\psi}_{out,s}(\mathbf{k})d^{3}kd\Omega \\&=&-\sum_{s=1}^{2}\int\int_{0}^{\infty}(2\pi)^{-\frac{3}{2}} \int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})e^{ikx}F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})d^{3}x^{\prime}E_{k}^{n}\widehat{\psi}_{out,s}(\mathbf{k})k^{2}dkd\Omega\end{aligned}$$ where $$\label{deff} F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime}):=e^{ik(\mid\mathbf{x}-\mathbf{x^{\prime}}\mid-x)}\frac{S^{+}_{k}(\mathbf{x}-\mathbf{x^{\prime}})}{\mid\mathbf{x}-\mathbf{x^{\prime}}\mid} \widetilde{\varphi}_{\mathbf{k}}^{s}(\mathbf{x^{\prime}})\;.$$ We make now two partial integrations under the k-integral, which is possible by Fubinis theorem: $$\begin{aligned} I_{2}&=&-\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} \int_{0}^{\infty}\int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})e^{ikx}F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})d^{3}x^{\prime}E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})dkd\Omega \\&=&-\sum_{s=1}^{2}\frac{1}{x^{2}}\int(2\pi)^{-\frac{3}{2}} \int_{0}^{\infty}\int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})e^{ikx}\partial_{k}^{2}\big(F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})d^{3}x^{\prime}E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})\big)dkd\Omega \\&=&-\sum_{s=1}^{2}\frac{1}{x^{2}}\int\int_{0}^{\infty}\int(2\pi)^{-\frac{3}{2}} A\hspace{-0.2cm}/(\mathbf{x^{\prime}})e^{ikx}\partial_{k}^{2}F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})\\&&\hspace{3cm}+\;2\partial_{k}F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})\partial_{k}\big(E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})\big)dkd\Omega d^{3}x^{\prime} \\&&-\sum_{s=1}^{2}\frac{1}{x^{2}}\int_{0}^{\infty}\int(2\pi)^{-\frac{3}{2}} \int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})e^{ikx}F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})d^{3}x^{\prime}\partial_{k}^{2}\big(E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})\big)dkd\Omega\\&=:&I_{3}+I_{4}\;.\end{aligned}$$ For $I_{4}$ we can write, using the definition of $F$ (\[deff\]) and (\[LSE\]) $$\begin{aligned} x^{2}I_{4}&=&\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} \widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x})\partial_{k}^{2}\big(E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})\big)\frac{1}{k^{2}}d^{3}k\\&&-\sum_{s=1}^{2}\int(2\pi)^{-\frac{3}{2}} \varphi^{s}_{\mathbf{k}}(\mathbf{x})\partial_{k}^{2}\big(E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})\big)\frac{1}{k^{2}}d^{3}k\;.\end{aligned}$$ As $\widehat{\psi}_{\text{out}}\in\mathcal{G}$, $\partial_{k}^{2}\big(E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})\big)\frac{1}{k^{2}}$ lies in $L^{2}$ and so does $x^{2}\partial_{k}^{n}I_{4}$ for $n\in\mathbb{N}_{0}$. Under the k-integral in $I_{3}$ one more partial integration is possible. 
$$\begin{aligned} I_{3}=-\sum_{s=1}^{2}\frac{1}{x^{3}}\int(2\pi)^{-\frac{3}{2}} \int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})\widetilde{F}(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})d^{3}x\end{aligned}$$ where $$\widetilde{F}(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime}):=\partial_{k}\big(\partial_{k}^{2}F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})+2\partial_{k}F(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})\partial_{k}\big(E_{k}^{n}k^{2}\widehat{\psi}_{out,s}(\mathbf{k})\big)\big)\;.$$ Due to Lemma \[properties\] (c) $\partial_{k}^{n}\widetilde{\varphi}_{\mathbf{k}}(\mathbf{x}^{\prime)}\leq Mx^{\prime}$. Furthermore we have, that $$\mid\partial_{k}e^{ik(\mid\mathbf{x}-\mathbf{x}^{\prime}\mid-x)}\mid=\mid(\mid\mathbf{x}-\mathbf{x}^{\prime}\mid-x)e^{ik(\mid\mathbf{x}-\mathbf{x}^{\prime}\mid-x)}\mid\leq x^{\prime}\mid e^{ik(\mid\mathbf{x}-\mathbf{x}^{\prime}\mid-x)}\mid\;.$$ It follows, that (remember the definition of $F$ (\[deff\])) $$\parallel \widetilde{F}(\mathbf{x},\mathbf{x})\parallel_{s}\leq M_{2}\frac{x^{\prime3}}{\mid\mathbf{x}-\mathbf{x}^{\prime}\mid}\;.$$ So due to (\[decayofint\]), with Condition B (\[potcond2\]) on the potential, the integral $$\int A\hspace{-0.2cm}/(\mathbf{x^{\prime}})\widetilde{F}(\mathbf{k},\mathbf{x},\mathbf{x}^{\prime})d^{3}x^{\prime}$$ decays as fast as or faster than $x^{-1}$, so $x^{4}I_{3}$ is bounded. It follows, that $x^{2}I_{3}$ lies in $L^{2}$ for $n\in\mathbb{N}_{0}$. The proof, that $x\partial_{x}^{n}\psi\in L^{2}$ is similar as above, just with one partial integration less. It follows, that $\psi\in\widehat{\mathcal{G}}$. It is left to prove “$\Leftarrow$”: By Lemma \[properties\](b) it follows, that $$\begin{aligned} E_{k}\widehat{\psi}_{out,s}(\mathbf{k})&=&H\widehat{\psi}_{out,s}(\mathbf{k})=\int(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),H\psi(\mathbf{x})\rangle d^{3}x \\&=&\int(2\pi)^{-\frac{3}{2}}\langle\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),(H_{0}+A\hspace{-0.2cm}/)\psi(\mathbf{x})\rangle d^{3}x\;.\end{aligned}$$ For $\psi\in\mathcal{\widehat{G}}$, the right hand side is integrable, so $E_{k}\widehat{\psi}_{out,s}(\mathbf{k})$ is bounded. As $A\hspace{-0.2cm}/\in C^{\infty}$, this can be repeated, so $E_{k}^{n}\widehat{\psi}_{out,s}(\mathbf{k})$ is bounded for any $n\in\mathbb{N}$. Since $E_{k}=\sqrt{k^{2}+m^{2}}\geq k$, it follows, that $$k^{n}\widehat{\psi}_{out,s}(\mathbf{k})<\infty\;.$$ Equivalently we get: $$\begin{aligned} E_{k}^{n}\partial_{k}^{j}\widehat{\psi}_{out,s}(\mathbf{k})&=&\int(2\pi)^{-\frac{3}{2}}\langle\partial_{k}^{j}\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),H^{n}\psi(\mathbf{x})\rangle d^{3}x\\ E_{k}^{n}k^{\mid\gamma\mid-1}D_{\mathbf{k}}^{\gamma}\widehat{\psi}_{out,s}(\mathbf{k})&=&\int(2\pi)^{-\frac{3}{2}}\langle k^{\mid\gamma\mid-1}D_{\mathbf{k}}^{\gamma}\widetilde{\varphi}^{s}_{\mathbf{k}}(\mathbf{x}),H^{n}\psi(\mathbf{x})\rangle d^{3}x\end{aligned}$$ With (c) of Lemma \[properties\] it follows, that for $\psi\in\widehat{\mathcal{G}}$ these terms are bounded for $j=1,2$, $n\in\mathbb{N}_{0}$ and $\mid\gamma\mid\leq2$. So $\widehat{\psi}_{out,s}(\mathbf{k})\in\mathcal{G}$. Combes, J.-M., Newton, R.G. and Shtokhamer, R: Scattering into cones and flux across surfaces, Phys. Rev. D [**11**]{}, 366-372 (1975). M. Daumer, D.  Dürr, S. Goldstein, N.  Zanghì: [*On the quantum probability flux through surfaces*]{}, J.Stat. Phys. [**88**]{}, 967–977 (1997). D. Dürr, S. Goldstein, S. Teufel, N. 
Zanghi: [*Scattering theory from microscopic first principles*]{}, Physica A [**279**]{}, 416-431 (2000). Thaller, B.:The Dirac equation, Springer Verlag, Berlin (1992). S. Teufel, D.  Dürr, K. Münch-Berndl: [*The flux-across-surfaces theorem for wavefunctions without energy cutoffs*]{}, J. Math. Phys. [**40**]{}, 1901–1922 (1999). Amrein, W.O., Zuleta, J.L.:Flux and scattering into cones in potential scattering, Helv. Phys. Acta [**70**]{}, 1-15 (1997). W. O. Amrein, D. B. Pearson: [*Flux and scattering into cones for long range and singular potentials*]{}, Journal of Physics A [**30**]{}, 5361–5379 (1997). S. Teufel: [*The flux-across-surfaces theorem and its implications for scattering theory*]{}, Dissertation an der Ludwig-Maximilians-Universität München (1999). G. Panati, A. Teta: [*The flux-across-surfaces theorem for a point interaction Hamiltonian*]{}, in “Stochastic Processes, Physics and Geometry: New Interplays I”, eds. Holden et al., Conference Proceedings of the Canadian Mathematical Society (2001). G. F. Dell’Antonio, G.  Panati: [*Zero-energy resonances and the flux-across surfaces theorem*]{}, mp\_arc preprint 01-402 (2001). Daumer, M., Dürr,D., Goldstein, S. and Zanghi, N.: On the flux-across-surfaces theorem , Letters in Mathematical Physics [**38**]{}, 103-116 (1996). Dollard, J.D.: Scattering into cones I, potential scattering, Comm. Math. Phys. [**12**]{}, 193-203 (1969). Haag, R: Local Quantum Physics, Springer Verlag, Berlin (1992) Hörmander, L: The Analysis of Linear Partial Differential Operatoers I, Springer Verlag, Berlin (1983) Ikebe, T: Eigenfunction Expansions Associated with the Schroedinger Operators and their Applications to Scattering Theory, Arch. Rat. Mech. Anal. [**5**]{}, 1-34 (1960). Riesz, F., von Sz.-Nagy, B.: Functional Analysis. New York: F. Ungar. Publ. Co. (1955). [^1]: We only parameterize the time-like region, as for big time-scales the main part of our wavefunction will be in this region. [^2]: The spinors here are not normalized!
--- abstract: 'We study the detectability of the lightest CP-odd Higgs boson of the NMSSM, $a_1$, at the LHC through its production in association with a bottom-quark pair followed by the $a_1\to b\bar b$ decay. It is shown that, for large $\tan\beta$ and very high luminosity of the LHC, there exist regions of the NMSSM parameter space that can be exploited to detect the $a_1$ through this channel. This signature is a characteristic feature of the NMSSM in comparison to the MSSM, as $a_1$ masses involved are well below those allowed in the MSSM for the corresponding CP-odd Higgs state.' --- preprint SHEP-11-12\ [**Very Light CP-odd Higgs bosons of the NMSSM at the LHC in $4b$-quark final states**]{}\ \ Introduction ============ Supersymmetry (SUSY) is one of the preferred candidates for physics beyond the Standard Model (SM). The simplest version of SUSY is the Minimal Supersymmetric Standard Model (MSSM). This realisation however suffers from two critical flaws: the $\mu$-problem and the little hierarchy problem. The former one results from the fact that the superpotential has a dimensional parameter, $\mu$ (the so-called ‘Higgs(ino) mass parameter’), that, due to SUSY, its natural values would be either 0 or the Plank mass scale; yet, phenomenologically, in order to achieve Electro-Weak Symmetry Breaking (EWSB), it is required to take values of order of the EW scale, 100 GeV, or, possibly, up to the TeV range. The latter one emerges from LEP, which failed to detect a light CP-even Higgs boson, thereby imposing severe constraints on a SM-like Higgs boson mass, $114$ GeV being its lower limit from data, thereby requiring unnaturally large higher order corrections from both the SM and SUSY particle spectrum (chiefly, from third generation quarks and squarks) in order to pass such experimental constraints (recall that at tree level the lightest CP-even Higgs boson mass of the MSSM is less than $M_Z$). The simplest SUSY realisation beyond the MSSM which can solve these two problems at once is the Next-to-Minimal Supersymmetric Standard Model (NMSSM) [@review]. This model includes a singlet superfield in addition to the two MSSM-type Higgs doublets, giving rise to seven Higgs bosons: three CP-even Higgses $h_{1, 2, 3}$ ($m_{h_1} < m_{h_2} < m_{h_3}$), two CP-odd Higgses $a_{1, 2}$ ($m_{a_1} < m_{a_2} $) and a pair of charged Higgses $h^{\pm}$. When the scalar component of the singlet superfield acquires a Vacuum Expectation Value (VEV), an ‘effective’ $\mu$-term, $\mu_{\rm eff}$, will be automatically generated and can rather naturally have values of order of the EW/TeV scale, as required. Moreover, the NMSSM can solve the little hierarchy problem as well, or at least alleviate it greatly, in two ways: firstly, a SM-like Higgs boson can unconventionally decay into two $a_{1}$’s with $m_{a_1}<2m_b$ [@excess], thus avoiding current Higgs bounds (yet this mass region is highly constrained by ALEPH [@Schael:2010aw] and BaBar [@Aubert:2009cka] data); secondly, a CP-even Higgs ($h_1$ or $h_2$) has naturally reduced couplings to the $Z$ boson due to the additional Higgs singlet field of the NMSSM and the ensuing mixing with the Higgs doublets. One of the primary goals of present and future colliders is looking for Higgs bosons. In regard to the Higgs sector of the NMSSM, there has been some work dedicated to explore the detectability of at least one Higgs boson at the Large Hadron Collider (LHC) and the Tevatron. 
In particular, some efforts have been made to extend the so-called ‘no-lose theorem’ of the MSSM – stating that at least one Higgs boson of the MSSM should be found at the LHC via the usual SM-like production and decay channels throughout the entire MSSM parameter space [@NoLoseMSSM] – to the case of the NMSSM [@NMSSM-Points; @NoLoseNMSSM1; @Shobig2]. By assuming that Higgs-to-Higgs decays are not allowed, it was realised that at least one Higgs boson of the NMSSM will be discovered at the LHC. However, this theorem could be violated if Higgs-to-SUSY particle decays are kinematically allowed (e.g., into neutralino pairs, yielding invisible Higgs signals) [@Cyril; @NMSSM-Benchmarks]. So far, there is no conclusive evidence that the ‘no-lose theorem’ can be confirmed in the context of the NMSSM. In order to establish the theorem for the NMSSM, Higgs-to-Higgs decay should be taken into account, in particular $h_1\to a_1a_1$. Such a decay can in fact be dominant in large regions of NMSSM parameter space, for instance, for small $A_k$ [@Almarashi:2010jm], and may not give Higgs signals with sufficient significance at the LHC. Besides, there have also been some attempts to distinguish the NMSSM Higgs sector from the MSSM one, by affirming a so-called ‘more-to-gain theorem’ [@Shobig1; @Erice; @CPNSH; @Almarashi:2010jm; @Almarashi:2011hj]. That is, to assess whether there exist some areas of the NMSSM parameter space where more and/or different Higgs bosons can be discovered at the LHC compared with what is expected from the MSSM. Some comparisons between NMSSM and MSSM phenomenology, specifically in the Higgs sectors of the two SUSY realisations, can be found in [@Mahmoudi:2010xp]. In this analysis, we explore the two theorems at once through studying the direct production of a very light $a_1$ (with $m_{a_1}<M_Z$) in association with $b$-quark pairs followed by the $a_1\to b\bar b$ decay, hence a $4b$-quark final state, at the LHC. This channel has a very large cross section at large $\tan\beta$ yet, being a totally hadronic signal, is plagued by very large (both reducible and irreducible) backgrounds. This work is complementary to the one carried in [@Almarashi:2010jm; @Almarashi:2011hj], in which we explored the $\tau^+\tau^-$, $\gamma\gamma$ and $\mu^+\mu^-$ decay modes of such a light $a_1$ state (again, produced in association with $b$-quark pairs). This paper is organised as follows: in Sec. 2, we describe the parameter space scan performed and give inclusive event rates for the signal. In Sec. 3, we analyse signal and QCD backgrounds for some benchmark points. Finally, we summarise and conclude in Sect. 4.\ Parameter Scan and Inclusive Signal Rates {#sect:rates} ========================================= In our exploration of the Higgs sector of the NMSSM, we used here the fortran package NMSSMTools developed in Refs. [@NMHDECAY; @NMSSMTools]. This package computes the masses, couplings and decay rates (widths and Branching Ratios (BRs)) of all the Higgs bosons of the NMSSM in terms of its parameters at the EW scale. NMSSMTools also takes into account theoretical as well as experimental constraints from negative Higgs searches at LEP [@LEP] and the Tevatron[^1], along with the unconventional channels relevant for the NMSSM. The features of the scan performed have been already discussed in Refs. [@Almarashi:2010jm; @Almarashi:2011hj], to which we refer the reader for details. 
We map the NMSSM parameter space in terms of six independent input quantities: the Yukawa couplings $\lambda$ and $\kappa$, the soft trilinear terms $A_\lambda$ and $A_\kappa$, plus tan$\beta$ (the ratio of the VEVs of the two Higgs doublets) and $\mu_{\rm eff} = \lambda\langle S\rangle$ (where $\langle S\rangle$ is the VEV of the Higgs singlet). For successful data points generated in the scan, i.e., those that pass both theoretical and experimental constraints, we used CalcHEP [@CalcHEP] to determine the cross-sections for NMSSM Higgs production[^2]. As the SUSY mass scales have been arbitrarily set well above the EW one (see Refs. [@Almarashi:2010jm; @Almarashi:2011hj]), the NMSSM Higgs production modes exploitable in simulations at the LHC are those involving couplings to heavy ordinary matter only. Amongst the production channels onset by the latter, we focus here on $ gg,q\bar q\to b\bar b~{a_1}, $ i.e., Higgs production in association with a $b$-quark pair. This production mode is the dominant one at large $\tan\beta$. We chose $gg,q\bar q\to b\bar ba_1\to b\bar b b\bar b$ also because $b$-tagging can be exploited to trigger on the signal and enable us to require four displaced vertices in order to reject light jets. The ensuing $4b$ signature (in which we do not enforce a charge measurement) has already been exploited to detect neutral Higgs bosons of the MSSM at the LHC and proved useful, provided $\tan\beta$ is large and the collider has good efficiency and purity in tagging $b$-quark jets, albeit for the case of rather heavy Higgs states (with masses beyond $M_Z$, typically) [@Dai:1994vu; @Dai:1996rn]. As an initial step of the analysis, we have computed the fully inclusive signal production cross-section times the decay BR against each of the six parameters of the NMSSM. Figs. 1 and 2 present the results of our scan, the first series of plots (in Fig. 1) illustrating the distribution of event rates over the six NMSSM parameters plus as a function of the BR and of $m_{a_1}$. The plots in Fig. 2 display instead the correlations between the $a_1\to b\bar b$ decay rate versus the $a_1$ mass and the $a_1\to\gamma\gamma$ decay rate. It is clear from Fig. 1 that, for our parameter space, the large $\tan\beta$ and small $\mu_{\rm eff}$ (and, to some extent, also small $\lambda$) region is the one most compatible with current theoretical and experimental constraints, while the distributions in $\kappa$, $A_\lambda$ and $A_\kappa$ are rather uniform (top six panes in Fig. 1). From a close look at the bottom-left pane of Fig. 1, it is further clear that the BR$(a_1\to b\bar b)$ is dominant for most points in the parameter space, about 90% and above. In addition, by looking at the the bottom-right pane of Fig. 1, it is remarkable to notice that the event rates are sizable in most regions of parameter space, topping the $10^7$ fb level for small values of $m_{a_1}$ and are decreasing rapidly with increasing $m_{a_1}$. However, there are some points in parameter space, with $m_{a_1}$ between 40 to 120 GeV, as shown in the left pane of Fig. 2, in which the BR$(a_1\to b\bar b)$ is reduced due to the enhancement of the BR($a_1\to \gamma\gamma$) (see right pane of the same figure), phenomenon peculiar to the NMSSM and which depends upon the amount of Higgs singlet-doublet mixing, see [@Almarashi:2010jm]. Signal-to-Background Analysis {#sect:S2B} ============================= We perform here a partonic signal-to-background ($S/B$) analysis, based on CalcHEP results. 
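Before listing the cuts and efficiencies, it is useful to keep in mind how the per-bin significances shown in Fig. 8 scale with the integrated luminosity and with the tagging factors adopted below. The following minimal sketch illustrates that scaling; the post-cut cross-sections entered here are round placeholder values chosen for illustration only, not the CalcHEP results behind Figs. 3-8:

```python
import math

# Placeholder post-cut cross-sections (fb) in a di-jet mass window around m_a1.
# These are round numbers for illustration only, NOT the CalcHEP values used in
# Figs. 3-8; they are chosen so that the qualitative picture (irreducible bbbb
# dominating, detection requiring hundreds of fb^-1) is reproduced.
sigma_signal = 200.0     # gg,qq -> bb a1, a1 -> bb
sigma_bbbb   = 2.0e4     # irreducible b bbar b bbar background
sigma_bbgg   = 5.0e6     # reducible b bbar g g background
sigma_bbcc   = 1.0e5     # reducible background with mis-tagged light/charm jets

eps_b, eps_g, eps_q = 0.5, 0.01, 0.02   # (mis-)tagging probabilities quoted in the text

for lumi in (30.0, 300.0, 1000.0):       # fb^-1: early LHC, design LHC, SLHC
    S = sigma_signal * lumi * eps_b**4                    # four true b-tags
    B = (sigma_bbbb * lumi * eps_b**4                     # four true b-tags
         + sigma_bbgg * lumi * eps_b**2 * eps_g**2        # two b-tags + two gluon mis-tags
         + sigma_bbcc * lumi * eps_b**2 * eps_q**2)       # two b-tags + two light-jet mis-tags
    print(f"L = {lumi:6.0f} fb^-1   S = {S:8.0f}   B = {B:9.0f}   S/sqrt(B) = {S/math.sqrt(B):4.1f}")
```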
We assume $\sqrt s=14$ TeV throughout for the LHC energy. We apply the following cuts in our calculations: $$\Delta R (j,j)> 0.4,\qquad \arrowvert\eta(j)\arrowvert<2.5,\qquad P_{T}(j)>15~{\rm{GeV}}. \label{cuts:BB}$$ Here, we assume that the $b$-tagging probability of a $b$-quark is 50% while the mis-tagging probability of a gluon is 1% and the one for a light quark is 2%. Figs. 3-7 show the distributions of the two-$b$-jet (di-jet) invariant mass after multiplying the production times decay rates (after cuts) by the aforementioned efficiency/rejection factors, i.e., the true $b$-tagging and mis-tagging probabilities. It is clear that the largest background is the irreducible one, $b\bar bb\bar b$, which is one order of magnitude larger than the reducible background $b\bar bgg$. Further, the other reducible background, involving light quarks, labelled here as $b\bar bc\bar c$, is negligible compared to the other two. Notice that all these backgrounds reach their maximum in invariant mass at around 40 GeV. Our plots have a bin width of 1 GeV and account for all combinatorial effects (as appropriate in the absence of jet-charge determination).

To claim discovery of the lightest CP-odd Higgs, $a_1$, at the LHC, we plotted in Fig. 8 both the signal significances (left panes) and the corresponding signal event rates (right panes) as functions of the collected luminosity, after integrating over 10 and 5 bins (thus mimicking a more optimistic and a more conservative detector di-jet mass resolution, respectively). From this figure, there is a small window to detect $a_1$ with masses between 35 and 50 GeV at a very large luminosity, $L=300$ fb$^{-1}$, which could well pertain to the final number of recorded events of the LHC at design luminosity. Further, for an upgraded LHC with a tenfold increase in design luminosity, known as the Super-LHC (SLHC), which could well lead to data samples with $L=1000$ fb$^{-1}$, there is clear potential to detect an $a_1$ with mass up to $80$ GeV or so. Naturally, this is the more probable the better the experimental di-jet mass resolution. Finally, notice in Figs. 9-10 the distributions in pseudorapidity-azimuth (the standard cone measure) separation of di-jet pairs and in the average transverse momentum of the jets, respectively (in the latter case we also distinguish, in the reducible backgrounds, between $b$-jets and non-$b$-jets, by exploiting our knowledge of the Monte Carlo ‘truth’). Guided by Refs. [@Dai:1994vu; @Dai:1996rn], this has been done to check whether further cuts (in addition to those in eq. (\[cuts:BB\])) could be employed to improve the chances of detecting these particular final states. While some combinations of cuts in these quantities are more efficient for the signal than for the backgrounds, there is no overall gain in the signal significance; furthermore, this comes at the cost of a reduced absolute signal rate. (Results are shown here for just one signal mass value, yet they are typical of the other masses considered too.)

Conclusions {#sect:summa}
===========

The Higgs sector of the NMSSM is phenomenologically richer than that of the MSSM as it has two more (neutral) Higgs states. In this paper, we explored the detectability of the lightest CP-odd Higgs state of the NMSSM, $a_1$, at the LHC and SLHC, through its production in association with a $b\bar b$ pair followed by the $a_1\to b\bar b$ decay.
We have shown that there are some regions of NMSSM parameter space where a rather light $a_1$ state can be discovered at very large integrated luminosity using standard reconstruction techniques [@ATLAS-TDR; @CMS-TDR]. This is true so long as $\tan\beta$ is very large, typically above 30, over an $m_{a_1}$ interval extending from 20 GeV to $80$ GeV, depending on the collider configuration. High $b$-tagging efficiency, down to transverse momenta as low as 15 GeV, is crucial to achieve this, ideally accompanied by di-jet mass resolutions of 10 GeV or better. Firstly, these results (together with those of Refs. [@Almarashi:2010jm; @Almarashi:2011hj]) support the ‘no-lose theorem’ by looking for the (quite possibly resolvable) direct production of a light $a_1$ rather than looking for its production through the decay $h_{1, 2}\to a_1a_1$, whose detectability remains uncertain. Secondly, they enable distinguishing the NMSSM Higgs sector from the MSSM one (thus contributing to enforcing a ‘more-to-gain theorem’), since such light CP-odd Higgs states, below $M_Z$ (which have a large singlet component), are not at all possible in the context of the MSSM.\
In reaching these conclusions we have, first, sampled the entire parameter space of the NMSSM to extract the regions which are interesting phenomenologically and, then, selected five benchmark points in the NMSSM parameter space (see the captions to Figs. 3-7), all taken at large $\tan\beta$, for which we have extracted the detector performances and collider luminosities required for discovery. The encouraging results obtained should help to motivate further and more detailed analyses.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work is supported in part by the NExT Institute. M. M. A. acknowledges a scholarship granted to him by Taibah University (Saudi Arabia).

[99]{} For reviews, see: e.g., U. Ellwanger, C. Hugonie and A. M. Teixeira, Phys. Rept. [**496**]{} (2010) 1 (and references therein); M. Maniatis, Int. J. Mod. Phys. A [**25**]{} (2010) 3505 (and references therein). R. Dermisek and J. F. Gunion, Phys. Rev. D [**76**]{} (2007) 095006. S. Schael [*et al.*]{} \[ALEPH Collaboration\], JHEP [**1005**]{} (2010) 049. B. Aubert [*et al.*]{} \[BABAR Collaboration\], Phys. Rev. Lett. [**103**]{} (2009) 181801. J. Dai, J.F. Gunion and R. Vega, Phys. Lett. B [**315**]{} (1993) 355 and Phys. Lett. B [**345**]{} (1995) 29; J.R. Espinosa and J.F. Gunion, Phys. Rev. Lett. [**82**]{} (1999) 1084. U. Ellwanger, J. F. Gunion and C. Hugonie, JHEP [**0507**]{} (2005) 041; U. Ellwanger, J.F. Gunion, C. Hugonie and S. Moretti, hep-ph/0305109 and hep-ph/0401228. U. Ellwanger, J.F. Gunion and C. Hugonie, hep-ph/0111179; D.J. Miller and S. Moretti, hep-ph/0403137; C. Hugonie and S. Moretti, hep-ph/0110241; A. Belyaev, S. Hesselbach, S. Lehti, S. Moretti, A. Nikitenko and C. H. Shepherd-Themistocleous, arXiv:0805.3505 \[hep-ph\]; J. R. Forshaw, J. F. Gunion, L. Hodgkinson, A. Papaefstathiou and A. D. Pilkington, JHEP [**0804**]{} (2008) 090; A. Belyaev, J. Pivarski, A. Safonov, S. Senkin and A. Tatarinov, Phys. Rev. D [**81**]{} (2010) 075021. S. Moretti, S. Munir and P. Poulose, Phys. Lett. B [**644**]{} (2007) 241. U. Ellwanger and C. Hugonie, Eur. Phys. J. C [**25**]{} (2002) 297. A. Djouadi [*et al.*]{}, JHEP [**0807**]{} (2008) 002. M. Almarashi and S. Moretti, Eur. Phys. J. C [**71**]{} (2011) 1618. S. Moretti and S. Munir, Eur. Phys. J. C [**47**]{} (2006) 791. S.
Munir, talk given at the ‘International School of Subnuclear Physics, 43rd Course’, Erice, Italy, August 29 – Sept. 7, 2005, to be published in the proceedings, preprint SHEP-05-37, October 2005. E. Accomando [*et al.*]{}, arXiv:hep-ph/0608079. M. M. Almarashi and S. Moretti, Phys. Rev. D [**83**]{} (2011) 035023. F. Mahmoudi, J. Rathsman, O. Stal and L. Zeune, Eur. Phys. J. C [**71**]{} (2011) 1608. U. Ellwanger, J.F. Gunion and C. Hugonie, JHEP [**0502**]{} (2005) 066; U. Ellwanger and C. Hugonie, Comput. Phys. Commun. [**175**]{} (2006) 290. See http://www.th.u-psud.fr/NMHDECAY/nmssmtools.html. S. Schael [*et al.*]{}, Eur. Phys. J. C [**47**]{} (2006) 547. A. Pukhov, arXiv:hep-ph/0412191. See http://hep.pa.msu.edu/cteq/public/cteq6.html. J. Dai, J. F. Gunion and R. Vega, Phys. Lett. B [**345**]{} (1995) 29. J. Dai, J. F. Gunion and R. Vega, Phys. Lett. B [**387**]{} (1996) 801. ATLAS Collaboration, arXiv:0901.0512 \[hep-ex\]. CMS Collaboration, J. Phys. G [**34**]{} (2007) 995.

Figure 1 \[fig:scan\] (panels Xsec-lambda.eps, Xsec-kappa.eps, Xsec-tanbeta.eps, Xsec-mueff.eps, Xsec-Alambda.eps, Xsec-Akappa.eps, Xsec-a1BrbB.eps, Xsec-ma1.eps): The rates for $\sigma(gg\to b\bar b {a_1})~{\rm BR}(a_1\to b\bar b)$ as a function of $\lambda$, $\kappa$, $\tan\beta$, $\mu_{\rm eff}$, $A_\lambda$, $A_\kappa$, BR$(a_1\to b\bar b)$ and $m_{a_1}$.

Figure 2 \[fig:mass-branching ratio\] (panels ma1-a1BrbB.eps, a1BAbb-a1BRAA.eps): The BR$(a_1\to b\bar b)$ as a function of the CP-odd Higgs mass $m_{a_1}$ and of the BR$(a_1\to \gamma\gamma)$.

Figure 3 \[fig:Mjj1\] (20Xsection-mjj.eps): The differential cross-section in di-jet invariant mass $m_{jj}$, after the cuts in (\[cuts:BB\]), for signal (with $m_{a_1}$ = 19.98 GeV) and backgrounds, the former obtained for the benchmark point with $\lambda$ = 0.075946278, $\kappa$ = 0.11543578, $\tan\beta$ = 51.507125, $\mu$ = 377.4387, $A_\lambda$ = -579.63592 and $A_\kappa$ = -3.5282881.

Figure 4 \[fig:Mjj2\] (35Xsection-mjj.eps): The differential cross-section in di-jet invariant mass $m_{jj}$, after the cuts in (\[cuts:BB\]), for signal (with $m_{a_1}$ = 35.14 GeV) and backgrounds, the former obtained for the benchmark point with $\lambda$ = 0.091741231, $\kappa$ = 0.51503049, $\tan\beta$ = 38.09842, $\mu$ = 130.56601, $A_\lambda$ = -720.88387 and $A_\kappa$ = -5.3589352.

Figure 5 \[fig:Mjj3\] (46Xsection-mjj.eps): The differential cross-section in di-jet invariant mass $m_{jj}$, after the cuts in (\[cuts:BB\]), for signal (with $m_{a_1}$ = 46.35 GeV) and backgrounds, the former obtained for the benchmark point with $\lambda$ = 0.14088263, $\kappa$ = 0.25219468, $\tan\beta$ = 50.558484, $\mu$ = 317.07532, $A_\lambda$ = -569.60665 and $A_\kappa$ = -8.6099538.

Figure 6 \[fig:Mjj4\] (60Xsection-mjj.eps): The differential cross-section in di-jet invariant mass $m_{jj}$, after the cuts in (\[cuts:BB\]), for signal (with $m_{a_1}$ = 60.51 GeV) and backgrounds, the former obtained for the benchmark point with $\lambda$ = 0.17410656, $\kappa$ = 0.47848034, $\tan\beta$ = 52.385408, $\mu$ = 169.83139, $A_\lambda$ = -455.85097 and $A_\kappa$ = -9.0278415.

Figure 7 \[fig:Mjj5\] (80Xsection-mjj.eps): The differential cross-section in di-jet invariant mass $m_{jj}$, after the cuts in (\[cuts:BB\]), for signal (with $m_{a_1}$ = 80.91 GeV) and backgrounds, the former obtained for the benchmark point with $\lambda$ = 0.10713292, $\kappa$ = 0.13395171, $\tan\beta$ = 44.721569, $\mu$ = 331.43456, $A_\lambda$ = -418.13018 and $A_\kappa$ = -9.7077267.

Figure 8 \[fig:significance\] (panels 10L-significance.eps, 10L-S.eps, 5L-significance.eps, 5L-S.eps): The significances $S/\sqrt B$ (left) and total event rates $S$ (right) of the $gg, q\bar q\to b\bar b a_1\to b\bar bb\bar b$ signal as functions of the integrated luminosity for 10 GeV (top) and 5 GeV (bottom) di-jet mass resolutions.

Figure 9 \[fig:DeltaR\] (panels 4DeltaR.eps, normdeltaR2.eps): The differential cross-section in di-jet (pseudorapidity-azimuth) separation, after the cuts in (\[cuts:BB\]), for signal (with $m_{a_1}$ = 46.35 GeV) and backgrounds, the former obtained for the benchmark point with $\lambda$ = 0.14088263, $\kappa$ = 0.25219468, $\tan\beta$ = 50.558484, $\mu$ = 317.07532, $A_\lambda$ = -569.60665 and $A_\kappa$ = -8.6099538. On the left the absolute normalisations are given while on the right those to 1.

Figure 10 (panels pt.eps, normalisedpt.eps): The differential cross-section in jet transverse momentum, after the cuts in (\[cuts:BB\]), for signal (with $m_{a_1}$ = 46.35 GeV) and backgrounds, the former obtained for the benchmark point with $\lambda$ = 0.14088263, $\kappa$ = 0.25219468, $\tan\beta$ = 50.558484, $\mu$ = 317.07532, $A_\lambda$ = -569.60665 and $A_\kappa$ = -8.6099538. On the left the absolute normalisations are given while on the right those to 1.

[^1]: Speculations of an excess at LEP which could be attributed to NMSSM Higgs bosons are found in [@excess].

[^2]: We adopt herein CTEQ6L [@cteq] as parton distribution functions, with scale $Q=\sqrt{\hat{s}}$, the centre-of-mass energy at parton level, for all processes computed.
--- bibliography: - 'bibliography(3).bib' --- **[Pfaffian groupoids]{}** Pfaffian groupoids, María Amelia Salazar Pinzón Ph.D. Thesis Utrecht University, May 2013 ISBN: 978-90-393-5961-7 **[Pfaffian groupoids]{}** **[Pfaffiaanse groepoïden]{}** (met een samenvatting in het Nederlands) [Proefschrift]{} ter verkrijging van de graad van doctor aan de Universiteit Utrecht\ op gezag van de rector magnificus, prof.dr. G.J. van der Zwaan,\ ingevolge het besluit van het college voor promoties in het openbaar\ te verdedigen op vrijdag 24 mei 2013 des middags te 12.45 uur door [María Amelia Salazar Pinzón]{} geboren op 29 september 1983 te Manizales, Colombia ----------- --------------------- Promotor: Prof.dr. M. Crainic ----------- --------------------- This thesis was accomplished with financial support from the Netherlands Organisation for Scientific Research (NWO) under the Project “Fellowship of Geometry and Quantum Theory” no. 613.000.003 Introduction {#introduction .unnumbered} ============ This thesis is motivated by the study of symmetries of partial differential equations (PDEs for short). Partial differential equations are used to model many different ‘real life’ phenomena, while also describing many interesting geometric structures. As such, they are a worthwhile object of study, which has interested scores of mathematicians since the 19th century. In fact, Lie [@Lie] first studied the symmetries of PDEs in an attempt to attain a better understanding of their geometry. Ever since many mathematicians considered different aspects of the geometric theory of PDEs and developed various techniques to attack problems in this area; amongst others, this thesis has been inspired by ideas in [@Gold1; @GuilleminSternberg:deformation; @KumperaSpencer; @Kuranishi; @SingerSternberg], which are often used and referred to in the chapters below. This begs the question of what we mean by symmetries of PDEs; in general, symmetries play a fundamental role in mathematics and, in particular, in geometry, as they describe qualitative behaviour of the object under consideration. To give an idea of symmetries in geometry, consider a wheel as the object of interest whose center is fixed; a rotation by any angle (either clockwise or counterclockwise) keeps the shape of the wheel: as such it can be thought of as a symmetry. Rotations have the property that they can be [*composed*]{}, in the sense that when we rotate the wheel twice, first by an angle $x$ and then by an angle $y$, the final symmetry is a rotation by the angle $x+y$. Also, not moving the wheel can be thought of as a rotation by the $0$ angle (call it the [*identity*]{} symmetry). Finally, if we rotate the wheel by an angle $x$ and then rotate counterclockwise by the same angle (in other words, rotation by the [*inverse*]{} of $x$), then at the end we get the identity symmetry. This example shows that when we consider symmetries of a geometric object we are interested in extra structure that can be attached to these, which, in this case, is that of a [*Lie group*]{}, i.e. the abstract space of rotations thought of as a circle, which [*acts*]{} on the wheel. It was Lie’s original work that gave rise to the modern notion of Lie groups; these encode the abstract smooth objects that describe symmetries. In fact, Lie worked with a more general notion, which is nowadays known as a [*Lie pseudogroup*]{}, in which not all symmetries necessarily act on the whole geometric object under consideration. 
The presence of [*local*]{} symmetries allows one to study the qualitative behaviour of more general spaces, for instance (systems of) PDEs. One of Lie’s greatest achievements was the discovery that Lie pseudogroups are better understood by considering their infinitesimal data (or linearization), which consists of going from very complicated differential equations to much simpler ones which nonetheless encode much geometric information about the original system. Actually, his work concentrated on a special kind of Lie pseudogroups, those of so-called finite type, which in today’s language correspond to Lie groups. Predictably, their linearizations are nothing but their associated Lie algebras. However, it was Cartan [@Cartan1904; @Cartan1905; @Cartan1937] at the beginning of the 20th century who made further progress on the topic of Lie pseudogroups of infinite type. He discovered that the way to encode the infinitesimal data for Lie pseudogroups of infinite type, in analogy with that of Lie algebras for Lie pseudogroups of finite type, was using differential 1-forms. This approach led him to the discovery of differential forms and the theory of exterior differential systems (EDS for short) [@BC]. With his new interpretation he was dealing with a [*jet bundle*]{} space $X$, together with the so-called Cartan (contact) form $\theta$. The infinitesimal data of the original Lie pseudogroup was encoded in Maurer-Cartan type equations that $\theta$ satisfies. This is the right place to explain the title of this thesis: the reason for the nice interaction between $X$ and $\theta$ lies in their compatibility (i.e. $\theta$ is multiplicative with respect to the structure of $X$), rather than in the fact that $X$ is a jet bundle and $\theta$ is the Cartan form... And this is what a Pfaffian groupoid is: a Lie groupoid together with a multiplicative differential form. Just to give an idea, one of the main theorems (theorem 2) says that Lie’s infinitesimal picture corresponding to Pfaffian groupoids consists of a certain connection-like operator (a Spencer operator), and this correspondence is one to one under the usual assumptions. I would also like to comment on the relation between theorem 1 and Pfaffian groupoids. The main reason for us to allow general forms is the fact that multiplicative $2$-forms are central to Poisson and related geometries. Moreover, while multiplicative $2$-forms with trivial coefficients (!) are well understood, the question of passing from trivial to arbitrary coefficients has been around for a while. Surprisingly enough, even the statement of an integrability theorem for multiplicative forms with non-trivial coefficients was completely unclear. This shows that even the case of trivial coefficients was still hiding phenomena that we did not understand. The fact that our work related to Pfaffian groupoids clarifies this point came as a nice surprise to us and, looking back, we can now say what was missing from the existing picture of multiplicative $2$-forms: Spencer operators.\
Next I will explain the form of this thesis in more detail. As I mentioned above, this thesis is about the study of Lie groupoids endowed with a compatible (multiplicative) differential 1-form. The motivation and scope of the present work is to study the geometry of PDEs using the formalism of Lie groupoids and multiplicative forms; as such, ideas from the two theories have to be introduced and explained from our point of view (which may not be the same as in the literature!)
before new results can be presented. Therefore the thesis can be naturally split into two halves: the first, consisting of chapters 1, 2 and 3, recalls the ideas and methods which are used in the second half, where the majority of original results are presented. It is important to remark that when considering multiplicative structures on Lie groupoids we shall employ two (equivalent) points of view: the one using differential forms and the dual picture with distributions. Chapter 1 provides some preliminaries that are used throughout the thesis, such as jet bundles, Spencer cohomology, Lie groupoids, Lie algebroids, etc. The aim of this chapter is twofold: on the one hand, it introduces concepts which may not be familiar to all readers in a way that eases them into the rest of the thesis, while on the other it provides crucial motivation for this work. In particular, some examples coming from Lie pseudogroups are discussed and a new notion of “generalized pseudogroups” is proposed. As many notions come from the classical theory of Pfaffian systems (where we have a general bundle, instead of a Lie groupoid), it is important to understand their geometry conceptually. Chapters 2 and 3 are devoted to the study of this, starting with the easier-to-handle linear picture of relative connections in Chapter 2, passing to the global description of Chapter 3. While this may not be the most natural order (especially from a historical point of view), we choose it because the linear picture is easier to consider. In Chapter 4, we move to the theory of multiplicative forms on Lie groupoids and their infinitesimal counterparts on Lie algebroids. While this chapter can be read independently from the rest of the thesis, it presents ideas which are crucial to the understanding of Chapters 5 and 6. The main result of this chapter is the integrability theorem for multiplicative $k$-forms with coefficients (cf. Theorem 1 below), which states that under the usual conditions we can recover a given multiplicative $k$-form from its infinitesimal data (namely, a $k$-Spencer operator). Of course, $k=1$ is the relevant case for the original motivation of the thesis. Chapters 5 and 6 are the core of the thesis in terms of original results. In Chapter 6 everything comes together. The multiplicativity condition for Pfaffian groupoids simplifies the theory developed in Chapter 3, and all the notions become “Lie theoretic”. In contrast with Chapter 3, integrability results (cf. Theorem 2 below) can be applied to ensure that a Pfaffian groupoid can be recovered from its infinitesimal data (namely, its Spencer operator). These are discussed in Chapter 5. Of course, as the linear counterpart of Pfaffian groupoids, they are the natural relative connections in the setting of Lie algebroids: they are compatible with the anchor and the Lie bracket. Again, in some sense it is more natural to start with the global picture of Pfaffian groupoids and then pass by linearization to that of Spencer operators, but we chose to start with the easier-to-handle picture of Spencer operators in conformity with our introduction to the standard theory of Pfaffian bundles. Chapter 6 also discusses some other results which stem from this thesis, such as the infinitesimal condition that ensures the Frobenius involutivity of the Pfaffian distribution (cf. Theorem 5 below), and the integration of Jacobi structures to contact groupoids (cf. Corollary 2 below).
### Description of the chapters {#description-of-the-chapters .unnumbered}

This is a more detailed description of each chapter with a short review of the most important notions and results. The following table illustrates the dynamics of the thesis. In the table “Lin” stands for linearization, and “Int” for integration.\

                           [**Infinitesimal**]{}   [**Global**]{}
  ------------------------ ----------------------- -----------------
  [**General**]{}          (Ch. 2)                 (Ch. 3)
  [**Multiplicative**]{}   (Ch. 5)                 (Ch. 6)

We should also keep in mind that $$\text{\bf Multiplicative}\subset \text{\bf General},$$ in the global picture, as in the infinitesimal counterpart.\
This consists of the preliminaries and some motivation for this thesis, while also setting the notation used throughout.\
This chapter discusses relative connections. These are linear operators $$\begin{aligned} \Gamma(F){\longrightarrow}\Omega^1(M,E)\end{aligned}$$ satisfying connection-like properties along a surjective vector bundle map $F\to E$ over a manifold $M$. The example to keep in mind, and which will be of relevance throughout, is the classical Spencer operator $\Gamma(J^1E)\to\Omega^1(M,E)$ on jets, whose role is to detect holonomic sections (cf. [@quillen]). Although relative connections are basically the same thing as linear Pfaffian bundles, I develop the theory in its own right (and leave the explanation of this correspondence for Chapter 3). I study the geometry of relative connections by carrying over standard notions from the geometry of PDEs, such as solutions, Spencer cohomology, the curvature map, prolongations, formal integrability, etc. (cf. [@BC; @Gold1; @Gold4; @Gold3; @Kuranishi1; @quillen; @SingerSternberg; @Spencer]). I pay particular attention to the notion of prolongation: I introduce an abstract notion of prolongation (very natural from the cohomological point of view), and I recover the standard prolongation as the “universal abstract prolongation”.\
This chapter is very similar in spirit to Chapter 2, but this time we deal with non-linear objects, namely Pfaffian bundles. A Pfaffian bundle $$\begin{aligned} \pi:R{\longrightarrow}M,\quad H\subset TR\end{aligned}$$ is a surjective submersion $\pi$ together with a distribution $H$ which is transversal to the $\pi$-fibers, and whose vertical part $H^\pi:=H\cap\ker d\pi$ is Frobenius involutive. Dually, I give an alternative equivalent definition in terms of a point-wise surjective 1-form $\theta$ with values in a vector bundle $E$ over $R$, with the property that the vertical part of its kernel is Frobenius involutive. Of course the connection between the two is the kernel of the 1-form. The latter definition admits a slightly more general theory where one allows 1-forms of non-constant rank (and there are still notions that make sense in this case). With the two dual definitions, the main notion is that of a solution $\sigma\in\Gamma(R)$. These are sections with the property that $$\begin{aligned} d\sigma(TM)\subset H.\end{aligned}$$ These objects appear, for instance, in the study of PDE’s. For example, in [@russians] the authors consider the (infinite dimensional) infinite jet space together with a distribution. Solutions of the PDE are sections whose infinite jet is tangent to such a distribution. I study the geometry of Pfaffian systems using techniques from EDS (cf. [@BC; @Gardner; @Cogliati; @Kamran1; @Ivey]) and PDE’s (cf. [@Gold2; @Gold3; @Stormark]), such as prolongations (cf. [@Kuranishi1]), curvature maps, formal integrability (cf. [@Gold2; @Kakie; @Makeev]), and so on.
I mention the key properties that govern the classical prolongations, which allows me to further develop the abstract notion of prolongation in the case of Pfaffian groupoids. Towards the end of the chapter I study Pfaffian bundles in which $\pi$ is a vector bundle and $H$ (or $\theta$ in the dual picture) is compatible with the linear structure of $R$. These are the so-called linear Pfaffian bundles. They correspond naturally to connections relative to the projection $\theta:R\to E$ (using the identification $T^\pi R|_M\simeq R$), via the formula $$\begin{aligned} D(s)=s^*\theta\end{aligned}$$ for $s\in\Gamma(R)$. With this, I can apply all the theory of Chapter 2, and I actually go through the notions of chapter 2 (such as prolongations, symbol space, curvature maps, integrability...) and the analogous ones for Pfaffian bundles, showing that in this case they coincide. At the end of the chapter I discuss the linearization of Pfaffian bundles (along solutions), which can be described directly and naturally in terms of relative connections.\
This chapter contains the integrability result (cf. Theorem 1 below) for multiplicative $k$-forms on a Lie groupoid ${\mathcal{G}}$ with coefficients in a representation $E$ of ${\mathcal{G}}$. This result describes multiplicative forms of degree $k$ in terms of their infinitesimal data, i.e. $k$-Spencer operators. More precisely, for a representation $E$ of a Lie algebroid $A$ over $M$, a $k$-Spencer operator is a linear operator $$\begin{aligned} D:\Gamma(A){\longrightarrow}\Omega^{k}(M,E)\end{aligned}$$ together with a vector bundle map $l:A\to \wedge^{k-1}T^*M\otimes E$, satisfying some compatibility conditions with the bracket and the anchor. We remark that for Pfaffian groupoids the relevant case is $k =1$, and the resulting notion is that of a Spencer operator (cf. Chapter 5). Any multiplicative form $\theta \in \Omega^k({\mathcal{G}}, t^{\ast}E)$ induces a $k$-Spencer operator $D_{\theta}$ on $A=Lie({\mathcal{G}})$, given by $$\small{ \left\{\begin{aligned} D_{\theta}({\alpha})_x(X_1, \ldots, X_k) &= \frac{{d}}{{d}{\epsilon}}\Big|_{{\epsilon}= 0} \phi^{{\epsilon}}_{{\alpha}}(x)^{-1}\cdot\theta(({d}\phi^{{\epsilon}}_{{\alpha}})_x(X_1), \ldots, ({d}\phi^{{\epsilon}}_{{\alpha}})_x(X_k)), \\ l_{\theta}({\alpha}) &= u^{\ast}(i_{{\alpha}}\theta) .\end{aligned}\right.}$$ where $\phi_{\alpha}^\epsilon:M\to {\mathcal{G}}$ is the flow of (the right-invariant vector field associated to) $\alpha$. If ${\mathcal{G}}$ is $s$-simply connected, then this construction defines a 1-1 correspondence between multiplicative $E$-valued $k$-forms on ${\mathcal{G}}$ and $E$-valued $k$-Spencer operators on $A$. This is more general than what is needed for Pfaffian groupoids (which only require 1-forms). However, this greater generality settles the question of integrability of multiplicative forms with non-trivial coefficients, which, to the best of my knowledge, was not solved before. This also leads to interesting developments in geometric settings other than those arising from Lie pseudogroups, e.g. in understanding the relation between contact and Jacobi manifolds (cf. Chapter 6). Moreover, theorem \[t1\] can be applied to multiplicative forms of arbitrary degree, e.g. to recover results on the integrability of Poisson manifolds.\
Spencer operators are relative connections on a Lie algebroid which are compatible with the algebroid structure. The compatibility conditions are far from obvious, and the main explanation is that they arise as the infinitesimal counterpart of multiplicative 1-forms.
In other words, Spencer operators are the linearization of Pfaffian groupoids along the unit map (actually, Theorem 6 asserts that, under the usual assumptions, the Pfaffian groupoid can be recovered from its Spencer operator). Hence, in this chapter I go through all the notions of chapter 2 and show that they become “Lie theoretic”.\ In this chapter everything is brought together. The theory of Pfaffian bundles becomes simpler when the objects are multiplicative (e.g. the vector bundles involved are pull-backs from the base, etc.) and all the notions (coming from Pfaffian bundles) become “Lie theoretic”. On the infinitesimal side, I get that the Spencer operators are the linearization of Pfaffian groupoids along the unit map; in contrast with the theory of Pfaffian bundles, Pfaffian groupoids, under the usual conditions can be recovered from the associated Spencer operators. More precisely, I have the following result: Let ${\mathcal{G}}\rightrightarrows M$ be a $s$-simply connected Lie groupoid with Lie algebroid $A\to M$. There is a one to one correspondence between 1. multiplicative distributions ${\mathcal{H}}\subset T{\mathcal{G}}$, 2. sub-bundles ${\mathfrak{g}}\subset A$ together with a Spencer operator $D$ on $A$ relative to the projection $A\to A/{\mathfrak{g}}$. Moreover, ${\mathcal{H}}$ is of Pfaffian type if and only if ${\mathfrak{g}}\subset A$ is a Lie subalgebroid. As we shall see the relation between $D$ and ${\mathcal{H}}$ is given by $$\begin{aligned} D_X\alpha=[\tilde X,\alpha^r]|_M{\operatorname{mod}}{\mathcal{H}}^s ,\quad {\mathfrak{g}}={\mathcal{H}}^s|_M,\end{aligned}$$ where $\tilde X\in \Gamma({\mathcal{H}})$ is any vector field which is $s$-projectable to $X$ and extends $u_*(X)$.\ Then I go on with the notion of prolongation. The notions of abstract prolongation for Spencer operators (called compatible Spencer operators) is “integrated” to find the global counterpart of abstract prolongation for Pfaffian groupoids. I again have an integrability result which gives a 1-1 correspondence between (abstract) prolongations of Spencer operators on the Lie algebroid side, and (abstract) prolongations of Pfaffian groupoids on the Lie groupoid side. Explicitly, Let ${\tilde{{\mathcal{G}}}}$ and ${\mathcal{G}}$ be two Lie groupoids over $M$ with ${\tilde{{\mathcal{G}}}}$ $s$-simply connected and ${\mathcal{G}}$ $s$-connected, with Lie algebroids $\tilde A$ and $A$ respectively. Let $\theta\in\Omega^1({\mathcal{G}},t^*E)$ be a point-wise surjective multiplicative form and denote by $(D,l):A\to E$ the Spencer operator associated to $\theta$. There is a 1-1 correspondence between: 1. prolongations $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to ({\mathcal{G}},\theta)$ of $({\mathcal{G}},\theta)$, and 2. Spencer operators $(\tilde D,\tilde l):\tilde A\to A$ compatible with $(D,l):A\to E$. In this correspondence, $\tilde D$ is the associated Lie-Spencer operator of ${\tilde{\theta}}$. Corollary 1 is a slight generalization of the previous theorem which gives a correspondence between towers of prolongations on the groupoid side (Cartan towers), with the ones on the algebroid side (Spencer towers). 
I also show that the compatibility conditions that are required for the notion of abstract prolongation of a Pfaffian groupoid are equivalent to Maurer-Cartan type equations: we consider two Pfaffian groupoids $({\tilde{{\mathcal{G}}}},{\tilde{\theta}})$ and $({\mathcal{G}},\theta)$ over $M$, with the property that ${\tilde{\theta}}$ takes values in the Lie algebroid $A$ of ${\mathcal{G}}$. For the Spencer operator $D$ of $\theta$, we define $$\begin{aligned} \frac{1}{2}\{{\alpha},{\beta}\}_D:=D_{\rho({\alpha})}({\beta})-D_{\rho({\beta})}({\alpha})-l[{\alpha},{\beta}]\end{aligned}$$ for ${\alpha},{\beta}\in\Gamma(A)$. On the other hand, we have the differential with respect to $D$ $$\begin{aligned} d_D{\tilde{\theta}}\in\Omega^2({\tilde{{\mathcal{G}}}},t^*E)\end{aligned}$$ defined by the usual Koszul formula for the pull-back connection $t^*D:\Gamma(A)\to \Omega^1({\tilde{{\mathcal{G}}}},E)$. Let $p:{\tilde{{\mathcal{G}}}}\to{\mathcal{G}}$ be a Lie groupoid map which is also a surjective submersion. If $$p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}}){\longrightarrow}({\mathcal{G}},\theta)$$ is a Lie prolongation of $({\mathcal{G}},\theta)$ then $$d_{D}{\tilde{\theta}}-\frac{1}{2}\{{\tilde{\theta}},{\tilde{\theta}}\}_{D}=0.$$ If ${\tilde{{\mathcal{G}}}}$ is source connected and $Lie(p)={\tilde{\theta}}|_{Lie({\tilde{{\mathcal{G}}}})}$, then the converse also holds. At the end of the thesis we investigate the relation of our results and ideas to Poisson and related geometries. There has been recent interest in understanding multiplicative foliations. Theorem 5 clarifies the infinitesimal conditions that ensure that the Pfaffian distribution ${\mathcal{H}}$ is Frobenius involutive. I consider the associated Spencer operator $D$ relative to the projection $A\to A/{\mathfrak{g}}$, and look at its restriction to ${\mathfrak{g}}$ $$\partial_D:{\mathfrak{g}}{\longrightarrow}{\mathrm{Hom}}(TM,A/{\mathfrak{g}}).$$ One notices that whenever $\partial_D$ vanishes, $D$ descends to an honest connection $\nabla$ on the quotient $A/{\mathfrak{g}}$. A multiplicative distribution ${\mathcal{H}}\subset T{\mathcal{G}}$ is involutive if and only if $\partial_D$ vanishes and the connection $\nabla$ on $A/{\mathfrak{g}}$ is flat. Finally, I have an application to Jacobi structures and contact groupoids. The main contribution is to introduce an appropriate language to deal with these structures using a more conceptual and global approach. First of all, on the Jacobi side we allow for line bundles $L$ which are not necessarily trivial, with a Lie bracket on the space of sections. We define a Lie algebroid structure on $J^1L$ canonically associated to the Jacobi structure. Given a Jacobi structure $(L, \{\cdot, \cdot\})$ over $M$, if the associated Lie algebroid ${J}^1L$ comes from an $s$-simply connected Lie groupoid $\Sigma$, then $\Sigma$ carries a contact hyperfield ${\mathcal{H}}$ making it into a contact groupoid in the wide sense.

Preliminaries {#Preliminaries}
=============

Some notions related to PDEs
----------------------------

### Jets {#sec:jets}

Throughout this thesis we will use the language of jets, which we now briefly recall. As a reference we cite [@russians]. Let $k\geq 0$ be an integer. Recall that, for a function $$f: \mathbb{R}^m {\longrightarrow}\mathbb{R}^r ,$$ ($m, r\geq 1$ integers), the $k$-jet of $f$ at $x\in\mathbb{R}^m$ is encoded by the collection of all partial derivatives up to order $k$ of $f$ at $x$.
More concretely, one says that two such functions $f$ and $g$ have the same $k$-jet at $x$ if $$\partial^{\alpha}_{x}f= \partial^{\alpha}_{x}g$$ for all $m$-multi-indices $\alpha= (\alpha_1, \ldots, \alpha_m)$, with $|\alpha|\leq k$, where $|\alpha|= \alpha_1+ \ldots + \alpha_m$ and where $$\partial^{\alpha}_{x}f= \frac{\partial^{|\alpha|}f}{\partial x_{1}^{\alpha_1} \ldots \partial x_{m}^{\alpha_m}}(x) .$$ This defines an equivalence relation $\sim^{k}_{x}$ on the space of smooth maps $C^{\infty}(\mathbb{R}^m, \mathbb{R}^r)$. A point in the quotient has coordinates $p^{\alpha}_{i}$ with $\alpha$ as above and $i\in \{1, \ldots , r\}$, representing the partial derivatives. More generally, given two manifolds $R$ and $M$ (of dimensions $r$ and $m$ respectively), one obtains an induced equivalence relation $\sim^{k}_{x}$ on the set $C^{\infty}(M, R)$ by representing maps $f\in C^{\infty}(M, R)$ in local coordinates. The space of $k$-jets at $x$ of maps from $M$ to $R$, denoted by $J^k(M, R)_x$, is the resulting quotient; for $f: M \to R$ smooth, we will denote the induced $k$-jet by $j^{k}_{x}f$. Hence, $$J^k(M, R)_x= \{ j^{k}_{x}f: f\in C^{\infty}(M, R)\}.$$

### Jet bundles

Assume now that we have a bundle (by which we mean a surjective submersion) $$\pi: R{\longrightarrow}M .$$ We denote by $\Gamma(R)$ the set of sections of $\pi$, i.e. the set of smooth maps $\sigma: M \to R$ satisfying $\pi\circ \sigma= \textrm{Id}_M$. We also consider the set $\Gamma_{\textrm{loc}}(\pi)$ of local sections $\sigma$ of $\pi$ (each such $\sigma$ is defined over some open subset of $M$, called the domain of $\sigma$ and denoted by $\textrm{Dom}(\sigma)$). For $k\geq 0$ an integer, the space of $k$-jets of sections of $\pi: R\to M$ is defined as $${J}^kR= \{j_{x}^{k}\sigma: x\in M, \ \sigma\in \Gamma_{\textrm{loc}}(\pi), x\in \textrm{Dom}(\sigma) \}.$$ This has a canonical manifold structure (as can be seen by looking at local coordinates), which fibers over $M$; the various jet bundles are related to each other via the obvious projection maps. In other words, we obtain a tower of bundles over $M$ $$\ldots {\longrightarrow}{J}^2R {\longrightarrow}{J}^1R {\longrightarrow}{J}^0R= R.$$ In the limit, one obtains the infinite jet bundle ${J}^{\infty}R$. To keep notation simpler, we will denote all projections between jet bundles by $pr$, and all bundle maps ${J}^kR\to M$ by $\pi$.\

### PDEs and the Cartan forms {#PDEs; the Cartan forms:}

The language of jets is very well suited for a conceptual theory of PDEs. See for example [@Gold2; @russians]. Given a submersion $\pi$ as above, a PDE of order $k$ on $\pi$ is, by definition, a fibered submanifold $$R_k\subset J^kR .$$ A (local) solution of $R_k$ is then any (local) section $\sigma$ of $R$ with the property that $$j^{k}_{x}\sigma\in R_k,\ \ \forall \ x\in \textrm{Dom}(\sigma).$$ In other words, $j^{k}\sigma$, a priori a (local) section of $J^kR$, must be a section of $R_k$. We denote the set of solutions by $\textrm{Sol}(R_k)$. To recognize which sections of $R_k$ are $k$-jets of sections of $R$, one makes use of the so-called Cartan form. For notational ease, let us assume here that $k= 1$ (a more general discussion can be found in example \[cartandist\]). Then the Cartan form is a 1-form $$\theta\in \Omega^1({J}^1R, pr^*T^{\pi}R),$$ where $T^{\pi}R= T^{\textrm{vert}}R$ is the sub-bundle of $TR$ consisting of vectors tangent to the fibers of $\pi$ (a vector bundle over $R$) and where $pr: {J}^1R\to R$ is the projection.
To describe $\theta$, let $p= j^{1}_{x}\sigma\in {J}^1R$ and $X_p\in T_p{J}^1R$; then $\theta(X_p)\in T^{\textrm{vert}}_{\sigma(x)}R$ is given by the expression $$dpr(X_p)- d_x\sigma(d\pi)(X_p)$$ (a priori an element in $T_{\sigma(x)}R$, it is clearly in $\ker d\pi$). The main property of the Cartan form is recalled below. \[nomas\] A section $\xi$ of ${J}^1R\to M$ is of type $j^{1}\sigma$ for some section $\sigma$ of $R$ if and only if $\xi^*(\theta)= 0$. For arbitrary $k$, there is a version of the Cartan form (see example \[cartandist\]), $$\label{eq-higher-Cartan} \theta\in \Omega^1({J}^kR, pr^*T^{\pi}{J}^{k-1} R)$$ and the analogue of lemma \[nomas\] holds true. Hence, for a PDE $R_k\subset J^kR$, one has $$\textrm{Sol}(R_k) \cong \{\xi\in \Gamma(R_k): \xi^{*}\theta = 0\}.$$ Conceptually, this indicates that the main information is contained in $R_k$, viewed as a bundle over $M$ (and forgetting that it sits inside ${J}^kR$), together with the restriction of the Cartan form to $R_k$. In other words, for the study of PDE’s as above it may be enough to consider bundles $S\to M$ endowed with appropriate $1$-forms on them. This will be formalised in chapter \[Pfaffian bundles\].   \[1-when working with jets\] Here is a slightly different description of ${J}^1R$, which we will be using whenever we have to work more explicitly. Since the first jet ${j}^1_x\sigma$ of a section $\sigma$ at $x\in M$ is encoded by $r:= \sigma(x)$ and $d_x\sigma: T_xM\to T_r R$, we see that an element of ${J}^1R \to M$ can be thought of as a splitting $$\begin{aligned} \xi=d_x\sigma:T_xM{\longrightarrow}T_rR\end{aligned}$$ of the map $d_r\pi:T_rR\to T_xM$, with $r$ an element in $R$ and $x=\pi(r)$. Of course, $r$ is encoded by $\xi$, but we will often use the notation $\xi_r$ to indicate that our splitting takes values in $T_rR$. ### Jets of vector bundles; Spencer relative connections: {#sp-dc} Assume now that $$\pi: E{\longrightarrow}M$$ is a vector bundle over $M$. In this case, $J^kE$ is canonically a vector bundle over $M$, with addition determined by $$j^{k}_{x}\alpha+ j^{k}_x\beta= j^{k}_{x}(\alpha + \beta)$$ for $\alpha, \beta\in \Gamma(E)$. \[remark-J-decomposition\]  Again, when $k= 1$, there is a very convenient way of representing sections of the first jet bundle $J^1E$ which will be used throughout this thesis under the name “Spencer decomposition”. More precisely, one has a canonical isomorphism of vector spaces $$\label{J-decomposition} \Gamma({J}^1E)\cong \Gamma(E)\oplus \Omega^1(M, E).$$ This decomposition comes from the short exact sequence of vector bundles $$0{\longrightarrow}\textrm{Hom}(TM, E)\stackrel{i}{{\longrightarrow}} J^1(E) \stackrel{{pr}}{{\longrightarrow}} E{\longrightarrow}0,$$ where ${pr}$ is the canonical projection $j^{1}_{x}s\mapsto s(x)$ and $i$ is determined by $$i({d}f\otimes \alpha)= f{j}^1\alpha- {j}^1(f\alpha).$$ Although this sequence does not have a canonical splitting, at the level of sections it does: $\alpha\mapsto {j}^1\alpha$. This gives the identification (\[J-decomposition\]). In other words, any $\xi\in \Gamma({J}^1E)$ can be written uniquely as $$\xi= j^1\alpha+ i(\omega)$$ with ${\alpha}\in \Gamma(E)$, $\omega\in \Omega^1(M, E)$; we write $\xi= (\alpha, \omega)$. 
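For a simple illustration of this decomposition (a standard example, recorded here only for concreteness), consider the trivial line bundle $E= M\times \mathbb{R}$: in this case ${J}^1E\cong (M\times \mathbb{R})\oplus T^*M$ canonically, so a section of ${J}^1E$ amounts to a pair $(f, \eta)$ with $f\in C^{\infty}(M)$ and $\eta\in \Omega^1(M)$, and it is of the form ${j}^1f$ precisely when $$\eta= {d}f;$$ in the decomposition above, the second component vanishes exactly in this case.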
One should keep in mind, however, that the resulting $C^{\infty}(M)$-module structure becomes $$\label{strange-module-structure} f \cdot (\alpha, \omega)= (f\alpha, f\omega+ {d}f\otimes \alpha)$$ (indeed, $f\cdot i(\omega)= i(f\omega)$, while $f\, {j}^1\alpha= {j}^1(f\alpha)+ i({d}f\otimes \alpha)$). Using the decomposition (\[J-decomposition\]), the projection on the second component induces an operator $$D^{\textrm{clas}}: \Gamma({J}^1E) {\longrightarrow}\Omega^1(M, E)$$ called **Spencer’s relative connection** (or **the classical Spencer operator**). There is an extensive literature on the classical Spencer operator, see for example [@Veloso; @Ngo; @Ngo1; @Spencerflatt; @Gold3; @GoldschmidtSpencer; @Spencer]. As we shall show in this thesis, $D^{\textrm{clas}}$ is the infinitesimal counterpart of the Cartan form. This is already indicated by the fact that $D^{\textrm{clas}}$ has a property resembling that of $\theta$: a section $\xi$ of ${J}^1E$ is the first jet of a section of $E$ if and only if $D^{\textrm{clas}}(\xi)= 0$. Moreover, as for Cartan forms, there are versions of the Spencer operator for higher jets: $$D^{\textrm{clas}}: \Gamma(J^kE) {\longrightarrow}\Omega^1(M, J^{k-1}E).$$

### Tableaux and bundles of tableaux {#digression}

When dealing with PDE’s one often encounters huge jet spaces, especially after the prolongation process. One would like to compare such (prolongation) spaces by looking at smaller, hopefully linear, spaces. This is one way to arrive at the notion of a tableau. Of course, their importance is deeper and they provide the framework for handling the intricate linear algebra behind PDE’s. In this section we recall some of the basic definitions, and we allow a slight generalisation of the notion of tableau (which will be used later on). Our main references for this part are [@BC; @Gold2].\
Let $V$ and $W$ be finite dimensional vector spaces, let $S^kV^*$ be the $k$-th symmetric product of $V^*$, and let $S^kV^*\otimes W$ be the space of $W$-valued symmetric $k$-multilinear maps $\phi:V^k\to W$. A **tableau** on $(V, W)$ is a linear subspace $$\mathfrak{g}\subset V^*\otimes W .$$ More generally, for an integer $k\geq 1$, a [**tableau of order $k$**]{} on $(V, W)$ is a linear subspace $$\begin{aligned} {\mathfrak{g}}_k\subset S^{k}V^*\otimes W.\end{aligned}$$ We will often omit “on $(V, W)$” from the terminology. In general, a tableau of order $k$ can be “prolonged” to tableaux of higher orders. The **first prolongation** of a tableau of order $k$ $${\mathfrak{g}}_k\subset S^{k}V^*\otimes W$$ is the tableau of order $(k+1)$ $${\mathfrak{g}}_{k}^{(1)} :=\{T\in S^{k+1}V^*\otimes W\mid T(v,\cdot,\ldots,\cdot)\in {\mathfrak{g}}_k\text{ for all }v\in V\}.$$ The higher prolongations ${\mathfrak{g}}_k^{(q)}$ are defined inductively by $${\mathfrak{g}}_{k}^{(q)}= ({\mathfrak{g}}_{k}^{(q-1)})^{(1)} \subset S^{k+q}V^*\otimes W.$$ Next, we recall the construction of the formal De Rham operator on a vector space $V$. Start with $$\begin{aligned} \label{fdo} \partial:S^{k}V^*{\longrightarrow}V^*\otimes S^{k-1}V^*\end{aligned}$$ the linear map sending $T\in S^kV^*$ to $$\begin{aligned} \partial(T):V{\longrightarrow}S^{k-1}V^*,\quad v\mapsto T(v,\cdot,\ldots,\cdot).\end{aligned}$$ We extend $\partial$ to a linear map $$\begin{aligned} \partial:\wedge^jV^*\otimes S^kV^*{\longrightarrow}\wedge^{j+1}V^*\otimes S^{k-1}V^*\end{aligned}$$ sending $\omega\otimes T$ to $(-1)^{j}\omega\wedge\partial(T)$.
We form the resulting complex $$\begin{aligned} 0\longrightarrow S^kV^*\xrightarrow{\partial}{}V^*\otimes S^{k-1}V^*\xrightarrow{\partial}{}\wedge^2V^*\otimes S^{k-2}V^*\xrightarrow{\partial}{}\cdots\\ \cdots\xrightarrow{\partial}{}\wedge^{n}V^*\otimes S^{k-n}V^*\longrightarrow 0,\end{aligned}$$ where $S^lV^*=0$ for $l<0$. Of course, $\partial$ is just the restriction of the classical De Rham operator acting on $\Omega^{\bullet}(V)= C^{\infty}(V, \Lambda^{\bullet}V^*)$ to polynomial forms, where we identify $S^{p}V^*$ with the space of polynomial functions on $V$ of degree $p$. For that reason we will call $\partial$ [**formal differentiation**]{}. In the presence of another vector space $W$, we will tensor the sequence above by $W$ and the operator $\partial$ by $\textrm{Id}_W$, keeping the same notation $\partial$. Note that, for a tableau ${\mathfrak{g}}_k\subset S^{k}V^*\otimes W$, its first prolongation can also be described as $${\mathfrak{g}}_{k}^{(1)}= \{ T:V\to {\mathfrak{g}}_k\ \textrm{linear}\mid \partial(Tv)u=\partial(Tu)v\ \forall \ u, v\in V\}.$$ Hence it can be interpreted as a tableau on $(V, {\mathfrak{g}}_k)$. It is not difficult to check that the complex above tensored by $W$ contains the sequence $$\label{Spencer-cohomology}\begin{aligned} 0\longrightarrow \mathfrak{g}_k^{(p)}\xrightarrow{\partial}{}&V^*\otimes\mathfrak{g}_k^{(p-1)}\xrightarrow{\partial}{}\wedge^2V^*\otimes\mathfrak{g}_k^{(p-2)}\xrightarrow{\partial}{}\cdots\\& \cdots\xrightarrow{\partial}{}\wedge^{p}V^*\otimes \mathfrak{g}_k\xrightarrow{\partial}{} \wedge^{p+1}V^*\otimes S^{k-1}V^*\otimes W \end{aligned}$$ as a sub-complex. Given a tableau $\mathfrak{g}_k\subset S^{k}V^*\otimes W$ of order $k$, - The [**Spencer cohomology**]{} of $\mathfrak{g}_k$ is given by the cohomology groups of the sequences (\[Spencer-cohomology\]). More precisely, we denote by $H^{k+p-j,j}(\mathfrak{g}_k)$ (or simply $H^{k+p-j,j}$) the cohomology at $\wedge^jV^*\otimes_R\mathfrak{g}_k^{(p-j)}$. - We say that ${\mathfrak{g}}_k$ is [**involutive**]{} if $H^{p,q}=0$ for all $p\geq k, q\geq 0.$ - We call ${\mathfrak{g}}_k$ [**$r$-acyclic**]{} if $H^{k+p,j}= 0$ for $p\geq 0,0\leq j\leq r$. Spencer cohomology has been studied in the context of PDE’s. See for example the work of Spencer [@Spencer; @Spencerflatt]. Although most of the time we will consider tableaux ${\mathfrak{g}}$ of order $1$, we will need a slight generalization in which the inclusion ${\mathfrak{g}}\hookrightarrow V^* \otimes W$ is replaced by an arbitrary linear map $$\begin{aligned} \varphi:{\mathfrak{g}}{\longrightarrow}V^*\otimes W . \end{aligned}$$ \[1st-prolongation\] The [**first prolongation of ${\mathfrak{g}}$ with respect to $\varphi$**]{} (or simply of $\varphi$) is $$\begin{aligned} {\mathfrak{g}}^{(1)}(\varphi):= \{T:V\to {\mathfrak{g}}\ \textrm{\rm linear}\mid \varphi(Tu)v=\varphi(Tv)u\ \forall\ u,v\in V\}.\end{aligned}$$ Interpreting ${\mathfrak{g}}^{(1)}(\varphi)$ as a tableau on $(V, {\mathfrak{g}})$, one can prolong it repeatedly, giving rise to the higher prolongations $${\mathfrak{g}}^{(k)}(\varphi)\subset S^kV^*\otimes {\mathfrak{g}}.$$ One also has a version of the Spencer sequences (\[Spencer-cohomology\]). For that, we extend $\varphi$ to a linear map $$\begin{aligned} \partial_{\varphi}:\wedge^jV^*\otimes {\mathfrak{g}}{\longrightarrow}\wedge^{j+1}V^*\otimes W \end{aligned}$$ by mapping $\omega\otimes T$ to $(-1)^j\omega\wedge\varphi(T)$. In the next lemma, we use the notation ${\mathfrak{g}}^{(k)}= {\mathfrak{g}}^{(k)}(\varphi)$. 
The Spencer complex of the tableau $\mathfrak{g}^{(1)}$ extends to the complex $$\begin{aligned} \label{extension} \begin{aligned} 0\longrightarrow \mathfrak{g}^{(k)}\xrightarrow{\partial}{}V^*\otimes&\mathfrak{g}^{(k-1)}\xrightarrow{\partial}{}\wedge^2V^*\otimes\mathfrak{g}^{(k-2)}\xrightarrow{\partial}{}\cdots\\ &\xrightarrow{\partial}{}\wedge^{k-1}V^*\otimes\mathfrak{g}^{(1)}\xrightarrow{\partial}{}\wedge^kV^*\otimes \mathfrak{g}\xrightarrow{\partial_{\varphi}}{} \wedge^{k+1}V^*\otimes W, \end{aligned}\end{aligned}$$ i.e. $\partial_{\varphi}\circ\partial=0,$ where $k\geq0$. Notice that the sequence $$\begin{aligned} \begin{aligned} 0{\longrightarrow}{\mathfrak{g}}^{(1)}\xrightarrow{\partial} V^*\otimes {\mathfrak{g}}\xrightarrow{\varphi} \wedge^2V^*\otimes W \end{aligned}\end{aligned}$$ is exact and that for $\omega\in\wedge^jV^*$ and $\phi\in {\mathfrak{g}}^{(1)}$, $$\begin{aligned} \varphi\circ\partial(\omega\otimes \phi)=-\omega\wedge\varphi(\partial (\phi)).\end{aligned}$$ \[exten\] The [**$\varphi$-Spencer cohomology of ${\mathfrak{g}}$**]{} is the cohomology of the sequence (\[extension\]). Next, we extend the previous discussion to vector bundles over a manifold $M$. So, assume now that $V$ and $W$ are smooth vector bundles over $M$. A **bundle of tableaux** on $(V, W)$ (or **tableau bundle** over $M$) is any vector sub-bundle $$\begin{aligned} {\mathfrak{g}}_k\subset S^kV^*\otimes W,\end{aligned}$$ which is not necessarily of constant rank. For a bundle of tableaux the notions of prolongation, involutivity, Spencer cohomology and so on are defined point-wise. Of course the prolongation ${\mathfrak{g}}_k^{(p)}$ may fail to be smooth even if the starting ${\mathfrak{g}}_k$ is. The following is a cohomological criterion for smoothness of the prolongation. We refer to [@Gold2] for the proof. \[smooth\] Assume that ${\mathfrak{g}}_k$ is the kernel of a morphism of smooth vector bundles $\Psi:S^{k}V^*\otimes W\to E$ over $M$. If ${\mathfrak{g}}_k$ is $2$-acyclic and ${\mathfrak{g}}_{k}^{(1)}$ is smooth, then all prolongations ${\mathfrak{g}}_k^{(p)}$ are smooth. For the first prolongation of a vector bundle map $\varphi:{\mathfrak{g}}\to V^*\otimes W$ we have a similar result, namely \[exact\] If ${\mathfrak{g}}^{(1)}:=\mathfrak{g}^{(1)}(\varphi)$ is a vector bundle over $M$ and the sequence $$\begin{aligned} 0\longrightarrow \mathfrak{g}^{(m)}\xrightarrow{\partial}{}V^*\otimes\mathfrak{g}^{(m-1)}\xrightarrow{\partial}{}\wedge^2V^*\otimes\mathfrak{g}^{(m-2)}\xrightarrow{\partial}{}\wedge^3V^*\otimes\mathfrak{g}^{(m-3)}\end{aligned}$$ is exact for all $m\geq2$, then $\mathfrak{g}^{(k)}$ is smooth for $k\geq1$. Here ${\mathfrak{g}}^{(-1)}$ denotes $W$. The proof is an inductive argument and is basically the same as that of lemma 6.5 of [@Gold2]. The exact sequence of vector bundles $$\begin{aligned} V^*\otimes\mathfrak{g}^{(1)}\xrightarrow{\partial}{}\wedge^2V^*\otimes\mathfrak{g}\xrightarrow{\varphi}{}\wedge^3V^*\otimes W\end{aligned}$$ induces the exact sequence $$\begin{aligned} 0\longrightarrow\partial(V^*\otimes\mathfrak{g}^{(1)})\longrightarrow\wedge^2V^*\otimes\mathfrak{g}\xrightarrow{\varphi}{}\wedge^3V^*\otimes W.\end{aligned}$$ This shows that $\partial(V^*\otimes\mathfrak{g}^{(1)})$ is the kernel of a vector bundle map and therefore the function $M\ni x\mapsto\dim\partial(V^*\otimes\mathfrak{g}^{(1)})_x$ is upper semi-continuous.
On the other hand, by definition of $\mathfrak{g}^{(2)}$ (the first prolongation of $\mathfrak{g}^{(1)}$), one has that it is the kernel of the vector bundle map given by the composition $$\begin{aligned} S^2V^*\otimes\mathfrak{g}\xrightarrow{\Delta_{1,1}}{}V^*\otimes V^*\otimes\mathfrak{g}\xrightarrow{id\otimes pr}{}V^*\otimes(V^*\otimes\mathfrak{g})/\mathfrak{g}^{(1)}\end{aligned}$$ which implies, once again, that $M\ni x\mapsto\dim\mathfrak{g}^{(2)}_x$ is upper semi-continuous. Now, taking the Euler-Poincaré characteristic of the exact sequence $$\begin{aligned} 0\longrightarrow \mathfrak{g}^{(2)}\xrightarrow{\partial}{}V^*\otimes\mathfrak{g}^{(1)}\xrightarrow{\partial}{ }\partial(V^*\otimes\mathfrak{g}^{(1)})\xrightarrow{}{}0 \end{aligned}$$ one has that $\dim(\mathfrak{g}^{(2)})+\dim\partial(V^*\otimes\mathfrak{g}^{(1)})$ is locally constant since $\mathfrak{g}^{(1)}$ is a vector bundle, and therefore $\mathfrak{g}^{(2)}$ and $\partial(V^*\otimes\mathfrak{g}^{(1)})$ are vector bundles over $M$. For $l>2$, consider the exact sequence $$\begin{aligned} 0\longrightarrow \mathfrak{g}^{(l)}\xrightarrow{\partial}{}V^*\otimes\mathfrak{g}^{(l-1)}\xrightarrow{\partial}{}\wedge^2V^*\otimes\mathfrak{g}^{(l-2)}\xrightarrow{\partial}{}\wedge^3V^*\otimes\mathfrak{g}^{(l-3)}\end{aligned}$$ which shows by lemma 3.3 in [@Gold2] that $\mathfrak{g}^{(l)}$ is a vector bundle whenever $\mathfrak{g}^{(l-1)}$, $\mathfrak{g}^{(l-2)}$ and $\mathfrak{g}^{(l-3)}$ are vector bundles. Another important result is the so-called $\partial$-Poincaré lemma (proved e.g. in [@quillen]), a version of which is: \[poincare\]Let ${\mathfrak{g}}_k\subset S^{k}V^*\otimes W$ be a bundle of tableaux. There exists an integer $p_0>0$, such that ${\mathfrak{g}}_{k}^{(p)}$ is involutive for all $p\geq p_0$. ### Towers of tableaux Let $V$ and $W$ be two vector spaces. Our main reference for this part is [@SingerSternberg]. A [**tableaux tower**]{} (on $(V, W)$) is a sequence $$\begin{aligned} \label{es} {\mathfrak{g}}^{\infty}= ({\mathfrak{g}}^{1},{\mathfrak{g}}^{2},\ldots,{\mathfrak{g}}^{p},\ldots)\end{aligned}$$ consisting of tableaux ${\mathfrak{g}}^{p}\subset S^{p}V^*\otimes W$ for $p\geq 1$ such that each ${\mathfrak{g}}^{p+1}$ is inside the prolongation of ${\mathfrak{g}}^{p}$: $$\begin{aligned} {\mathfrak{g}}^{p+1}\subset ({\mathfrak{g}}^{p})^{(1)}.\end{aligned}$$ Of course, one may want to consider towers starting with a tableau of order $k\geq 1$ $$({\mathfrak{g}}^k, {\mathfrak{g}}^{k+1}, \ldots ),$$ but any such object can be completed to a tower in the previous sense by adding $${\mathfrak{g}}^i= S^iV^*\otimes W,\ \ \forall \ 1\leq i\leq k-1 .$$ We see that, in particular, any tableau ${\mathfrak{g}}_{k}\subset S^{k}V^*\otimes W$ defines a tower $$(V^*\otimes W, \ldots, S^{k-1}V^* \otimes W, {\mathfrak{g}}_k, {\mathfrak{g}}_{k}^{(1)}, {\mathfrak{g}}_{k}^{(2)}, \ldots )$$ by prolongation. The Spencer cohomology of a tableau can then be viewed as an instance of the Spencer cohomology of a tower.
More precisely, for a tower ${\mathfrak{g}}^{\infty}$, the formal differential $$\begin{aligned} \partial:\wedge^kV^*\otimes ({\mathfrak{g}}^{p})^{(1)}{\longrightarrow}\wedge^{k+1}V^*\otimes {\mathfrak{g}}^{p}\end{aligned}$$ restricts to $$\begin{aligned} \partial:\wedge^kV^*\otimes {\mathfrak{g}}^{p+1}{\longrightarrow}\wedge^{k+1}V^*\otimes {\mathfrak{g}}^{p}\end{aligned}$$ for any $p\geq0$ (where we set ${\mathfrak{g}}^{0}:= W$), and this induces the Spencer complexes of the tower ${\mathfrak{g}}^{\infty}$ given by $$\label{Spencer-cohomology2}\begin{aligned} 0\longrightarrow \mathfrak{g}^{k+p}\xrightarrow{\partial}{}&V^*\otimes\mathfrak{g}^{k+p-1}\xrightarrow{\partial}{}\wedge^2V^*\otimes\mathfrak{g}^{k+p-2}\xrightarrow{\partial}{}\cdots\\& \cdots\xrightarrow{\partial}{}\wedge^{p}V^*\otimes \mathfrak{g}^k\xrightarrow{\partial}{} \wedge^{p+1}V^*\otimes S^{k-1}V^*\otimes W. \end{aligned}$$ The **Spencer cohomology of the tableaux tower ${\mathfrak{g}}^{\infty}$** is the cohomology of the sequences (\[Spencer-cohomology2\]). As before, denote by $H^{k+p-j,j}({\mathfrak{g}}^{\infty})$ the cohomology of the sequence at $\wedge^jV^*\otimes{\mathfrak{g}}^{k+p-j}$. \[lemma:prol\] For any tower of tableaux ${\mathfrak{g}}^{\infty}$, there exists an integer $p_0$ such that, for all $p\geq p_0$ $$\begin{aligned} {\mathfrak{g}}^{p+1}= ({\mathfrak{g}}^{p})^{(1)}.\end{aligned}$$ For the proof we refer to [@SingerSternberg]. Finally, we will also encounter bundles of towers of tableaux. Let $V$ and $W$ be two vector bundles over the manifold $M$. \[tableaux tower\] A [**bundle of tableaux towers over $M$**]{} (on $(V, W)$) is a sequence of smooth vector bundles over $M$ $$\begin{aligned} ({\mathfrak{g}}^1,{\mathfrak{g}}^{2},\ldots,{\mathfrak{g}}^{k},\ldots), \end{aligned}$$ with ${\mathfrak{g}}^k\subset S^{k}V^*\otimes W$ and such that, for all $p\geq 1$ $$\begin{aligned} {\mathfrak{g}}^{p+1}\subset({\mathfrak{g}}^{p})^{(1)}.\end{aligned}$$ Here we state the analogue of lemma \[lemma:prol\] in the smooth case. For the proof see for example [@Gold3]. \[stability\]Let $({\mathfrak{g}}^1,{\mathfrak{g}}^{2},\ldots,{\mathfrak{g}}^{k},\ldots)$ be a bundle of tableaux towers over a connected manifold $M$. Then there exists an integer $p_0$ such that for all $p\geq p_0$ $$\begin{aligned} {\mathfrak{g}}^{p+1}=({\mathfrak{g}}^{p})^{(1)}.\end{aligned}$$ Lie groupoids and Lie algebroids -------------------------------- We refer the reader to [@CrainicFernandes:lecture] for a more complete description of the theory of Lie groupoids and Lie algebroids. ### Lie groupoids {#Lie groupoids} Recall that a **groupoid** ${\mathcal{G}}$ is a (small) category in which every arrow is invertible. That means that we have a set ${\mathcal{G}}$ of arrows and a set $M$ of objects equipped with the following structure maps: 1. the source and target $$s, t: {\mathcal{G}}{\longrightarrow}M,$$ 2. the composition $$m: {\mathcal{G}}_{2}{\longrightarrow}{\mathcal{G}},$$ where ${\mathcal{G}}_2$ is the set of composable arrows $${\mathcal{G}}_2= \{(g, h): g, h\in {\mathcal{G}}, s(g)= t(h) \},$$ 3. the unit $$u: M{\longrightarrow}{\mathcal{G}},$$ 4. 
the inversion $$i: {\mathcal{G}}{\longrightarrow}{\mathcal{G}}.$$ We will identify the groupoid ${\mathcal{G}}$ with its set of arrows and we will say that ${\mathcal{G}}$ is a groupoid over $M$ or that $M$ is the base of the groupoid ${\mathcal{G}}$, using the graphic notation $${\mathcal{G}}{\rightrightarrows}M .$$ The elements of ${\mathcal{G}}$ will be called arrows of the groupoid and will be denoted by letters $g, h, \gamma, $ etc, while the elements of $M$ will be called points of the groupoid and will be denoted by letters $x, y, p, $ etc. An arrow $g$ from $x$ to $y$ is any arrow $g\in {\mathcal{G}}$ with $s(g)= x$, $t(g)= y$; in this case we use the graphic notation $$g: x{\longrightarrow}y, \ \ \textrm{or} \ \ x \stackrel{g}{{\longrightarrow}} y \ \ \textrm{or}\ \ y \stackrel{g}{{\longleftarrow}} x .$$ For the structure maps, we will use the notation $$m(g, h)= g\cdot h= gh, \ \ u(x)= 1_x, \ \ i(g)= g^{-1}.$$ With these, the groupoid axioms take the familiar form that reminds us of composition of functions: 1. if $z \stackrel{g}{{\longleftarrow}} y \stackrel{h}{{\longleftarrow}} x$, then $z \stackrel{gh}{{\longleftarrow}} x$ and the composition is associative. 2. for $x\in M$, $1_x: x{\longrightarrow}x$. 3. for any $g: x{\longrightarrow}y$, $g\cdot 1_x= 1_y\cdot g= g$. 4. if $g: x{\longrightarrow}y$ then $g^{-1}: y{\longrightarrow}x$ and $$g^{-1}\cdot g= 1_x,\ \ g\cdot g^{-1}= 1_y.$$ A **smooth** (or **Lie**) groupoid ${\mathcal{G}}$ over $M$ is any groupoid ${\mathcal{G}}{\rightrightarrows}M$ endowed with smooth (manifold) structures on ${\mathcal{G}}$ and $M$, such that $s, t: {\mathcal{G}}\to M$ are submersions and all the other structure maps $m$, $u$ and $i$ are smooth. Henceforth, all objects in this thesis will be smooth unless otherwise stated. Note that the conditions on $s$ and $t$ imply that the set ${\mathcal{G}}_2$ of composable arrows is a smooth submanifold of ${\mathcal{G}}\times {\mathcal{G}}$; hence it makes sense to say that $m$ is smooth. Note also that, in the smooth case, $i$ is a diffeomorphism and $u$ is an embedding. Actually, we will often identify $M$ with the submanifolds of units via $$u: M\hookrightarrow {\mathcal{G}}.$$ One of the main notions used in this thesis is that of a bisection. \[definition-bisections\] Given a Lie groupoid ${\mathcal{G}}$ over $M$, a **bisection** of ${\mathcal{G}}$ is any splitting $b: M\to {\mathcal{G}}$ of the source map with the property that $\phi_b:= t\circ b: M\to M$ is a diffeomorphism. The bisections of ${\mathcal{G}}$ form a group $\textrm{Bis}({\mathcal{G}})$ with multiplication and inverse given by $$b_1\cdot b_2(x) = b_1(\phi_{b_2}(x))b_2(x),\ b^{-1}(x)=i\circ b\circ\phi_b^{-1}(x).$$ A Lie group is a Lie groupoid over a point. The group of bisections coincides with the group itself.   Bundles of Lie groups over $M$ can be seen as Lie groupoids over $M$ with $s= t$. In particular, any vector bundle $\pi: E\to M$ can be interpreted as a Lie groupoid over $M$ with $s= t:= \pi$ and with composition being the fiberwise vector bundle addition. For any manifold $M$, the pair groupoid of $M$ is $$\Pi(M):= M\times M$$ with source and target $$s(x, y)= y, \ t(x, y)= x$$ and with composition $$(x, y)\cdot (y, z)= (x, z).$$ For the group of bisections, we find that $\textrm{Bis}(\Pi(M))= \textrm{Diff}(M)$. 
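The last identification can be checked directly from the definitions; the following small sketch (in Python/`sympy`, with our own ad hoc encoding of arrows of the pair groupoid as ordered pairs $(t, s)$) implements the product of bisections and confirms that it corresponds to composition of diffeomorphisms.

```python
import sympy as sp

x = sp.symbols('x')

# Pair groupoid of R: arrows (X, x) with t(X, x) = X, s(X, x) = x, and
# composition (X, y)·(y, x) = (X, x).  A bisection is b(x) = (psi(x), x).
def bisection(psi):
    return lambda pt: (psi(pt), pt)

def phi(b):                       # phi_b = t ∘ b
    return lambda pt: b(pt)[0]

def product(b1, b2):              # (b1·b2)(x) = b1(phi_{b2}(x)) · b2(x)
    def b(pt):
        g1, g2 = b1(phi(b2)(pt)), b2(pt)
        assert g1[1] == g2[0]     # composability: s(g1) = t(g2)
        return (g1[0], g2[1])
    return b

psi1 = lambda t_: t_**3 + t_      # two diffeomorphisms of R
psi2 = lambda t_: 2*t_ + 1

lhs = product(bisection(psi1), bisection(psi2))(x)
rhs = bisection(lambda t_: psi1(psi2(t_)))(x)
print(sp.simplify(lhs[0] - rhs[0]), lhs[1] - rhs[1])   # 0 0
```

In other words, the product of bisections reproduces the composition $\psi_1\circ\psi_2$, as the identification $\textrm{Bis}(\Pi(M))= \textrm{Diff}(M)$ asserts.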
\[ex-action-groupoids\]  For any Lie group $G$ acting on a manifold $M$ (say on the left) one forms the action groupoid $G\ltimes M$, whose space of arrows is the product $G\times M$, the base is $M$, the source and target maps are $$s(g, x)= x, \ t(g, x)= gx,$$ and multiplication is given by $$(h, gx)\cdot (g, x)= (hg, x).$$   For any vector field $X$ on a manifold $M$, the domain $\mathcal{D}(X)$ of the flow $\phi_{X}^{\epsilon}$ of $X$, $$\mathcal{D}(X)= \{ (\epsilon, x)\in \mathbb{R}\times M: \ \phi_{X}^{\epsilon} \ \textrm{is\ defined} \}$$ can be seen as a groupoid over $M$ with source and target $$s(\epsilon, x)= x,\ t(\epsilon, x)= \phi_{X}^{\epsilon}(x),$$ and composition $$(\epsilon', \phi_{X}^{\epsilon}(x))\cdot (\epsilon, x)= (\epsilon+ \epsilon', x).$$ Note that, when $X$ is complete, then $\mathcal{D}(X)$ is the action groupoid $\mathbb{R}\ltimes M$ (see the previous example) associated to the global flow of $X$ interpreted as an action of $\mathbb{R}$ on $M$. \[gauge-groupoids\] For any principal $G$-bundle $\pi: P\to M$ the gauge groupoid of $P$, denoted ${\mathcal{G}}\textrm{auge}(P)$, is defined as the quotient of the pair groupoid $\Pi(P)$ modulo the (diagonal) action of $G$. Hence $${\mathcal{G}}\textrm{auge}(P)= (P\times P)/G,$$ is a Lie groupoid over $P/G= M$, with source and target $$s([p, q])= \pi(q),\ t([p, q])= \pi(p).$$ For the bisections, we find that $\textrm{Bis}({\mathcal{G}}\textrm{auge}(P))$ is isomorphic to the automorphism group of $P$. Note that the gauge groupoid is transitive, in the sense that any two points of $M$ are related by an arrow of the groupoid. Conversely, any transitive groupoid ${\mathcal{G}}$ over $M$ must be of this type. Indeed, fixing a base point $x\in M$, it follows that $$G_x:= \{g\in {\mathcal{G}}: s(g)= t(g)= x\}$$ is a Lie group, $$P_x:= s^{-1}(x)= \{g\in {\mathcal{G}}: s(g)= x\}$$ is a principal $G_x$-bundle over $M$ with projection $t$, and $${\mathcal{G}}\textrm{auge}(P_x){\longrightarrow}{\mathcal{G}},\ \ [h, k]\mapsto hk^{-1}$$ is an isomorphism of Lie groupoids. \[GL-groupoids\]  While the general linear group $GL(V)$ associated to a (finite dimensional) vector space $V$ is a Lie group, the similar object $GL(E)$ associated to a vector bundle $\pi: E\to M$ is a Lie groupoid over $M$. More precisely, an arrow of $GL(E)$ between two points $x, y\in M$ is a linear isomorphisms $E_x\stackrel{\sim}{\to} E_y$ and the multiplication is given by the usual composition of maps. There is a canonical smooth structure on $GL(E)$ which makes it into a Lie groupoid. Alternatively, one can realize $GL(E)$ as a gauge groupoid in the sense of the previous example. More precisely, consider the frame bundle $Fr(E)$ associated to $E$ $$Fr(E)= \{ (x, u)\mid x\in M, u: \mathbb{R}^r \to E_x\ \ \textrm{linear\ isomorphism} \},$$ where $r$ is the rank of $\pi$. Then $Fr(E)$ is a principal $GL_r$-bundle and one has a simple isomorphism of Lie groupoids $${\mathcal{G}}\textrm{auge}(Fr(E)) {\longrightarrow}GL(E), \ [(u, v)] \mapsto u\circ v^{-1} .$$ Note that, for an arbitrary Lie groupoid ${\mathcal{G}}$ over $M$, a representation of ${\mathcal{G}}$ on $E$ (see definition \[def:representation\]) is the same thing as a Lie groupoid homomorphism ${\mathcal{G}}\to \textrm{GL}(E)$. There are other important examples arising from foliation theory or from Poisson geometry. For this thesis the most important examples are jet groupoids, whose simplest version is introduced below. 
\[difeo\]Given a manifold $M$, consider the set $\textrm{Diff}_{\textrm{loc}}(M)$ of all diffeomorphisms $\phi: U\to V$ between two open sets $U, V\subset M$; $U$ will be called the domain of $\phi$ and will be denoted by $\textrm{Dom}(\phi)$. For any integer $k\geq 0$, one forms the groupoid of $k$-jets of local diffeomorphisms of $M$: $$J^k(M, M):= \{ j^{k}_{x}\phi: \ x\in M,\ \ \phi\in \textrm{Diff}_{\textrm{loc}}(M), \ x\in \textrm{Dom}(\phi)\},$$ which is a Lie groupoid over $M$ with source and target $$s(j^{k}_{x}\phi)= x, \ \ t(j^{k}_{x}\phi)= \phi(x),$$ and composition $$j^{k}_{\psi(x)}\phi\cdot j^{k}_{x}\psi= j^{k}_{x}(\phi\circ \psi).$$ Note that $J^k(M, M)$ is a transitive Lie groupoid (see example \[gauge-groupoids\]); hence, fixing a base-point $O\in M$, one can consider the Lie group $$G^{k}:= \{j^{k}_{O}\phi: \phi\in \textrm{Diff}_{\textrm{loc}}(M), \ O\in \textrm{Dom}(\phi), \phi(O)= O \},$$ the $k$-th order frame bundle (with origin $O$) $$F^{k}:= \{j^{k}_{O}\phi: \phi\in \textrm{Diff}_{\textrm{loc}}(M), \ O\in \textrm{Dom}(\phi)\}$$ and then $F^k$ is a principal $G^k$-bundle over $M$ whose associated gauge groupoid is precisely $J^k(M, M)$. Note also that, for $k= 0$, $J^0(M, M)$ is just the pair groupoid of $M$. Also, for $k= 1$, $J^1(M, M)$ is just the general linear groupoid $GL(TM)$ (see example \[GL-groupoids\]); equivalently, $G^1$ is isomorphic to $GL_n$ ($n= \textrm{dim}(M)$) and $F^1$ is isomorphic to the frame bundle of $M$. Another notion central to this thesis is that of a representation of a Lie groupoid. Recall that, given a Lie groupoid ${\mathcal{G}}$ over $M$ and a bundle $\mu: P\to M$, an action of ${\mathcal{G}}$ on $P$ (via $\mu$) associates to any arrow $g: x\to y$ of ${\mathcal{G}}$ a map $$g\cdot \ : \mu^{-1}(x){\longrightarrow}\mu^{-1}(y), \ \ p\mapsto g\cdot p= gp$$ such that the usual algebraic axioms for actions are satisfied ($(gh)\cdot p= g\cdot (h\cdot p)$ whenever $g$ and $h$ are composable, and $1_x\cdot p= p$ for $p\in \mu^{-1}(x)$), and such that the action is smooth, i.e. the map $${\mathcal{G}}\times_{s,\mu} P \longrightarrow P,\ \ (g, p)\mapsto g\cdot p$$ defined on the space of pairs $(g, p)$ with $s(g)= \mu(p)$ (a smooth submanifold of ${\mathcal{G}}\times P$) is smooth. \[def:representation\] Let ${\mathcal{G}}$ be a Lie groupoid over $M$. A **representation** of ${\mathcal{G}}$ is a vector bundle $\mu: E \to M$ together with a linear action of ${\mathcal{G}}$ on $E$, i.e. an action with the property that, for each $g: x\to y$, the multiplication $E_x\to E_y, v\mapsto g\cdot v$ is linear. We denote by $ \textrm{\rm Rep}({\mathcal{G}})$ the set of representations of ${\mathcal{G}}$.  A representation of ${\mathcal{G}}$ on a vector bundle $E$ is the same thing as a Lie groupoid homomorphism ${\mathcal{G}}\to \textrm{GL}(E)$. This notion generalizes the usual notion of representations of Lie groups. For an action groupoid $G\ltimes M$ (example \[ex-action-groupoids\]), representations are the same thing as equivariant vector bundles over $M$. For a general Lie groupoid ${\mathcal{G}}$ over $M$, there are very few representations available “for free”. Of course, there is the trivial representation $\mathbb{R}_M$ whose underlying vector bundle is the trivial line bundle and the action is the identity on the fibers. However, there is no analogue of the adjoint representation for Lie groups (see also below). \[adjoint-for-classical\] Consider the $k$-jet groupoids ${J}^k(M, M)$ from example \[difeo\]. 
For $k= 1$ it is clear that $$TM\in \textrm{Rep}({J}^1(M, M)),$$ i.e. there is a natural action of ${J}^1(M, M)$ on $TM$: for $g= {j}^{1}_{x}\phi$, the induced linear action is $$d_x\phi: T_xM{\longrightarrow}T_{\phi(x)}M.$$ Similarly, for any $k$ one has $${J}^{k-1}TM\in \textrm{Rep}({J}^k(M, M)).$$ To describe the action, let $g= j^{k}_{x}\phi\in J^k(M, M)$. Due to the naturality of our objects, the bundle map $d\phi: TM\to TM$ (covering $\phi$) induces a map $j^{k-1}d\phi$ on $J^{k-1}TM$; explicitely, $$j^{k-1}_xd\phi: J^{k-1}_{x} TM{\longrightarrow}J^{k-1}_{\phi(x)} TM,\ \ j_{x}^{k-1}(X)\mapsto j^{k-1}_{\phi(x)}(\phi_*(X)),$$ where recall that the push-forward vector field is given by $$\phi_*(X)_y:= (d\phi)_{\phi^{-1}(y)}(X_{\phi^{-1}(y)}).$$ Of course, $j^{k-1}_xd\phi$ only depends on $j^{k}_{x}\phi= g$, and this defines the linear action of $g$. ### Lie algebroids {#Lie algebroids} Lie algebroids arise as the infinitesimal counterpart of Lie groupoids. Abstractly, they can be thought of as “replacements of tangent bundles” (which are related to the ordinary tangent bundle by the anchor map), which are better suited to reflect the various geometric structures under consideration. A **Lie algebroid** over a manifold $M$ is a vector bundle $A \to M$ endowed with a vector bundle map, called anchor, $\rho: A \to TM$, and a Lie bracket on the space $\Gamma(A)$ of sections of $A$ such that the following Leibniz identity $$[{\alpha}, f{\beta}] = f[{\alpha},{\beta}] + ({L}_{\rho({\alpha})}f){\beta}$$ holds for all sections ${\alpha},{\beta}\in \Gamma(A)$ and smooth functions $f \in \mathrm{C}^{\infty}(M)$. Let us first recall the construction of the Lie algebroid of a Lie groupoid ${\mathcal{G}}$ over $M$. This is completely analogous to the construction of the Lie algebra of a Lie group, as the tangent of the group at the unit element, or as the algebra of right invariant vector fields on the group. The main differences come from the fact that 1. there are many units – one for each $x\in M$ – hence one ends up with a vector bundle over $M$; 2. right translations by an element $g\in {\mathcal{G}}$ are defined only along $s$-fibers: the formula $R_{g}(a)= ag$ defines a map $$R_g: s^{-1}(y){\longrightarrow}s^{-1}(x) ,$$ where $x= s(g)$, $y= t(g)$. Putting everything together, the relevant infinitesimal object will be the vector bundle over $M$ whose fiber over $x\in M$ is the tangent space at the unit $1_x$ of the $s$-fiber $s^{-1}(x)$. More formally, we consider the $s$-tangent bundle $$T^s{\mathcal{G}}:= \textrm{Ker}(ds)$$ and the Lie algebroid of ${\mathcal{G}}$ is, as a vector bundle over $M$, the restriction of $T^s{\mathcal{G}}$ to $M$ (the pull-back via the unit map $u: M\to {\mathcal{G}}$): $$A= (T^s{\mathcal{G}})|_{M}.$$ The anchor of $A$ is simply the restriction of $dt: T{\mathcal{G}}\to TM$ to vectors that belong to $A$. For the bracket, we identify the space of sections $\Gamma(A)$ with the right invariant vector fields on ${\mathcal{G}}$. 
More precisely, right translation by $g$ differentiates to give a linear map $$R_g: T^{s}_{a}{\mathcal{G}}{\longrightarrow}T^{s}_{ag}{\mathcal{G}}$$ (for $a\in s^{-1}(y)$); with this, the space of right invariant vector fields on ${\mathcal{G}}$ is $${\ensuremath{\mathfrak{X}}}^{\textrm{inv}}({\mathcal{G}})= \{X\in \Gamma(T^s{\mathcal{G}}): R_g(X_a)= X_{ag} \ \forall\ a, g\in {\mathcal{G}}\ \textrm{composable} \}.$$ Finally, there is a 1-1 correspondence $$\Gamma(A) \cong {\ensuremath{\mathfrak{X}}}^{\textrm{inv}}({\mathcal{G}}),\ \ \alpha\mapsto \alpha^r,$$ where $$\alpha^{r}_{g}= R_g(\alpha_{t(g)}).$$ This induces the Lie bracket of $A$, characterized by: $$[\alpha, \beta]^r= [\alpha^r, \beta^r],$$ where the second bracket is the usual Lie bracket of vector fields on ${\mathcal{G}}$. To keep some formulas simpler, we will sometimes also use the notation $$\label{notation-right-invariant} \stackrel{\rightarrow}{\alpha}= \alpha^{r}$$ for right invariant vector fields.   A Lie algebra is a Lie algebroid over a point.   Bundles of Lie algebras can be viewed as Lie algebroids with $\rho= 0$. \[ex-0-jet-gpd\]   The tangent bundle of any manifold $M$ is a Lie algebroid with $\rho= \textrm{Id}$ and the Lie bracket the usual Lie bracket of vector fields. As such, $TM$ coincides with the Lie algebroid of the pair groupoid $\Pi(M)$. To fix the notation for computations in local coordinates, let us describe this more explicitely in the case of the pair groupoid $\Pi$ of $\mathbb{R}^n$. As a convention, we denote the source coordinates by $x_a$, the target coordinates by $X_i$. Hence, in local coordinates, $$\Pi = \{(x_1, \ldots , x_n, X_1, \ldots , X_n)\}$$ with the source, target and multiplication $$s(x, X)= x, \ t(x, X)= X, \ (x, X)\cdot (x', x)= (x', X).$$ Denote the Lie algebroid of $\Pi$ by $A(\Pi)$, and let’s calculate it explicitly by applying the definition. Since the unit of $\Pi$ at $x\in \mathbb{R}^n$ is $1_x= (x, x)$, the fiber of $A(\Pi)$ at $x$ is (by definition) $$A(\Pi)_x= T_{1_x} s^{-1}(x)= T_{(x, x)} \{(x, X): X-\textrm{variable} \}.$$ The canonical basis at $x$ is then $$\partial^{i}(x):= \frac{\partial}{\partial X_i} (x, x) \in A(\Pi)_x,\ 1\leq i\leq n .$$ In some examples (e.g. in low dimensions when the variables are denoted $x, y, z,$ etc), it is more natural to use the notation $$\partial^{X_i}:= \partial^{i}.$$ Hence, as a vector bundle, $A(\Pi)$ is spanned by $$\{ \partial^1, \ldots , \partial^n\}= \{\partial^{X_1}, \ldots , \partial^{X_n}\}.$$ The induced right invariant vector fields on $\Pi$ (tangent to $s$-fibers) are (in the notation (\[notation-right-invariant\])) $$\label{right-inv-local} \stackrel{\rightarrow}{\partial^{i}}(x, X)= \frac{\partial}{\partial X_i} (x, X)\in T_{x, X}\Pi .$$ For the anchor of $A(\Pi)$ we find $$\rho: A(\Pi){\longrightarrow}T\mathbb{R}^n,\ \rho(\partial^{i})= \frac{\partial}{\partial X_i}$$ and this provides an identification between $A(\Pi)$ and $T\mathbb{R}^n$. \[inf-act\] Let $\mathfrak{g}$ be a Lie algebra and assume that we have given an infinitesimal action of $\mathfrak{g}$ on a manifold $M$, i.e. a Lie algebra map $a: \mathfrak{g}\to {\ensuremath{\mathfrak{X}}}(M)$. Then one forms the action Lie algebroid $\mathfrak{g}\ltimes M$ as follows. As a vector bundle, it is the trivial vector bundle $\mathfrak{g}_{M}$ over $M$ with fiber $\mathfrak{g}$. 
The anchor is precisely the infinitesimal action interpreted as a vector bundle map $$\rho: \mathfrak{g}_M{\longrightarrow}TM, \ \ (x, u)\mapsto a(u)_x .$$ For the bracket, we note that $\Gamma(\mathfrak{g}_{M})= C^{\infty}(M, \mathfrak{g})$ contains the constant sections $c_u$ with $u\in \mathfrak{g}$; with this, the bracket of $\mathfrak{g}\ltimes M$ is defined on the constant sections by $$[c_u, c_v]= c_{[u, v]_{\mathfrak{g}}},$$ (where the bracket on the right hand side is the one on $\mathfrak{g}$) and extended to arbitrary sections using the Leibniz identity. For a global formula, using the canonical flat connection $\nabla^{\textrm{flat}}$ on $\mathfrak{g}_M$, we have: $$[\alpha, \beta]= [\alpha, \beta]_{\mathfrak{g}}+ \nabla^{\textrm{flat}}_{\rho(\alpha)}(\beta)- \nabla^{\textrm{flat}}_{\rho(\beta)}(\alpha).$$ Note that, if the infinitesimal action comes from a global action of a Lie group $G$ on $M$, then $\mathfrak{g}\ltimes M$ is precisely the Lie algebroid of the action groupoid $G\ltimes M$. \[ex-alg-J1\] In analogy with Example \[ex-0-jet-gpd\], for any manifold $M$, the Lie algebroid of the first jet groupoid $J^1(M, M)$ (Example \[difeo\] for $k= 1$) is isomorphic to the bundle $J^1(TM)$ of first jets of vector fields on $M$. Let us start by recalling that $J^1(TM)$ has a canonical structure of Lie algebroid, with the anchor given by the canonical projection $l: J^1(TM){\longrightarrow}TM$ and the bracket uniquely determined by the Leibniz identity and the condition $$[j^1(V), j^1(W)]= j^1([V, W])$$ for all vector field $V, W$. Recall (see remark \[remark-J-decomposition\]) that $T^*M\otimes TM$ is identified with a subspace of $J^1(TM)$ (namely with the kernel of the anchor $l$) and this identification reads, at the level of elements, as follows: $$\label{eq-ident-Hom} df\otimes V= fj^1(V)- j^1(fV).$$ A simple computation shows that the bracket of $J^1(TM)$ restricts to $T^*M\otimes TM$ to: $$[df\otimes V, dg\otimes W]= L_W(f) dg\otimes V- L_V(g) df\otimes W .$$ Equivalently, $T^*M\otimes TM= \textrm{Hom}(TM, TM)$ is endowed with the pointwise standard commutator bracket $$[T, S]= T\circ S- S\circ T ,$$ making it into a bundle of Lie algebras (hence a Lie algebroid with zero anchor). With these, $J^1(TM)$ is canonically isomorphic to the Lie algebroid of the first jet groupoid $J^1(M, M)$. As before, in order to fix the notation for computations in local coordinates, we describe this identification more explicitely in the case of the first jet groupoid $J^1$ of $\mathbb{R}^n$. First, $$J^1= J^1(\mathbb{R}^n, \mathbb{R}^n)= \{(x_1, \ldots , x_n , X_1, \ldots , X_n, p): p= (p^{i}_{a})_{1\leq i, a\leq n}) \in GL_n\} ,$$ where the last equality indicates the notation that we use for the coordinates in $J^1$ ($p^{i}_{a}$ corresponds to the partial derivative $\frac{\partial X_i}{\partial x_a}$). The source is the projection on $x$, the target is the projection on $X$, while multiplication is $$(x, X, p)\cdot (x', x, q)= (x', X, pq),$$ where $pq$ uses matrix multiplication. 
For the algebroid $A(J^1)$ of $J^1$, since the unit of $J^1$ at $x\in \mathbb{R}^n$ is $$1_x= (x, x, 1)\in J^1(\mathbb{R}^n, \mathbb{R}^n),$$ the fiber of $A(J^1)$ above $x\in \mathbb{R}^n$ is (by definition) $$A(J^1)_x= T_{1_x} s^{-1}(x) = T_{(x, x, 1)} \{(x, X, p): X, p-\textrm{variables}\},$$ with canonical basis $$\partial^{i}(x):= \frac{\partial}{\partial X_i} (x, x, 1), \ \partial^{i}_{a}(x):= \frac{\partial}{\partial p^{i}_{a}} (x, x, 1)\ \ (1\leq i, a\leq n) .$$ Hence $$A(J^1)= \textrm{Span} \{\partial^{i}, \partial^{i}_{a}: 1\leq i, a\leq n \}.$$ For the anchor of $A(J^1)$ we find $$\rho(\partial^{i})= \frac{\partial}{\partial X_i},\ \rho( \partial^{i}_{a})= 0,$$ To compute the Lie bracket of $A(J^1)$, one first has to compute the corresponding right invariant vector fields on $J^1$. For this we use right translations: associated to $g= (x, X, p)\in J^1$, we have $$R_g: s^{-1}(X){\longrightarrow}s^{-1}(x), \ R_g(X, \tilde{X}, q)= (x, \tilde{X}, qp),$$ and then we compute $$\stackrel{{\longrightarrow}}{\partial^{i}}= \frac{\partial}{\partial X_i},\ \stackrel{{\longrightarrow}}{\partial^{i}_{a}}= \sum_{u} p^{a}_{u} \frac{\partial}{\partial p^{i}_{u}} \ \ (\textrm{at\ any}\ (x, X, p)\in J^1(T\mathbb{R}^n)).$$ We then find: $$[\partial^{i}, \partial^{j}]= 0,\ [\partial^{i}_{a}, \partial^{j}]= 0, \ [\partial^{i}_{a}, \partial^{j}_{b}]= \delta^{i}_{b} \partial^{j}_{a}- \delta^{j}_{a} \partial^{i}_{b},$$ where $\delta$ is the Kronecker symbol. The canonical identification between $J^1(T\mathbb{R}^n)$ and $A(J^1)$ can be described as follows: the first jet at $X$ of a vector field $V= V_i \frac{\partial}{\partial X_i}$ is identified with $$J^1(T\mathbb{R}^n)\ni j^1_{X}(V) \longleftrightarrow V_i(X)\partial^{i} + \frac{\partial V_i}{\partial X_a}(X) \partial^{i}_{a}\in A(J^1).$$ We see that, in terms of jets, the canonical frame of $A(J^1)= J^1(T\mathbb{R}^n)$ is $$\partial^{i}= j^1(\frac{\partial}{\partial X_i}),\ \partial^{i}_{a}= j^1(X_a\frac{\partial}{\partial X_i})-X_a j^1(\frac{\partial}{\partial X_i})= -dX_a\otimes \frac{\partial}{\partial X_i}.$$ where, for the last equality, we used (\[eq-ident-Hom\]). As in example \[ex-0-jet-gpd\], it is sometimes more appropriate to use different notations for the variables $p^{i}_{a}$ and the basis of $A$: $$p^{X_i}_{x_a}:= p^{i}_{a},\ \ \partial^{X_i}:= \partial^i, \ \ \partial^{X_i}_{x_a}= \partial^{p^{i}_{a}}:= \partial^{i}_{a} .$$ For instance, when $n= 2$, we have coordinates $(x, y)$ in $\mathbb{R}^2$, coordinates $$\Bigg(x, y, X, Y, \left( \begin{array}{ll} p^{X}_{x} & p^{X}_{y}\\ p^{Y}_{x} & p^{Y}_{y}\\ \end{array}\right) \Bigg)$$ for $J^1$ (where the matrix is invertible) and $A(J^1)$ is spanned by $\partial^X, \partial^Y, \partial^{X}_{x}, \partial^{X}_{y}, \partial^{Y}_{x}, \partial^{Y}_{y}$. \[Cartan-forms-on-jets\]  Of course, the previous example has a version for higher jets and the Lie algebroid of the $k$-jet groupoid ${J}^k(M, M)$ is isomorphic to ${J}^kTM$. Note that this implies that the resulting Cartan forms (coming from (\[eq-higher-Cartan\]) of subsection \[PDEs; the Cartan forms:\]) has simpler coefficients. Let us be more precise. First of all, considering the fibration $s= \textrm{pr}_1: M\times M\to M$, we see that ${J}^k(M, M)$ is open inside ${J}^kR$ ($R:=M\times M,\pi:= s$). 
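These bracket relations can also be double-checked symbolically. The following sketch (Python/`sympy`; the coefficient-list encoding of vector fields is ours) builds the right invariant vector fields $\stackrel{\rightarrow}{\partial^{i}_{a}}= \sum_{u} p^{a}_{u}\, \partial/\partial p^{i}_{u}$ for $n= 2$ and verifies the relations $[\partial^{i}_{a}, \partial^{j}_{b}]= \delta^{i}_{b} \partial^{j}_{a}- \delta^{j}_{a} \partial^{i}_{b}$ (the brackets involving $\stackrel{\rightarrow}{\partial^{i}}= \partial/\partial X_i$ vanish trivially, since none of the coefficients depends on $X$).

```python
import sympy as sp

n = 2
# fiber coordinates p^i_a of J^1(R^n, R^n); the x- and X-coordinates play no
# role here because none of the coefficients below depends on them
p = sp.Matrix(n, n, lambda i, a: sp.Symbol(f'p_{i}{a}'))
coords = list(p)                  # row-major: p^0_0, p^0_1, p^1_0, p^1_1

def vf(i, a):
    """Right invariant field ->d^i_a = sum_u p^a_u d/dp^i_u, returned as the
    list of its coefficients with respect to d/dp^j_b (ordered as coords)."""
    return [p[a, b] if j == i else sp.Integer(0)
            for j in range(n) for b in range(n)]

def bracket(V, W):
    return [sum(V[k]*sp.diff(W[m], coords[k]) - W[k]*sp.diff(V[m], coords[k])
                for k in range(n*n)) for m in range(n*n)]

delta = lambda i, j: 1 if i == j else 0

for i in range(n):
    for a in range(n):
        for j in range(n):
            for b in range(n):
                lhs = bracket(vf(i, a), vf(j, b))
                rhs = [delta(i, b)*u - delta(j, a)*w
                       for u, w in zip(vf(j, a), vf(i, b))]
                assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
print("[d^i_a, d^j_b] = delta^i_b d^j_a - delta^j_a d^i_b verified for n =", n)
```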
Restricting the Cartan form (\[eq-higher-Cartan\]) of ${J}^kR$, we obtain a 1-form with values in $$\textrm{pr}^*T^{s}{J}^{k-1}(M, M) .$$ But $T^{s}{J}^{k-1}(M, M)$ is precisely the pull-back by the target map of the Lie algebroid of ${J}^{k-1}(M, M)$, i.e. of ${J}^{k-1}TM$, hence the Cartan form becomes $$\label{eq-Cartan-forms-on-jets} \theta\in \Omega^1( {J}^k(M, M), t^*{J}^{k-1}TM).$$ See also [@GuilleminSternberg:deformation]. \[toy-example\]  Here is an explicit example which, as we shall explain later, arises from a Lie pseudogroup. Over the base manifold $$M: =\{ (x, y)\in \mathbb{R}^2: y\neq 0\}$$ we consider the $5$-dimensional Lie groupoid $${\mathcal{G}}= \{ (x, y, X, Y, u)\in \mathbb{R}^5: y\neq 0, Y\neq 0 \}$$ with source, target and multiplication given by: $$s(x, y, X, Y, u)= (x, y), \ t(x, y, X, Y, u)= (X, Y),$$ $$(x, y, X, Y, u)(x', y', x, y, v)= (x', y', X, Y, \frac{y'u+ Yv}{y}).$$ Let us compute the Lie algebroid $A({\mathcal{G}})$ in two different ways. We first apply directly the definition. Since the unit at a point $(x, y)$ is $(x, y, x, y, 0)$, we find that $A({\mathcal{G}})$ is the trivial vector bundle over $\mathbb{R}^2$ spanned by $\{e^X, e^Y, e^u\}$, where $$e^X(x,y)= \frac{\partial}{\partial X}(x, y, x, y, 0) \in T_{(x, y, x, y, 0)}^{s} {\mathcal{G}}$$ and similarly for $e^Y$, $e^u$. The anchor sends $e^X$ to $\frac{\partial}{\partial X}$, similarly for $e^Y$ and kills $e^u$. For the bracket, we need the induced right invariant vector fields. To compute them at some $g= (x, y, X, Y, u)\in J^1(\Gamma)$, we use right translations $$R_g: s^{-1}(X, Y){\longrightarrow}s^{-1}(x, y),\ (X, Y, \overline{X}, \overline{Y}, v)\mapsto (x, y, \overline{X}, \overline{Y}, \frac{yV+ \overline{Y}u}{Y})$$ whose differential at at the unit $(X, Y, X, Y, 0)$ gives us $$\stackrel{\rightarrow}{e^X}= \frac{\partial}{\partial X}, \ \stackrel{\rightarrow}{e^Y}= \frac{\partial}{\partial Y}+ \frac{u}{Y} \frac{\partial}{\partial u}, \ \stackrel{\rightarrow}{e^u}= \frac{y}{Y} \frac{\partial}{\partial u}.$$ Computing their brackets we find $$[\stackrel{\rightarrow}{e^X}, \ \stackrel{\rightarrow}{e^Y}]= 0, \ [\stackrel{\rightarrow}{e^X}, \stackrel{\rightarrow}{e^u}]= 0, \ [\stackrel{\rightarrow}{e^Y}, \ \stackrel{\rightarrow}{e^u}]= -\frac{2y}{Y^2} \frac{\partial}{\partial u}= -\frac{2}{Y} \stackrel{\rightarrow}{e^u},$$ hence the Lie bracket of $A({\mathcal{G}})$ is given by $$[e^X, e^Y]= [e^X, e^u]= 0, \ [e^Y, e^u]= -\frac{2}{Y} e^u.$$ Here is an alternative way of computing $A({\mathcal{G}})$. 
The key remark is that ${\mathcal{G}}$ can be embedded as a Lie subgroupoid of the first jet groupoid $J^1(\mathbb{R}^2, \mathbb{R}^2)$: $$\label{ex1-inclusion} {\mathcal{G}}\ni (x, y, X, Y, u)\mapsto (x, y, X, Y, \left( \begin{array}{ll} \frac{y}{Y} & 0\\ u & \frac{Y}{y}\\ \end{array}\right) )\in J^1(\mathbb{R}^2, \mathbb{R}^2).$$ Indeed, the multiplication of ${\mathcal{G}}$ comes out of the matrix multiplication $$\left( \begin{array}{ll} \frac{y}{Y} & 0\\ u & \frac{Y}{y}\\ \end{array}\right) \left( \begin{array}{ll} \frac{y'}{y} & 0\\ v & \frac{y}{y'}\\ \end{array}\right)= \left( \begin{array}{ll} \frac{y'}{Y} & 0\\ \frac{y'u+ Yv}{y} & \frac{Y}{y'}\\ \end{array}\right) .$$ Computing the map induced by this inclusion at the level of Lie algebroids, we find the inclusion $$A({\mathcal{G}})\hookrightarrow J^1(T\mathbb{R}^2), \ \left\{ \begin{array}{lll} e^X & \mapsto \partial^{X}\\ e^Y & \mapsto \partial^Y- \frac{1}{Y}\partial^{X}_{x}+ \frac{1}{Y} \partial^{Y}_{y} \\ e^u & \mapsto \partial^{Y}_{x} \end{array}\right.$$ Computing the Lie brackets of the vectors on the right hand side (using the formulas we already know in $J^1(T\mathbb{R}^2)$), we recover the Lie brackets of $A({\mathcal{G}})$. \[flows of sections\]  If $A$ is the Lie algebroid of a Lie groupoid ${\mathcal{G}}$, then the Lie algebra $\Gamma(A)$ plays (morally) the role of the Lie algebra of the group of bisections $\textrm{Bis}({\mathcal{G}})$ (see definition \[definition-bisections\]). For instance, for the pair groupoid of $M$, this becomes the usual interpretation of the Lie algebra ${\ensuremath{\mathfrak{X}}}(M)$ of vector fields on $M$ as the Lie algebra of the diffeomorphism group $\textrm{Diff}(M)$. Here are a few details, which will also allow us to fix some notation regarding flows that will be used throughout the thesis. Let us assume that $A$ is the Lie algebroid of ${\mathcal{G}}$. For $\alpha\in \Gamma(A)$, one defines the (local) flow of $\alpha$ by $$\phi_{\alpha}^{\epsilon}:= \varphi_{\alpha^r}^{\epsilon}|_{M}: M{\longrightarrow}{\mathcal{G}},$$ where $\varphi_{\alpha^r}^{\epsilon}$ is the (local) flow of the right invariant vector field $\alpha^r$. As usual, we are sloppy with the precise notation for the domain of the flow. From right invariance it follows that $\phi_{\alpha}^{\epsilon}$ is a bisection of ${\mathcal{G}}$ which determines the entire flow $\varphi_{\alpha^r}^{\epsilon}$ ($\varphi_{\alpha^r}^{\epsilon}(g)= \phi_{\alpha}^{\epsilon}(t(g))g$). Note also that, in terms of multiplication of (local) bisections, the flow property for $\varphi_{\alpha^r}^{\epsilon}$ translates into $$\phi_{\alpha}^{\epsilon}\cdot \phi_{\alpha}^{\epsilon'}= \phi_{\alpha}^{\epsilon+ \epsilon'}.$$ This shows that, indeed, $\Gamma(A)$ behaves like “the Lie algebra of $\textrm{Bis}({\mathcal{G}})$”. Next, we recall the notion of representation of Lie algebroids. Let $A$ be a Lie algebroid over $M$.
An $A$-connection on a vector bundle over $M$ is an $\mathbb{R}$-bilinear operator $$\nabla: \Gamma(A)\times \Gamma(E){\longrightarrow}\Gamma(E),\ \ (\alpha, e)\mapsto \nabla_{\alpha}(e)$$ with the property that, for all $\alpha\in \Gamma(A)$, $e\in \Gamma(E)$, $f\in C^{\infty}(M)$, $$\nabla_{f\alpha}(e)= f\nabla_{\alpha}(e),\ \ \nabla_{\alpha}(fe)= f\nabla_{\alpha}(e)+ L_{\rho(\alpha)}(f) e.$$ The curvature of the $A$-connection $\nabla$ is the tensor $$R_{\nabla}\in \textrm{Hom}(\Lambda^2A, \textrm{Hom}(E, E))$$ given by (for $\alpha, \beta\in \Gamma(A)$) $$R_{\nabla}(\alpha, \beta) = \nabla_{[\alpha, \beta]}- [\nabla_{\alpha}, \nabla_{\beta}].$$ Let $A$ be a Lie algebroid over $M$. A **representation** (or **infinitesimal action**) of $A$ on a vector bundle $E$ over $M$ is an $A$-connection $\nabla$ on $E$ satisfying the flatness condition $R_{\nabla}= 0$. \[rk-algbds\] Similar to the groupoid case, a vector bundle $E$ has an associated Lie algebroid $\operatorname{\mathfrak{gl}}(E)$ whose sections are derivations on $E$, i.e. operators $P: \Gamma(E)\to \Gamma(E)$ with the property that they satisfy a Leibniz identity of type $$P(fs)= fP(s)+ L_{X_P}(f) s$$ for some vector field $X_P$ on $M$ (called the symbol of $P$). The bracket on $\operatorname{\mathfrak{gl}}(E)$ corresponds to commutators of operators, and the anchor sends $P$ to $X_{P}$. Of course, $\operatorname{\mathfrak{gl}}(E)$ is just the Lie algebroid of $GL(E)$. With this, a representation of $A$ on $E$ is the same thing as a Lie algebroid homomorphism $A\to \operatorname{\mathfrak{gl}}(E)$. In particular, since Lie groupoid morphisms give rise, after differentiation, to Lie algebroid morphisms, it follows that any representation $E$ of a Lie groupoid ${\mathcal{G}}$ is canonically a representation of the Lie algebroid $A$ of ${\mathcal{G}}$. For the explicit formulas, see lemma \[derivating-representations\] of the next subsection.  Using the previous remark and example \[adjoint-for-classical\], we obtain that the jet bundle $J^{k-1}TM$ is canonically a representation of the $k$-jet algebroid $J^kTM$. ### The Lie functor {#Lie functor} Throughout this thesis, the term “Lie functor” is used to indicate passing from global to infinitesimal objects and should be thought of as “linearization”. The reverse process is coined as “integration”. For “integrability theorems”, one of the conditions that constantly appears as a necessary condition in the case of groupoids is the $s$-simply connectedness. A Lie groupoids ${\mathcal{G}}$ is called **$s$-connected** if all $s$-fibers $s^{-1}(x)$ are connected. It is called **$s$-simply connected** if, furthermore, all $s$-fibers $s^{-1}(x)$ are simply connected. The first “example” of the Lie functor is the construction of the Lie algebroid $A= A({\mathcal{G}})$ of a Lie groupoid ${\mathcal{G}}$ that we have already mentioned in the previous section. For the reverse process, starting with a Lie algebroid $A$, one looks for a Lie groupoid ${\mathcal{G}}$ which integrates $A$, i.e. whose Lie algebroid is isomorphic to $A$; if such a ${\mathcal{G}}$ exists, one says that $A$ is integrable. A Lie groupoid ${\mathcal{G}}$ can always be replaced by an $s$-connected one without changing the Lie algebroid: one considers the open subgroupoid ${\mathcal{G}}^{0}\subset {\mathcal{G}}$ made of the connected component of the identities in the $s$-fibers of ${\mathcal{G}}$. One can go further and replace ${\mathcal{G}}$ by an $s$-simply connected Lie groupoid. 
More precisely, one constructs $\widetilde{{\mathcal{G}}}$ by putting together the universal covers of the $s$-fibers of ${\mathcal{G}}$ with base points the units (see e.g. [@CrainicFernandes] for the general discussion). Note that the canonical projection $$p: \widetilde{{\mathcal{G}}} {\longrightarrow}{\mathcal{G}}$$ is a groupoid map which is a local diffeomorphism onto the $s$-connected component ${\mathcal{G}}^0$; this immediately implies that $\widetilde{{\mathcal{G}}}$ has the same Lie algebroid as ${\mathcal{G}}$. As for Lie groups, there is a Lie II theorem (saying that a morphism of Lie algebroids can be integrated to one of Lie groupoids, provided the domain groupoid is $s$-simply connected); these imply the following basic result in the theory of Lie groupoids: If the Lie algebroid $A$ is integrable then there exists and is unique (up to isomorphism) a Lie groupoid ${\mathcal{G}}$ which is $s$-simply connected and integrates $A$. Back to the general discussion on the Lie functor, given a Lie groupoid ${\mathcal{G}}$ with Lie algebroid $A$, intuitively, the Lie functor takes structures on ${\mathcal{G}}$ and transforms them into structures on $A$. However, one should be aware that the outcome is not always obvious; also, the reverse process (integrability theorems) is usually more difficult and, as mentioned above, requires various connectedness assumptions on the $s$-fibers. A very good and simple example is the notion of representations. Remark \[rk-algbds\] shows that any representation $E$ of a Lie groupoid ${\mathcal{G}}$ can be made into a representation of the Lie algebroid $A$ of ${\mathcal{G}}$. Writing out the explicit formulas, one finds: \[derivating-representations\] Let ${\mathcal{G}}$ be a Lie groupoid with Lie algebroid $A$ and let $E$ be a representation of ${\mathcal{G}}$. Then $E$ is canonically a representation of $A$, with linear action defined as follows: for ${\alpha}\in \Gamma(A)$, and $e \in \Gamma(E)$, $$\begin{aligned} \nabla_{{\alpha}}e(x) = \frac{{d}}{{d}{\epsilon}}\bigg\vert_{{\epsilon}= 0}g({\epsilon})^{-1}\cdot e\big(t(g({\epsilon}))\big),\end{aligned}$$ where $g({\epsilon})$ is any curve in $s^{-1}(x)$ with $g(0)=1_x$, $\frac{{d}}{{d}{\epsilon}}\vert_{{\epsilon}= 0}g({\epsilon}) = {\alpha}(x)$. We see that the Lie functors takes representations of ${\mathcal{G}}$ into those of $A$. It is not difficult to check that, if ${\mathcal{G}}$ is $s$-connected then, for two representations $E$ and $F$ of ${\mathcal{G}}$, if $\textrm{Lie}(E)= \textrm{Lie}(F)$ as representations of $A$, then $E= F$. For the reverse process we have the following integrability theorem (itself a consequence of Lie II mentioned above): If ${\mathcal{G}}$ is an $s$-simply connected Lie groupoid with Lie algebroid $A$, then any representation of $A$ comes from a representation of ${\mathcal{G}}$. Again, one should keep in mind that this is just one instance of an integrability theorem (for representations). ### Lie pseudogroups {#Lie pseudogroups} Recall that, for a manifold $M$, $\textrm{Diff}_{\textrm{loc}}(M)$ stands for the set of diffeomorphisms $\phi: U\to V$ between open sets $U, V\subset M$. There is a lot in the literature about pseudogroups, some references are [@Cartan1904; @GuilleminSternberg:deformation; @Olver:MC; @Niky; @Shadwick; @KumperaSpencer; @Pommaret] A **pseudogroup** on a manifold $M$ is a collection $\Gamma$ of diffeomorphisms between open sets in $M$, i.e. a subset $\Gamma \subset \textrm{Diff}_{\textrm{loc}}(M)$, satisfying: 1. 
If $\phi\in \Gamma$, then $\phi^{-1}\in \Gamma$. 2. If $\phi, \psi \in \Gamma$, and $\phi\circ \psi$ is defined, then $\phi\circ \psi\in \Gamma$. 3. If $\phi\in \Gamma$ and $U$ is an open set contained in the domain of $\phi$, then $\phi|_{U}\in \Gamma$. 4. If $\phi: U\to V$ is a diffeomorphism and $U$ can be covered by a family of open sets $U_i$ such that $\phi|_{U_i}\in \Gamma$ for all $i$, then $\phi\in \Gamma$. Roughly speaking, a Lie pseudogroup is a pseudogroup which is defined by (a system of) PDEs. Of course, there are various regularity conditions one may require; unfortunately, this gives rise to several non-equivalent notions of Lie pseudogroups that one can find in the literature. The conditions that we will impose here are weaker than most of the conditions one finds; however, to avoid (even more) conflicts in terminology, we will call our objects “smooth pseudogroups”. Let us start with the notion of order of a pseudogroup. We say that a pseudogroup $\Gamma$ is of **order** $k$ if $k$ is the smallest number with the following property: any $\phi\in \textrm{Diff}_{\textrm{loc}}(M)$ with the property that for any $x\in \textrm{Dom}(\phi)$ there exists $\phi_x\in \Gamma$ such that $j^{k}_{x}\phi= j^{k}_{x}(\phi_x)$, must belong to $\Gamma$. In other words, the elements of $\Gamma$ are determined by their $k$-jets. This is best formalized using jet spaces and groupoids. More precisely, for any pseudogroup $\Gamma$, it is clear that the $k$-jets of elements of $\Gamma$ define a sub-groupoid $$\Gamma^{(k)} \subset {J}^k(M, M)$$ of the groupoid of $k$-jets of diffeomorphisms of $M$. Moreover, while the $k$-jet of any $\phi \in \textrm{Diff}_{\textrm{loc}}(M)$ can be viewed as a local bisection of ${J}^k(M, M)$, the $k$-jet of an element $\phi\in \Gamma$ defines a bisection which takes values in $\Gamma^{(k)}$. The previous condition on $k$ says that, for $\phi \in \textrm{Diff}_{\textrm{loc}}(M)$: $$j^k\phi \in \textrm{Bis}_{\textrm{loc}}(\Gamma^{(k)}) \Longrightarrow \phi \in \Gamma .$$ The regularity condition that we will impose is the following: A pseudogroup $\Gamma$ on $M$ is called **smooth** if it is of finite order, all its $k$-jet groupoids $\Gamma^{(k)}$ are smooth subgroupoids of ${J}^k(M, M)$ (for all $k\geq 0$), and the projection maps $\Gamma^{(k+1)}\to \Gamma^{(k)}$ are smooth surjective submersions. Sometimes one requires the previous conditions only for $k$ greater or equal to the order of $\Gamma$, but one can always pass to the case of the definition above. This is one way of making sense of the fact that “$\Gamma$ is defined by (a system of) PDEs”. Note that in this case, indeed, $\Gamma^{(k)}$ is a PDE on the bundle $\textrm{pr}_1: M\times M\to M$ whose solutions correspond precisely to the elements of $\Gamma$. Given a smooth pseudogroup $\Gamma$, one has induced sequence of Lie groupoids related to each other by surjective submersions (a tower of Lie groupoids) $$\cdots {\longrightarrow}\Gamma^{(2)}{\longrightarrow}\Gamma^{(1)} {\longrightarrow}\Gamma^{(0)} .$$ Applying the Lie functor, one obtains a similar sequence of Lie algebroids $$\cdots {\longrightarrow}A^{(2)}(\Gamma){\longrightarrow}A^{(1)}(\Gamma) {\longrightarrow}A^{(0)}(\Gamma).$$ In what follows, we will use the notation $A^{(k)}= A^{(k)}(\Gamma)$ for the Lie algebroid of $\Gamma^{(k)}$.\ A large part of this thesis arises from our attempt of understanding the structures that governs these towers. Here are a few such concepts. 
As we know from the existing theory, one of the main ingredients is the Cartan forms; our aim will be to understand them more conceptually, to understand their key properties, etc. (and this will be done in the more general context of Pfaffian groupoids). In this setting, the Cartan forms become 1-forms $$\theta\in \Omega^1(\Gamma^{(k)}, t^*A^{(k-1)}),$$ called **the Cartan forms of the pseudogroup $\Gamma$**. Let us give here the direct description. To describe $\theta_g(V_g)$ for $g\in \Gamma^{(k)}$, $V_g\in T_g\Gamma^{(k)}$, write $g= j^{k}_{x}\phi$ with $\phi\in \Gamma$. Using $l: \Gamma^{(k)}\to \Gamma^{(k-1)}$ and viewing $j^{k-1}\phi$ as a map (bisection) $\sigma: M\to \Gamma^{(k-1)}$, $$dl(V_g)- d_x\sigma\circ d_gs(V_g) \in T_{l(g)}\Gamma^{(k-1)}$$ is killed by $ds$, hence it comes from an element in the fiber of $A^{(k-1)}$ at $t(l(g))= t(g)$; this is $\theta_g(V_g)$. Explicitly, $$\theta_g(V_g)= (dR_{l(g)^{-1}})_{l(g)} ( dl(V_g)- d_x\sigma \circ d_gs(V_g)) \in A^{(k-1)}_{t(g)}.$$ Of course, $\theta$ can also be seen as the restriction to $\Gamma^{(k)}$ of the Cartan form $\theta$ on the groupoid ${J}^k(M, M)$ (i.e. equation (\[eq-Cartan-forms-on-jets\]) from example \[Cartan-forms-on-jets\]) and the previous discussion shows that, indeed, this restriction takes values in $A^{(k-1)}\subset {J}^{k-1}TM$. In particular, the main property of $\theta$ gives, for $k$ greater than or equal to the order of $\Gamma$, that the elements of $\Gamma$ correspond to bisections of $\Gamma^{(k)}$ which kill $\theta$.\ As we shall see later in the thesis, the key property of $\theta$ (which gives rise to the entire theory!) is its compatibility with the groupoid structure (called multiplicativity later on). To make sense of this, one has to make sense of $A^{(k-1)}$ as a representation of $\Gamma^{(k)}$: $$A^{(k-1)} \in \textrm{Rep}(\Gamma^{(k)}) .$$ This is immediately obtained by restricting the canonical action of $J^k(M, M)$ on $J^{k-1}TM$ (from example \[adjoint-for-classical\]) to $\Gamma^{(k)}$; staring at the definition carefully one finds that, indeed, $A^{(k-1)}$ is invariant under the action of $\Gamma^{(k)}$. Together with this action, $A^{(k-1)}$ will be called **the adjoint representation of $\Gamma^{(k)}$**.\ Next, let us also indicate the appearance of tableaux towers in this discussion. They arise when comparing the various levels of the tower of Lie algebroids mentioned above. More precisely, we consider $${\mathfrak{g}}^k(\Gamma):= \textrm{Ker}(A^{(k)}{\longrightarrow}A^{(k-1)}) .$$ Interpreting again $A^{(k)}(\Gamma)$ as a sub-algebroid of ${J}^kTM$, and using the fact that the kernel of the projection ${J}^kTM\to {J}^{k-1}TM$ is canonically isomorphic to $\textrm{Hom}(S^kTM, TM)$, one obtains a canonical inclusion $${\mathfrak{g}}^k(\Gamma)\subset \textrm{Hom}(S^kTM, TM) .$$ Hence, for each $k$, ${\mathfrak{g}}^k(\Gamma)$ is a bundle of tableaux on $(TM, TM)$, called **the $k$-th order tableaux of $\Gamma$**. One can also show that ${\mathfrak{g}}^{k+1}(\Gamma)$ sits inside the prolongation of ${\mathfrak{g}}^k(\Gamma)$, i.e. in the terminology of definition \[tableaux tower\] $${\mathfrak{g}}^{\infty}(\Gamma): = ({\mathfrak{g}}^1(\Gamma), {\mathfrak{g}}^2(\Gamma), \ldots )$$ is a bundle of tableaux towers over $M$, called **the tableaux tower associated to $\Gamma$**. Here are a few examples of pseudogroups and the computation of their first Cartan form.  
For the maximal pseudogroup $\Gamma= \textrm{Diff}_{\textrm{loc}}(M)$, one recovers the full jet groupoids ${J}^k(M, M)$ and their Cartan forms (see example \[Cartan-forms-on-jets\] and [@Gold3]). Let us use the previous description of $\theta$ to compute it for $J^1(\mathbb{R}^n, \mathbb{R}^n)$ (see also example \[ex-alg-J1\]). For $g=j^{1}_{x}(\phi)= (x, X, p)\in J^1$, and $V_{g}\in T_{g}J^1$, we have to look at $$(d_gl- d_x\sigma\circ d_gs)(V_g),$$ where $\sigma(x)= (x, \phi(x))$. We find: - for $V= \frac{\partial}{\partial p^{i}_{a}}(g)$ it is $0$. - for $V= \frac{\partial}{\partial X_i}(g)$ it is $\frac{\partial}{\partial X_i}(x, X)= \stackrel{\rightarrow}{\partial}^{X_{i}}$. - for $V= \frac{\partial}{\partial x_a}(g)$ it is $- p^{i}_{a} \frac{\partial}{\partial X_i} (x, X)= - p^{i}_{a} \stackrel{\rightarrow}{\partial}^{X_{i}}$. Hence we obtain the standard formula $$\omega_i= dX_i- p^{i}_{a} dx_a .$$  On the manifold $$M= \{(x, y)\in \mathbb{R}^2: y\neq 0 \}$$ we consider the pseudogroup $\Gamma$ consisting of all transformations of type $$(x, y)\mapsto (f(x), \frac{y}{f'(x)}),$$ where $f$ is a smooth function with non-vanishing derivative defined on some open inside $\mathbb{R}$. One also writes: $$\Gamma: \ X= f(x),\ Y= \frac{y}{f'(x)}.$$ One can easily check that this is a pseudogroup. Playing with the partial derivatives of $X$ and $Y$, we see that $\Gamma$ can be described as the solutions of the system $$\frac{\partial X}{\partial x}= \frac{y}{Y},\ \ \frac{\partial X}{\partial y}= 0, \ \ \frac{\partial Y}{\partial y}= \frac{Y}{y}.$$ Note that $\frac{\partial Y}{\partial x}$ does not appear in this list (one can easily check that, fixing the values of $X$ and $Y$ at any point $(x_0, y_0)$, $\frac{\partial Y}{\partial x}(x_0, y_0)$ can be arbitrary). This shows that $\Gamma$ is of order $1$ and $$\Gamma^{(1)}= \{ (x, y, X, Y, \left( \begin{array}{ll} \frac{y}{Y} & 0\\ u & \frac{Y}{y}\\ \end{array}\right): (x, y), (X, Y)\in M, u\in \mathbb{R}\} \subset J^1(\mathbb{R}^2, \mathbb{R}^2),$$ which is precisely the groupoid discussed in example \[toy-example\]. Using the formulas for the Cartan form from the previous example and pulling it back to ${\mathcal{G}}$, we see that $\Gamma^{(1)}$ is isomorphic to the groupoid ${\mathcal{G}}$ from example \[toy-example\] and the Cartan form becomes $$\theta\in \Omega^1({\mathcal{G}}, \mathbb{R}^2), \ \ \theta_{1}= dX- \frac{y}{Y}dx,\ \ \theta_{2}= dY- udx- \frac{Y}{y} dy.$$ It is instructive to check directly that $\Gamma$ can be recovered from $({\mathcal{G}}, \theta)$ (i.e. without using that they are related to the jet groupoids). \[SL-2-example\]  On the manifold $M= \mathbb{R}$ we consider the pseudogroup $$\Gamma:\ \ X= \frac{ax+b}{cx+d}, \ \ ad-bc= 1,$$ i.e. consisting of all local transformations of type $$x\mapsto \frac{ax+b}{cx+d},$$ where $a, b, c, d$ are arbitrary real numbers satisfying $ad-bc= 1$. Let us first find the order of $\Gamma$. 
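Before determining the order of $\Gamma$ in this new example, here is a quick symbolic check of the previous one: by lemma \[nomas\] (applied to $\Gamma^{(1)}$), the bisection $j^1\phi$ associated to $\phi(x, y)= (f(x), y/f'(x))$ must pull $\theta$ back to zero. The sketch below (Python/`sympy`; the tuple encoding of $1$-forms and the function name are ours) verifies this for a generic $f$ with non-vanishing derivative.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')

# j^1 of phi(x, y) = (f(x), y/f'(x)): the induced values of (X, Y, u)
X = f(x)
Y = y / sp.diff(f(x), x)
u = sp.diff(Y, x)                       # dY/dx along the section

def pullback(form):
    """Pull a 1-form a*dX + b*dY + c*du + p*dx + q*dy on G back along j^1(phi);
    the coefficients below are already evaluated along the section.  The
    result is returned as (coefficient of dx, coefficient of dy)."""
    a_dX, a_dY, a_du, a_dx, a_dy = form
    return (sp.simplify(a_dX*sp.diff(X, x) + a_dY*sp.diff(Y, x)
                        + a_du*sp.diff(u, x) + a_dx),
            sp.simplify(a_dX*sp.diff(X, y) + a_dY*sp.diff(Y, y)
                        + a_du*sp.diff(u, y) + a_dy))

theta1 = (1, 0, 0, -y/Y, 0)             # theta_1 = dX - (y/Y) dx
theta2 = (0, 1, 0, -u, -Y/y)            # theta_2 = dY - u dx - (Y/y) dy

print(pullback(theta1), pullback(theta2))    # (0, 0) (0, 0)
```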
Denoting $$u= \frac{\partial X}{\partial x}= \frac{1}{(cx+ d)^2}, \ \ v= \frac{\partial^2 X}{\partial x^2}= - \frac{2c}{(cx+ d)^3},$$ we see that the resulting equations on $a, b, c, d$ always have the unique solution $$a= \frac{2u^2- vX}{2u\sqrt{u}}, \ \ b = \frac{2uX-2u^2x+ vxX}{2u\sqrt{u}}, \ \ c= -\frac{v}{2u\sqrt{u}}, \ \ d= \frac{2u+ vx}{2u\sqrt{u}},$$ hence $$\Gamma^{(0)}= J^0(\mathbb{R}, \mathbb{R}), \ \ \Gamma^{(1)}= J^1(\mathbb{R}, \mathbb{R}),\ \ \Gamma^{(2)}= J^2(\mathbb{R}, \mathbb{R}).$$ Computing the third order derivatives of $X$ we find $$\frac{\partial^3 X}{\partial x^3}= \frac{6c^2}{(cx+ d)^4}= \frac{3v^2}{2u}= \frac{3}{2}\frac{\frac{\partial^2 X}{\partial x^2}}{\frac{\partial X}{\partial x}}.$$ We see that $\Gamma$ is of order $3$, $\Gamma^{(3)}$ is four-dimensional, diffeomorphic to $${\mathcal{G}}:= \{ (x, X, u, v)\in \mathbb{R}^4: u\neq 0\},$$ with explicit diffeomorphism $${\mathcal{G}}\stackrel{\sim}{{\longrightarrow}} \Gamma^{(3)}\subset J^3(\mathbb{R}, \mathbb{R}),\ \ (x, X, u, v)\mapsto (x, X, u, v, \frac{3v^2}{2u}).$$ Note that the resulting composition on ${\mathcal{G}}$ is $$(x, X, u, v)(x', x, u', v')= (x', X, uu', vu'^2+ uv').$$ The Lie algebroid $A^{(3)}$ is then the 3-dimensional vector space spanned by three vectors $e^X$, $e^u$ and $e^v$ (corresponding to the partial derivatives with respect to the indicated variables): $$A^{(3)}= \textrm{Span}\{e^X, e^Y, e^u\}.$$ The induced right invariant vector fields are computed applying the right translations: for $g= (x, X, u, v)$, $$R_g: s^{-1}(X){\longrightarrow}s^{-1}(x),\ \ (X, \overline{X}, p, q)\mapsto (x, \overline{X}, pu, qu^2+ pv);$$ one computes $dR_g$ at the unit element $1_x= (X, X, 1, 0)$ and one finds $$\label{right-invariant-sl2} \stackrel{\rightarrow}{e}^X= \frac{\partial}{\partial X}, \ \ \stackrel{\rightarrow}{e}^u= u\frac{\partial}{\partial u}+ v\frac{\partial}{\partial v}, \ \ \stackrel{\rightarrow}{e}^v= u^2\frac{\partial}{\partial v}.$$ Hence the Lie bracket of $A^3(\Gamma)$ is $$[e^X, e^u]= [e^X, e^v]= 0, \ \ [e^u, e^v]= e^v.$$ Of course, one could also have used the inclusion into $J^3(\mathbb{R}, \mathbb{R})$. Alternatively, we could have noticed that the restriction of $l: J^3(\mathbb{R}, \mathbb{R})\to J^2(\mathbb{R}, \mathbb{R})$ to $\Gamma^{(3)}$ induces an isomorphism between $\Gamma^{(3)}$ and $J^2(\mathbb{R}, \mathbb{R})$ (actually, this is implicitly present in the computations above). The relevant Cartan form, $$\theta\in \Omega^1(\Gamma^{(3)}, J^2(T\mathbb{R})),$$ takes values in the algebroid $J^2(T\mathbb{R})$ of $\Gamma^{(2)}= J^2(\mathbb{R}, \mathbb{R})$ (with coordinates $(x, X, u, v)$). As above, $J^2(T\mathbb{R})$ is spanned by three vectors $e^{X}, e^{u}, e^{v}$ which determine the right invariant vector fields given by the same formulas (\[right-invariant-sl2\]). Note that, on $J^2(\mathbb{R}, \mathbb{R})$, $$\frac{\partial}{\partial X}= \stackrel{\rightarrow}{e}^{X},\ \ \frac{\partial}{\partial u}= \frac{1}{u}\stackrel{\rightarrow}{e}^u- \frac{v}{u^3}\stackrel{\rightarrow}{e}^v , \ \ \frac{\partial}{\partial v}= \frac{1}{u^2} \stackrel{\rightarrow}{e}^v .$$ Returning to $\theta$, to compute it at $g= (x, X, u, v)$ choose $\phi\in \Gamma$ with $j^{3}_{x}\phi= g$ in $\Gamma^{(3)}$. We have to consider $$\sigma= j^{2}\phi= (id, \phi, \phi', \phi''): \mathbb{R}{\longrightarrow}\Gamma^{(2)}$$ and then to compute $\stackrel{\rightarrow}{\theta}_g= d_gl- d_x\sigma\circ d_gs$ in terms of the right invariant vector fields above. 
We find $$\left\{ \begin{array}{llll} \stackrel{{\longrightarrow}}{\theta (\frac{\partial}{\partial X}) }& = \frac{\partial}{\partial X}= \stackrel{\rightarrow}{e}^{X}\\ \stackrel{{\longrightarrow}}{\theta (\frac{\partial}{\partial u}) }& = \frac{\partial}{\partial u}= \frac{1}{u}\stackrel{\rightarrow}{e}^u- \frac{v}{u^3}\stackrel{\rightarrow}{e}^v\\ \stackrel{{\longrightarrow}}{\theta (\frac{\partial}{\partial v}) }& = \frac{\partial}{\partial v}= \frac{1}{u^2} \stackrel{\rightarrow}{e}^v\\ \stackrel{{\longrightarrow}}{\theta (\frac{\partial}{\partial x}) }& = -(\frac{\partial}{\partial x}+ u\frac{\partial}{\partial X}+ v \frac{\partial}{\partial u}+ \frac{3v^2}{2u}\frac{\partial}{\partial v})= - (u \stackrel{\rightarrow}{e}^X+ \frac{v}{u} \stackrel{\rightarrow}{e}^u+ \frac{v^2}{2u^3}\stackrel{\rightarrow}{e}^v) \end{array}\right.$$ Hence the components of $\theta$ (the coefficients of $\stackrel{\rightarrow}{e}^X, \stackrel{\rightarrow}{e}^u, \stackrel{\rightarrow}{e}^v$) are $$\theta_1= dX- udx,\ \ \theta_2= \frac{1}{u}du- \frac{v}{u}dx,\ \ \theta_3= \frac{1}{u^2}dv- \frac{v}{u^3} du- \frac{v^2}{2u^3} dx.$$ Note again that $\Gamma$ can be recovered from the groupoid ${\mathcal{G}}$ and the form $\theta\in \Omega^1({\mathcal{G}}, \mathbb{R}^3)$ without any reference to jet groupoids, by looking at the bisection of ${\mathcal{G}}$ that kill $\theta$. Generalized pseudogroups {#Generalized pseudogroups} ------------------------ In this section we recall the construction of the jet groupoids $J^k{\mathcal{G}}$ associated to an arbitrary Lie groupoid ${\mathcal{G}}$, of the jet algebroids $J^kA$ associated to an arbitrary Lie algebroid $A$, the associated adjoint representations and Cartan forms; these are rather straightforward generalizations of the $k$-jet groupoids and algebroids that we already discussed and which are recovered when ${\mathcal{G}}$ is a pair groupoid. Finally, these will serve as the start of a theory of “generalized pseudogroups” that we are proposing (for motivations, see subsection \[Generalized Lie pseudogroups\]). ### Jet groupoids and algebroids {#Jet groupoids and algebroids} In this section we recall the jet construction applied to general Lie groupoids and Lie algebroids. First, we consider Lie groupoids. The concept of bisections of ${\mathcal{G}}$ (see definition \[definition-bisections\]) extends easily to that of local bisection, the difference being that the latter are defined only over some open $U\subset M$ (and then $\phi_b$ is a diffeomorphism from $U$ to $\phi_b(U)$); if $b_1$ is defined on $U_1$ and $b_2$ on $U_2$, then $b_1\cdot b_2$ is a local bisection defined on $\phi_{b_2}^{-1}( U_1)\cap U_2$. We denote by $\textrm{Bis}_{\textrm{loc}}({\mathcal{G}})$ the set of local bisections of ${\mathcal{G}}$. With these, for any integer $k\geq 0$, one defines the $k$-jet groupoid $$J^k{\mathcal{G}}:= \{{j}^{k}_{x}b:\ \ b\in \textrm{Bis}_{\textrm{loc}}({\mathcal{G}}), \ x\in \textrm{Dom}(b)\},$$ with groupoid structure given by: $$\begin{aligned} s({j}^k_xb)= x,\ t({j}^k_xb)= \phi_b(x),\end{aligned}$$ $$\begin{aligned} {j}^k_{\phi_{b_2}(x)}b_1\cdot{j}^k_xb_2={j}^k_x(b_1\cdot b_2),\text{ and }({j}^k_xb)^{-1}={j}_{\phi_{b}(x)}^k(b^{-1}).\end{aligned}$$ Note that $J^k{\mathcal{G}}$ is an open subspace of the manifold of $k$-jets of sections of the source map, hence it carries a natural smooth structure and then it is not difficult to check that $J^k{\mathcal{G}}$ becomes a Lie groupoid. 
$J^k{\mathcal{G}}$, with the Lie groupoid structure described above, is called the **$k$-jet groupoid associated to ${\mathcal{G}}$**.  Of course, when ${\mathcal{G}}= \Pi(M)$ is the pair groupoid of $M$, we recover the $k$-jet groupoids $J^k(M, M)$. As in the previous discussion, to any Lie algebroid $A$ and for any integer $k\geq 0$, one associates a new Lie algebroid $J^kA$. The underlying vector bundle is just the vector bundle of $k$-jets of sections of $A$; the anchor is the composition of the anchor of $A$ with the projection $J^kA\to A$ and the main property of the bracket is that $$[j^k(\alpha), j^k(\beta)]= j^k([\alpha, \beta])$$ (for all $\alpha, \beta\in \Gamma(A)$). Of course, since the space of sections of $J^kA$ is generated, as a $C^{\infty}(M)$-module, by the holonomic sections (i.e. those of type $j^k\alpha$), the Leibniz identity implies that the previous condition determines the bracket of $J^kA$ uniquely; of course, one proves separately that such a bracket actually exists. For that, one could use for instance the explicit formulas in therms of the relative Spencer connection, given in the next subsection. Note also that the projections $J^kA\to J^{k-1}A$ become Lie algebroid morphisms. Finally, one also shows, as in the case of pair groupoids, that for any Lie groupoid ${\mathcal{G}}$ with Lie algebroid $A$, $J^kA$ is (isomorphic to) the Lie algebroid of $J^k{\mathcal{G}}$.  \[when working with jets\]  According to remark \[1-when working with jets\], for $k= 1$, there is a slightly different description of ${J}^1{\mathcal{G}}$, which we will be using whenever we have to work more explicitly with $J^1{\mathcal{G}}$. According to the remark, we identify ${j}^1_xb$ with $d_xb$ and then ${J}^1{\mathcal{G}}{\rightrightarrows}M$ is interpreted as the the space of splittings $$\begin{aligned} \sigma_g :T_xM{\longrightarrow}T_g{\mathcal{G}}\end{aligned}$$ of the map $d_gs:T_g{\mathcal{G}}\to T_xM$, with $g$ an element in ${\mathcal{G}}$ and $x= s(g)$, with the property that $$\begin{aligned} \lambda_{\sigma_g}:= {d}t\circ\sigma:T_xM\to T_{t(g)}M \text{ is an isomorphism.}\end{aligned}$$ The groupoid structure of $J^1({\mathcal{G}})$ becomes: $$\begin{aligned} s(\sigma_g)= s(g),\ t(\sigma_g)= t(g), \end{aligned}$$ $$\label{mult-i-J1} \sigma_g\cdot \sigma_h (u)=({d}m)_{g, h}(\sigma_g(\lambda_{\sigma_h}(u)), \sigma_h(u)) .$$ A similar discussion applies at the Lie algebroid level, when working with $J^1A$. In this case one can use the Spencer decomposition of remark \[remark-J-decomposition\] to represent the sections of $J^1A$ as pairs $(\alpha, \omega)$ with $\alpha$ a section of $A$ and $\omega\in \Omega^1(M, A)$ (but be aware of the module structure (\[strange-module-structure\])!). The Lie bracket can then be written as $$[(\alpha, \omega), (\alpha', \omega')]= ([\alpha, \alpha'], L_{\alpha}(\omega')- L_{\alpha'}(\omega)+ [\omega, \omega']_{\rho}),$$ where $$[\omega, \omega']_{\rho}(X)= \omega(\rho(\omega'(X)))- \omega'(\rho(\omega(X))),$$ and the Lie derivative $L_{\alpha}(\omega')$ is $$L_{\alpha}(\omega')(X)= [\alpha, \omega'(X)]- \omega'([\rho(\alpha), X]).$$ Another way of writing (and explaining) these formulas will be given in the next subsection. 
### The adjoint representations; the Cartan forms {#The adjoint representation} Given a Lie groupoid ${\mathcal{G}}$ over $M$ with Lie algebroid denoted by $A$, the groupoid ${J}^k{\mathcal{G}}$ has canonical representations on the vector bundles $J^{k-1}TM$ and $J^{k-1}A$, known under the name of “adjoint representations”. This is done in complete analogy with the case of the pair groupoid, explained in example \[adjoint-for-classical\]. Let us repeat the construction but, for notational simplicity, let us assume that $k= 1$. Then: 1. the linear action of $J^1{\mathcal{G}}$ on $TM$ associates to an element $\sigma= {j}^{1}_{x}b \in J^1{\mathcal{G}}$ the isomorphism $\lambda_{\sigma}:= d_x\phi_b: T_{s(\sigma)}M \to T_{t(\sigma)}M$. 2. the linear action of $J^1{\mathcal{G}}$ on $A$ (the adjoint action) is defined as follows. A bisection $b$ of ${\mathcal{G}}$ acts on ${\mathcal{G}}$ by conjugation $$C_b(g) = b(t(g))\cdot g \cdot b(s(g))^{-1}.$$ It is clear that this action maps units to units, and source fibers to source fibers. Moreover, the differential of $C_b$ at a unit $x$ depends only on ${j}^1_xb$. We define $${\text{\rm Ad}\,}_{{j}^1_xb}{\alpha}= {d}_{x}C_b({\alpha})$$ for all ${\alpha}\in A_x$. Using the description of ${J}^1{\mathcal{G}}$ given in remark \[when working with jets\], the adjoint representations can be described as follows. If ${\alpha}\in A_x$, and ${\epsilon}\mapsto h_{{\epsilon}}$ is any curve in $s^{-1}(x)$ such that $$h_0 = 1_x, \ \text{ and } \ \frac{{d}}{{d}{\epsilon}}\big{|}_{{\epsilon}= 0}h_{{\epsilon}} = {\alpha},$$ then $$\begin{aligned}\label{eq: Ad} {\text{\rm Ad}\,}_{\sigma_g}{\alpha}& = \frac{{d}}{{d}{\epsilon}}\big{|}_{{\epsilon}= 0}m(m(b(t(h_{{\epsilon}})),h_{\epsilon}), b(x))\\ &={d}_gR_{g^{-1}} \circ {d}_{(g, s(g))}m(\sigma_g(\rho({\alpha})), {\alpha}). \end{aligned}$$ Similarly, ${J}^1A$ has a canonical adjoint representations on $TM$ and on $A$ (both denoted by ${\text{\rm ad}\,}$) determined by the Leibniz identities and the conditions $${\text{\rm ad}\,}_{{j}^1{\alpha}}X = [\rho({\alpha}),X], \ \text{ and } \ {\text{\rm ad}\,}_{{j}^1{\alpha}}{\beta}= [{\alpha},{\beta}].$$ Using the Spencer decomposition to represent the sections of $J^1A$ as pairs $(\alpha, \omega)$ as at the end of remark \[when working with jets\], one has the general formulas $${\text{\rm ad}\,}_{(\alpha, \omega)}(X)= [\rho(\alpha), X]+ \rho(\omega(X)), \ {\text{\rm ad}\,}_{(\alpha, \omega)}(\beta)= [\alpha, \beta]+ \omega(\rho(\beta))$$ (see also below). Again, if $A$ is the Lie algebroid of a Lie groupoid ${\mathcal{G}}$, then these representations correspond (via lemma \[derivating-representations\]) to the canonical representations of ${J}^1{\mathcal{G}}$ on $TM$ and $A$ respectively (mentioned in the groupoid version of our discussion). What is probably less known is the fact the the Lie bracket of $J^1A$ and the adjoint actions can be described explicitely using Spencer’s relative connection $D^{\text{\rm clas}}$ (see remark \[remark-J-decomposition\]). 
For the actions, $$\begin{aligned} \label{actions} {\text{\rm ad}\,}_{\xi}(X)= [\rho(\xi), X]+ \rho D^{\text{\rm clas}}_{X}(\xi), \ \ {\text{\rm ad}\,}_{\xi}({\alpha})= [pr(\xi),{\alpha}]+ D^{\text{\rm clas}}_{\rho({\alpha})}(\xi).\end{aligned}$$ Using these, each $\xi\in \Gamma(J^1A)$ induces a Lie derivative ${L}_{\xi}$ on $\Omega^1(M, A)$ by $$\begin{aligned} \label{adjunta} L_{\xi}(\omega)(X)= {\text{\rm ad}\,}_{\xi}(\omega(X))- \omega([\rho(\xi),X])\end{aligned}$$ and then, for the bracket of $J^1A$, one finds $$\begin{aligned} \label{bracket} [\xi, \eta]= j^1([pr(\xi), pr(\eta)])+ L_{\xi}(D^{\text{\rm clas}}(\eta))- L_{\eta}(D^{\text{\rm clas}}(\xi)).\end{aligned}$$ Note several of the advantages of this point of view: - (\[bracket\]) can be taken as the definition of the bracket of $J^1A$. The defining properties of the bracket (that it satisfies Leibniz and the bracket of $1$-jets is the $1$-jet of the bracket are easy checks); - (\[actions\]) can be taken as the definition of the adjoint actions; - both formulas work for the higher jet algebroids $J^kA$ (and this will be our definition for the adjoint representations of $J^kA$); - more conceptually, it indicates that the key ingredient for the entire theory are the Spencer operators $D^{\text{\rm clas}}$. This will become more clear in the later parts of the thesis. One of the first questions to answer is: which are the key properties of $D^{\text{\rm clas}}$ that make the theory work? (e.g. what makes the bracket satisfy the Jacobi identity?; what ensures that the adjoint actions are actual actions?). The answer will be given later on, when discussing abstract Spencer operators. Finally, we move to the global counterpart of the Spencer operators $D^{\text{\rm clas}}$, i.e. to the Cartan forms. Again, the precise meaning of “global counterpart" will be made more clear later on, where we will actually see that the Cartan form and the Spencer operator are, modulo the Lie functor, the same object. Given a Lie groupoid ${\mathcal{G}}$ with Lie algebroid $A$, the associated Cartan forms will be $1$-forms $$\theta\in \Omega^1({J}^k{\mathcal{G}}, t^*J^{k-1}A).$$ They are induced by the Cartan forms associated to the bundle $s: {\mathcal{G}}\to M$ (see subsection \[PDEs; the Cartan forms:\]) by restricting them to ${J}^k{\mathcal{G}}\subset {J}^k({\mathcal{G}}\stackrel{s}{\to} M)$. As such, they take values in $$\pi^* T^{s} {J}^{k-1}{\mathcal{G}}\cong \pi^* t^* {J}^{k-1}A= t^* {J}^{k-1}A .$$ The main property of the Cartan forms discussed before becomes: A bisection $\xi$ of ${J}^k{\mathcal{G}}$ is of type ${j}^kb$ for some bisection $b$ of ${\mathcal{G}}$ if and only if $\xi^*(\theta)= 0$. One can also give an explicit description of $\theta$, completely similar to the one of the Cartan forms associated to pseudogroups (subsection \[Lie pseudogroups\]). Here is the outcome when $k= 1$, i.e. for $$\theta\in \Omega^1({J}^1{\mathcal{G}}, t^{\ast}A).$$ Let ${pr}:{J}^1{\mathcal{G}}\to {\mathcal{G}}$ be the canonical projection and let $\xi$ be a vector tangent to ${J}^1{\mathcal{G}}$ at some point ${j}^1_xb\in {J}^1{\mathcal{G}}$. 
Then the difference $${d}_{{j}^1_xb}{pr}(\xi)-{d}_x b\circ {d}_{{j}^1_xb}s(\xi) \in T_g{\mathcal{G}}$$ lies in $ \ker {d}s$, hence it comes from an element in $A_{t(g)}$: $$\theta_{\mathrm{can}}(\xi)=R_{b(x)^{-1}}({d}_{{j}^1_xb}pr(\xi)- {d}_x b\circ {d}_{{j}^1_xb}s(\xi))\in A_{t(g)}.$$ ### Generalized Lie pseudogroups {#Generalized Lie pseudogroups} One of the main problems with the various notions of “Lie pseudogroups” that one finds in the literature is that, in most cases, the pseudogroups are assumed to be transitive (recall that a pseudogroup $\Gamma$ acting on $M$ is transitive if for any $x, y\in M$, there exists $\phi\in \Gamma$ such that $\phi(x)= y$; equivalently, $\Gamma^{(0)}$ is the pair groupoid of $M$). Moreover, the literature on “the intransitive case” refers to a rather mild relaxation of transitivity as it still requires the orbits of $\Gamma$ to have constant dimension (usually required to the be fibers of a submersion $I: M\to B$). Of course, this excludes some very simple examples. Here we indicate how, using arbitrary Lie groupoids instead of the pair groupoid (and bisections instead of local diffeomorphisms), there is a straightforward generalization of the notion of pseudogroups, for which the intransitivity is built in (because one can start with an intransitive Lie groupoid in the first place), and for which the standard theory can be extended without much trouble (that is at least the way we organized the exposition). So, let ${\mathcal{G}}$ be a Lie groupoid over a manifold $M$. When ${\mathcal{G}}= \Pi(M)$ we will recover the classical theory. In general, we consider the set $\operatorname{Bis}_{\text{\rm loc}}({\mathcal{G}})$ of local bisections of ${\mathcal{G}}$, where the composition of elements of $\operatorname{Diff}_{\text{\rm loc}}(M)$, is replaced by the product of local bisections (see definition \[Lie groupoids\] and the previous subsection). A [**generalized Lie pseudogroup**]{} is a Lie groupoid ${\mathcal{G}}$ together with a subset $\Gamma\subset\operatorname{Bis}_{\text{\rm loc}}({\mathcal{G}})$, satisfying: 1. If $b\in \Gamma$, then $b^{-1}\in\Gamma$. 2. If $b_1,b_2\in \Gamma$ and $b_1\cdot b_2$ is defined, then $b_1\cdot b_2\in \Gamma$. 3. If $b\in\Gamma$ and $U$ is an open set contained in the domain of $b$, then $b|_U\in\Gamma$. 4. If $b:U\to {\mathcal{G}}$ is a bisection and $U$ can be covered be a family of open sets $U_i$ such that $b|_{U_i}\in\Gamma$ for all $i$, then $b\in\Gamma$. We also say that $\Gamma$ is supported by ${\mathcal{G}}$, or that $\Gamma$ is a ${\mathcal{G}}$-pseudogroup.  Since any bisection $b$ of ${\mathcal{G}}$ induces a local diffeomorphism $\phi_b:= t\circ b$ on $M$, any generalized pseudogroup $\Gamma$ induces a classical pseudogroup $$\Gamma^{\textrm{cl}}:= \{ \phi_b: b\in \Gamma \},$$ which we call the [**classical shadow**]{} of $\Gamma$; however, in many case $\Gamma^{\textrm{cl}}$ is badly behaved although $\Gamma$ is not. 
As in the classical case (see subsection \[Lie pseudogroups\]), - the $k$-jets of elements of $\Gamma$ define a subgroupoid $$\begin{aligned} \Gamma^{(k)}\subset J^k{\mathcal{G}};\end{aligned}$$ - the order $k$ of $\Gamma$ is determined by the condition that, for a bisection $b$ of ${\mathcal{G}}$: $$j^{k}_{x}b\in \Gamma^{(k)} \ \ \forall \ \ x\in \textrm{Dom}(b) \Longrightarrow b\in \Gamma ;$$ - we say that $\Gamma$ is smooth if each $\Gamma^{(k)}$ is smooth (as a subgroupoid of $J^k{\mathcal{G}}$); - in the smooth case (which we tacitly assume from now on), we obtain a tower of Lie groupoids $$\ldots {\longrightarrow}\Gamma^{(2)}{\longrightarrow}\Gamma^{(1)}{\longrightarrow}\Gamma^{(0)}$$ and, passing to Lie algebroids, a tower of Lie algebroids $$\ldots {\longrightarrow}A^{(2)} {\longrightarrow}A^{(1)}{\longrightarrow}A^{(0)} .$$ Here, each $A^{(k)}$ is a Lie sub-algebroid of $J^kA$, where $A$ is the Lie algebroid of ${\mathcal{G}}$; - the adjoint actions of $J^k{\mathcal{G}}$ (see the previous subsections) restrict to give the adjoint representations of $\Gamma$: $$A^{(k-1)}, J^{k-1}TM \in \textrm{Rep}(\Gamma^{(k)});$$ - the Cartan form on $J^k{\mathcal{G}}$ restricts to give the Cartan forms of $\Gamma$: $$\theta\in \Omega^1(\Gamma^{(k)}; A^{(k-1)}).$$ Similarly for the classical Spencer relative connections: $$D^{\textrm{clas}}: \Gamma(A^{(k)}){\longrightarrow}\Omega^1(M, A^{(k-1)});$$ - for each $k$, one considers $$\mathfrak{g}^{k}(\Gamma):= \textrm{Ker} (A^{(k)}{\longrightarrow}A^{(k-1)}).$$ Since the kernel of the map $J^kA{\longrightarrow}J^{k-1}A$ is canonically identified with $\textrm{Hom}(S^kTM, A)$, we have canonical inclusions $$\mathfrak{g}^{k}(\Gamma)\subset \textrm{Hom}(S^kTM, A),$$ hence $\mathfrak{g}^k(\Gamma)$ is a tableaux (bundle). Again, $\mathfrak{g}^{k+1}(\Gamma)$ sits inside the prolongation of $\mathfrak{g}^k(\Gamma)$ hence, in the terminology of definition \[tableaux tower\] $${\mathfrak{g}}^{\infty}(\Gamma): = ({\mathfrak{g}}^1(\Gamma), {\mathfrak{g}}^2(\Gamma), \ldots )$$ is a bundle of tableaux towers over $M$. What is maybe less obvious is the notion that corresponds to the transitivity from the classical case. To avoid confusions, we will use the term “full”. More precisely, given a generalized $\Gamma$ supported by ${\mathcal{G}}$, we say that $\Gamma$ is a **full generalized pseudogroup** if $\Gamma^{(0)}= {\mathcal{G}}$. We expect that a large part of the classical theory of transitive Lie pseudogroups can be generalized to such full pseudogroups. Here are some examples of generalized pseudogroups. We start with some trivial ones, in order to illustrate the concept.  For any submersion $p: M\to B$ between two manifolds, one can form the subgroupoid ${\mathcal{G}}:= M\times_{B}M$ of the pair groupoid of $M$, consisting of pairs $(x, X)$ with $p(x)= p(X)$. In this case, generalized pseudogroups $\Gamma$ supported by ${\mathcal{G}}$ are the same thing as classical pseudogroups on $M$ with the property that $p\circ f= p$ – a property that is usually formulated as: “$p$ is an invariant of the pseudogroup”. As we have mentioned above, a large part of the literature on “intransitive pseudogroups” refers to this rather superficial relaxation of transitivity: one requires that the orbits of $\Gamma$ are the fibers of a submersion $p: M\to B$. In other words, one is looking at full generalized pseudogroups supported by groupoids of type $M\times_{B}M$. 
Of course, the main point in this case is that, since $M\times_{B}M\subset M\times M$, one is still dealing with classical pseudogroups.  On $M=\mathbb{R}^2$ (with coordinates denoted by $(x, y)$) we consider the pseudogroup $\Gamma^{\textrm{naive}}$ consisting of all the transformations $(X, Y)$ of type $$\Gamma^{\textrm{naive}}: X= x\cdot \textrm{cos}(\alpha)+ y\cdot \textrm{sin}(\alpha), \ Y= - x\cdot \textrm{sin}(\alpha)+ y\cdot \textrm{cos}(\alpha),$$ with $\alpha \in \mathbb{R}$ variable. Intuitively, $\Gamma^{\textrm{naive}}$ is a Lie pseudogroup, because its elements can be described as the solutions of the system $$\frac{\partial X}{\partial x}= \frac{\partial Y}{\partial y}, \ \ \ \frac{\partial X}{\partial y}= - \frac{\partial Y}{\partial x}, \ \ \ X^2+ Y^2= x^2+ y^2 .$$ However, this groupoid has orbits the concentric circles around the origin, plus the origin itself, hence it fails the usual regularity assumptions; the origin poses smoothness problems in the associated jet groupoids (the starting components $(x, y, X, Y)$ are “free” except for $x= y= 0$, when one must have $X= Y= 0$). There is also a more philosophical criticism on $\Gamma^{\textrm{naive}}$: while this is clearly about the standard action of $S^1$ on $\mathbb{R}^2$ by rotations, this geometric fact is not really reflected in the objects associated to $\Gamma^{\textrm{naive}}$. Our point is to look at this example slightly differently and to encode the geometric situation in the associated groupoid. More precisely, consider the groupoid associated to the action of $S^1$ on $\mathbb{R}^2$ (see example \[ex-action-groupoids\]): $${\mathcal{G}}:= S^1\ltimes \mathbb{R}^2 .$$ Bisections of ${\mathcal{G}}$ correspond to functions $Z: \mathbb{R}^2\to S^1$ with the property that $(x, y)\mapsto Z\cdot (x, y)$ is a local diffeomorphism. Clearly, the ${\mathcal{G}}$-pseudogroup $\Gamma$ that is naturally associated to our problem is the one defined by the condition that $Z= \textrm{const}$: $$\Gamma: \frac{\partial Z}{\partial x}= \frac{\partial Z}{\partial y}= 0.$$ As it can be easily seen, $\Gamma$ is a smooth ${\mathcal{G}}$-pseudogroup of order one. Actually $\Gamma^{(0)}= {\mathcal{G}}$ and each $\Gamma^{(k)}$ with $k\geq 1$ is a subgroupoid of $J^k{\mathcal{G}}$ isomorphic to ${\mathcal{G}}$. Of course, the classical pseudogroup associated to $\Gamma$, $\Gamma^{\textrm{cl}}$, is precisely the naive version $\Gamma^{\textrm{naive}}$.  There are interesting cases of classical pseudogroups for which, although they are well behaved, it is interesting to realize them as the “classical shadow” of a generalized pseudogroup. For instance, the pseudogroup from example \[SL-2-example\], $$\Gamma:\ \ X= \frac{ax+b}{cx+d}, \ \ ad-bc= 1,$$ is clearly about the action of $SL_2(\mathbb{R})$ on $\mathbb{R}$ (a fact that was not really reflected by the discussion in the example that we mentioned). The full description of the geometric situation is very similar to the one from the previous example: one has the action groupoid ${\mathcal{G}}= SL_2(\mathbb{R})\ltimes \mathbb{R}$ and a ${\mathcal{G}}$-pseudogroup $\widetilde{\Gamma}$ characterized by $Z= \textrm{const}\in SL_2(\mathbb{R})$ (i.e. bisections of type $(x, y)\mapsto (A, x, y)$ with $A\in SL_2(\mathbb{R})$). Again, $\Gamma$ is just the classical shadow of $\widetilde{\Gamma}$.  
Of course, the previous two examples fit into a more general class of examples, associated to an action of a Lie group $G$ on a manifold $M$, to which one associates a classical pseudogroup $\Gamma$ generated by multiplications by elements of $G$, as well as a ${\mathcal{G}}$-pseudogroup $\widetilde{\Gamma}$, where ${\mathcal{G}}= G\ltimes M$ and $\Gamma$ consists of the local bisections which are constant in the $G$-argument. Note that ${\mathcal{G}}$ admits a 1-form similar to the Cartan form, $\xi \in \Omega^1({\mathcal{G}}, \mathfrak{g})$, with values in the Lie algebra of $G$ and such that $\widetilde{\Gamma}$ is characterized by $b^*(\xi)= 0$. Of course, in general, while $\widetilde{\Gamma}$ is smooth, $\Gamma$ has little chance to be well-behaved. Let $(M, \pi)$ be a Poisson manifold, i.e. a manifold endowed with a bivector $\pi$ with the property that the bracket $$\{f, g\}:= df\wedge dg (\pi)= \pi(df, dg)$$ makes $C^{\infty}(M)$ into a Lie algebra. It is well-known that (integrable) Poisson manifolds correspond to symplectic groupoids (see e.g. [@Zung]). Starting with $(M, \pi)$, the corresponding groupoid is most easily seen via the corresponding infinitesimal object- its Lie algebroid. More precisely, associated to $\pi$ there is a Lie algebroid $$A_{\pi}:= T^{*}M,$$ with anchor $\pi^{\sharp}$ – that is $\pi$ interpreted as a linear map $T^*M\to TM$ (hence it is determined by $\xi(\pi^{\sharp}(\eta))= \pi(\eta, \xi)$) and with the bracket uniquely determined by the Leibniz identity and the requirement that, on exact $1$-forms, $$[df, dg]= d\{f, g\} .$$ On general elements $\xi, \eta\in \Omega^1(M)$, $$[\eta, \xi]_{\pi}= L_{\pi^{\sharp}(\eta)}(\xi)- L_{\pi^{\sharp}(\xi)}(\eta)- d\pi(\eta, \xi)).$$ One says that $(M, \pi)$ is integrable if the algebroid $A_{\pi}$ is integrable (note that this property is equivalent to the more geometric – and “groupoid-free” – condition on the existence of complete symplectic realizations- see [@CrainicFernandes:jdd]). In this case, one denotes the unique $s$-simply connected Lie groupoid integrating $A_\pi$ by $\Sigma(M, \pi)$. It then follows that $\Sigma(M, \pi)$ carries a unique symplectic form $\omega$, which is multiplicative (see definition \[def-mult-form\] for the general notion of multiplicativity) and has the property that the source map is a Poisson map. This statement is certainly non-trivial, as it involves an integrability theorem. Actually, that is precisely the integrability theorem that we will generalize in chapter \[The integrability theorem for multiplicative forms\] (theorem \[t1\]) to arbitrary forms (for the case of trivial coefficients that are relevant here, see subsection \[sec: trivial coefficients\] and example \[sympl-gpds-from-t1\]). Using the groupoid $\Sigma= \Sigma(M, \pi)$ and its symplectic form, there is a natural $\Sigma$-pseudogroup associated to $(M, \pi)$: the pseudogroup of Lagrangian bisections: $$\Gamma:= \{ b\in \operatorname{Bis}_{\text{\rm loc}}(\Sigma): b^*(\omega)= 0\}.$$ More geometrically, identifying a bisection $b$ with its graph $L_b\subset \Sigma$, $\Gamma$ consists of Lagrangian subspaces $L\subset \Sigma$ with the property that $s|_{L}$ and $t|_{L}$ are diffeomorphism into opens inside $M$.  There are several classes of examples that are similar to the previous one, obtained for various Poisson-related geometric structures. For Dirac structures, the discussion is completely similar (see e.g. [@BCWZ] for the relevant notions). 
The case of Jacobi structures is more interesting, and the relevant notions will be recalled later in the thesis, in subsection \[Contact groupoids\]. Let us only mention here that the “basic theory” for Jacobi manifolds and their associated (contact) groupoids is rather unsatisfactory in the existing literature. E.g., in [@Jacobi], one works very indirectly by passing to Poisson manifolds and symplectic groupoids (by the symplectization functor). A rather unexpected application of our integrability theorem (theorem \[t1\]) is that it allows us to understand the basics of Jacobi structures directly, shedding light even on the very definition of contact groupoids. The bottom line is that, associated to any integrable Jacobi structure $(L, \{\cdot, \cdot\})$ on $M$ (see subsection \[Contact groupoids\]), there is a contact groupoid $(\Sigma, \theta)$, where $\theta\in \Omega^1(\Sigma, L)$ is a contact form with coefficients in a line bundle, which is multiplicative. Hence one has a natural $\Sigma$-pseudogroup, consisting of bisections that kill $\theta$ (Legendrian bisections).  Starting with any classical pseudogroup $\Gamma$ on $M$, for any $k\geq 0$, the set of bisections of $J^k(M, M)$: $$\Gamma^{(k)}:= \{ j^kb: b\in \Gamma \}$$ form a generalized pseudogroup living on $J^k(M, M)$, which has $\Gamma$ as its classical shadow (in some sense, the two are isomorphic). Note that this example is similar to the ones from the previous two examples since it has a description of type: $$\Gamma^{(k)}= \{ b\in \operatorname{Bis}_{\text{\rm loc}}({\mathcal{G}}): b^*(\theta)= 0 \}$$ where, this time, ${\mathcal{G}}= J^k(M, M)$ and $\theta$ is the Cartan form.  On the other hand, to each generalized pseudogroup $\Gamma$ one can associate a classical pseudogroup. More precisely, given any bisection $b$ of a Lie groupoid ${\mathcal{G}}$, one has an induced right translation by $b$, $$R_b: {\mathcal{G}}{\longrightarrow}{\mathcal{G}}, \ \ R_b(g)= g \cdot \overline{b}(s(g)),$$ where $\overline{b}$ is the section of the target map $t: {\mathcal{G}}\to M$ induced by $b$: $$\overline{b}(y):= b(\phi_{b}^{-1}(y)) \ \ \ (\phi_b= t\circ b).$$ Of course, this makes sense also for local bisections. In particular, if $\Gamma$ is a ${\mathcal{G}}$-pseudogroup, then one has an induced classical pseudogroup induced by right translations, acting on ${\mathcal{G}}$, consisting of all transformations of type $R_b$ with $b\in \Gamma$.  Here is a more “linguistic example”. In general, Lie groups $G$ are interpreted as Lie pseudogroups by letting them act on $G$ by right (or left) translations. This defines a pseudogroup $\Gamma_G$ on $G$. With our terminology, we can say that $G$ can be viewed as a generalized $G$-pseudogroup, and $\Gamma_G$ is the classical pseudogroup induced by right translations, in the sense of the previous example. Outline of the main objects of this thesis ------------------------------------------ Here is a short outline of the main key-words of the thesis. Note that each one of them corresponds to a different chapter, but the order is different (here we present them in the more natural order). The ultimate concept is that of Pfaffian groupoid. One aim is to study it using Lie’s infinitesimal methods, and that gives rise to Spencer operators on Lie algebroids. However, before we start with the multiplicative (or “oid") story, we first go through the case of bundles (without any multiplication involved). 
In all these cases, we will carry out some of the standard theory from the geometry of PDEs (solutions, prolongations, Spencer cohomology), attempting to attain a more conceptual formulation or understanding of the objects involved (e.g. in the case of prolongations we will point out their universal property, while for prolongations of Pfaffian groupoids we will show that the compatibility condition of the Cartan forms involved is equivalent to a Maurer-Cartan equation).\ **Pfaffian bundles:** A Pfaffian bundle $$\pi:R{\longrightarrow}M,\quad H\subset TR$$ is a bundle $\pi$, together with a distribution $H\subset TR$ satisfying some compatibility conditions with $\pi$ (see definition \[def: pfaffian distributions\]). One of the main notions of Pfaffian bundles is that of a [**solution**]{}. These are sections $\sigma\in\Gamma(R)$ with the property that $$\begin{aligned} d_x\sigma(T_xM)\subset H_{\sigma(x)},\quad\text{for all }x\in M. \end{aligned}$$ Of course one also has the dual picture with one forms. The two points of view will be present in this thesis. We remark that our main interest is Pfaffian bundles along the source map of a Lie groupoid.\ **Relative connections:** Relative connections are linear Pfaffian bundles (this will require a precise formulation and a proof). They can be described using simpler data: some operator $$\begin{aligned} {\ensuremath{\mathfrak{X}}}(M)\times\Gamma(F){\longrightarrow}\Gamma(E), \end{aligned}$$ satisfying connection-like properties along a vector bundle map $l:F\to E$ (see definition \[definitions\]). These also arise as the linearization (along solutions) of general Pfaffian bundles.\ **Pfaffian groupoids:** Pfaffian groupoids are the multiplicative version of Pfaffian bundles; they can actually be viewed as Pfaffian bundles if we consider the source map $s:{\mathcal{G}}\to M$. More precisely, a Pfaffian groupoid is Lie groupoid ${\mathcal{G}}$, together with a (multiplicative) distribution $${\mathcal{H}}\subset T{\mathcal{G}}$$ satisfying some compatibility with the groupoid structure (see definition \[def: linear distributions\]). The dual point of view consists of (multiplicative) one forms on ${\mathcal{G}}$ with values in some representation, which are compatible with the groupoid structure (see definition \[def-mult-form\]). Pfaffian groupoids provide the main framework for all the examples of groupoids that arise from (generalized) Lie pseudogroups.\ **Spencer operators:** Spencer operators arise as the infinitesimal counterpart of Pfaffian groupoids. They are actually the linearization of a Pfaffian groupoid along the unit map. In particular they are relative connections $$\begin{aligned} {\ensuremath{\mathfrak{X}}}(M)\times \Gamma(A){\longrightarrow}\Gamma(E)\end{aligned}$$ on a Lie algebroid $A$, satisfying some compatibility conditions with the algebroid structure (see definition \[def1\]). It is remarkable that the Pfaffian groupoid itself can be recovered from its Spencer operator (see theorem \[t2\]).\ **Multiplicative forms and Spencer operators of degree $k$:** More generally we will consider multiplicative forms and Spencer operators of arbitrary degree. 
This generalization turns out to be related also to Poisson and related geometries (we will actually present an application that clarifies some basic aspects of Jacobi manifolds in subsection \[Contact groupoids\]).\ Relative connections {#Relative connections} ==================== Motivated by our attempt to understand linear Pfaffian bundles, we are brought to the notion of relative connection. They also appear as the complete data of linear distribution in the sense of \[def: linear distributions\]. An example one should keep in mind of a relative connection is the classical Spencer operator described in subsection \[sp-dc\].\ Throughout this chapter let $K,E,F_k$ denote the total spaces of smooth vector bundles over a smooth manifold $M$, for any integer $k\geq-1$. We denote by $\pi:J^kF\to M$ the $k$-jet (vector) bundle of $F$, and by $pr:J^{k+1}F\to J^kF$ the canonical projection. See section \[jet-bundles\] and remark \[remark-J-decomposition\] for notation. For ease of notation $T$ and $T^*$ denote the tangent and cotangent bundles of the manifold $M$ respectively. Given $\pi : F \to M$, let $T^\pi F$ denote the subbundle of $TF$ given by $\ker d\pi$, the bundle of vectors tangent to the fibers of $\pi$. We will be repeatedly using the Spencer decomposition $$\begin{aligned} \label{Spencer decomposition} \Gamma(J^1E)\simeq \Gamma(E)\oplus \Omega^1(M,E)\end{aligned}$$ and the classical Spencer operator relative to the projection $pr:J^1E\to E$ $$\begin{aligned} \label{Spencer operator} D^{\text{\rm clas}}:\Gamma(J^1E){\longrightarrow}\Omega^1(M,E)\end{aligned}$$ See remark \[remark-J-decomposition\]. Preliminaries, definitions and basic properties ----------------------------------------------- ### Relative connections: definitions and basic properties Let $l:F\to E$ be a surjective vector bundle map of vector bundles over $M$. \[definitions\] - An [**$l$-connection on $F$ (or a connection on $F$ relative to $l$)**]{}, is any linear operator $$D:\Gamma(F){\longrightarrow}\Omega^1(M,E)\quad s\mapsto D_X(s):=D(s)(X)$$ satisfying the Leibniz identity relative to $l$: $$D_X(fs)=fD_Xs+L_X(f)l(s)$$ for all $s\in\Gamma(F)$, $X\in{\ensuremath{\mathfrak{X}}}(M)$ and $f\in C^\infty (M)$. We use the notation $(D,l):F\to E$; we also say that $(F,D)$ is a relative connection. - We say that $s\in\Gamma(F)$ is a [**solution of $(F,D)$**]{} if $D_X(s)=0$ for all $X\in{\ensuremath{\mathfrak{X}}}(M)$. We denote the set of solutions of $(F,D)$ by $$\begin{aligned} \text{\rm Sol}(F,D)\subset \Gamma(F)\end{aligned}$$ - [The **symbol space of $D$**]{}, denoted by $$\begin{aligned} {\mathfrak{g}}(F,D)\subset F\end{aligned}$$ or simply by ${\mathfrak{g}}$, is the subbundle given by the kernel of $l$. - We call the [**symbol map of $D$**]{} the linear map over $M$ $$\begin{aligned} \partial_D:{\mathfrak{g}}{\longrightarrow}{\mathrm{Hom}}(TM,E)\end{aligned}$$ defined by $\partial_D(v)(X)=D_X(v)$ for $v\in {\mathfrak{g}}_x$ and $X\in T_xM.$ - The **$k$-prolongation of the symbol map $\partial_D$**, in the sense of definition , is denoted by $$\begin{aligned} {\mathfrak{g}}^{(k)}(F,D)\subset S^kT^*\otimes {\mathfrak{g}}\end{aligned}$$ or simply by ${\mathfrak{g}}^{(k)}$. - We say that an $l$-connection $D$ is **standard** if $\partial_D$ is injective. 
\[rk-convenient’\]An $l$-connection can be interpreted as a vector bundle map $$\begin{aligned} j_D:F{\longrightarrow}J^1E\end{aligned}$$ given at the level of sections by $$\begin{aligned} j_D(s)=(l(s),D(s))\in\Gamma(E)\oplus\Omega^1(M,E),\end{aligned}$$ where we used the Spencer decomposition . Note that the restriction of $j_D=(l,D)$ to the symbol space ${\mathfrak{g}}$ takes values in the subbundle ${\mathrm{Hom}}(TM,E)$ of $J^1E$. Indeed, if $s$ is a section of ${\mathfrak{g}}$, then $$\begin{aligned} pr(j_D(s))=pr((l(s),D(s))=pr(0,D(s))=0\end{aligned}$$ i.e. $j_D(s)\in\ker pr={\mathrm{Hom}}(TM,E)$, where $pr:J^1E\to E$ is the projection map. Note that $$\begin{aligned} \partial_D=j_D|_{{\mathfrak{g}}}.\end{aligned}$$ \[D-diff\] Given $(D,l):F\to E$, there exists a unique linear operator $$\begin{aligned} \bar{D}:\Omega^{*}(M,F)\to\Omega^{*+1}(M,E)\end{aligned}$$ such that - on $\Gamma(F)=\Omega^0(M,F)$ is equal to $D$, and - satisfies the Leibniz identity relative to $l$, i.e. $$\begin{aligned} \label{unique}\bar{D}(\omega\otimes s)=d\omega\otimes l(s)+(-1)^{k}\omega\wedge D(s)\end{aligned}$$ for any $k$-form $\omega\in\Omega^k(M)$ and any section $s\in\Gamma(F)$. Moreover, $\bar{D}$ is given by the Koszul formula $$\label{D-differential}\begin{aligned} \bar{D}_{(X_0,\ldots,X_k)}&(\theta)=\sum_{i}(-1)^{i}D_{X_i}(\theta(X_0,\ldots,\hat{X_i},\ldots,X_k))\\&+\sum_{i<j}(-1)^{i+j}l(\theta([X_i,X_j],X_0,\ldots,\hat{X_i},\ldots,\hat{X_j},\ldots,X_k)) \end{aligned}$$ for any $\theta\in\Omega^k(M,F).$ Uniqueness follows from the fact that locally any form $\theta\in\Omega^k(M,F)$, can be written as $$\begin{aligned} \theta=\sum_i\omega^{i}\otimes s_i\end{aligned}$$ where $\omega^{i}\in\Omega^k(M)$ and $s_i\in\Gamma(F)$. So $$\begin{aligned} \bar{D}(\theta)=\sum_i\bar{D}(\omega^{i}\otimes s_i)=\sum_i(d\omega^{i}\otimes l(s_i)+(-1)^{k}\omega^{i}\wedge D(s_i)).\end{aligned}$$ On the other hand, $\bar{D}$ defined by equation clearly extends $D$ and satisfies the Leibniz identity relative to $l$. Indeed, $$\begin{aligned} \bar{D}_{(X_0,\ldots,X_k)}&(\omega\otimes s)=\sum_{i}(-1)^{i}D_{X_i}(\omega(X_0,\ldots,\hat{X_i},\ldots,X_k)s)\\&+\sum_{i<j}(-1)^{i+j}l(\omega([X_i,X_j],X_0,\ldots,\hat{X_i},\ldots,\hat{X_j},\ldots,X_k)s)\\ =&\sum_{i}(-1)^{i}\omega(X_0,\ldots,\hat{X_i},\ldots,X_k)D_{X_i}(s)\\ &+\sum_{i}(-1)^{i}L_{X_i}(\omega(X_0,\ldots,\hat{X_i},\ldots,X_k))l(s)\\ &+\sum_{i<j}(-1)^{i+j}\omega([X_i,X_j],X_0,\ldots,\hat{X_i},\ldots,\hat{X_j},\ldots,X_k)l(s)\\ =& d\omega\otimes l(s)+(-1)^k\omega\wedge D(s). \end{aligned}$$ \[delta\]The restriction of $\bar{D}:\Omega^k(M,F)\to\Omega^{k+1}(M,E)$ to $\Omega^k(M,{\mathfrak{g}})$ is a $C^{\infty}(M)$-linear map and therefore it induces a vector bundle map over $M$ $$\begin{aligned} \bar\partial_{D}:\wedge^{k}T^*\otimes {\mathfrak{g}}{\longrightarrow}\wedge^{k+1}T^*\otimes E,\quad \bar\partial_D(\omega\otimes s)=(-1)^k\omega\wedge \partial_D(s)\end{aligned}$$ for any $\omega\in\wedge^kT^*$ and $s\in{\mathfrak{g}}$. The proof is an easy consequence of formula . \[standard\] An $l$-connection $(D,l):F\to E$ is standard if and only if $$j_D:F\to J^1E$$ is injective. In this case $D=D^{\text{\rm clas}}\circ j_D$, where $D^{\text{\rm clas}}$ is the classical Spencer operator of $E$. For $s,s'$ sections of $F$, $$j_D(s-s')=(l(s-s'),D(s-s')).$$ So, if $j_D(s-s')(x)=0$ for some $x\in M$ then $l(s(x))=l(s'(x))$, i.e. $s(x)-s'(x)\in{\mathfrak{g}}$, and therefore we can assume that the section $s-s'$ belongs to $\Gamma({\mathfrak{g}})$. 
With this, $j_D(s-s')(x)=0$ if and only if $D(s)(x)=D(s')(x)$, and $$\label{eq:2} \partial_D(s(x)-s'(x))=D(s-s')(x)=D(s)(x)-D(s')(x)=0.$$ That $D=D^{\text{clas}}\circ j_D$ follows from the fact that any $l$-connection $D$ can be recovered from $D^{\text {clas}}$ by $D=D^{\text {clas}}\circ j_D$. ### First examples and operations {#first examples and operations} This subsection has the first basic examples you encounter in the literature and some operations on relative connections which we will use later on. \[linear connections\]A linear connection $$\begin{aligned} \nabla:{\ensuremath{\mathfrak{X}}}(M)\times\Gamma(F){\longrightarrow}\Gamma(F)\end{aligned}$$ is a standard connection relative to the identity map $id:F\to F$. It is well-known that linear connections $\nabla$ correspond to horizontal linear distributions $H\subset TF$. This correspondence is given by $$\begin{aligned} \nabla_X(s)=[\tilde X,\tilde s]\end{aligned}$$ where $X\in{\ensuremath{\mathfrak{X}}}(M)$, $\tilde X\in\Gamma(H)\subset {\ensuremath{\mathfrak{X}}}(M)$ is the lift of $X$ to $H$, and $\tilde s\in\Gamma(T^\pi F)\subset{\ensuremath{\mathfrak{X}}}(F)$ is the extension of $s$ to the vector field constant on each fiber of $F$. It is natural to have a similar interpretation for $l$-connections. This will be done in corollary \[t:linear distributions\]. \[Spencer tower\] On higher jets, we also have a version of the classical Spencer operator (see remark \[remark-J-decomposition\] for $k=1$): On $\pi:J^{k}F\to M$, $D^{k\text{\rm-clas}}$ is the unique relative connection $$\begin{aligned} D^{k\text{\rm-clas}}:\Gamma(J^{k}F){\longrightarrow}\Omega^1(M,J^{k-1}F)\end{aligned}$$ satisfying the Leibniz identity relative to the projection $pr:J^{k}F\to J^{k-1}F$, and the holonomicity condition $$\begin{aligned} D^{k\text{-clas}}(j^{k}s)=0.\end{aligned}$$ Alternatively, $D^{k\text{\rm-clas}}$ is the restriction to $\Gamma(J^{k}F)\subset \Gamma(J^1(J^{k-1}F))$ of the classical Spencer operator $$\begin{aligned} D^{\text{\rm clas}}:\Gamma(J^1(J^{k-1}F)){\longrightarrow}\Omega^1(M,J^{k-1}F)\end{aligned}$$ associated to the vector bundle $J^{k-1}F \to M$. In this case, the symbol space ${\mathfrak{g}}_k=\ker pr$ is equal to $S^{k}T^*\otimes F$. This comes from the short exact sequence of vector bundles over $M$ $$\begin{aligned} 0{\longrightarrow}S^{k}T^*\otimes F\xrightarrow{i}J^{k}F\xrightarrow{pr}J^{k-1}F{\longrightarrow}0\end{aligned}$$ where $i$ at the point $x$ is determined by $$\begin{aligned} i((df_1\otimes\cdots\otimes df_{k})\otimes s)=j^{k}_x(f_1\cdots f_{k}s)\end{aligned}$$ for any $f_1,\ldots,f_{k}\in C^{\infty}(M)$ such that $f_1(x)=\cdots=f_{k}(x)=0$. Note also that $D^{k{\text-{\text{\rm clas}}}}$ is a standard $pr$-connection as $j_{D^{\text{clas}}}=id$. Now we concentrate on operations of relative connections. For this purpose let’s fix some notation. Let $(D^1,l^1):F_1\to E_1$ and $(D^2,l^2):F_2\to E_2$ be relative connections and let $\psi: P\to M$ be a smooth map. #### Composition Let $$\begin{aligned} K\xrightarrow{(\tilde D,\tilde l)} F\xrightarrow{(D,l)}E\end{aligned}$$ be two relative connections. We can compose $(K,\tilde D)$ with $(F,D)$ to obtain connections $$\begin{aligned} D^{i}:\Gamma(K){\longrightarrow}\Omega^1(M,E),\quad i=1,2\end{aligned}$$ relative to the composition $l\circ \tilde l:K\to E$ in two different ways, namely $$\begin{aligned} D^1({\alpha})&=l\circ \tilde D({\alpha})\\ D^2({\alpha})&= D(\tilde l({\alpha})), \end{aligned}$$ where ${\alpha}\in \Gamma(K)$. 
Note that $D^1=D^2$ if and only if $D\circ \tilde l-l\circ \tilde D=0$. #### Direct sum The direct sum $$\begin{aligned} D^1\oplus D^2:\Gamma(F_1\oplus F_2){\longrightarrow}\Omega^1(M,E_1\oplus E_2)\end{aligned}$$ is the connection relative to $l^1\oplus l^2:F_1\oplus F_2\to E_1\oplus E_2$ defined by the equation $$\begin{aligned} D^1\oplus D^1(s_1,s_2):=(D^1(s_1),D^2(s_2))\end{aligned}$$ for any $(s_1,s_2)\in \Gamma(F_1\oplus F_2)$, where we canonically identify $\Gamma(F_1\oplus F_2)$ with $\Gamma(F_1)\oplus\Gamma(F_2)$ as $C^\infty(M)$-modules. #### Tensor product The tensor product $$\begin{aligned} D^1\otimes D^2:\Gamma(F_1\otimes F_2){\longrightarrow}\Omega^1(M,E_1\otimes E_2)\end{aligned}$$ is the connection relative to $l^1\otimes l^2:F_1\otimes F_2\to E_1\otimes E_2$ defined by $$\begin{aligned} D^1\otimes D^1(s_1\otimes s_2):=D^1(s_1)\otimes l^2(s_2)+l^1(s_1)\otimes D^2(s_2)\end{aligned}$$ for any $s_1\otimes s_2\in\Gamma(F_1\otimes F_2)$, where we canonically identify $\Gamma(F_1\otimes F_2)$ with $\Gamma(F_1)\otimes\Gamma(F_2)$ as $C^\infty(M)$-modules. #### Pull-back The pull-back connection $$\begin{aligned} \psi^*D:\Gamma(\psi^*F){\longrightarrow}\Omega^1(P,\psi^* E)\end{aligned}$$ of the map $\psi:P\to M$, is a connection relative to $\psi^*l:\psi^* F\to \psi^*E$. It is given on sections of the form $$\begin{aligned} \psi^*s:=\psi\circ s,\quad s\in \Gamma(F)\end{aligned}$$ by the formula $$\begin{aligned} \label{q} (\psi^*D)_X(\psi^*s):=\psi^*(D_{d\psi(X)}(s))\end{aligned}$$ where $X\in TP$. Formula is extended to $\Gamma(\psi^* F)$ by imposing the Leibniz identity relative to $\psi^*l:\psi^* F\to \psi^*E$ $$\begin{aligned} (\psi^*D)_X(f\psi^*s)=f(\psi^*D)_X(\psi^*s)+L_X(f)\psi^*l(\psi^*s),\quad f\in C^{\infty}(P).\end{aligned}$$ Prolongations of relative connections {#section:prolongation} ------------------------------------- Roughly speaking, a prolongation of a relative connection $D$ is a relative connection sitting above $D$, defined on “the first order approximation of solutions of $D$”. The main property of a prolongation is that it gives an injection between solutions of the prolongation and solutions of the original relative connection. We will have existence results on solutions on relative connections under some homological and smoothness conditions on the process of prolonging.\ Throughout this section $(D,l):F\to E$ is an $l$-connection. ### Compatible connections (abstract prolongations) {#compatible connections} #### Definition and basic properties \[first definitions\] Let $$\begin{aligned} \label{seq} K\xrightarrow{(\tilde D,\tilde l)}F\xrightarrow{(D,l)}E\end{aligned}$$ be relative connections. - We say that $(D,\tilde D)$ are [**compatible connections**]{} if the following two conditions are satisfied: $$\begin{aligned} \label{compatibility1}& D\circ \tilde l-l\circ \tilde D=0\\ \label{compatibility2}& D_X\tilde D_Y-D_Y\tilde D_X-l\circ \tilde D_{[X,Y]}=0\end{aligned}$$ for any $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$. 
- A [**morphism**]{} $\Psi$ from a pair of compatible connections $K\xrightarrow{(\tilde D,\tilde l)}F\xrightarrow{(D,l)}E$ to another $\hat{K}\xrightarrow{(\hat{\tilde D},\hat{\tilde l})}\hat{F}\xrightarrow{(\hat{D},\hat{l})}\hat{E}$ is a commutative diagram of vector bundles $$\begin{aligned} \xymatrix{ K \ar[r]^{\tilde l} \ar[d]^{\Psi_2} & F \ar[r]^l \ar[d]^{\Psi_1} & E \ar[d]^{\Psi_0}\\ \hat K \ar[r]^{\hat {\tilde {l}}} & \hat F \ar[r]^{\hat l} & \hat E }\end{aligned}$$ such that $$\begin{aligned} \label{conmuta} \hat {\tilde {D}}\circ \Psi_2=\Psi_1\circ\tilde{D} ,\quad\quad\hat{D}\circ \Psi_1=\Psi_0\circ D\end{aligned}$$ We say that $\Psi$ is injective if $\Psi_0,\Psi_1$ and $\Psi_2$ are injective. - We say that $(B,\tilde D)$ is a [**prolongation**]{} of $(F,D)$ if $(D,\tilde D)$ are compatible connections and $\tilde D$ is standard. Compatible relative connections already appear in [@Gold3; @Ngo1], in the more familiar form described by the following lemma \[complex\] Let $$\begin{aligned} K\xrightarrow{(\tilde D,\tilde l)}F\xrightarrow{(D,l)}E\end{aligned}$$ by relative connections. If $\dim M>0$, then $(D,\tilde D)$ are compatible connections if and only if the composition $$\begin{aligned} \Omega^*(M,K)\xrightarrow{\bar{\tilde D}}\Omega^{*+1}(M,F)\xrightarrow{\bar{D}}\Omega^{*+2}(M,E)\end{aligned}$$ is zero. Let $s\in\Gamma(K)$ and $\omega\in\Omega^k(M)$. Then $$\label{ax} \bar{D}\circ \bar{\tilde D}(s)(X,Y)=D_X(\tilde D_Y(s))-D_Y(\tilde D_X(s))-l(\tilde D_{[X,Y]}(s))$$ where $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$, and by formula , we have that $$\label{bx} \begin{aligned} \bar{D}\circ \bar{\tilde D}(\omega\otimes s)=&D(d\omega\otimes \tilde l(s)+(-1)^k\omega\wedge \tilde D(s))\\ =&(-1)^{k+1}d\omega\wedge D(\tilde l(s))+(-1)^kd\omega \wedge l(\tilde D(s))\\&+(-1)^k\omega\wedge \bar{D}\circ\bar{D}(s) \end{aligned}$$ Hence, expression is zero for all $s\in\Gamma(F)$ if and only if condition for compatible connections is satisfied, and under this assumption expression is zero for all $\omega\in\Omega^k(M)$ if and only if condition is fulfilled. For $$\begin{aligned} K\xrightarrow{(\tilde D,\tilde l)}F\xrightarrow{(D,l)}E\end{aligned}$$ compatible connections, let ${\mathfrak{g}}$ and ${\mathfrak{g}}'$ be the symbol spaces of $D$ and $\tilde D$ respectively and consider the restriction map $$\begin{aligned} {\mathfrak{g}}'\xrightarrow{\partial_{\tilde D}}{\mathrm{Hom}}(TM,F)\end{aligned}$$ of $j_{\tilde D}:K\to J^1F$. \[values\] The map $$\begin{aligned} \partial_{\tilde D}:{\mathfrak{g}}'\to {\mathrm{Hom}}(TM,F)\end{aligned}$$ takes values in ${\mathrm{Hom}}(TM,{\mathfrak{g}})$. Recall that at the level of sections $$\partial_{\tilde D}(s)=(\tilde l(s),\tilde D(s))=(0,\tilde D(s))=\tilde D(s)$$ for any $s\in\Gamma({\mathfrak{g}}')$. On the other hand, by condition , $$l\circ \tilde D(s)=D\circ \tilde l(s)=0.$$ Hence, $l\circ \partial_{\tilde D}(s)=0$, or equivalently $\partial_{\tilde D}(s)\in{\mathrm{Hom}}(TM,{\mathfrak{g}}).$ The following corollary is a consequence of the previous lemma. \[secuencia\] Let $$\begin{aligned} K\xrightarrow{(\tilde D,\tilde l)}F\xrightarrow{(D,l)}E\end{aligned}$$ be compatible connections. Then for each integer $k$ the sequence of vector bundles $$\begin{aligned} \wedge^kT^*\otimes{\mathfrak{g}}'\xrightarrow{\bar\partial_{\tilde D}}\wedge^{k+1}T^*\otimes{\mathfrak{g}}\xrightarrow{\bar\partial_{D}}\wedge^{k+2}T^*\otimes E\end{aligned}$$ is a complex, i.e. 
$\bar\partial_{D}\circ\bar\partial_{\tilde D}=0$, where ${\mathfrak{g}}'$ and ${\mathfrak{g}}$ are the symbols of $\tilde D$ and $D$ respectively. See remark \[delta\] for the definition of $\bar\partial_D$. \[cohomologia\]Let $$\begin{aligned} K\xrightarrow{(\tilde D,\tilde l)}F\xrightarrow{(D,l)}E,\quad\quad \hat K\xrightarrow{(\hat{\tilde{D}},\hat{\tilde{l}})}\hat F\xrightarrow{(\hat D,\hat l)}\hat E\end{aligned}$$ be compatible connections. Then, if $\Psi$ is a morphism as in definition \[first definitions\], then $\Psi$ induces a morphism $\Psi_*$ of complexes $$\begin{aligned} \xymatrix{\Omega^*(M,K)\ar[r]^{{\tilde D}} \ar[d]^{\circ\Psi_2} &\Omega^{*+1}(M,F)\ar[r]^{{D}} \ar[d]^{\circ\Psi_1} &\Omega^{*+2}(M,E) \ar[d]^{\circ\Psi_0}\\ \Omega^*(M,\hat K) \ar[r]^{{\hat{\tilde D}}} &\Omega^{*+1}(M,\hat F) \ar[r]^{{\hat D}}& \Omega^{*+2}(M,E) }\end{aligned}$$ by pre-composing with $\Psi$, which restricts to a morphism of vector bundle complexes $$\begin{aligned} \xymatrix{ \wedge^kT^*\otimes{\mathfrak{g}}' \ar[r]^{\partial_{\tilde D}} \ar[d]^{\circ\Psi_2} & \wedge^{k+1}T^*\otimes{\mathfrak{g}}\ar[r]^{\partial_{D}} \ar[d]^{\circ\Psi_1} & \wedge^{k+2}T^*\otimes E \ar[d]^{\circ\Psi_0}\\ \wedge^kT^*\otimes\hat{\mathfrak{g}}' \ar[r]^{\partial_{\hat{\tilde D}}} & \wedge^{k+1}T^*\otimes\hat{\mathfrak{g}}\ar[r]^{\partial_{\hat D}} & \wedge^{k+2}T^*\otimes \hat E }\end{aligned}$$ where $\hat{\mathfrak{g}}'$ and $\hat{\mathfrak{g}}$ are the symbols of $\hat{\tilde{D}}$ and $\hat D$ respectively. Moreover, if $\Psi$ is injective then $\Psi_*$ is injective in cohomology, i.e. $$\begin{aligned} \frac{\ker(\Omega^{*+1}(M,F)\xrightarrow{\bar D}{}\Omega^{*+2}(M,E))}{{\text{\rm Im}\,}(\Omega^{*}(M,K)\xrightarrow{\bar \tilde D}{}\Omega^{*+1}(M,F))}\xrightarrow{\circ\Psi_1}\frac{\ker(\Omega^{*+1}(M,\hat F)\xrightarrow{\bar{\hat D}}{}\Omega^{*+2}(M,\hat E))}{{\text{\rm Im}\,}(\Omega^{*}(M,\hat K)\xrightarrow{\bar{\hat{\tilde{D}}}}{}\Omega^{*+1}(M,\hat F))}\end{aligned}$$ is injective. This is a standard argument for morphisms of complexes using equalities . The proof is left to the reader. #### Solutions of compatible connections Throughout this section let $$\begin{aligned} K\xrightarrow{(\tilde D,\tilde l)}F\xrightarrow{(D,l)}E\end{aligned}$$ be compatible connections, and let ${\mathfrak{g}}$ and ${\mathfrak{g}}'$ be the symbol spaces of $D$ and $\tilde D$ respectively. Consider $$\begin{aligned} \label{solution-map} \text{\rm Sol}(K,\tilde D)\xrightarrow{\tilde l}\text{\rm Sol}(F,D),\end{aligned}$$ the $C^{\infty}(M)$-linear map given by the restriction of $\Gamma(K)\xrightarrow{\tilde l}\Gamma(F), \tilde l(s):=\tilde l\circ s$ to $\text{\rm Sol}(K,\tilde D)$. The fact that $\tilde l(s)$ belongs to $\text{\rm Sol}(F,D)$ for any $s\in\text{\rm Sol}(B,\tilde D)$ follows from condition , i.e. $D\circ \tilde l(s)=l\circ \tilde D(s)=0$. In this subsection we will construct a map $$\begin{aligned} S:\text{Sol}(F,D){\longrightarrow}H^{0,1}({\mathfrak{g}})\end{aligned}$$ where $$\begin{aligned} H^{0,1}({\mathfrak{g}}):=\frac{\ker\{T^*\otimes{\mathfrak{g}}\xrightarrow{\partial_D}\wedge^2T^*\otimes E\}}{{\text{\rm Im}\,}\{\partial_{\tilde D}:{\mathfrak{g}}'\to T^*\otimes{\mathfrak{g}}\}}\end{aligned}$$ and such that the following proposition holds. \[S\] The sequence $$\begin{aligned} \text{\rm Sol}(K,\tilde D)\xrightarrow{\tilde l}\text{\rm Sol}(F,D)\xrightarrow{S}H^{0,1}({\mathfrak{g}})\end{aligned}$$ is exact. Moreover, if $\tilde D$ is a standard $\tilde l$-connection then $\tilde l$ is injective. 
Note that if $H^{0,1}({\mathfrak{g}})=0$, then $\tilde l$ on the above proposition is surjective. If, moreover, $\tilde D$ is standard then $\tilde l$ is a bijection. The complete proof of the above proposition will be given at the end of the subsection. Notice that if $(K,\tilde D)$ is a prolongation of $(F,D)$, i.e. $\tilde D$ is standard, the map is injective. Indeed, if $s,{\alpha}\in\text{\rm Sol}(K,\tilde D)$ are such that $\tilde l(s)=\tilde l({\alpha})$, then $s-{\alpha}\in\Gamma({\mathfrak{g}}')$ and $$\begin{aligned} \partial_{\tilde D}(s-{\alpha})=\tilde D(s-{\alpha})=\tilde D(s)-\tilde D({\alpha})=0-0=0,\end{aligned}$$ i.e. $s-{\alpha}\in\ker(\partial_{\tilde D})$ which is zero as $\tilde D$ is standard. Thus $s={\alpha}.$ Using the compatibility of the pair $(D,\tilde D)$, we construct the map $$S:{\text{\rm Sol}}(F,D){\longrightarrow}\frac{{\mathrm{Hom}}(TM,{\mathfrak{g}})}{{\text{\rm Im}\,}\{\partial_{\tilde D}:{\mathfrak{g}}'\to T^*\otimes{\mathfrak{g}}\}}$$ as follows: for $s$ a solution of $(F,D)$, by surjectivity of $\tilde l:K\to F$, we find a section ${\alpha}\in \Gamma(K)$ (not necessarily in $\text{\rm Sol}(K,\tilde D)$) such that $\tilde l\circ {\alpha}=s$. $\tilde D({\alpha}):TM\to F$ has the property that its image lies inside ${\mathfrak{g}}$, as $l\circ \tilde D({\alpha})=D\circ \tilde l({\alpha})=0$. We define $S(s)$ by the class of $\tilde D({\alpha}):TM\to{\mathfrak{g}}$. $$S:{\text{\rm Sol}}(F,D){\longrightarrow}\frac{{\mathrm{Hom}}(TM,{\mathfrak{g}})}{{\text{\rm Im}\,}\{\partial_{\tilde D}:{\mathfrak{g}}'\to T^*\otimes{\mathfrak{g}}\}}$$ is a well-defined map whose image lies in the subbundle $H^{0,1}({\mathfrak{g}})$ Let’s first prove that $S$ is well-defined. To this end take two section $s^1$ and ${\alpha}$ of $K$ such that $\tilde l\circ {\alpha}=\tilde l\circ s^1=s$, and $D(\tilde l({\alpha}))=D(\tilde l(s^1))=D(s)=0$. This means that $s^1-{\alpha}$ is a section of ${\mathfrak{g}}'$ and $$\begin{aligned} \partial_{\tilde D}(s^1-{\alpha})=\tilde D(s^1-{\alpha})=\tilde D(s^1)-\tilde D({\alpha})=0\end{aligned}$$ In other words, the class of $S(s)$ does not depend on the representative. Moreover, for $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$ $$\begin{aligned} \partial_D(\tilde D(s^1))(X,Y)&=D_X\tilde D_Y(s^1)-D_Y\tilde D_X(s^1)\\ &=l\circ \tilde D_{[X,Y]}(s^1)=D_{[X,Y]}(\tilde l(s^1))=0 \end{aligned}$$ where in the second and third equality we use conditions and respectively. This proves that $S$ takes values where we claimed. We already noticed that if $\tilde D$ is standard then $$\tilde l:{\text{\rm Sol}}(K,\tilde D){\longrightarrow}{\text{\rm Sol}}(F,D)$$ is injective. Let’s concentrate on the exactness of the sequence. Let ${\alpha}\in\Gamma(F)$ be a solution of $(K,\tilde D)$, i.e. $\tilde D({\alpha})=0$. Then $S (\tilde l({\alpha}))$ is the class of the map $$\begin{aligned} \tilde D({\alpha}):TM{\longrightarrow}{\mathfrak{g}}\end{aligned}$$ which is identically zero. Conversely, if $s\in\text{\rm Sol}(F,D)$ is such that $S(s)=0$ and ${\alpha}\in\Gamma(K)$ is such that $\tilde l({\alpha})=s$, then there exists $\beta\in \Gamma(g')$ such that $$\tilde D({\alpha})=\partial_{\tilde D}(\beta)=\tilde D(\beta).$$ Therefore, the section ${\alpha}-\beta\in \Gamma(K)$ is such that $\tilde l({\alpha}-\beta)=\tilde l({\alpha})=s$ and $\tilde D({\alpha}-\beta)=0$. Equivalently, ${\alpha}-\beta\in\text{\rm Sol}(K,\tilde D)$ is such that $\tilde l({\alpha}-\beta)=s$. 
### The partial prolongation of a relative connection {#basic notions} The partial prolongation of a fixed $l$-connection $(D,l):F\to E$ is our first attempt to find a prolongation of $(F,D)$. Although its relative connection may fail to be compatible with $D$, it does satisfy compatibility condition as we will see later on. \[partial prolongation\] [**The partial prolongation of $F$ with respect to $D$**]{}, denoted by $J_D^1F$, is the smooth vector subbundle of $\pi:J^1F\to M$ defined by $$\begin{aligned} (J^1_DF)_x:=\{j^1_xs\mid D_X(s)(x)=0\text{ for all }X\in{\ensuremath{\mathfrak{X}}}(M)\}\end{aligned}$$ for $x\in M.$ \[cocyclea\]As $D$ is a differential operator of order $1$, it induces a vector bundle map $$\begin{aligned} a:J^1F{\longrightarrow}{\mathrm{Hom}}(TM,E), \quad j^1_xs\mapsto D(s)(x)\end{aligned}$$ with $J^1_DF$ as the kernel of $a$. With the Spencer decomposition in mind, we have that $a$ at the level of sections is given by $$\begin{aligned} a({\alpha},\omega)=D({\alpha})-l\circ \omega,\end{aligned}$$ for $({\alpha},\omega)\in\Gamma(F)\oplus \Omega^1(M,F)$, and therefore $$\begin{aligned} \Gamma(J^1_DF):=\{({\alpha},\omega)\in\Gamma(F)\oplus\Omega^1(M,F)\mid D({\alpha})=l\circ \omega\}\end{aligned}$$ With this description it is clear that $J^1_DF$ is a smooth vector subbundle, as it is the kernel of a surjective vector bundle map, and that the restriction $pr:J^1_DF\to F$ is surjective. Indeed, if $f:TM\to E$ is any $C^{\infty}(M)$-linear map, $a(0,-\sigma\circ f)=f$ for any splitting $\sigma$ of $l:F\to E$. Also, for $e\in F_x$ one can always find a section ${\alpha}\in\Gamma(F)$ with ${\alpha}(x)=e$, and then consider the form $\omega\in\Omega^1(M,F)$ given by $\omega=\sigma\circ D({\alpha})$. So, $({\alpha},\omega)\in\Gamma(J^1_DF)$ and $pr(({\alpha},\omega)(x))={\alpha}(x).$ Using the restriction to $J^1_DF$ of the classical Spencer operator $D^{{\text{\rm clas}}}$ associated to $F$ (see ), we call $(J^1_DF,D^{{\text{\rm clas}}})$ [**the partial prolongation of $(F,D)$**]{}. Note that the relative connections $$\begin{aligned} J^1_DF\xrightarrow{D^{{\text{\rm clas}}}}F\overset{D}{{\longrightarrow}}E\end{aligned}$$ are not necessarily compatible. Nevertheless, the compatibility condition holds as it is equivalent to the fact that $D({\alpha})=l\circ\omega$ for sections $({\alpha},\omega)\in\Gamma(J^1_DF)$. \[inj\] The surjective vector bundle map $$\begin{aligned} pr:J^1_DF{\longrightarrow}F\end{aligned}$$ is a bijection if and only if $l$ is bijective. This is clear from the description of $\Gamma(J^1_DF)$ given in remark \[cocyclea\]: if $l$ is a bijection then for any ${\alpha}\in \Gamma(F)$ there is only one choice of $\omega\in \Omega^1(M,F)$, namely $\omega=l^{-1}(D({\alpha}))$, with the property that $({\alpha},\omega)\in \Gamma(J^1_DF)$. ### The classical prolongation of a relative connection The purpose of this subsection is to describe and give a universal characterization of the classical prolongation of a relative connection. The classical prolongation, as its name suggests, is a prolongation of $(F,D)$ (under some smoothness assumptions) and may be thought of as the complete infinitesimal data of solutions of $(F,D)$. It is also characterized by the universal property stated in proposition \[equiv\] below. See [@Gold3] for prolongations on linear differential equations.
\[def:curvature\] The [**curvature of $D$**]{} is the vector bundle map over $M$ $$\begin{aligned} \varkappa_D:J^1_DF{\longrightarrow}{\mathrm{Hom}}(\wedge^2TM,E)\end{aligned}$$ given on sections $({\alpha},\omega)\in \Gamma(J^1_DF)$ by $$\begin{aligned} \label{curvature} D_X(\omega(Y))-D_Y(\omega(X))-l\circ\omega[X,Y],\end{aligned}$$ where $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$. The [**classical prolongation space with respect to $D$**]{}, denoted by $$\begin{aligned} P_D(F)\subset J^1_DF\subset J^1F\end{aligned}$$ is defined to be the kernel of $\varkappa_D$. We say that it is **smooth** if $\ker\varkappa_D$ is a smooth sub-bundle of $J^1_DF.$ In the above definition, the Leibniz identity relative to $l$ and the fact that $D({\alpha})=l\circ \omega$ imply that $\varkappa_D$ is indeed a vector bundle map. The proof is an easy computation and is left to the reader. \[maximality\]Note that, by construction, $P_D(F)$ is the maximal subspace of $J^1F$ where the relative connections $$\begin{aligned} J^1F\xrightarrow{D^{\text{\rm clas}}}F\xrightarrow{D} E\end{aligned}$$ are compatible. Indeed, if $({\alpha},\omega)\in\Gamma(J^1F)$, the compatibility condition holds if and only if $({\alpha},\omega)\in\Gamma(J^1_DF)$. As for the compatibility condition , we find that $$D_XD^{{\text{\rm clas}}}_Y({\alpha},\omega)-D_YD_X^{{\text{\rm clas}}}({\alpha},\omega)-l\circ D^{{\text{\rm clas}}}_{[X,Y]}({\alpha},\omega)=\varkappa_D({\alpha},\omega)(X,Y)$$ where $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$. \[smoothly-defined\]Let $(F,D)$ be a relative $l$-connection. We say that $P_D(F)\subset J^1F$ is [**smoothly defined**]{} if 1. $P_D(F)$ is smooth, and 2. $pr:P_D(F)\to F$ is surjective. In this case, $$\begin{aligned} (P_D(F),D^{(1)}):P_D(F)\xrightarrow{(D^{(1)},pr)}F\end{aligned}$$ (a relative connection!) is called the [**first classical prolongation of $(F,D)$**]{}, where $D^{(1)}$ is the restriction of the Spencer operator associated to $F$. \[invol\]In the case that $l:F \to E$ is a vector bundle isomorphism, then $D$ can be interpreted as a classical connection and $P_D(F)$ measures its failure to be flat in the following sense. By remark \[inj\] $$\begin{aligned} \label{pr} pr:P_D(F){\longrightarrow}F\end{aligned}$$ is an injection. Under the identification of $F$ with $E$ via $l$, $D$ is a flat linear connection if and only if $pr$ is bijective. This follows from the general setting of Pfaffian bundles (see corollary \[corollaryB\]). The universal property of the classical prolongation of $(F,D)$ is the following: \[equiv\] $(D^{(1)},pr):P_D(F)\to F$ is a prolongation of $(F,D)$ which is universal in the sense that for any other prolongation $$\begin{aligned} K\xrightarrow{(\tilde D,\tilde l)} F\xrightarrow{(D,l)} E\end{aligned}$$ there exists a unique vector bundle map $j:K\to P_D(F)$ such that $$\begin{aligned} \label{co1} \tilde D=D^{(1)}\circ j.\end{aligned}$$ Moreover, $j$ is injective. The universal property given by proposition \[equiv\] of the classical prolongation of $(F,D)$ can be rephrased as follows: for any prolongation $(\tilde D,\tilde l):K\to F$ of $(D,l)$, there exists a unique vector bundle map $j:K\to P_D(F)$ such that the diagram $$\begin{aligned} \xymatrix{ K \ar[r]^{\tilde D,\tilde l} \ar@{.>}[d]_{j} & F \ar[r]^{D,l} & E \\ P_D(F) \ar[ur]_{D^{(1)},pr} }\end{aligned}$$ is commutative. That $(P_D(F),D^{(1)})$ is a prolongation of $(F,D)$, whenever $pr:P_D(F)\to F$ is a surjective vector bundle map of smooth vector bundles, was proved in remark \[maximality\].
Let’s prove now that $j:=j_{\tilde D}$ satisfies , whenever $(K,\tilde D)$ is a prolongation of $(F,D)$. Let ${\alpha}$ be a section of $K$. Then $$\begin{aligned} j_{\tilde D}({\alpha})=(\tilde D({\alpha}),\tilde l({\alpha}))\in \Omega^1(M,F)\oplus\Gamma(F)\end{aligned}$$ and therefore $j_{\tilde D}({\alpha})\in P_D(F)$ if and only if $$\label{1909}\begin{aligned} &D(\tilde l({\alpha}))=l(\tilde D({\alpha}))\\ \varkappa_D(\tilde D({\alpha}),\tilde l({\alpha}))(X,Y)=D_X(&\tilde D_Y({\alpha}))-D_Y(\tilde D_X({\alpha}))-l\circ \tilde D_{[X,Y]}({\alpha})=0 \end{aligned}$$ for any $X,Y\in {\ensuremath{\mathfrak{X}}}(M).$ We see then that equations are conditions and for compatible connections. For the uniqueness let $j:K\to P_D(F)$ be such that equation holds. Then for $j_1:\Gamma(K)\to\Gamma(F)$ and $j_2:\Gamma(K)\to \Omega^1(M,F)$ such that $$\begin{aligned} j({\alpha})=(j_1({\alpha}),j_2({\alpha}))\end{aligned}$$ for ${\alpha}\in\Gamma(K)$, one has that equation implies that $j_2({\alpha})=\tilde D({\alpha})$. Now, as $j$ is a vector bundle map, for any $f\in C^{\infty}(M)$ $$\begin{aligned} (j_1(f{\alpha}),\tilde D(f{\alpha}))=(fj_1({\alpha}),df\otimes j_1({\alpha})+f\tilde D({\alpha}))\end{aligned}$$ Since $\tilde D$ satisfies the Leibniz identity relative to $\tilde l$, then $j_1({\alpha})=\tilde l({\alpha})$. The following is another remarkable property of the classical prolongation of $(F,D)$ \[2218\] The map $$\begin{aligned} pr:\text{\rm Sol}(P_D(F),D^\text{\rm clas})\overset{}{{\longrightarrow}}\text{\rm Sol}(F,D)\end{aligned}$$ is a bijection between solutions, with inverse given by the map $$\begin{aligned} j^1:\text{\rm Sol}(F,D)\overset{}{{\longrightarrow}}\text{\rm Sol}(P_D(F),D^\text{\rm clas})\end{aligned}$$ sending $s\in \text{\rm Sol}(F,D)$ to $j^1s$. Take $s\in\Gamma(F)$ a solution of $(F,D)$, i.e., $D(s)=0$. Using the Spencer decomposition , we write the holonomic section $j^1s\in\Gamma(J^1F)$ as $(s,0)\in\Gamma(F)\oplus\Omega^1(M,F)$. Obviously, $D(s)=l\circ 0$ and $\varkappa_D(s,0)=0$. This means that $j^1s$ is a section of $P_D(F)$. Moreover, $$\begin{aligned} D^\text{clas}(s,0)=0\end{aligned}$$ which says that $j^1s=(s,0)$ is a solution of $(P_D(F),D^\text{clas})$. It is now straightforward to check that $j^1\circ pr=id$ and $pr\circ j^1=id$. \[smooth-prol\]Whenever the bundle map $pr:P_D(F)\to F$ (of possibly non-smooth vector bundles) is surjective, the short exact sequence of vector bundles over $M$ $$\begin{aligned} 0{\longrightarrow}T^*\otimes F{\longrightarrow}J^1F\overset{pr}{{\longrightarrow}}F{\longrightarrow}0\end{aligned}$$ restricts to a short exact sequence $$\begin{aligned} \label{cassi} 0{\longrightarrow}{\mathfrak{g}}^{(1)}(F,D){\longrightarrow}P_D(F)\overset{pr}{{\longrightarrow}}F{\longrightarrow}0\end{aligned}$$ where ${\mathfrak{g}}^{(1)}(F,D)$ is the kernel of $pr:P_D(F)\to F$. Hence, if $P_D(F)$ is smoothly defined, $$\begin{aligned} {\mathfrak{g}}(P_D(F),D^{(1)})={\mathfrak{g}}^{(1)}(F,D).\end{aligned}$$ Indeed, to show we use the Spencer decomposition . With this, a section $(0,\omega)\in\Gamma(F)\oplus\Omega^1(M,F)$ belongs to $P_D(F)$ at $x$ if $$\begin{aligned} l\circ \omega_x=0 \ \quad \text {and } \ \quad 0=\varkappa_D(0,\omega_x)(X,Y)=D_X(\omega_x(Y))-D_Y(\omega_x(X)),\end{aligned}$$ for any $X,Y\in T_xM$. That is, $\omega_x\in{\mathfrak{g}}^{(1)}(F,D)_x$.
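To fix ideas, here is a sketch of the simplest situation, already alluded to in remarks \[inj\] and \[invol\]: assume that $l$ is the identity of $F$, so that $D=\nabla$ is a classical linear connection and ${\mathfrak{g}}=\ker l=0$ (we write $R_\nabla$ for its usual curvature tensor, a notation introduced only for this illustration). A pair $({\alpha},\omega)$ is a section of $J^1_\nabla F$ precisely when $\omega=\nabla{\alpha}$, so $pr:J^1_\nabla F\to F$ is a bijection, and on such sections the curvature map becomes $$\begin{aligned} \varkappa_\nabla({\alpha},\nabla{\alpha})(X,Y)=\nabla_X(\nabla_Y{\alpha})-\nabla_Y(\nabla_X{\alpha})-\nabla_{[X,Y]}{\alpha}=R_\nabla(X,Y){\alpha}.\end{aligned}$$ Hence, under the identification $J^1_\nabla F\simeq F$, the classical prolongation space is $$\begin{aligned} P_\nabla(F)_x\simeq\{e\in F_x\mid R_\nabla(X,Y)e=0\ \text{ for all }X,Y\in T_xM\},\end{aligned}$$ and $pr:P_\nabla(F)\to F$ is surjective (equivalently, bijective) if and only if $\nabla$ is flat, recovering remark \[invol\]. Note also that in this case ${\mathfrak{g}}^{(1)}(F,\nabla)=0$, in agreement with the injectivity of $pr$ above.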
\[workable\] The classical prolongation space $P_D(F)$ is smoothly defined if and only if ${\mathfrak{g}}^{(1)}(F,D)$ is a vector bundle over $M$ of constant rank, and $pr:P_D(F)\to F$ is surjective. From the short exact sequence , we have that $P_D(F)$ is smooth if and only if ${\mathfrak{g}}^{(1)}(F,D)$ has constant rank. For the smooth relative connection $(P_D(F),D^{(1)})$, its classical prolongation space, denoted by $$P_D^2(F),$$ is called the classical $2$-prolongation space of $(F,D)$. On $P^2_D(F)$ we have the operator $$\begin{aligned} D^{(2)}:=(D^{(1)})^{(1)}:\Gamma(P^2_D(F)){\longrightarrow}\Omega^1(M,P^1_D(F)).\end{aligned}$$ As we shall see in corollary \[k-prolongations\], $P_D^2(F)\subset J^2F$ and $D^{(2)}$ is just the restriction of the classical Spencer operator $D^{2\text{-clas}}$ from example \[Spencer tower\]. We define the classical $k$-prolongation of $(F,D),$ $$\begin{aligned} (P_D^{k}(F),D^{(k)}) \end{aligned}$$ inductively. \[k-prolongation-space\] Let $(F,D)$ be a relative $l$-connection. We say that the [**classical $k$-prolongation space**]{} $P_D^{k}(F)$ is [**smooth**]{} if 1. $(P_D(F),D^{(1)}),\ldots (P_D^{k-1}(F),D^{(k-1)})$ are smoothly defined as relative connections, and 2. the classical prolongation space of $(P^{k-1}_D(F),D^{(k-1)})$: $$\begin{aligned} P_D^k(F):=P_{D^{(k-1)}}(P_D^{k-1}(F)) \end{aligned}$$ is smooth. In this case, we define the [**$k$-prolongation of $D$**]{}: $$\begin{aligned} D^{(k)}:=(D^{(k-1)})^{(1)}:\Gamma(P^{k}_D(F)){\longrightarrow}\Omega^1(M,P_D^{k-1}(F)). \end{aligned}$$ We say that the classical $k$-prolongation of $(F,D)$ is [**smoothly defined**]{} if, moreover, $$\begin{aligned} pr:P^{k}_D(F){\longrightarrow}P^{k-1}_D(F) \end{aligned}$$ is surjective. In this case, $$\begin{aligned} (P_D^k(F),D^{(k)}): P_D^k(F)\xrightarrow{(D^{(k)},pr)}{} P_D^{k-1}(F) \end{aligned}$$ (a relative connection!) is called the [**classical $k$-prolongation of $(F,D)$**]{}. \[def: formally inegrable\] A relative $l$-connection $(F,D)$ is called [**formally integrable**]{} if all the classical prolongation spaces $$\begin{aligned} P_D(F),P^2_D(F),\ldots, P^k_D(F),\ldots\end{aligned}$$ are smoothly defined. For a formally integrable relative connection $(F,D)$, we have a sequence of surjective vector bundle maps $$\begin{aligned} \cdots{\longrightarrow}P^k_D(F)\xrightarrow{D^{(k)},pr}{}\cdots{\longrightarrow}P^2_D(F)\xrightarrow{D^{(2)},pr}{} P^1_D(F)\xrightarrow{D^{(1)},pr} F\xrightarrow{D,l}{} E\end{aligned}$$ in which any two consecutive relative connections are compatible. This is the basic example of a standard tower (to be defined in subsection \[subsection:prolongation and Spencer towers\]). \[newremark\] Let $(F,D)$ be a relative connection and let $k$ be an integer. If 1. the classical $k$-prolongation space $P^{k}_D(F)$ is smoothly defined, and 2. the (possibly non-smooth) vector bundle map $pr:P_D^{k+1}(F)\to P_D^k(F)$ is surjective. Then the short exact sequence of vector bundles over $M$ $$\begin{aligned} 0{\longrightarrow}S^{k+1}T^*\otimes F{\longrightarrow}J^{k+1}F\overset{pr}{{\longrightarrow}} J^kF{\longrightarrow}0\end{aligned}$$ restricts to the short exact sequence of (possibly non-smooth) vector bundles over $M$ $$\begin{aligned} \label{sec2} 0{\longrightarrow}{\mathfrak{g}}^{(k+1)}(F,D){\longrightarrow}P_D^{k+1}(F)\overset{pr}{{\longrightarrow}} P^k_D(F){\longrightarrow}0.\end{aligned}$$ The previous proposition will be proved at the end of this subsection.
A direct corollary from the definitions is the following: With the assumption of the previous proposition. If the classical $k$-prolongation space $P^{k}_D(F)$ is smoothly defined then the symbol space of the classical $k$-prolongation of $(F,D)$ is equal to the $k$-prolongation of the symbol map $\partial_D$ $$\begin{aligned} {\mathfrak{g}}(P_D^{k}(F),D^{(k)})={\mathfrak{g}}^{(k)}(F,D).\end{aligned}$$ \[corollary2\] Let $(F,D)$ be a relative connection. If $P_D^{k_0}(F)$ is smoothly defined, then for any integer $0\leq k\leq k_0$, there is a one to one correspondence between $\text{\rm Sol}(P^k_D(F),D^{(k)})$ and $\text{\rm Sol}(F,D)$ given by the surjective vector bundle map $$\begin{aligned} pr^k_0:P^k_D(F){\longrightarrow}F\end{aligned}$$ where $pr^k_0$ is the composition $pr\circ pr^2\circ\cdots\circ pr^k$. Use proposition \[2218\] each time you prolong. As an example, let’s look at the classical Spencer operator on the $k$-jet bundle $pr:J^kF\to J^{k-1}F$. See example \[Spencer tower\] \[jet-prolongation\] Let $k\geq 1$ be an integer. The classical prolongation of $(J^kF,D^{k\text{\rm-clas}})$ is $$\begin{aligned} (J^{k+1}F,D^{(k+1)\text{\rm-clas}}).\end{aligned}$$ [*Warning!*]{} The proofs of the previous proposition and the following lemma make use of results that will be given later on in this thesis... So the reader may want to skip the proofs and come back to them later. We use the description of $J^1(J^kF)$ as the set of pairs $(p,\zeta)$ where $p\in J^k_xF$ and $$\begin{aligned} \zeta:T_xM{\longrightarrow}T_pJ^kF\end{aligned}$$ is a splitting of $d_p\pi:T_pJ^kF\to T_xM$. See remark \[1-when working with jets\] in the case $R=J^kF$. With this description, we see that the section $({\alpha},\omega)\in\Gamma(J^kF)\oplus\Omega^1(M,J^kF)$ of $J^1(J^kF)$ at the point $x\in M$ corresponds to the pair $(p,\zeta)$ with $p={\alpha}(x)$ and $\zeta$ equal to $$\begin{aligned} d_x{\alpha}-\omega:T_xM{\longrightarrow}T_pJ^kF,\end{aligned}$$ where we regard $J^k_xF\subset T_pJ^kF$ as the vertical part $T^{\pi}_pJ^kF$. On the other hand, it is well-known that the subset $J^{k+1}F\subset J^1(J^kF)$ is given by pairs $(p,\zeta)$ as before with the properties that $$\begin{aligned} \label{condition1} \theta\circ\zeta=0 \ \text{ and } \ d_\nabla\theta(\zeta(\cdot),\zeta(\cdot))=0,\end{aligned}$$ where $\theta\in \Omega^1(J^kF,J^{k-1}F)$ is the linear Cartan form defined in example \[cartandist\] (in the case where $R=F$ is a vector bundle and therefore $T^\pi J^{k-1}F$ is canonically identified with $pr^* J^{k-1}F$), and $\nabla:{\ensuremath{\mathfrak{X}}}(J^kF)\times\Gamma(J^{k-1}F)\to\Gamma(J^{k-1}F)$ is any connection on the pullback bundle $\pi^*:J^{k-1}F\to J^kF$ (see e.g [@Bocharov; @Krasil; @Olver]). By example \[compa\] we have that the relative connection associated to $\theta$ given by corollary \[cor: linear one forms\] is precisely$$\begin{aligned} D^{k\text{-clas}}:\Gamma(J^kF){\longrightarrow}\Omega^1(M,J^{k-1}F)\end{aligned}$$ With the above description of $J^1(J^kF)$ in mind, we apply the following lemma in the case where $F=J^kF$, $E=J^{k-1}F$, $D=D^{k\text{-clas}}$ and $l=pr$ to obtain that $$\begin{aligned} \theta\circ\zeta=D^{k\text{-clas}}({\alpha})(x)-pr^k(\omega_x).\end{aligned}$$ And, whenever $\theta\circ\zeta=0$, we also have that $$\begin{aligned} d_{\nabla}\theta(\zeta(X),\zeta(Y))=\varkappa_{D^{k\text{-clas}}}({\alpha},\omega)(X,Y)\end{aligned}$$ for $X,Y\in T_xM$. These last two equalities complete the proof. 
For the following lemma, see definition \[lin-form\] and corollary \[cor: linear one forms\]. \[lema2\] Let $p:F\to M$ and $E\to M$ be vector bundles. Let $\theta\in\Omega^1(F,p^*E)$ be a fiber-wise surjective linear form with corresponding $l$-connection $D:\Gamma(F)\to \Omega^1(M,E)$. Then for any $j^1_xs\in J^1F$ $$\begin{aligned} \label{AA} \theta\circ d_xs=D({\alpha})(x)-l\circ\omega_x\end{aligned}$$ where, using the Spencer decomposition , $({\alpha},\omega)\in \Gamma(F)\oplus\Omega^1(M,F)$ are such that $({\alpha},\omega)_x=j^1_xs$. Moreover, if either term in the above equality vanishes, then $$\begin{aligned} \label{BB} d_\nabla\theta(d_xs(\cdot),d_xs(\cdot))=\varkappa_D({\alpha},\omega)(x)\end{aligned}$$ where $\nabla:{\ensuremath{\mathfrak{X}}}(F)\times\Gamma(p^*E)\to \Gamma(p^*E)$ is any linear connection. By the description of $D$ in terms of $\theta$ given by corollary \[cor: linear one forms\] we have that $\theta\circ d_xs=D(s)(x)$. Therefore, by remark \[cocyclea\] the map $$\begin{aligned} J^1F{\longrightarrow}{\mathrm{Hom}}(TM,E),\quad j^1_xs\mapsto\theta\circ d_xs\end{aligned}$$ at the level of sections and using the Spencer decomposition is given by $D({\alpha})-l(\omega)$, for $({\alpha},\omega)\in\Gamma(F)\oplus\Omega^1(M,F)$. From this, equation \[AA\] follows. For the second part we regard the vector bundle $F\to M$ as a Lie groupoid with multiplication given by fiber-wise addition, and we regard $E\to M$ as the trivial representation of $F$. In this case, linearity of $\theta\in\Omega^1(F,E)$ translates into multiplicativity of this form. Notice that $$\begin{aligned} d_\nabla\theta(d_xs(X),d_xs(Y))=c_1(\theta)(j^1_xs)(X,Y)\end{aligned}$$ where $c_1$ is the curvature map defined in remark \[el-caso-forma\]. By the same remark we know that $c_1$ is a cocycle with linearization equal to $\varkappa_D$, but in the linear case a cocycle is just a vector bundle map and therefore its linearization is itself. This proves equation . The following corollary allows us to define the classical prolongations without smoothness assumptions. \[k-prolongations\] Let $(F,D)$ be a relative connection and let $k_0$ be an integer. If the classical $k_0$-prolongation space $P^{k_0}_D(F)$ is smoothly defined then, for any integer $0\leq k\leq k_0+1$, $$\begin{aligned} \label{general prolongation} P^k_D(F)=J^{k-1}(P_D(F))\cap J^kF\end{aligned}$$ and the operator $D^{(k)}$ is the restriction of the Spencer operator $D^{k\text{-{\text{\rm clas}}}}:\Gamma(J^kF)\to \Omega^1(M,J^{k-1}F)$ to $P_D^k(F)$. By construction, the relative connection $D^{(1)}:\Gamma(P_D(F))\to\Omega^1(M,F)$ is the restriction of the Spencer operator $D^{\text{\rm clas}}$ associated to $F$, therefore its prolongation $(P^2_D(F),D^{(2)})$ is given by $$\begin{aligned} P^2_D(F)=J^1(P_D(F))\cap P_{D^{{\text{\rm clas}}}}(J^1F)=J^1(P_D(F))\cap J^2F,\end{aligned}$$ with connection $D^{(2)}$ given by the restriction to $P^2_D(F)$ of the classical Spencer operator $D^{\text{\rm clas}}:\Gamma(J^1(P_D(F)))\to\Omega^1(M,P_D(F))$. Moreover, as $P_D(F)\subset J^1F$, the connection on $P^2_D(F)\subset J^2F$ is actually the restriction of the classical Spencer operator $D^{\text{\rm clas}}:\Gamma(J^1(J^1F))\to\Omega^1(M,J^1F)$. But this last operator restricts to $D^{2\text{-clas}}:\Gamma(J^2F)\to \Omega^1(M,J^1F)$ on $J^2F$, which proves the case $k=2$. For $k>2$ the result follows by an inductive argument.
\[extender\]Note that to define $P^k_D(F)$ in the sense of definition \[general prolongation\] we do not use the smoothness of the space $P^{k-1}_D(F).$ The case $k=0$ was proved in remark \[smooth-prol\]. Let’s check the case $k=1$ (the arbitrary case follows by an inductive argument). By corollary \[k-prolongations\], $$\begin{aligned} P^2_D(F)=J^1(P_D(F))\cap J^2F.\end{aligned}$$ Consider now the commutative diagram $$\begin{aligned} \xymatrix{ 0 \ar[r] & T^*\otimes P_D(F) \ar@{^{(}->}[d]_i \ar[r] & J^1(P_D(F)) \ar@{^{(}->}[d] \ar[r] & P_D(F) \ar[r] \ar@{^{(}->}[d] & 0\\ 0 \ar[r] & T^*\otimes J^1F \ar[r] & J^1(J^1F) \ar[r] & J^1F \ar[r] & 0 \\ 0 \ar[r] & S^2T^*\otimes F \ar@{_{(}->}[u]^j \ar[r] & J^2F \ar@{_{(}->}[u] \ar[r] & J^1F \ar[r] \ar@{_{(}->}[u] & 0 }\end{aligned}$$ Hence, an element $\phi\in\ker(pr:P^2_D(F)\to P_D(F))$ should belong to the intersection $$\begin{aligned} j(S^2T^*\otimes F)\cap i(T^*\otimes P_D(F)),\end{aligned}$$ where $i$ is given by tensoring the identity on $T^*M$ with the natural inclusion $P_D(F)\hookrightarrow J^1F$, and $j$ is given by $$\begin{aligned} j(\phi)(X)=\phi(X,\cdot):T_xM{\longrightarrow}F \in \ker (pr:J^1_xF\to F_x)\end{aligned}$$ for any $\phi:S^2T_xM\to F_x$ and any $X\in T_xM.$ Therefore $j(\phi)\in \ker(pr:P^2_D(F)\to P_D(F))$ if and only if $$\begin{aligned} j(\phi)(X)\in P_D(F)\cap\ker (pr:J^1_xF\to F_x)=\ker(pr:P_D(F)\to F)={\mathfrak{g}}^{(1)}(F,D)\end{aligned}$$ where in the last equality we used . That is, $\phi\in \ker(pr:P^2_D(F)\to P_D(F))$ if and only if $\phi\in {\mathfrak{g}}^{(2)}(F,D)$ by definition of the prolongations of ${\mathfrak{g}}$ (see \[digression\]). ### Towers (of prolongations) {#subsection:prolongation and Spencer towers} Let $$\begin{aligned} \label{sequence} \cdots {\longrightarrow}F_{k}\xrightarrow{(D^{k},l^{k})}{}F_{k-1}{\longrightarrow}\cdots{\longrightarrow}F_1\xrightarrow{(D^1,l^1)}{} F\end{aligned}$$ be an infinite sequence of relative connections. For ease of notation, instead of $(D^k,l^k):F_k\to F_{k-1}$ we write $(D,l):F_k\to F_{k-1}$ if there is no risk of confusion. - A [**standard tower**]{} is a sequence in which any two consecutive connections are compatible and all of them are standard. We denote a standard tower by $(F^\infty,D^\infty,l^\infty)$. - Let $(D,l):F\to E$ be a relative connection. A [**standard resolution of $(D,l)$**]{} is a standard tower such that $(F_1,D^1)$ is a prolongation of $(F,D).$ We use the notation $$\begin{aligned} F^\infty \xrightarrow{(D^\infty,l^\infty)}{} F\xrightarrow{(D,l)}{}E.\end{aligned}$$ In the previous definition, one can give up the condition that \[sequence\] is standard. We arrive at the notions of [**tower**]{} and [**resolution**]{}. A [**morphism $\Psi$ of towers**]{}, denoted by $$\begin{aligned} \Psi:(F^\infty,D^\infty,l^\infty){\longrightarrow}(\tilde F^\infty,\tilde D^\infty,\tilde l^\infty),\end{aligned}$$ is a commutative diagram of vector bundles over $M$ $$\begin{aligned} \xymatrix{ \cdots \ar[r] & F^{k+1} \ar[r]^{l^{k+1}} \ar[d]_{\Psi_{k+1}} & F^k \ar[r] \ar[d]^{\Psi_k} & \cdots \ar[r] & F_1 \ar[d]_{\Psi^1} \ar[r]^{l^1} & F \ar[d]^{\Psi_0} \\ \cdots \ar[r] & \tilde F^{k+1} \ar[r]^{\tilde l^{k+1}} & \tilde F^k \ar[r] & \cdots \ar[r] & \tilde F^1 \ar[r]^{\tilde l^1} & \tilde F }\end{aligned}$$ such that $\Psi^k\circ D^{k+1}=\tilde D^{k+1}\circ\Psi^{k+1}$ for all $k\geq 0.$ We say that $\Psi$ is [**injective**]{} if each $\Psi_k$ is injective. \[ext.prol\]Consider a tower $(F^\infty,D^\infty,l^\infty)$.
By applying lemma \[complex\] inductively, we get a complex $$\begin{aligned} \begin{aligned} \Gamma(F_k)\overset{ D^k}{{\longrightarrow}}\Omega^1(M,F_{k-1})\overset{D^{k-1}}{{\longrightarrow}}&\Omega^2(M,F_{k-2})\overset{}{{\longrightarrow}}\cdots\\ &\cdots \overset{}{{\longrightarrow}}\Omega^{k-1}(M,F_{1})\overset{ D^1}{{\longrightarrow}}\Omega^k(M,F_0)\end{aligned}\end{aligned}$$ for any $k\geq1$, which restricts to a vector bundle complex over $M$ $$\label{prol}\begin{aligned} 0\longrightarrow \mathfrak{g}_k\xrightarrow{\partial_{D^k}}{}T^*\otimes\mathfrak{g}_{k-1}\xrightarrow{\partial_{D^{k-1}}}{}&\wedge^2T^*\otimes\mathfrak{g}_{k-2}\xrightarrow{\partial_{D^{k-2}}}{}\cdots\\& \cdots\xrightarrow{\partial_{D^2}}{}\wedge^{k-1}T^*\otimes \mathfrak{g}_1\xrightarrow{\partial_{D^1}}{} \wedge^{k}T^*\otimes F_0, \end{aligned}$$ where ${\mathfrak{g}}_j$ is the symbol space of $D^j$. Note that the map $$\begin{aligned} \partial_{D^{k+1}}:{\mathfrak{g}}_{k+1}{\longrightarrow}{\mathrm{Hom}}(TM ,{\mathfrak{g}}_k)\end{aligned}$$ takes values in the first prolongation ${\mathfrak{g}}_k^{(1)}(\partial_{D^k})$ of the map $\partial_{D^k}:{\mathfrak{g}}_k\to T^*\otimes {\mathfrak{g}}_{k-1}$, as $\partial_{D^{k}}\circ\partial_{D^{k+1}}=0$ and hence, if all the connections are standard, one can identify ${\mathfrak{g}}_{k+1}$ with its image in ${\mathfrak{g}}_k^{(1)}$. \[inclusion\]Let $(F^\infty, D^\infty,l^\infty)$ be a standard tower, and let ${\mathfrak{g}}_k$ be the symbol space of $D^k$, for $k\geq 1$. Then $$\begin{aligned} {\mathfrak{g}}_{k+1}\subset {\mathfrak{g}}_k^{(1)}\end{aligned}$$ via the inclusion $\partial_{D^{k+1}}$. Moreover, equality holds if and only if $$\begin{aligned} F_{k+1}=P_{D^k}(F_k),\end{aligned}$$ where we identify $F_{k+1}$ with the image of $j_{D^{k+1}}:F_{k+1}\to J^1F_k.$ That ${\mathfrak{g}}_{k+1}\subset {\mathfrak{g}}_k^{(1)}$ whenever the connections are standard was already observed at the end of remark \[ext.prol\]. Let’s prove then that equality holds if and only if $F_{k+1}=P_{D^k}(F_k)$. The only facts that we will use in the proof are that $D^{k+1}$ is standard and $(D^{k+1},D^k)$ are compatible. Take the commutative diagram of vector bundles over $M$ $$\begin{aligned} \xymatrix{ 0 \ar[r] &T^*\otimes F_k \ar[r] & J^1F_k \ar[r] & F_k \ar[r] & 0\\ 0 \ar[r] &{\mathfrak{g}}_{k+1} \ar[u]^{\partial_{D^{k+1}}} \ar[r] & F_{k+1} \ar[u]^{j_{D^{k+1}}} \ar[r]^{l^{k+1}} & F_k \ar[u]^{id} \ar[r] & 0 }\end{aligned}$$ Since the image of $j_{D^{k+1}}:F_{k+1}\to J^1F_k$ lies inside $P_{D^k}(F_k)$, as $(D^k,D^{k+1})$ are compatible (see proposition \[equiv\]), we have that $j_{D^{k+1}}(F_{k+1})=P_{D^k}(F_{k})$ if and only if $$\begin{aligned} \partial_{D^{k+1}}({\mathfrak{g}}_{k+1})=\ker(pr:P_{D^k}(F_k)\to F_k),\end{aligned}$$ but by remark \[smooth-prol\] we know that $\ker(pr:P_{D^k}(F_k)\to F_k)={\mathfrak{g}}_k^{(1)}$, which completes our proof. \[classicalSpencertower\]The sequence $$\begin{aligned} (J^\infty F,D^{\infty\text{-{\text{\rm clas}}}}):\cdots {\longrightarrow}J^{k+1}F\overset{D^{\text{\rm clas}}}{{\longrightarrow}}J^kF\overset{D^{\text{\rm clas}}}{{\longrightarrow}} \cdots{\longrightarrow}J^1F\overset{D^{\text{\rm clas}}}{{\longrightarrow}}F \end{aligned}$$ is an example of a standard tower (cf. [@quillen]).
For each $k>0$, the induced complex is the classical Spencer complex $$\begin{aligned} 0\longrightarrow S^kT^*\otimes F\overset{\partial}{\longrightarrow}T^*\otimes S^{k-1}T^*\otimes F\overset{\partial}{\longrightarrow}\wedge^2T^*\otimes S^{k-2}T^*\otimes F\overset{\partial}{\longrightarrow}\cdots\\ \cdots\overset{\partial}{\longrightarrow}\wedge^{n}T^*\otimes S^{k-n}T^*\otimes F\longrightarrow 0.\end{aligned}$$ Taking the Lie functor of a smooth Lie pseudogroup $\Gamma\subset\operatorname{Bis}({\mathcal{G}})$, one has the standard tower of Lie algebroids as in subsection \[Lie pseudogroups\] $$\cdots {\longrightarrow}A^{(k)}(\Gamma)\overset{D^{(k)}}{{\longrightarrow}} \cdots{\longrightarrow}A^{(1)}(\Gamma)\overset{D^{(1)}}{{\longrightarrow}}A^{(0)}(\Gamma)$$ where $D^{(k)}$ is the restriction of the Spencer operator $(D^{\text{\rm clas}},pr):J^kA\to J^{k-1}A$, $A=Lie({\mathcal{G}}).$ For each $k>0$, the induced complex is the complex of the associated bundle of tableaux of $\Gamma$ $$\begin{aligned} \begin{split} 0\longrightarrow {\mathfrak{g}}^{k}(\Gamma)\overset{\partial}{\longrightarrow}T^*\otimes {\mathfrak{g}}^{k-1}(\Gamma)\overset{\partial}{\longrightarrow}\wedge^2T^*\otimes {\mathfrak{g}}^{k-2}(\Gamma)\overset{\partial}{\longrightarrow} \cdots\overset{\partial}{\longrightarrow}\wedge^{n}T^*\otimes{\mathfrak{g}}^{1}(\Gamma)\end{split}\end{aligned}$$ If $(D,l):F\to E$ is formally integrable, we obtain the classical standard resolution $$\begin{aligned}\label{pro} (P^\infty_D(F),D^{(\infty)}): \cdots{\longrightarrow}P_D^{k}(F){\longrightarrow}\cdots{\longrightarrow}P_D(F)\overset{D^{(1)}}{{\longrightarrow}}F\overset{D}{{\longrightarrow}}E. \end{aligned}$$ For each integer $k>0$, the induced complex is the extension complex of the tableau bundle ${\mathfrak{g}}^{(1)}(F,D)$ in the sense of definition \[exten\], given by $$\begin{aligned} \begin{split} 0\longrightarrow {\mathfrak{g}}^{(k)}\overset{\partial}{\longrightarrow}T^*\otimes {\mathfrak{g}}^{(k-1)}\overset{\partial}{\longrightarrow}&\wedge^2T^*\otimes {\mathfrak{g}}^{(k-2)}\overset{\partial}{\longrightarrow}\cdots\\& \cdots\overset{\partial}{\longrightarrow}\wedge^{n}T^*\otimes {\mathfrak{g}}^{(1)}\overset{\partial_D}{{\longrightarrow}}\wedge^{n+1}T^*\otimes E \end{split}\end{aligned}$$ Note that this resolution is a subtower of $(J^\infty F,D^{\infty\text{-{\text{\rm clas}}}})$. Using corollary \[k-prolongations\] and remark \[extender\], this sequence still makes sense even if $(F,D)$ is not formally integrable, but this involves non-smooth subbundles. The following is the universal property of (the possibly non-smooth) classical standard resolution. \[clasical\] Let $(D,l):F\to E$ be an $l$-connection not necessarily formally integrable. The (possibly non-smooth) subtower $(P_D^\infty(F),D^{(\infty)})$ of $(J^\infty F,D^\infty)$ is universal among the resolutions of $(D,l)$ in the sense that for any other resolution $$\begin{aligned} F^\infty\xrightarrow{D^\infty,l^\infty}{}F\xrightarrow{D,l}{}E,\end{aligned}$$ there exists a unique morphism $\Psi:(F^\infty,D^{\infty})\to (J^\infty F,D^{\infty\text{-{\text{\rm clas}}}})$ of towers such that $\Psi^0:F\to F$ is the identity map, and for $k\geq 1$ $$\begin{aligned} \Psi^k(F_k)\subset P_D^k(F).\end{aligned}$$ Moreover, if $(F^\infty,D^\infty)$ is a standard resolution of $(F,D)$ then $\Psi$ is injective. We will construct the vector bundle maps $\Psi_k:F_k\to J^kF$ with the desired properties inductively.
For $k=1$, we define $\Psi_1$ to be $$\begin{aligned} \Psi_1:=j_{D^1}:F_1{\longrightarrow}J^1F.\end{aligned}$$ By construction, $$\begin{aligned} pr\circ\Psi_1({\alpha})=pr(l^1({\alpha}),D^1({\alpha}))=l^1({\alpha})\end{aligned}$$ for ${\alpha}\in\Gamma(F_1)$, and hence the first square of the diagram commutes. On the other hand, as $(D,D^1)$ are compatible connections then by (the proof of) proposition \[equiv\] we get that the image of $\Psi_1=j_{D^1}$ lies inside of $P_D(F)$, $D^1=D\circ\Psi_1$, and that $\Psi_1$ is injective. Assume now that for a $k$ fixed, we constructed $\Psi_l:F_l\to J^lF$ for $1\leq l\leq k$ with the desired properties. Let’s construct $\Psi_{k+1}:F_{k+1}\to J^{k+1}F$. Let $prol(\Psi_k):J^1(F_k)\to J^1(J^kF)$ be the vector bundle map defined at the level of sections by $$\begin{aligned} prol(\Psi_k)({\alpha},\omega)=(\Psi_k\circ {\alpha},\Psi_k\circ\omega)\end{aligned}$$ for $({\alpha},\omega)\in\Gamma(F_k)\oplus\Omega^1(M,F_k)\simeq\Gamma(J^1(F_k))$. We define $\Psi_{k+1}$ to be the composition $$\begin{aligned} \Psi_{k+1}:F_{k+1}\xrightarrow{j_{D^{k+1}}}{}J^1(F_k)\xrightarrow{prol(\Psi_k)}{}J^1(J^kF).\end{aligned}$$ For the commutativity, take ${\alpha}\in \Gamma(F_{k+1})$. Then $$\begin{aligned} pr^{k+1}\circ\Psi_{k+1}({\alpha})&=pr^{k+1}\circ prol(\Psi_k)\circ j_{D^{k+1}}({\alpha})\\ &=pr^{k+1}\circ prol(\Psi_k)(l^{k+1}({\alpha}),D^{k+1}({\alpha}))\\&=pr^{k+1}(\Psi_k\circ l^{k+1}({\alpha}),\Psi_k\circ D^{k+1}({\alpha}))\\&=\Psi_k\circ l^{k+1}({\alpha}). \end{aligned}$$ Let’s show that $\Psi_{k+1}(F_{k+1})\subset P^{k+1}_DF$. As $(D^k,D^{k+1})$ are compatible connections then $j_{D^{k+1}}$ lies inside $P_{D^{k}}(F_k)$. On the other hand, $\Psi^{k-1}\circ D^{k}=D^{k\text{-clas}}\circ \Psi_k$ implies that $$\begin{aligned} prol(\Psi_k)(P_{D^k}F_k)\subset P_{D^{\text{clas}}}(J^kF)=J^{k+1}F,\end{aligned}$$ where in the last equality we used proposition \[jet-prolongation\].\ Indeed, let $({\alpha},\omega)\in \Gamma(F_k)\oplus \Omega^1(M,F_k)\simeq\Gamma(J^1F^k)$ be such that at the point $x\in M$ $$\begin{aligned} D^k({\alpha})(x)=l\circ\omega_x\quad\text{and}\quad \varkappa_{D^k}({\alpha},\omega)(x)=0;\end{aligned}$$ in other words, $({\alpha},\omega)(x)\in P_{D^k}(F_k)$. Then, $$\begin{aligned} prol(\Psi_k)({\alpha},\omega)(x)=(\Psi_k\circ{\alpha},\Psi_k\circ\omega)(x).\end{aligned}$$ Hence, $$\begin{aligned} D^{k\text{-clas}}(\Psi_k\circ{\alpha})(x)-pr^k(\Psi_k\circ\omega)(x)&=\Psi_{k-1}\circ D^k({\alpha})(x)-\Psi_{k-1}\circ l^k\circ\omega_x\\&=\Psi_{k-1}(D^k({\alpha})(x)- l^k\circ\omega_x)=0 \end{aligned}$$ and $$\begin{aligned} \varkappa_{D^{k\text{-clas}}}(\Psi_k\circ{\alpha},\Psi_k\circ\omega)&(X,Y)(x)= D^{k\text{-clas}}_X(\Psi_k\circ\omega(Y))(x)\\&-D^{k\text{-clas}}_Y(\Psi_k\circ\omega(X))(x)-pr^k(\Psi_k\circ\omega[X,Y])(x)\\&= \Psi_{k-1}\circ D^k_X(\omega(Y))(x)-\Psi_{k-1}\circ D^k_Y(\omega(X))(x)\\&-\Psi_{k-1}\circ l^k\omega[X,Y](x)=\Psi_{k-1}(\varkappa_D^k({\alpha},\omega)(x))=0. \end{aligned}$$ By the inductive hypothesis we also know that $\Psi_k(F_k)\subset P_D^k(F)$, thus $$\begin{aligned} prol(\Psi_k)(J^1F^k)\subset J^1(P_D^k(F)).\end{aligned}$$ From the last inclusions, $$\begin{aligned} \Psi_{k+1}(F_{k+1})\subset prol(\Psi_k)\circ j_{D^{k+1}}(F_{k+1})&\subset prol(\Psi_k)(P_{D^{k}}(F_k))\subset \\&\subset J^1(P_D^k(F))\cap J^{k+1}F=P^{k+1}_D(F). \end{aligned}$$ Finally let’s show that $\Psi^k\circ D^{k+1}=D^{(k+1)\text{-clas}}\circ\Psi_{k+1}$. 
Let ${\alpha}\in \Gamma(F_{k+1})$, then $$\begin{aligned} D^{(k+1)\text{-clas}}\circ\Psi_{k+1}({\alpha})&=D^{(k+1)\text{-clas}}\circ prol(\Psi_k)(j_{D^{k+1}}({\alpha}))\\&=D^{(k+1)\text{-clas}}\circ prol(\Psi_k)(l^{k+1}({\alpha}),D^{k+1}({\alpha}))\\&=D^{(k+1)\text{-clas}}(\Psi_k\circ l^{k+1}({\alpha}),\Psi_k\circ D^{k+1}({\alpha}))\\&=\Psi_k\circ D^{k+1}({\alpha}). \end{aligned}$$ Note also that if $D^{k+1}$ is standard and $\Psi_k$ is injective, then $\Psi_{k+1}$ is injective by construction. The uniqueness is left to the reader. \[ab-cla\]Let $(F^\infty, D^\infty)$ be a standard resolution of $(F,D)$ and consider a morphism of towers $\Psi:(F^\infty,D^\infty)\to (J^\infty F,D^{\infty\text{-{\text{\rm clas}}}})$. By proposition \[cohomologia\], for any $k\geq1$ we have an induced morphism of complexes $$\begin{aligned} \xymatrix{ {\mathfrak{g}}_k \ar[r]^{\partial_{D}} \ar[d]^{\Psi_k} & T^*\otimes {\mathfrak{g}}_{k-1} \ar[r]^{\partial_{D}} \ar[d]^{\Psi_{k-1}} &\cdots \wedge^{k-1}T^*\otimes {\mathfrak{g}}_1 \ar[r]^{\partial_{D}} \ar[d]^{\Psi_1} & \wedge^kT^*\otimes F \ar[d]^{\Psi_0}\\ S^kT^*\otimes F \ar[r]^{\partial} & T^*\otimes S^{k-1}T^*\otimes F \ar[r]^{\partial} & \cdots \wedge^{k-1}T^*\otimes T^*\otimes F \ar[r]^{\partial} & \wedge^k T^*\otimes F }\end{aligned}$$ where ${\mathfrak{g}}_k$ is the symbol space of $D^k$. As $\Psi$ is injective and all the relative connections $D^k$ are standard, we have that the sequence $$\begin{aligned} (\Psi_0(\partial _{D^1}({\mathfrak{g}}_1)),\Psi_1(\partial_{D^2}({\mathfrak{g}}_2)),\ldots)\end{aligned}$$ is a tower of tableaux as in definition \[tableaux tower\]. This follows from lemma \[inclusion\] and the fact that injectivity of $\Psi$ implies that $$\begin{aligned} \Psi_k(\partial_{D^k}({\mathfrak{g}}_{k+1}))^{(1)}=\Psi_{k}({\mathfrak{g}}_k^{(1)}(\partial_D))\end{aligned}$$ where $\Psi_k(\partial_{D^k}({\mathfrak{g}}_{k+1}))^{(1)}\subset T^*\otimes \Psi_k({\mathfrak{g}}_k)$ is the usual prolongation of a tableau (see subsection \[digression\]). Let $(F^\infty,D^\infty,l^\infty)$ be a standard tower. Then, there exists an integer $p$ such that for all $k\geq p$, $$\begin{aligned} F_{k+1}=P_{D^k}(F_k),\end{aligned}$$ where we regard $F_{k+1}$ as a subset of $J^1F_k$ via the inclusion $j_{D^{k+1}}:F_{k+1}\to J^1F_k$, and $$\begin{aligned} D^{k+1}=D^{{\text{\rm clas}}}\circ j_{D^{k+1}}.\end{aligned}$$ From the proof of theorem \[clasical\], we know that there exists an injective morphism $\Psi:(F^\infty,D^\infty,l^\infty)\to (J^\infty F,D^{\infty\text{-{\text{\rm clas}}}},pr^\infty)$. As in remark \[ab-cla\], we have a tableau tower $$\begin{aligned} (\Psi_0(\partial _{D}({\mathfrak{g}}_1)),\Psi_1(\partial_{D}({\mathfrak{g}}_2)),\ldots).\end{aligned}$$ By proposition \[stability\], there exists $p$ such that for all $k\geq p$ $$\begin{aligned} \Psi_{k}(\partial_{D}({\mathfrak{g}}_{k+1}))=\Psi_{k-1}( \partial_D({\mathfrak{g}}_k))^{(1)}.\end{aligned}$$ Using that $\Psi_{k-1}(\partial_D({\mathfrak{g}}_k))^{(1)}=\Psi_{k-1}({\mathfrak{g}}_k^{(1)}(\partial_D))$ and that $\Psi$ is injective, we get that $$\begin{aligned} \partial_{D}({\mathfrak{g}}_{k+1})={\mathfrak{g}}_k^{(1)}(\partial_D)\end{aligned}$$ for all $k\geq p$. Now we apply lemma \[inclusion\] to get that $F_{k+1}=P_{D^k}(F_k)$. For the equality $D^{k+1}=D^{\text{\rm clas}}\circ j_{D^k}$ see remark \[standard\]. ### Formal integrability of relative connections {#classical resolution} Here we concentrate on formal integrability (defined in \[def: formally inegrable\]). 
For $(F,D)$ formally integrable, the tower $$\begin{aligned} \cdots{\longrightarrow}P_D^{k}(F)&\overset{D^{(k)}}{{\longrightarrow}}P_D^{k-1}(F){\longrightarrow}\cdots{\longrightarrow}P_D(F)\overset{D^{(1)}}{{\longrightarrow}}F\overset{D}{{\longrightarrow}}E \end{aligned}$$ is a standard tower by construction, with the universal property of resolutions given by theorem \[clasical\]. We call this tower [**the classical resolution**]{} of $(F,D)$. One of the main properties of formal integrability is that one has the following existence result in the analytic case. The proof is explained in the more general setting of Pfaffian bundles (see theorem \[anal\]). Let $(F,D)$ be an analytic relative connection which is formally integrable. Then, given $p\in P^k_D(F)$ with $\pi(p)=x\in M$, there exists an analytic solution $s\in\text{\rm Sol}(F,D)$ over a neighborhood of $x$ such that $j^k_xs=p.$ The following result is a workable criterion for formal integrability; the rest of the section will be devoted to its proof. A more general theorem (for Pfaffian bundles) will be presented in chapter \[Pfaffian bundles\]. Similar results can be found in [@Gold3; @Gold2]. \[formally integrable\] Let $(D,l):F\to E$ be a relative connection and let ${\mathfrak{g}}$ be its symbol space. Assume that 1. $pr:P_D(F)\to F$ is surjective, 2. ${\mathfrak{g}}^{(1)}$ is a smooth vector bundle over $M$, and 3. $H^{2,k}({\mathfrak{g}})=0$ for $k\geq0$. Then, $(F,D)$ is formally integrable. We will now introduce some notions concerning the smoothness of the prolongations and the surjectivity of $pr:P^k_D(F)\to P^{k-1}_D(F)$.\ The following result, generalizing lemma \[workable\], is a workable criterion for the smoothness of the classical prolongation spaces. For this purpose we consider the possibly non-smooth bundles $P^k_D(F)$ given by $$\begin{aligned} J^{k-1}(P_D(F))\cap J^kF.\end{aligned}$$ See corollary \[k-prolongations\] and remark \[extender\]. Recall that, if the classical $k$-prolongation space is smoothly defined, we have a short exact sequence of (possibly non-smooth) vector bundles over $M$ $$\begin{aligned} \label{ses} 0{\longrightarrow}{\mathfrak{g}}^{(k+1)}{\longrightarrow}P_D^{k+1}(F)\overset{pr}{{\longrightarrow}}P_D^{k}(F){\longrightarrow}0,\end{aligned}$$ where ${\mathfrak{g}}^{(k)}:={\mathfrak{g}}^{(k)}(F,D)$ is the $k$-prolongation of the map $\partial_D$ (see proposition \[newremark\]). \[l\] Let $k$ be a natural number. If 1. the classical $k$-prolongation space $P_D^{k}(F)$ is smoothly defined, 2. ${\mathfrak{g}}^{(k+1)}$ is a vector bundle over $M$, and 3. $pr:P^{k+1}_D(F)\to P^k_D(F)$ is surjective for all $0\leq k\leq k_0$. Then $P^{k+1}_D(F)\subset J^{k+1}F$ is smoothly defined. See [@Gold2; @Gold3] for similar results. This is a direct consequence of proposition \[newremark\]: by the short exact sequence , ${\mathfrak{g}}^{(k+1)}$ is smooth if and only if $P^{k+1}_D(F)$ is smooth. #### Higher curvatures Now we will prove theorem \[formally integrable\]. This will be done inductively by showing: Let $(F,D)$ be a relative connection and let $k$ be an integer. If the classical $k$-prolongation space $P^k_D(F)$ is smoothly defined, then there exists an exact sequence of vector bundles over $M$: $$\begin{aligned} P^{k+1}_D(F)\overset{pr}{{\longrightarrow}} P^k_D(F)\xrightarrow{\kappa_{k+1}}{}H^{2,k-1}({\mathfrak{g}}).\end{aligned}$$ It is clear now that by an inductive argument, the previous proposition together with proposition \[l\] proves theorem \[formally integrable\].
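Before carrying out the construction of the curvature maps, it may help to record how the criterion reads in the simplest case, as a sketch: if $l$ is an isomorphism (so that $D$ may be viewed as a classical linear connection $\nabla$), then ${\mathfrak{g}}=0$, all the prolongations ${\mathfrak{g}}^{(k)}$ vanish, and conditions 2 and 3 of theorem \[formally integrable\] hold trivially; condition 1, surjectivity of $pr:P_\nabla(F)\to F$, holds precisely when $\nabla$ is flat (remark \[invol\]). Hence the theorem recovers the expected statement that a flat classical connection is formally integrable, and in that case one checks inductively that all the classical prolongation spaces are identified with $F$ via $pr$.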
Throughout this subsection let’s assume that for a fixed integer $k_0$, proposition \[l\] holds. Hence, $$\begin{aligned} P_D^{k_0+1}(F)\overset{pr}{{\longrightarrow}}P_D^{k_0}(F)\overset{pr}{{\longrightarrow}}\cdots {\longrightarrow}P_D(F)\overset{pr}{{\longrightarrow}}F\end{aligned}$$ is a sequence of surjective smooth vector bundle maps. We will construct, for each natural number $0\leq k\leq k_0+1$, the [**$(k+1)$-reduced curvature map**]{} of $(F,D)$ $$\begin{aligned} \kappa_{k+1}:P^k_D(F)\longrightarrow C^{2,k-1}\end{aligned}$$ where $C^{2,k-1}$ is the bundle of vector spaces over $M$ $$\begin{aligned} C^{2,k-1}:= \frac{\wedge^2T^*\otimes\mathfrak{g}^{(k-1)}}{\partial(T^*\otimes\mathfrak{g}^{(k)})}.\end{aligned}$$ Here we set $P^0_D(F)=F$, ${\mathfrak{g}}^{(0)}={\mathfrak{g}}$ and ${\mathfrak{g}}^{(-1)}=E.$ In [@Gold3] one can also find the construction of the reduced curvature maps for differential equations, and the following results regarding them, but this is done from an algebraic point of view. We begin by showing the construction of $$\begin{aligned} \kappa_1:F{\longrightarrow}C^{2,-1}:= \frac{\wedge^2T^*\otimes E}{\partial_D(T^*\otimes{\mathfrak{g}})}\end{aligned}$$ using the vector bundle map $$\begin{aligned} \varkappa_D:J_D^1F\to \wedge^2T^*\otimes E\end{aligned}$$ which has $P_D(F)$ as its kernel (see subsection \[basic notions\]). Recall that $\varkappa_D({\alpha},\omega)$, where $({\alpha},\omega)\in \Gamma(F)\oplus \Omega^1(M,F)$ is such that $D({\alpha})=l\circ\omega$, is given by $$\begin{aligned} \label{equation:si} \varkappa_D({\alpha},\omega)(X,Y)=D_X\omega(Y)- D_Y\omega(X)-l\circ\omega[X,Y]\end{aligned}$$ for $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$, and therefore if $\omega\in \Omega^1(M,{\mathfrak{g}})$ $$\begin{aligned} \varkappa_D(0,\omega)=\partial_D(\omega),\quad\text{\rm for }\partial_D:T^*\otimes {\mathfrak{g}}\to \wedge^2 T^*\otimes E.\end{aligned}$$ On the other hand, the short exact sequence of vector bundles over $M$ (for a proof see remark \[remark1\]) $$\begin{aligned} 0{\longrightarrow}T^*\otimes {\mathfrak{g}}{\longrightarrow}J^1_DF\overset{pr}{{\longrightarrow}}F{\longrightarrow}0,\end{aligned}$$ shows that $$\begin{aligned} F\simeq J^1_DF/(T^*\otimes {\mathfrak{g}}),\end{aligned}$$ and that $$\begin{aligned} \label{1-curvature} \kappa_1:\frac{J^1_DF}{T^*\otimes {\mathfrak{g}}}{\longrightarrow}C^{2,-1},\quad [p]\mapsto[\varkappa_D(p)]\end{aligned}$$ is well-defined. The [**$1$-reduced curvature map**]{} $$\begin{aligned} \kappa_1:F{\longrightarrow}C^{2,-1}\end{aligned}$$ is defined by equation . The sequence $$\begin{aligned} P_D(F)\overset{pr}{{\longrightarrow}} F\overset{\kappa_1}{{\longrightarrow}}C^{2,-1}\end{aligned}$$ of bundles over $M$ is exact. That $\kappa_1\circ pr=0$ follows from the fact that $P_D(F)=\ker\varkappa_D$. Now, $$\begin{aligned} [({\alpha},\omega)]_x\in F\simeq J^1_D(F)/(T^*\otimes {\mathfrak{g}})\end{aligned}$$ is such that $\kappa_1([({\alpha},\omega)]_x)=0$ if and only if we find $\phi_x:T_xM\to {\mathfrak{g}}_x$ such that $$\begin{aligned} \begin{aligned} \varkappa_D({\alpha},\omega)(X,Y)(x)&=\partial_D(\phi(Y))(X)(x)-\partial_D(\phi(X))(Y)(x)\\ &=D_X(\phi(Y))(x)-D_Y(\phi(X))(x) \end{aligned}\end{aligned}$$ for all $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$, and where $\phi\in\Gamma(T^*\otimes{\mathfrak{g}})$ is any extension of $\phi_x$. Since ${\mathfrak{g}}=\ker l$, and $\varkappa_D({\alpha},\omega)(X,Y)(x)$ is given by formula , the last equality shows that $({\alpha},\omega-\phi)(x)\in P_D(F)$.
And of course $pr(({\alpha},\omega-\phi)(x))=[({\alpha},\omega)(x)].$ For $1\leq k\leq k_0+1$, the map $$\begin{aligned} \kappa_{k+1}:P^k_D(F){\longrightarrow}C^{2,k-1}\end{aligned}$$ is just the 1-reduced curvature map of $D^{(k)}:\Gamma(P_D^k(F))\to \Omega^1(M,P^{k-1}_D(F)).$ Having in mind the short exact sequence $$\begin{aligned} 0{\longrightarrow}T^*\otimes {\mathfrak{g}}^{(k)}{\longrightarrow}J^1_{D^{(k)}}(P^{k}_D(F))\overset{pr}{{\longrightarrow}} P^k_D(F){\longrightarrow}0\end{aligned}$$ and that $P^{k+1}_D(F)$ is the prolongation of $(P^{k}_D(F),D^{(k)})$, we have the following definition and lemma. \[curv(D)\]For $1\leq k\leq k_0+1$, we define the [**$(k+1)$-reduced curvature map**]{} by $$\begin{aligned} \kappa_{k+1}:P^k_D(F){\longrightarrow}C^{2,k-1}, \ \ \kappa_{k+1}([p])=[\varkappa_{D^{(k)}}(p)]\end{aligned}$$ for $[p]\in P^k_D(F)\simeq J^1_{D^{(k)}}(P^{k}_D(F))/T^*\otimes {\mathfrak{g}}^{(k)}.$ The sequence $$\begin{aligned} P_D^{k+1}(F)\overset{pr}{{\longrightarrow}} P^k_D(F)\overset{\kappa_{k+1}}{{\longrightarrow}}C^{2,k-1}\end{aligned}$$ of bundles over $M$ is exact. \[39\] For $1\leq k\leq k_0+1$, the image of $\varkappa_{D^{(k)}}$ lies in the family of subspaces $$\begin{aligned} \ker\{\partial:\wedge^2T^*\otimes {\mathfrak{g}}^{(k-1)}{\longrightarrow}\wedge^3T^*\otimes{\mathfrak{g}}^{(k-2)} \}.\end{aligned}$$ Hence, $\kappa_{k+1}$ takes values in $$\begin{aligned} H^{2,k-1}({\mathfrak{g}}):=\frac{\ker\{\partial:\wedge^2T^*\otimes {\mathfrak{g}}^{(k-1)}\to \wedge^3T^*\otimes{\mathfrak{g}}^{(k-2)} \}}{{\text{\rm Im}\,}\{\partial:T^*\otimes\mathfrak{g}^{(k)}\to \wedge^2T^*\otimes {\mathfrak{g}}^{(k-1)}\}}\end{aligned}$$ Let $({\alpha},\Omega)\in \Gamma(P^k_D(F))\oplus \Omega^1(M,P^k_D(F))$ be such that $$\begin{aligned} D^{k\text{-{\text{\rm clas}}}}{\alpha}=pr\,\Omega\end{aligned}$$ or, in other words, $({\alpha},\Omega)$ is a section of $J^1_{D^{(k)}}(P^k_D(F))$. Then, $$\begin{aligned} \begin{aligned} pr(\varkappa_{D^{(k)}}({\alpha},\Omega)(X,Y)) =pr(D^{(k)}_X\Omega(Y)-D^{(k)}_Y\Omega(X)-pr\Omega[X,Y]) \end{aligned}\end{aligned}$$ for all $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$. Since $pr\circ D^{(k)}=D^{(k-1)}\circ pr$ because $(D^{(k-1)},D^{(k)})$ are compatible, and $D^{(k)}{\alpha}=pr\Omega$ we get that $$\begin{aligned} pr(\varkappa_{D^{(k)}}({\alpha},\Omega)(X,Y))=\bar{D}^{(k-1)}\circ \bar{D}^{(k)}({\alpha})(X,Y)=0,\end{aligned}$$ for $\bar{D}^{(k-1)}\circ \bar{D}^{(k)}:\Gamma(P^k_D(F))\to \Omega^2(M,P^{k-1}_D(F))$ as in lemma \[complex\]. This shows, by the short exact sequence , that $$\begin{aligned} \varkappa_{D^{(k)}}({\alpha},\Omega)(X,Y)\in{\mathfrak{g}}^{(k-1)}(F,D).\end{aligned}$$ On the other hand, $$\begin{aligned} \label{a} \begin{aligned} \partial(&\varkappa_{D^{(k)}}({\alpha},\Omega))(X,Y,Z)=\partial(\varkappa_{D^{(k)}}({\alpha},\Omega)(X,Y))(Z)\\&+\partial(\varkappa_{D^{(k)}}({\alpha},\Omega)(Y,Z))(X)+\partial(\varkappa_{D^{(k)}}({\alpha},\Omega)(Z,X))(Y)\\& =D^{(k-1)}_Z(\varkappa_{D^{(k)}}({\alpha},\Omega)(X,Y))+D^{(k-1)}_X(\varkappa_{D^{(k)}}({\alpha},\Omega)(Y,Z))\\&+D^{(k-1)}_Y(\varkappa_{D^{(k)}}({\alpha},\Omega)(Z,X)) \end{aligned}\end{aligned}$$ for $Z\in{\ensuremath{\mathfrak{X}}}(M)$.
To prove that this expression vanishes let’s look closer at the expression on the right hand side: as ${\alpha}$ is a section of $P^k_D(F)$, we can write it as $(\beta,\omega)\in\Gamma (P_D^{k-1}(F))\oplus \Omega^1(M,P^{k-1}_D(F))$ with $$\begin{aligned} \label{b} \begin{split} &\omega=D^{(k)}({\alpha})=pr\Omega \\ D^{(k-1)}_X\omega(Y)&-D^{(k-1)}_Y\omega(X)-pr\omega[X,Y]=0 \end{split}\end{aligned}$$ for all $X,Y\in {\ensuremath{\mathfrak{X}}}(M)$. Note that $\omega\in\ker\{pr:P^{k-1}_D(F)\to P^{k-2}_D(F)\}$, which is equal to ${\mathfrak{g}}^{(k-1)}$, by the exact sequence . Since ${\mathfrak{g}}^{(k-1)}$ is the first prolongation of ${\mathfrak{g}}^{(k-2)}$ we have that $$\begin{aligned} \label{h} \partial(\omega(X))(Y)-\partial(\omega(Y))(X)=D^{(k-1)}_Y\omega(X)-D^{(k-1)}_X\omega(Y)=0.\end{aligned}$$ Now, $\Omega(X)$ is also a section of $P^k_D(F)$ and therefore, using , we can also write it as $$\begin{aligned} (\omega(X),\zeta_X)\in \Gamma(P^{k-1}_D(F))\oplus \Omega^1(M,P^{k-1}_D(F))\end{aligned}$$ where $$\begin{aligned} \label{cc}\zeta_X=D^{(k)}\Omega(X), \end{aligned}$$ and such that $$\begin{aligned} \label{d} \begin{split} &D^{(k-1)}\omega(X)=pr\zeta_X\\ D^{(k-1)}_Y\zeta_X(Z&)-D^{(k-1)}_Z\zeta_X(Y)-pr\zeta_X[Y,Z]=0. \end{split}\end{aligned}$$ From and , we write $D^{(k-1)}_Z(\varkappa_{D^{(k)}}({\alpha},\Omega)(X,Y))$ as $$\begin{aligned} D^{(k-1)}_Z(D_X^{(k)}\Omega(Y)-D^{(k)}_Y\Omega(X)-pr\Omega[X,Y])=\\ D^{(k-1)}_Z(\zeta_Y(X)-\zeta_X(Y)-\omega[X,Y])=\\ D^{(k-1)}_Z\zeta_Y(X)-D^{(k-1)}_Z\zeta_X(Y)-D^{(k-1)}_Z\omega[X,Y],\end{aligned}$$ and the right hand side of equation becomes $$\begin{aligned} \label{f} D^{(k-1)}_Z\zeta_Y(X)-D^{(k-1)}_Z\zeta_X(Y)-D^{(k-1)}_Z\omega[X,Y]\\ +D^{(k-1)}_X\zeta_Z(Y)-D^{(k-1)}_X\zeta_Y(Z)-D^{(k-1)}_X\omega[Y,Z]\\ +D^{(k-1)}_Y\zeta_X(Z)-D^{(k-1)}_Y\zeta_Z(X)-D^{(k-1)}_Y\omega[Z,X].\end{aligned}$$ Using , expression reduces to $$\begin{aligned} D^{(k-1)}_{[Z,X]}\omega(Y)+D^{(k-1)}_{[X,Y]}\omega(Z)+D^{(k-1)}_{[Y,Z]}\omega(X)\\ -D^{(k-1)}_Z\omega[X,Y]-D^{(k-1)}_X\omega[Y,Z]-D^{(k-1)}_Y\omega[Z,X]\end{aligned}$$ which is zero by . Relative connections of finite type {#finite type} ----------------------------------- We say that the $l$-connection $(F,D)$ is of [**finite type**]{} if there exists an integer $k$ such that ${\mathfrak{g}}^{(k)}(F,D)=0$. The smallest $k$ with this property is called the [**order of $D$**]{}. For a vector space $V$, denote by $V_M$ the trivial vector bundle over $M$ with fiber $V$. It comes equipped with the obvious flat connection, denoted by $\nabla^{\text{\rm flat}}$. One has $$\begin{aligned} \text{Sol}(V_M,\nabla^{\text{\rm flat}})\simeq V, \end{aligned}$$ where $v\in V$ is identified with the constant section of $V_M$. In this section we will prove the following result. \[finitecase\] Let $D$ be a relative connection of finite type over a simply connected manifold, and let $k\geq 1$ be the order of $D$. If 1. $P^{k}_D(F)$ is smoothly defined, and 2. $pr:P^{k+1}_D(F)\to P^{k}_D(F)$ is surjective, then ${\text{\rm Sol}}(F,D)$ is a finite dimensional vector space of dimension $$\begin{aligned} r:={\text{\rm rk}\,}F+{\text{\rm rk}\,}{\mathfrak{g}}^{(1)}+{\text{\rm rk}\,}{\mathfrak{g}}^{(2)}+\cdots+{\text{\rm rk}\,}{\mathfrak{g}}^{(k-1)}.\end{aligned}$$ More precisely, choosing $V={\mathbb R}^r$, there exists a morphism $p:(V_M,\nabla^{\text{\rm flat}})\to (F,D)$ of relative connections, inducing a bijection on the space of solutions.
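Before turning to the proof, a minimal concrete instance may help (the matrix-valued function $A$ below is introduced only for this illustration). Take $M={\mathbb R}$ with coordinate $t$, let $F=E={\mathbb R}^n_M$, $l=\operatorname{id}$, and set $$\begin{aligned} D(u):=du-Au\,dt,\qquad u\in\Gamma(F),\end{aligned}$$ for a smooth family of matrices $A:{\mathbb R}\to\operatorname{Mat}_{n\times n}({\mathbb R})$. This is a classical linear connection, so its symbol ${\mathfrak{g}}=\ker l$ vanishes and $(F,D)$ is of finite type; since every connection over a one-dimensional base is flat, all the classical prolongation spaces are identified with $F$ via $pr$ and the hypotheses of the theorem hold. Solutions of $(F,D)$ are exactly the solutions of the linear system of ordinary differential equations $u'=Au$, so the classical existence and uniqueness theorem for such systems gives $\dim\text{\rm Sol}(F,D)=n={\text{\rm rk}\,}F$, which is the number $r$ of the theorem (all the terms ${\text{\rm rk}\,}{\mathfrak{g}}^{(j)}$ vanish), and the morphism $p$ can be taken to be the trivialization of $F$ by a fundamental system of solutions.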
In the previous theorem, a morphism of relative connections means that $p:V_M\to F$ is a vector bundle map such that $$\begin{aligned} \label{formula11} D\circ p=(l\circ p)\circ\nabla^{\text{\rm flat}}\end{aligned}$$ The message here is the following: a relative operator satisfying the hypotheses of theorem \[finitecase\] is the quotient of a trivial bundle with the canonical flat connection $\nabla^{\text{\rm flat}}$. Intuitively, the situation is as follows $$\begin{aligned} \xymatrix{ V_M \ar[r]^{\nabla^{\text{\rm flat}}} \ar[d]_p & V_M \ar[d]^{l\circ p}\\ F \ar[r]^{D} & E. }\end{aligned}$$ Although the above diagram is not precise, it helps to illustrate our situation. Theorem \[finitecase\] follows from the following lemma. \[lema\]With the hypotheses of theorem \[finitecase\], one has that 1. $P^{k}_D(F)$ is isomorphic to the trivial bundle $V_M$, 2. $pr:P^{k}_D(F)\to P^{k-1}_D(F)$ is an isomorphism of vector bundles over $M$. Moreover, under the identification given by $pr$, $D^{(k)}$ becomes the trivial connection $\nabla^{\text{\rm flat}}$. By proposition \[newremark\] we have a short exact sequence $$\begin{aligned} \label{si} 0{\longrightarrow}{\mathfrak{g}}^{(l+1)}{\longrightarrow}P^{l+1}_D(F)\overset{pr}{{\longrightarrow}} P^{l}_D(F){\longrightarrow}0 \end{aligned}$$ of vector bundles over $M$ for any $0\leq l\leq k$. As ${\mathfrak{g}}^{(k)}=0$ we have that $P^{k}_D(F)$ is isomorphic to $P^{k-1}_D(F)$ via $pr$, and under this identification we get that the connection $D^{(k)}:\Gamma(P^{k}_D(F))\to \Omega^1(M,P^{k-1}_D(F))$ is a classical linear connection on $P^{k}_D(F)$. We claim that this connection is flat. Indeed, since ${\mathfrak{g}}^{(k)}=0$ then its prolongation ${\mathfrak{g}}^{(k+1)}=0$, and therefore the sequence for $l=k$ implies that $pr:P^{k+1}_D(F)\to P^{k}_D(F)$ is a bijection. From remark \[invol\] it follows that $D^{(k)}$ is flat. We are now in the situation of a vector bundle $L$ over a simply connected manifold $M$, which admits a flat linear connection $\nabla:{\ensuremath{\mathfrak{X}}}(M)\times\Gamma(L)\to \Gamma(L)$. It is a well-known fact that in this case, $L$ is isomorphic to the trivial bundle $V_M$, where $V$ is the finite dimensional vector space $V\subset\Gamma(L)$ given by parallel sections of $\nabla$. The isomorphism is given by $$\begin{aligned} V_M\overset{\psi}{{\longrightarrow}} L, \ (s,x)\mapsto s(x)\end{aligned}$$ From this isomorphism, it is clear that $\text{Sol}(V_M,\nabla^{\text{\rm flat}})\simeq V$ is mapped onto $\text{Sol}(L,\nabla)$. From the Leibniz identity for $\nabla$ and $\nabla^{\text{\rm flat}}$, and the fact that the set $V\subset \Gamma(V_M)$ generates the space $\Gamma(V_M)$ as a $C^{\infty}(M)$-module, we have that $\nabla$ becomes $\nabla^{\text{\rm flat}}$, i.e. $$\begin{aligned} \psi\circ\nabla^{\text{\rm flat}}=\nabla\circ \psi\end{aligned}$$ In our case take $L=P^{k}_D(F)$ and $\nabla=D^{(k)}$. To show that the dimension of $V$ is equal to $r$, we count the rank of $P^{k}_D(F)$ by the recursive formula $$\begin{aligned} \begin{aligned} &{\text{\rm rk}\,}P_D(F)={\text{\rm rk}\,}F+{\text{\rm rk}\,}{\mathfrak{g}}^{(1)}\\ &{\text{\rm rk}\,}P^l_D(F)={\text{\rm rk}\,}P^{l-1}_D(F)+{\text{\rm rk}\,}{\mathfrak{g}}^{(l)},\quad\text{for }l\geq2 \end{aligned}\end{aligned}$$ which is true by the exact sequence . From lemma \[lema\], one takes $V_M=P^{k}_D(F)$ and $p=pr^{k}_0$. Since $pr^{k}_0$ gives a bijection between $\text{Sol}(P^{k}_D(F),D^{(k)})$ and $\text{Sol}(F,D)$, then formula is well-defined.
Moreover, as $p$ is surjective and in this case $\Gamma(P^{k}_D(F))$ is generated by $\text{Sol}(P^{k}_D(F),D^{(k)})$ as a $C^{\infty}(M)$-module, it follows that $D$ is determined by formula . Moreover, $$\begin{aligned} r={\text{\rm rk}\,}P^{k}_D(F)=\dim \text{\rm Sol}(P^{k}_D(F),\nabla^{\text{flat}})=\dim \text{\rm Sol}(P^{k}_D(F),D^{(k)}), \end{aligned}$$ and from corollary \[corollary2\] we have that $pr^k_0$ gives a linear bijection between $$\text{\rm Sol}(P^{k}_D(F),D^{(k)})$$ and $\text{\rm Sol}(F,D)$. Let $(F,D)$ be a relative connection of finite type over a simply connected manifold $M$, and let $k\geq 1$ be the order of $D$. If 1. $pr:P_D(F)\to F$ is surjective, 2. ${\mathfrak{g}}^{(1)}$ is a smooth vector bundle over $M$, and 3. $H^{2,l}({\mathfrak{g}})=0$ for $0\leq l\leq k-1$, then ${\text{\rm Sol}}(F,D)$ is a finite dimensional vector space of dimension $$\begin{aligned} r:={\text{\rm rk}\,}F+{\text{\rm rk}\,}{\mathfrak{g}}^{(1)}+{\text{\rm rk}\,}{\mathfrak{g}}^{(2)}+\cdots+{\text{\rm rk}\,}{\mathfrak{g}}^{(k-1)}.\end{aligned}$$ More precisely, choosing $V={\mathbb R}^r$, there exists a morphism $p:(V_M,\nabla^{\text{\rm flat}})\to (F,D)$ of relative connections, inducing a bijection on the space of solutions. Note that ${\mathfrak{g}}^{(k)}=0$ implies that ${\mathfrak{g}}^{(l)}=0$ for all $l\geq k$, and therefore $H^{2,l}({\mathfrak{g}})=0$ for all $l\geq k$. Hence, we can apply theorem \[formally integrable\] in this case to get that $(F,D)$ is formally integrable, and therefore the hypotheses of theorem \[finitecase\] are fulfilled. Pfaffian bundles {#Pfaffian bundles} ================ In this chapter we will define two equivalent notions of Pfaffian bundles, one using distributions $H\subset TR$ and one using surjective one forms on $R$ with values in some vector bundle. The connection between the two notions is the kernel of the one form, which is an honest smooth distribution thanks to the surjectivity of the one form under consideration. These two equivalent languages allow us to develop the theory of Pfaffian bundles using either distributions or forms, depending on which is more convenient at each specific step. However, the approach using one forms allows us to have a slightly more general version of Pfaffian bundles when one does not require the surjectivity assumption. Bearing this in mind, throughout this chapter we will point out which properties and results still hold in the more general case of a one form which is not necessarily surjective.\ Throughout this chapter $\pi:R\to M$ denotes a surjective submersion. By $T^\pi R$ we denote the subbundle of $TR$ given by $\ker d\pi$, the vectors tangent to the fibers of $R$. By $\pi:J^kR\to M,\pi(j^k_xs)=x$ we denote the smooth bundle over $M$ which consists of $k$-jets $j_x^ks$ of local sections of $\pi:R\to M$, and we set $pr:J^kR\to J^{k-1}R$ to be the surjective submersion mapping $j^k_xs$ to $j^{k-1}_xs.$ See subsection \[jet-bundles\]. For ease of notation $T$ and $T^*$ denote the tangent and cotangent bundles of a manifold $M$ respectively. Also we will regard the vector bundles over $M$ as vector bundles over $R$ via the pullback of $\pi$, unless otherwise stated. Pfaffian bundles and Pfaffian forms ----------------------------------- Let $H\subset TR$ be a distribution.
By $E$ we denote the vector bundle over $R$ given by the quotient $$\begin{aligned} E:=TR/H\end{aligned}$$ We denote by ${\mathfrak{g}}:={\mathfrak{g}}(H)$ the bundle (not necessarily of constant rank) over $R$ given by $$\begin{aligned} {\mathfrak{g}}=H^\pi:=H\cap T^{\pi}R\end{aligned}$$ We call ${\mathfrak{g}}$ [**the symbol space of $H$.**]{} \[def: pfaffian distributions\] - We say that $H$ is [**$\pi$-transversal**]{} if $$\begin{aligned} TR=H+T^\pi R.\end{aligned}$$ - We call $H$ [**$\pi$-involutive**]{} if $H^\pi$ is closed under the Lie bracket of vector fields in ${\ensuremath{\mathfrak{X}}}(R)$. - A [**Pfaffian distribution**]{} $H$ is a distribution which is both $\pi$-transversal and $\pi$-involutive. - A [**Pfaffian bundle**]{} is a surjective submersion $\pi:R\to M$ together with a Pfaffian distribution $H\subset TR$. In this case we use the notation $\pi:(R,H)\to M$. - A [**solution of $(R,H)$**]{} is a section $s:M\to R$ with the property that $$\begin{aligned} d_xs(T_xM)\subset H_{s(x)}\end{aligned}$$ for all $x\in M.$ We denote the set of solutions by $$\begin{aligned} {\text{\rm Sol}}(R,H)\subset \Gamma(R).\end{aligned}$$ - A [**partial integral element of $H$**]{} is any linear subspace $V\subset T_pR$ of dimension equal to the dimension of $M$, such that $$\begin{aligned} V\subset H_p \qquad\text{and}\qquad T_pR=V\oplus T^\pi_p R.\end{aligned}$$ The set of partial integral elements, denoted by $J^1_HR$, is called [**the partial prolongation with respect to $H$**]{}. Let us make some comments about notions and names that appear in the literature. In [@BC], a Pfaffian system was an exterior differential system $\mathcal{I}\subset \Omega^*(R)$ with independence condition generated as an exterior differential ideal in degree one. This data was encoded in sub-bundles $$\begin{aligned} I\subset J\subset T^*R\end{aligned}$$ which, in analogy with our approach, are given by $$\begin{aligned} J=(TR/H^\pi)^*, \ \text{ and } \ \ I=(TR/H)^*,\end{aligned}$$ where $H\subset TR$ is a $\pi$-transversal distribution. What they called a linear Pfaffian system in [@BC] is in our language equivalent to $H$ being $\pi$-involutive. We warn that linearity in the sense of [@BC] is a different notion from what we will call linear Pfaffian bundles. \[transversal-properties\] - When $H$ is $\pi$-transversal, then ${\mathfrak{g}}$ is a smooth vector subbundle of $TR$ as it has constant rank and $$\begin{aligned} H/{\mathfrak{g}}=\pi^*TM.\end{aligned}$$ Indeed, for any $p\in R$, $$\begin{aligned} \dim{\mathfrak{g}}_p={\text{\rm rk}\,}H+{\text{\rm rk}\,}T^\pi R-{\text{\rm rk}\,}TR.\end{aligned}$$ On the other hand, $$\begin{aligned} \pi^*TM\simeq TR/T^\pi R\simeq (T^\pi R+H)/T^\pi R\simeq H/{\mathfrak{g}}.\end{aligned}$$ - If $H$ is $\pi$-transversal then the restriction of $d_p\pi$ to $H_p$ $$\begin{aligned} d_p\pi:H_p{\longrightarrow}T_{\pi(p)}M\end{aligned}$$ is surjective for any $p\in R.$ For a partial integral element $V\subset H_p$ of $\pi:(R,H)\to M$ we have that the restriction $$\begin{aligned} \label{iso} d_p\pi:V\overset{\simeq}{{\longrightarrow}} T_{\pi(p)}M\end{aligned}$$ is an isomorphism as $V$ is transversal to $T^\pi_pR$ and $\dim V=\dim M.$ - When $H$ is $\pi$-transversal, the set of partial integral elements $J^1_HR$ can be regarded as a sub-bundle of $J^1R$ in the following way. The fact that \[iso\] is an isomorphism for a partial integral element $V\subset H_p$ means that $V$ is the image of a splitting $\sigma_p:T_{\pi(p)}M\to T_pR$ of $d_p\pi:T_pR\to T_{\pi(p)}M$. 
But the set of splittings $\sigma_p$, where $p\in R$, is identified with $J^1 R$. The involutivity of $H$ is measured by the so-called curvature map: \[curvature-map\] The [**curvature map**]{} of $H$ is the $C^{\infty}(R)$-bilinear map defined by $$\begin{aligned} c:=c(H):H\times H{\longrightarrow}E, \quad c(u,v)=[U,V]_p\;{\operatorname{mod}}H,\end{aligned}$$ where $U,V\in{\ensuremath{\mathfrak{X}}}(R)$ are vector fields tangent to $H$ such that $U_p=u$ and $V_p=v$. Note that the Leibniz identity implies that $c$ is a well-defined bilinear map. \[involutive-properties\] - $H$ is involutive if and only if $c=0$. - $H$ is $\pi$-involutive if and only if $c_{{\mathfrak{g}}\times {\mathfrak{g}}}=0$. If $H$ is also $\pi$-transversal we have an induced vector bundle map over $R$ $$\begin{aligned} {\mathfrak{g}}\times H/{\mathfrak{g}}{\longrightarrow}E,\quad (v,[X])\mapsto c(v,X)\end{aligned}$$ or analogously the dual vector bundle map over $R$ $$\begin{aligned} \label{sm} \partial_H:{\mathfrak{g}}{\longrightarrow}{\mathrm{Hom}}(TM,E),\end{aligned}$$ where we are using the vector bundle isomorphism $\pi^*TM\simeq H/{\mathfrak{g}}$. We call the map \[sm\] [**the symbol map of $H$**]{}, and we extend it to the linear map $\partial_H:\wedge^kT^*\otimes {\mathfrak{g}}\to \wedge^{k+1}T^*\otimes E$ by the usual formula. We set $$\begin{aligned} {\mathfrak{g}}^{(k)}:={\mathfrak{g}}^{(k)}(R,H)\subset S^k T^*\otimes {\mathfrak{g}}\end{aligned}$$ the bundle over $R$ (not necessarily of constant rank) equal to the $k$-prolongation of the map $\partial_H$. See definition \[1st-prolongation\]. Associated to any distribution $H\subset TR$ we have a canonical surjective form $\theta_H\in\Omega^1(R,TR/H)$ which is nothing more than the projection map $$\begin{aligned} \label{quotient-form} \theta_H:TR{\longrightarrow}TR/H,\quad U\mapsto[U].\end{aligned}$$ Moreover, any surjective form $\theta\in\Omega^1(R,E')$ arises in this way (up to isomorphism of $E'$): using the identification $$\begin{aligned} TR/ H\overset{\simeq}{{\longrightarrow}}E',\quad[U]\mapsto \theta(U) \end{aligned}$$ of vector bundles over $R$ with $H=\ker\theta$, $\theta$ becomes the canonical projection $TR\to TR/\ker\theta$. Let $\theta\in\Omega^1(R,E')$ be a one form with values in the vector bundle $E'\to R$. - [**The symbol space of $\theta$**]{} is the bundle over $R$ (not necessarily of constant rank) given by $$\begin{aligned} {\mathfrak{g}}={\mathfrak{g}}(\theta):=\ker\theta\cap T^{\pi}R.\end{aligned}$$ - We say that $\theta$ is [**regular**]{} if it is surjective and its kernel is $\pi$-transversal, i.e. $$\begin{aligned} TR= \ker\theta+T^\pi R.\end{aligned}$$ - The 1-form $\theta$ is [**$\pi$-involutive**]{} if for any $p\in R$ $$\begin{aligned} \theta_p([U,V])=0\end{aligned}$$ for any vector fields $U,V\in{\ensuremath{\mathfrak{X}}}^\pi(R)$ such that $U,V\in\Gamma(\ker\theta)$. - $\theta$ is called a [**Pfaffian form**]{} if it is regular and $\pi$-involutive. - A [**Pfaffian bundle**]{} is a surjective submersion $\pi:R\to M$ together with a Pfaffian form $\theta$. In this case we use the notation $\pi:(R,\theta)\to M$. - A [**solution of $(R,\theta)$**]{} is a section $s:M\to R$ with the property that $s^*\theta=0$. 
The set of solutions of $(R,\theta)$ is denoted by $$\begin{aligned} {\text{\rm Sol}}(R,\theta)\subset \Gamma(R).\end{aligned}$$ - A [**partial integral element of $(R,\theta)$**]{} is a linear space $V\subset T_pR$ of dimension equal to the dimension of $M$, and such that $$\begin{aligned} \theta_p(V)=0\quad{and}\quad V\oplus T^\pi_pR=T_pR.\end{aligned}$$ The set of partial integral elements, denoted by $J^1_\theta R$, is called [**the partial prolongation with respect to $\theta$**]{}. Again for a one form $\theta\in\Omega^1(M,E')$, the involutivity of $H:=\ker\theta$ is measured by the so-called partial differentiation of $\theta$: \[partial-diff\] For a point-wise surjective $1$-form $\theta\in \Omega^1(R,E')$, [**the partial differential of $\theta$**]{} is the $C^{\infty}(R)$-bilinear map over $R$ defined by $$\begin{aligned} \label{deltatheta} \delta\theta:H\times H{\longrightarrow}E',\quad \delta\theta(u,v):=\theta_p([U,V])\end{aligned}$$ where $U,V\in{\ensuremath{\mathfrak{X}}}(R)$ are vector fields such that $\theta(U)=\theta(V)=0$, with $U_p=u$ and $V_p=v$. The Leibniz identity implies again that $\delta\theta$ is a well-defined map. Note that $\theta$ is $\pi$-involutive if $\delta\theta_{{\mathfrak{g}}\times{\mathfrak{g}}}=0$. So for a Pfaffian form $\theta$ we have an induced vector bundle map over $R$ $$\begin{aligned} \label{partialtheta} \partial_\theta:{\mathfrak{g}}{\longrightarrow}{\mathrm{Hom}}(TM,E'),\quad \partial_\theta(v)(X)=\delta\theta(v,[X]),\end{aligned}$$ where again we are using the vector bundle isomorphism $H/{\mathfrak{g}}\simeq \pi^*TM$. The map is called [**the symbol map of $\theta$**]{}. We extend $\partial_\theta$ to the linear map $\partial_\theta:\wedge^kT^*\otimes {\mathfrak{g}}\to \wedge^{k+1}T^*\otimes E$ by the usual formula. The bundle over $R$ given by the $k$-prolongation of $\partial_\theta$ is denoted by $$\begin{aligned} {\mathfrak{g}}^{(k)}:={\mathfrak{g}}^{(k)}(R,\theta)\subset S^kT^*\otimes {\mathfrak{g}}.\end{aligned}$$ When $\theta$ is not surjective but is both $\pi$-involutive and $\pi$-transversal, the bundle map still makes sense as a $C^\infty(R)$-linear map but we deal with a possibly non-smooth bundle ${\mathfrak{g}}(\theta).$ From now on we should always keep in mind the following result which will allow us to work either with Pfaffian distributions or with the equivalent dual picture of Pfaffian forms. \[lema1\] Let $H\subset TR$ be a distribution and $\theta:=\theta_H\in\Omega^1(R,E)$ its associated canonical form where $E=TR/H$. Then, 1. $H$ is $\pi$-transversal if and only if $\theta$ is regular, 2. $H$ is $\pi$-involutive if and only if $\theta$ is $\pi$-involutive, 3. a section $s\in\Gamma(R)$ is a solution of $(R,H)$ if and only if it is a solution of $(R,\theta)$, 4. $V\subset T_pR$ is partial integral element of $(R,H)$ if and only if it is a partial integral element of $(R,\theta)$, i.e. $J^1_HR=J^1_\theta R,$ 5. the curvature map $c(H)$ of $H$ and the partial differential $\delta\theta$ of $\theta$ coincide, i.e, for any $u,v\in H$ $$\begin{aligned} c(u,v)=\delta\theta(u,v).\end{aligned}$$ The proof of the above lemma is a direct consequence of the definitions. From item 5 of lemma \[lema1\] we get that the symbol maps $\partial_H,\partial_\theta:{\mathfrak{g}}\to T^*\otimes E$ of $H$ and $\theta$ respectively coincide, therefore all their $k$-prolongations do as well. 
That is $$\begin{aligned} {\mathfrak{g}}^{(k)}(R,H)={\mathfrak{g}}^{(k)}(R,\theta).\end{aligned}$$ Assume that our Pfaffian bundle $\pi:(F, \theta)\to M$ is linear in the sense that $\pi:F\to M$ is a vector bundle and the distribution $H\subset TF$ is a sub-bundle of the vector bundle $d\pi:TF\to TM$, where the structural maps of $TF$ are given by the differential of the structural maps of $F$. In the dual picture we get a linear one form $\theta_H\in\Omega^1(F,\pi^*E')$, where $E'$ is now a vector bundle over $M$ defined by $$\begin{aligned} E':=TF/H|_{M}.\end{aligned}$$ The linearity condition of $\theta_H$ is given by the equation $$\begin{aligned} a^*\theta=p_1^*\theta+p_2^*\theta,\end{aligned}$$ where $a,p_1,p_2:F\times_MF\to F$ are the fiber-wise addition and the projections to the first and second components, respectively. In this setting the bundles under consideration become pullbacks of vector bundles over $M$, for example the symbol space ${\mathfrak{g}}(H)\subset H$ is now the pullback via $\pi$ of the vector bundle $H^\pi|_M\to M$, and the bundle maps $c$ and $\partial_\theta$ are now pullbacks of vector bundle maps over $M$. Another very useful fact is that in this case one can associate to $H$ (or rather to $\theta$) a canonical connection $D$ relative to the projection $F\to F/(H^\pi|_M)$, and therefore we can apply all the theory developed in chapter \[Relative connections\]. This will be fully treated in section \[linear Pfaffian bundles\]. \[cartandist\]Continuing with the description of jet bundles given in subsection \[jet-bundles\], recall that for any surjective submersion $\pi:R\to M$, and any integer $k\geq 1$, the $k$-jet bundle $\pi:J^kR\to M$ carries a canonical Pfaffian distribution $C_k$, known as the Cartan distribution. It is designed to detect holonomic sections in the sense that a section ${\alpha}:M\to J^kR$ is of the form ${\alpha}=j^ks$ for some $s\in \Gamma(R)$ if and only if ${\alpha}$ is a solution of $(J^kR,C_k):$ $$\begin{aligned} j^k:\Gamma(R)\overset{\simeq}{{\longrightarrow}}{\text{\rm Sol}}(J^kR,C_k).\end{aligned}$$ Recall the definition of $C_k$. As a set, $C_k$ is generated as a $C^{\infty}(J^kR)$-module by the image of the differential of holonomic sections of $\pi:J^kR\to M$, and therefore it is transversal to $\pi$. Its vertical part $C_k^{\pi}$ is the image of the inclusion $i:S^kT^*\otimes T^{\pi}R\to T^{\pi}J^kR$ given by the short exact sequence of vector bundles over $J^kR$ $$\begin{aligned} \label{sesj} 0{\longrightarrow}S^kT^*\otimes T^\pi R\overset{i}{{\longrightarrow}} TJ^kR\overset{dpr}{{\longrightarrow}} TJ^{k-1}R{\longrightarrow}0.\end{aligned}$$ This sequence also implies that $C_k$ is $\pi$-involutive, since its vertical part $C_k^{\pi}$ is the kernel of the differential of the surjective submersion $pr:J^kR\to J^{k-1}R$. From this we have that $$\begin{aligned} pr:(J^kR,C_k){\longrightarrow}M\end{aligned}$$ is a Pfaffian bundle. See [@Gold2] for the sequence \[sesj\]. In the dual picture we recover the so-called Cartan forms $$\begin{aligned} \theta^k\in\Omega^1(J^kR,T^{\pi}J^{k-1}R).\end{aligned}$$ Their main importance is that they detect holonomic sections: $\xi\in \Gamma(J^kR)$ is of the form $j^ks$ for some $s\in\Gamma(R)$ if and only if $$\xi^*\theta^k=0.$$ At the point $p=j^k_xs$, $\theta^k_p$ is given by the formula $$\begin{aligned} \theta^k_p(X):=dpr(X)-d_x(j^{k-1}s)\circ d\pi(X),\end{aligned}$$ where $X\in T_pJ^kR$. That $\theta^k_p(X)$ is indeed an element of $T^\pi J^{k-1}R$ follows from the fact that $\pi\circ pr=\pi$ and $\pi\circ j^{k-1}s=id_M$. 
As the kernel of $\theta^k$ is precisely the Cartan distribution $C_k\subset TJ^kR$ (see e.g [@Bocharov; @Krasil; @Olver]), $pr:(j^kR,\theta^k)\to M$ is a Pfaffian bundle.\ The following remark is for later use: the symbol map $\partial_k$ of $C_k$ actually takes values in $$\begin{aligned} T^*\otimes(S^{k-1}T^*\otimes T^\pi R)\subset T^*\otimes TJ^{k-1}R,\end{aligned}$$ where we are using the identification of the quotient $TJ^kR/C_k\simeq TJ^{k-1}R$ given by sequence \[sesj\]. Moreover, $$\begin{aligned} \partial_k:S^kT^*\otimes T^\pi R{\longrightarrow}T^*\otimes (S^{k-1}T^*\otimes T^\pi R)\end{aligned}$$ is the formal differentiation described in . There are two questions for a Pfaffian bundle $\pi:(R, H)\to M$ that are equivalent (modulo a topological condition): 1. \[A1\] When can one find a bundle $\tilde R$ over $M$ and an immersion $i: R\hookrightarrow J^1(\tilde R)$ such that $\theta_H$ is the pull-back via $i$ of the Cartan form $\theta^1$ on $J^1\tilde R$? 2. \[A2\] When is the symbol map $\partial_H: H^\pi\to {\mathrm{Hom}}(\pi^*TM, TR/H)$ injective? In the less interesting direction \[A1\] implies \[A2\]. One remarks that $\partial_H$ is the restriction of the differential of $i$ to $H^\pi$. For the converse, recall that $H^\pi$ is an involutive distribution on $R$. Take the leaf space of this foliation and call it $\tilde R$. It is here that the topological condition comes in. We require that this quotient is a manifold (so that, strictly speaking, \[A1\] is equivalent to \[A2\] under the assumption that the foliation $H^\pi$ on $R$ is simple). Our inclusion $i$ is the canonical map from $R$ to $J^1\tilde R$ sending $p\in R_x$ to $$\begin{aligned} [\sigma_p]:T_xM{\longrightarrow}T_{[p]}\tilde R\simeq T_pR/H^\pi_p\end{aligned}$$ where $\sigma_p:T_xM\to T_pR$ is any splitting of $d\pi$ with the property that its image lies inside $H$. Prolongations of Pfaffian bundles {#prolongations of Pfaffian bundles} --------------------------------- ### The partial prolongation Recall from remark \[transversal-properties\], that for a distribution $H\subset TR$ transversal to $\pi:R\to M$, the partial prolongation $J^1_HR\subset J^1R$ is given by $$\begin{aligned} \label{part-prol} J^1_HR=\{j^1_xs\mid d_xs(T_xM)\subset H_{s(x)}\}.\end{aligned}$$ In the dual picture, for a one form $\theta\in\Omega^1(M,E)$, the partial prolongation is $$\begin{aligned} J^1_\theta R=\{j^1_xs\mid s^*\theta_x=0\}\end{aligned}$$ Note that for $\theta=\theta_H$, $J^1_HR=J^1_\theta R.$ \[smothness\] Let $H\subset TR$ be $\pi$-transversal. The space $J^1_HR$ of partial integral elements is a smooth subbundle of $pr:J^1R\to R$, i.e. $J^1_HR$ is a smooth submanifold of $J^1R$ and the restriction $pr:J^1_HR\to R$ is a surjective submersion. $J^1_HR$ is the kernel of the smooth bundle map over $R$ given by $$\begin{aligned} ev:J^1R{\longrightarrow}T^*\otimes E,\quad j^1_xs\mapsto d_xs(\cdot)\;{\operatorname{mod}}H_{s(x)}.\end{aligned}$$ Using the exact sequence of vector bundles over $J^1R$ (see proposition 5.2 of [@Gold2]) $$\begin{aligned} 0{\longrightarrow}T^*\otimes T^\pi R{\longrightarrow}TJ^1R\overset{dpr}{{\longrightarrow}} TR{\longrightarrow}0,\end{aligned}$$ one has $$\begin{aligned} \ker dev\cap\ker dpr=T^*\otimes {\mathfrak{g}},\end{aligned}$$ which implies that $ev$ is of constant rank. On the other hand, the condition that $H$ is $\pi$-transversal ensures the existence of partial integral elements at any $p\in R$, and therefore $z(R)\subset ev(J^1R)$, where $z:R\to T^*\otimes E$ is the zero section. 
So, we are left in the situation of a constant-rank bundle map $ev:X\to Y$ between two fibered manifolds over $R$ with the property that $z(R)\subset ev(X)$. Hence we can apply proposition 2.1 of [@Gold2] to ensure that $\ker_zev$ is a fibered submanifold of $X\to R$, which in our case says that $pr:J^1_HR\to R$ is a smooth subbundle of $pr:J^1R\to R$. \[remark1\]From the above proof we have that for a distribution $H\subset TR$ $\pi$-transversal, the sequence of vector bundles over $J^1_HR$ $$\begin{aligned} 0{\longrightarrow}T^*\otimes{\mathfrak{g}}{\longrightarrow}TJ^1_HR\overset{dpr}{{\longrightarrow}} TR{\longrightarrow}0\end{aligned}$$ is exact, where ${\mathfrak{g}}=H\cap T^\pi R$. \[corollary1\] Let $H\subset TR$ be a $\pi$-transversal distribution. If $\dim M>0$ then $H$ is an Ehresmann connection, i.e. $$\begin{aligned} TR=H\oplus T^\pi R\end{aligned}$$ if and only if $pr:J^1_HR\to R$ is a bijection. If $TR=H\oplus T^\pi R$, then ${\mathfrak{g}}:=H\cap T^\pi R=0$. So, if $j^1_xs,j^1_xu\in J^1_HR$ are such that $u(x)=s(x)$ then $$\begin{aligned} d_xs-d_xu:T_xM{\longrightarrow}T_{s(x)}R\end{aligned}$$ has image inside ${\mathfrak{g}}_{s(x)}=0$. Therefore $j^1_xs=j^1_xu$, which proves that $pr$ is a bijection. Conversely, if $pr:J^1_HR\to R$ is a bijection then by the short exact sequence of remark \[remark1\] we have that $T^*\otimes{\mathfrak{g}}=0$. This happens only if ${\mathfrak{g}}=0$, or in other words, if $H$ is an Ehresmann connection. For a regular one form $\theta\in\Omega^1(R,E')$ we have analogous versions of lemma \[smothness\], remark \[remark1\] and corollary \[corollary1\], using $H=\ker\theta$. Let $H\subset TR$ be a $\pi$-transversal distribution and let $\theta\in\Omega^1(R,E')$ be a regular form. - [**The partial prolongation of $(R,H)$**]{} is the Pfaffian bundle $J^1_HR$ endowed with the distribution $$\begin{aligned} H^{(1)}=C_1\cap TJ^1_HR,\end{aligned}$$ where $C_1\subset TJ^1R$ is the Cartan distribution. - [**The partial prolongation of $(R,\theta)$**]{} is the Pfaffian bundle $J^1_\theta R$ endowed with the Pfaffian form $$\begin{aligned} \theta^{(1)}=\theta^1|_{ TJ^1_\theta R},\end{aligned}$$ where $\theta^1\in\Omega^1(J^1R,pr^*T^\pi R)$ is the Cartan form. From example \[cartandist\] we have the following proposition. Let $H\subset TR$ be a $\pi$-transversal distribution, and let $\theta:=\theta_H$ be its associated regular one form. Then $$\begin{aligned} J^1_HR=J^1_\theta R\qquad\text{and}\qquad H^{(1)}=\ker\theta^{(1)}\end{aligned}$$ where $H^{(1)}:=C_1\cap TJ^1_HR$ and $\theta^{(1)}=\theta^1|_{J^1_\theta R}$. One of the main properties of the partial prolongation is: \[proposition1\] Let $H\subset TR$ be a $\pi$-transversal distribution and let $\theta\in\Omega^1(R,E')$ be a regular form. The partial prolongations $(J^1_HR,H^{(1)})$ and $(J^1_\theta R,\theta^{(1)})$ are indeed Pfaffian bundles. Moreover the map $$\begin{aligned} pr:{\text{\rm Sol}}(J^1_HR, H^{(1)})\overset{}{{\longrightarrow}}{\text{\rm Sol}}(R,H)\end{aligned}$$ is a bijection with inverse $j^1:{\text{\rm Sol}}(R,H)\to {\text{\rm Sol}}(J^1_HR,H^{(1)}).$ Analogously, $$\begin{aligned} \label{ese'} pr:{\text{\rm Sol}}(J^1_\theta R, \theta^{(1)})\overset{}{{\longrightarrow}}{\text{\rm Sol}}(R,\theta)\end{aligned}$$ is a bijection with inverse $j^1:{\text{\rm Sol}}(R,\theta)\to {\text{\rm Sol}}(J^1_\theta R,\theta^{(1)})$. This result is inspired by similar results of [@Gold3] in the setting of partial differential equations. 
We will prove that $(J^1_\theta R,\theta^{(1)})$ is a Pfaffian bundle. The case of Pfaffian distributions follows from the case of Pfaffian forms by taking $\theta=\theta_H$. Let’s first show that for any $\sigma\in J^1_\theta R$, $$\begin{aligned} \theta^{(1)}:T_\sigma J^1_{\theta}R{\longrightarrow}T_{pr(\sigma)}^\pi R\end{aligned}$$ is surjective. Note that $\theta^{(1)}$ restricted to $T^\pi_\sigma J^1_\theta R$ is equal to the projection $dpr:T^\pi_\sigma J^1_\theta R\to T_{pr(\sigma)}^\pi R$. By the analogous version of remark \[remark1\] we have that $dpr$ restricted to $T^\pi J^1_\theta R$ is surjective. This also shows that $$\begin{aligned} \label{dimension}\begin{split} \dim H^{(1)}_\sigma&=\dim T_\sigma J^1_\theta R-\dim T^\pi_{pr(\sigma)}R\\ &=\dim M+\dim T^\pi_\sigma J^1_\theta R-\dim T^\pi_{pr(\sigma)}R\\ &=\dim M+\dim(T^*_{\pi(\sigma)}\otimes {\mathfrak{g}}_{pr(\sigma)}), \end{split}\end{aligned}$$ where ${\mathfrak{g}}=\ker\theta\cap T^\pi R$, and that $$\begin{aligned} \label{dimension2} \ker\theta^{(1)}\cap T^\pi J^1_\theta R=T^*\otimes {\mathfrak{g}}\end{aligned}$$ as vector bundles over $J^1_\theta R$. Using \[dimension\] and \[dimension2\] we can show that $\theta^{(1)}$ is regular with respect to $\pi_1:J^1_\theta R\to M$ as follows: let $H^{(1)}:=\ker\theta^{(1)}$, then $$\begin{aligned} \dim(H_\sigma^{(1)}+T^\pi_\sigma J^1_\theta R)&=&\dim H_\sigma^{(1)}+\dim T^\pi_\sigma J^1_\theta R\\&&-\dim H_\sigma^{(1)}\cap T^\pi_\sigma J^1_\theta R\\&=&\dim M+\dim(T^*_{\pi(\sigma)}\otimes {\mathfrak{g}}_{pr(\sigma)})+\dim T^\pi_\sigma J^1_\theta R\\&&-\dim(T^*_{\pi(\sigma)}\otimes {\mathfrak{g}}_{pr(\sigma)})\\&=&\dim M+\dim T^\pi_\sigma J^1_\theta R\\&=&\dim T_\sigma J^1_\theta R.\end{aligned}$$ That $\theta^{(1)}$ is $\pi$-involutive is a consequence of the fact that the Cartan form $\theta^1$ is $\pi$-involutive. As for the second part, let ${\alpha}:M\to J^1_HR$ be a solution of $(J^1_HR, H^{(1)})$, then for all $x\in M$, $d{\alpha}(T_xM)\subset C_1$, which implies that ${\alpha}$ is of the form $j^1s$ for $s=pr({\alpha})$. On the other hand, since ${\alpha}(x)=j^1_xs\in J^1_HR$, then $ds(T_xM)\subset H_{s(x)}$. This means that $s=pr({\alpha})$ is a solution of $(R,H)$. It is clear that if $s$ is a solution of $(R,H)$, then the holonomic section $j^1s:M\to J^1R$ has its image in $J^1_HR$ and it is a solution of $(J^1_HR, H^{(1)}).$ The part with Pfaffian forms is analogous to this case. For a general one form $\theta\in\Omega^1(R,E')$, the map \[ese'\] still makes sense whenever $J^1_\theta R$ is smooth. In fact, the map \[ese'\] is a bijection with inverse given by $j^1.$ Let $H\subset TR$ be a $\pi$-transversal distribution. A key property of $J^1_HR$ is that the differential of the projection $pr:J^1_HR\to R$ is such that $$\begin{aligned} \theta\circ dpr=\theta\circ\theta^{(1)}.\end{aligned}$$ Indeed, for $X\in T^\pi J^1_HR$ this is trivially true since $\theta^{(1)}(X)=dpr(X)$. For $X\in H^{(1)}_{j^1_xs}$ (i.e. $\theta^{(1)}(X)=0$), with $j^1_xs\in J^1_HR$, one has that $$\begin{aligned} dpr(X)-d_xs(d\pi(X))=0,\end{aligned}$$ and therefore $$\begin{aligned} \theta(dpr(X))=\theta(d_xs(d\pi(X)))=0.\end{aligned}$$ ### The classical prolongation The classical prolongation of $H\subset TR$ is a Pfaffian bundle (under some smoothness conditions) sitting above $R$, and may be thought of as the complete infinitesimal data of solutions of $(R,H)$. See [@Gold2] for prolongations of PDE’s. \[primer\] Let $H\subset TR$ be a $\pi$-transversal distribution and let $\theta\in\Omega^1(R,E')$ be a regular one form. 
- The [**1-curvature map $c_1:=c_1(H)$**]{} is the bundle map over $R$ $$\begin{aligned} c_1:J^1_HR&{\longrightarrow}\wedge^2T^*\otimes E,\quad j^1_xs\mapsto c(d_xs(\cdot),d_xs(\cdot)),\end{aligned}$$ where $c$ is the curvature map of $H$ (definition \[curvature-map\]). The [**classical prolongation space with respect to $H$**]{}, denoted by $$\begin{aligned} P_H(R)\subset J^1_HR\subset J^1R,\end{aligned}$$ is set to be $ \ker c_1$. We say that $P_H(R)$ is [**smooth**]{} if it is a smooth submanifold of $J^1R$, and that it is [**smoothly defined**]{} if, moreover, $pr:P_H(R)\to R$ is a surjective submersion. - The [**1-curvature map $c_1:=c_1(\theta)$**]{} is the bundle map over $R$ $$\begin{aligned} c_1:J^1_\theta R{\longrightarrow}\wedge^2T^*\otimes E,\quad j^1_xs\mapsto \delta\theta(d_xs(\cdot),d_xs(\cdot)),\end{aligned}$$ where $\delta\theta$ is the partial differential of $\theta$ (definition \[partial-diff\]). The [**classical prolongation space with respect to $\theta$**]{}, denoted by $$\begin{aligned} P_\theta(R)\subset J^1_\theta R,\end{aligned}$$ is set to be $\ker c_1.$ For a one form $\theta\in\Omega^1(R,E')$, $c_1(\theta)$ in definition \[primer\] still makes sense even though the objects under consideration are not necessarily smooth bundles. \[compare\]Let $H\subset TR$ be a Pfaffian distribution and let $\theta:=\theta_H$ be its associated Pfaffian form. Then the 1-curvature map $c_1(H)$ of $H$ and the 1-curvature map $c_1(\theta)$ of $\theta$ coincide. Hence, $$\begin{aligned} P_H(R)=P_\theta(R).\end{aligned}$$ Moreover, if their classical prolongations are smooth, then $$\begin{aligned} H^{(1)}=\ker\theta^{(1)},\end{aligned}$$ where $H^{(1)}:=C_1\cap TP_H(R)$ and $\theta^{(1)}=\theta^1|_{P_\theta(R)}.$ From lemma \[lema1\] we have that $J^1_\theta R=J^1_HR$ and $c(H)=\delta\theta$. Therefore $c_1(\theta)=c_1(H)$, which implies that $P_H(R)=P_\theta(R)$. That $H^{(1)}=\ker\theta^{(1)}$ is a consequence of remark \[remark1\]. \[propositionA\] For a Pfaffian distribution $H\subset TR$ the following are equivalent: 1. $H$ is involutive, 2. $P_H(R)=J^1_HR$ and $\partial_H$ vanishes. Assume that $P_H(R)=J^1_HR$, or equivalently $c_1=0$, and that $\partial_H$ vanishes. Now, for $U,V\in H_p$, let $s\in \Gamma(R)$ be such that $s(\pi(p))=p$ and $j^1_xs\in J^1_HR$, where $x=\pi(p)$. Using $d_xs$ we write $$\begin{aligned} U=d_xs(d\pi(U))+u,\qquad V=d_xs(d\pi(V))+v,\end{aligned}$$ where $u,v\in {\mathfrak{g}}$ are defined by $u=U-d_xs(d\pi(U))$ and $v=V-d_xs(d\pi(V))$. Hence $$\begin{aligned} \begin{split} c(U,V)=&c(d_xs(d\pi(U)),d_xs(d\pi(V)))+c(d_xs(d\pi(U)),v) \\&+c(u,d_xs(d\pi(V)))+c(u,v) \\=&c_1(j^1_xs)(d\pi(U),d\pi(V))-\partial_H(v)(d\pi(U))+\partial_H(u)(d\pi(V))\\=&0, \end{split}\end{aligned}$$ where we use the $\pi$-involutivity of $H$ passing from the second to the third line. This shows that $c$ vanishes, and therefore $H$ is involutive. Conversely, if $H$ is involutive then the map $c$ is identically zero, and therefore $c_1=0$ and $\partial_H=0$ by definition. As for Ehresmann connections $H\subset TR$, one gets the following corollary. \[corollaryB\] For any Ehresmann connection $H$, 1. $pr:P_H(R)\to R$ is injective, and 2. $H$ is involutive if and only if $pr:P_H(R)\to R$ is a bijection. From corollary \[corollary1\] one has that, since $H$ is an Ehresmann connection, $pr:J^1_HR\to R$ is a bijection. Since $P_H(R)$ is a subset of $J^1_HR$ then the injective map $pr:P_H(R)\to R$ is a bijection if and only if $J^1_HR=P_H(R)$. 
Now we apply the previous proposition in our case to have that $H$ is involutive if and only if $J^1_HR=P_H(R)$ as $\partial_H$ is zero (${\mathfrak{g}}=0$), i.e. if and only if $pr:P_H(R)\to R$ is a bijection. Regarding smoothness of the classical prolongation space $P_H(R)$ we have the following two results: \[model\]Let $H\subset TR$ be a Pfaffian distribution. If $P_H(R)\subset J^1R$ is smoothly defined, then there is a commutative diagram over $P_H(R)$ with short exact rows $$\begin{aligned} \label{use} \xymatrix{ 0 \ar[r] & pr^*{\mathfrak{g}}^{(1)}(R,H) \ar@{^{(}->}[d] \ar[r] & T^\pi P_H(R) \ar@{^{(}->}[d] \ar[r]^{dpr} & T^\pi R \ar[r] \ar[d]^{=} & 0\\ 0 \ar[r] & T^*\otimes T^\pi R \ar[r] & T^\pi J^1R \ar[r]^{dpr} & T^\pi R \ar[r] & 0. }\end{aligned}$$ In particular, $$\begin{aligned} pr^*\mathfrak{g}^{(1)}(R,H)\simeq T^\pi P_H(R)\cap(T^*\otimes T^\pi R).\end{aligned}$$ First of all, for $\sigma=j^1_xs\in J^1R$, the inclusion $$\begin{aligned} T^*_{x}\otimes T_{s(x)}^\pi R\hookrightarrow T_\sigma^\pi J^1R\end{aligned}$$ is given by $$\begin{aligned} \phi:T_{x}M\to T_{s(x)}^\pi R\mapsto \frac{d}{dt}\left(d_xs+t\phi:T_{x}M\to T_{s(x)}R\right)_{|t=0}\in T_\sigma^\pi J^1R.\end{aligned}$$ So, if $j^1_xs\in P_H(R)\subset J^1R$ and $\phi\in \mathfrak{g}^{(1)}_{s(x)}$, i.e. $$\begin{aligned} d_xs(T_{x}M)\subset H_{s(x)}\quad\text{ and }\quad c_1(j^1_xs)=0,\end{aligned}$$ and for every $X,Y\in T_xM$ $$\begin{aligned} \phi(T_{x}M)\subset {\mathfrak{g}}_{s(x)}\quad\text{ and }\quad \partial_H(\phi(X))(Y)-\partial_H(\phi(Y))(X)=0,\end{aligned}$$ then one has that, for $t\in\mathbb{R}$, $d_xs+t\phi:T_{x}M\to T_{s(x)}R$ belongs to $P_H(R)$: indeed $(d_xs+t\phi)(T_{x}M)\subset H_{s(x)}$, i.e. $d_xs+t\phi\in J^1_HR$, and $$\begin{aligned} c((d_xs+t\phi)(X),(d_xs+t\phi)(Y))&=&c(d_xs(X),d_xs(Y))+t^2c(\phi(X),\phi(Y))\\ && +tc(d_xs(X),\phi(Y))+tc(\phi(X),d_xs(Y))\\ &=&c_1(j_x^1s)(X,Y)+tc(d_xs(X),\phi(Y))\\ && +tc(\phi(X),d_xs(Y))\\ &=&t\left(-\partial_H(\phi(Y))(X)+\partial_H(\phi(X))(Y)\right)\\ &=&0,\end{aligned}$$ where in the second equality we used that $c(\phi(X),\phi(Y))=0$ since ${\mathfrak{g}}$ is involutive, and in the third equality we used that $c_1(j^1_xs)=0$ and that $c(d_xs(X),\phi(Y))=-\partial_H(\phi(Y))(X)$ by definition of $\partial_H$; the last expression vanishes because $\phi\in{\mathfrak{g}}^{(1)}_{s(x)}$. Hence $pr^*\mathfrak{g}^{(1)}\subset (T^*M\otimes T^\pi R)|_{P_H(R)}\cap T^\pi(P_H(R))$. To show the other inclusion, note that since $pr:P_H(R)\to R$ is a submersion then $$\begin{aligned} (T_x^*M\otimes T_{s(x)}^\pi R)\cap T_\sigma^\pi(P_H(R))=\ker(d_\sigma(pr_{|P_H(R)}))=T_\sigma pr_{|P_H(R)}^{-1}(s(x))\end{aligned}$$ So, an element of $T_\sigma pr_{|P_H(R)}^{-1}(s(x))$ can be represented as the derivative of a path $$\begin{aligned} \Phi(t):T_{x}M\to T_{s(x)}R\end{aligned}$$ with the properties that $\Phi(0)=d_xs$ and that, for all $t$ in an open interval $I$ around $0$, $$\begin{aligned} \Phi(t)(T_{x}M)\subset H_{s(x)}\quad\text{ and }\quad c_1(\Phi(t))=c(\Phi(t)(\cdot),\Phi(t)(\cdot))=0.\end{aligned}$$ This implies that the image of $\Phi(t)-\Phi(0)$ lies in ${\mathfrak{g}}_{s(x)}$, and also that $$\begin{aligned} \phi:=\dot\Phi(0):T_{x}M\to {\mathfrak{g}}_{s(x)}\end{aligned}$$ satisfies the equation $$\begin{aligned} \partial_H(\phi(X))(Y)-\partial_H(\phi(Y))(X)=0,\end{aligned}$$ for any $X,Y\in T_xM$, i.e. $\phi\in{\mathfrak{g}}^{(1)}$. 
This last formula follows since $c$ is a bilinear map and therefore $$\begin{aligned} 0&=&\frac{d}{dt}c(\Phi(t)(X),\Phi(t)(Y))_{|t=0}\\ &=&c(\frac{d}{dt}\Phi(t)(X)_{|t=0},\Phi(0)(Y))+c(\Phi(0)(X),\frac{d}{dt}\Phi(t)(Y)_{|t=0})\\ &=&\partial_H(\phi(X))(Y)-\partial_H(\phi(Y))(X).\end{aligned}$$ But even more interestingly: \[integrable1\] Let $H\subset TR$ be a Pfaffian distribution. Then the following statements are equivalent: 1. ${\mathfrak{g}}^{(1)}(R,H)$ is a smooth vector bundle over $R$ and the map $pr:P_H(R)\to R$ is surjective, 2. $P_H(R)$ is smoothly defined. We will prove that 1 implies 2; the converse implication follows from proposition \[model\], since in that case $pr:P_H(R)\to R$ is a surjective submersion and $pr^*{\mathfrak{g}}^{(1)}(R,H)\simeq T^\pi P_H(R)\cap(T^*\otimes T^\pi R)$ is a smooth vector bundle. From lemma \[smothness\] one has that $pr:J^1_HR\to R$ is a fibered submanifold of $pr:J^1R\to R$. Moreover, it is easy to check that the short exact sequence \[sesj\] for $k=1$ restricts to the short exact sequence over $J^1_HR$ $$\begin{aligned} 0{\longrightarrow}T^*\otimes {\mathfrak{g}}{\longrightarrow}TJ^1_HR\overset{dpr}{{\longrightarrow}} TR{\longrightarrow}0.\end{aligned}$$ Now, $P_H(R)\subset J^1_HR$ is the kernel of the fiber bundle map over $R$, $$\begin{aligned} c_1:J^1_HR {\longrightarrow}\wedge^2T^*\otimes E, \quad \sigma\mapsto c(\sigma(\cdot),\sigma(\cdot)).\end{aligned}$$ As in the proof of proposition \[model\] one can prove that $$\begin{aligned} \ker d_\sigma c_1\cap\ker d_\sigma pr={\mathfrak{g}}^{(1)}_{pr(\sigma)}\end{aligned}$$ for $\sigma\in \ker c_1$, and therefore $c_1$ in this case is of constant rank as ${\mathfrak{g}}^{(1)}$ is of constant rank by assumption. On the other hand, the surjectivity of $pr:P_H(R)\to R$ implies that $z(R)\subset c_1(J^1_HR)$, where $z:R\to \wedge^2T^*\otimes E$ is the zero-section. We are left again in the situation of a constant-rank bundle map $c_1:X\to Y$ between two fibered manifolds over $R$ with the property that $z(R)\subset c_1(X)$. We apply proposition 2.1 of [@Gold2] to ensure that $\ker_zc_1$ is a fibered submanifold of $X\to R$, which in our case says that $pr:P_H(R)\to R$ is a smooth subbundle of $pr:J^1_HR\to R$. By proposition \[compare\], we have analogous versions of propositions \[model\] and \[integrable1\] for the classical prolongation $P_\theta(R)$ of a Pfaffian form $\theta\in\Omega^1(R,E')$. \[prolongationdistributions\]Let $H\subset TR$ be a distribution and let $\theta\in\Omega^1(R,E')$ be a one form, and assume that $P_H(R)$ and $P_\theta(R)$ are smoothly defined. - [**The classical prolongation of $(R,H)$**]{} is $P_H(R)$ endowed with the Pfaffian distribution $$\begin{aligned} H^{(1)}=C_1\cap TP_H(R),\end{aligned}$$ where $C_1\subset TJ^1R$ is the Cartan distribution. - [**The classical prolongation of $(R,\theta)$**]{} is $P_\theta(R)$ endowed with the Pfaffian form $$\begin{aligned} \theta^{(1)}:=\theta^1|_{TP_\theta(R)}\in\Omega^1(P_\theta(R),T^{\pi}R),\end{aligned}$$ where $\theta^1\in\Omega^1(J^1R,T^\pi R)$ is the Cartan form. \[bundle-prop\] Let $(R,H)$ be a Pfaffian bundle and suppose that $P_H(R)$ is smoothly defined. Then, $$\begin{aligned} pr:{\text{\rm Sol}}(P_H(R),H^{(1)}){\longrightarrow}{\text{\rm Sol}}(R,H) \end{aligned}$$ is a bijection with inverse $j^1:{\text{\rm Sol}}(R,H)\to {\text{\rm Sol}}(P_H(R),H^{(1)})$. By example \[cartandist\] and the definition of the classical prolongation, we have that if $\xi\in{\text{\rm Sol}}(P_H(R),H^{(1)})$, then $\xi=j^1s$ for some $s\in\Gamma(R)$. Since for any $x\in M$, $j^1_xs\in P_H(R)$, then it follows that $s\in {\text{\rm Sol}}(R,H)$. 
On the other hand, if $s\in{\text{\rm Sol}}(R,H)$, then it is easy to show that $j^1s\in{\text{\rm Sol}}(P_H(R),H^{(1)})$. \[lemaese\]Let $H\subset TR$ be a distribution and let $\theta\in\Omega^1(R,E')$ be a one form. If $P_H(R)$ is smoothly defined, then $$\begin{aligned} \pi:(P_H(R),H^{(1)}){\longrightarrow}M\end{aligned}$$ is indeed a Pfaffian bundle. The same holds for $(P_\theta(R),\theta^{(1)})$ whenever $P_\theta(R)$ is smoothly defined. See [@Gold3] for similar results in the setting of partial differential equations. By proposition \[compare\] it is enough to prove the result for a $\pi$-transversal distribution $H\subset TR$. Let’s show that $H^{(1)}\subset TP_H(R)$ has constant rank equal to $\dim M+{\text{\rm rk}\,}K^{(1)}$, where $K^{(1)}$ is the vector bundle over $P_H(R)$ given by the kernel of the point-wise surjective map $$\begin{aligned} \label{uses} dpr:T^\pi P_H(R){\longrightarrow}T^\pi R.\end{aligned}$$ Let $\theta^1\in \Omega^1(J^1R;T^\pi R)$ be the Cartan form. Since $H^{(1)}=\ker\theta^1|_{TP_H(R)}$, it’s enough to show that $\theta^{(1)}:T_\sigma P_H(R)\to T_{pr(\sigma)}^\pi R$ is surjective at every point $\sigma\in P_H(R)$. For this, notice that the restriction $\theta^{(1)}:T_ \sigma^\pi P_H(R)\to T^\pi_{pr(\sigma)}R$ is equal to $dpr:T_\sigma^\pi P_H(R)\to T_{pr(\sigma)}^\pi R$ on the one hand, and on the other hand $dpr:T_\sigma^\pi P_H(R)\to T_{pr(\sigma)}^\pi R$ is surjective since $pr:P_H(R)\to R$ is a surjective submersion and the following diagram commutes: $$\begin{aligned} \xymatrix{ P_H(R) \ar[rr]^{pr} \ar[dr]_{\pi} & & R \ar[dl]^\pi \\ & M. & }\end{aligned}$$ This also shows that $$\begin{aligned} \dim H^{(1)}_\sigma&=&\dim T_\sigma P_H(R)-\dim T^\pi_{pr(\sigma)}R\\ &=&\dim M+\dim T_\sigma^\pi P_H(R)-\dim T_{pr(\sigma)}^\pi R\\ &=&\dim M+\dim K^{(1)}_\sigma,\end{aligned}$$ where in the third equality we used the surjectivity of \[uses\]. To show that $H^{(1)}$ is transversal to the fibers of $\pi:P_H(R)\to M$ we see that $$\begin{aligned} \dim(H^{(1)}_\sigma+\ker d_\sigma(\pi|_{P_H(R)}))&=&\dim H^{(1)}_\sigma+\dim\ker d_\sigma(\pi|_{P_H(R)})\\&& -\dim(H^{(1)}_\sigma\cap\ker d_\sigma(\pi|_{P_H(R)}))\\ &=&\dim M+\dim K^{(1)}_\sigma+\dim\ker d_\sigma(\pi|_{P_H(R)})\\ &&-\dim K^{(1)}_\sigma\\ &=&\dim M+\dim T_\sigma^\pi P_H(R)\\ &=&\dim T_\sigma P_H(R),\end{aligned}$$ where in the second equality we used that $$\begin{aligned} \begin{split} (H^{(1)})^{\pi}&=H^{(1)}\cap\ker d(\pi|_{P_H(R)})=(C_1)^\pi\cap T^\pi P_H(R)\\&=(T^*\otimes T^\pi R)|_{P_H(R)}\cap T^\pi P_H(R)=K^{(1)}\end{split}\end{aligned}$$ again by the surjectivity of \[uses\]. Now as $(H^{(1)})^{\pi}$ is equal to $(C_1)^\pi\cap T^\pi P_H(R)$, and $(C_1)^\pi$ and $T^\pi P_H(R)$ are both involutive, this implies that $(H^{(1)})^\pi$ is involutive. Therefore $\pi:(P_H(R),H^{(1)})\to M$ is a Pfaffian bundle. \[key-properties\]Two remarkable properties that the pair $(H,H^{(1)})$ satisfies are the following: 1. The differential $dpr:TP_H(R)\to TR$ is such that $$\begin{aligned} dpr(H^{(1)})\subset H.\end{aligned}$$ 2. For any $X,Y\in H^{(1)}_\sigma\subset T_\sigma P_H(R)$, $$\begin{aligned} c_H(dpr(X),dpr(Y))=0,\end{aligned}$$ where $c_H$ is the Lie bracket modulo $H$. Indeed, the first property was already shown for the partial prolongation $(J^1_HR,H^{(1)})$. For the second property, take $X,Y\in H^{(1)}_\sigma$, i.e. 
$$\begin{aligned} dpr(X)=\sigma(d\pi (X))\quad\text{and}\quad dpr(Y)=\sigma(d\pi(Y)),\end{aligned}$$ and $$\begin{aligned} c_H(dpr(X),dpr(Y))=c_H(\sigma(d\pi(X)),\sigma(d\pi(Y)))=0.\end{aligned}$$ Paraphrasing the above properties in the dual picture of a point-wise surjective form $\theta\in \Omega^1(R,E')$, one finds that the classical prolongation $(P_\theta(R),\theta^{(1)})$ satisfies the following: 1. The following diagram commutes: $$\begin{aligned} \xymatrix{ TP_\theta(R) \ar[r]^{dpr} \ar[d]_{\theta^{(1)}} & TR \ar[d]^{\theta} \\ T^\pi R \ar[r]^{\theta} & E'. }\end{aligned}$$ 2. For any $X,Y\in\ker\theta^{(1)}_\sigma$, $$\begin{aligned} \delta\theta(dpr(X),dpr(Y))=0.\end{aligned}$$ The previous properties can serve as an inspiration for introducing the notions of morphism and abstract prolongations of Pfaffian bundles. We will do this in the case of Pfaffian groupoids (see chapter \[Pfaffian groupoids\]). For instance, the higher jet bundles $J^kR$ of a surjective submersion $\pi:R\to M$, endowed with the Cartan distributions $C_k\subset TJ^kR$ or rather with the Cartan forms $\theta^k\in\Omega^1(J^kR,T^\pi J^{k-1}R)$ (see example \[cartandist\]), are “compatible under prolongations”. More precisely, \[prol-jet\] For $k\geq 1$ a natural number, $$\begin{aligned} P_{\theta^k}(J^kR)=P_{C_k}(J^kR)=J^{k+1}R\subset J^1(J^kR)\end{aligned}$$ and $$\begin{aligned} C_k^{(1)}=C_{k+1},\qquad (\theta^{k})^{(1)}=\theta^{k+1}.\end{aligned}$$ That $P_{\theta^k}(J^kR)=J^{k+1}R\subset J^1(J^kR)$ is a well-known fact (see e.g. [@Bocharov; @Krasil; @Olver]). That $(\theta^k)^{(1)}=\theta^{k+1}$ is true since the restriction of the Cartan form $\theta^1\in\Omega^1(J^1(J^kR), T^{\pi}J^kR)$ to $J^{k+1}R$ coincides precisely with $\theta^{k+1}$. Formal integrability of Pfaffian bundles ---------------------------------------- Throughout this subsection $\pi:(R,H)\to M$ is a Pfaffian bundle with symbol space given by ${\mathfrak{g}}$, symbol map denoted by $\partial_H:{\mathfrak{g}}\to T^*\otimes E$, and curvature map denoted by $c:H\times H\to E$, where $E:=TR/H$. In the rest of this section the equivalent languages of Pfaffian distributions and Pfaffian forms will be used interchangeably according to which suits our purposes better. The reason to consider formally integrable Pfaffian bundles is that in some cases (e.g. for analytic Pfaffian bundles) they have local solutions at any point of the base manifold. This will be discussed later on in this subsection. ### The classical prolongations of Pfaffian bundles {#sec:class-prol-pfaff} We now define the classical prolongations of a Pfaffian bundle $\pi:(R,H)\to M$ inductively: \[higher-prol\] Let $(R,H)$ be a Pfaffian bundle. We say that the [**classical $k$-prolongation space $P^k_H(R)$**]{} is [**smooth**]{} if 1. $(P_H(R), H^{(1)}),\dots,(P_H^{k-1}(R),H^{(k-1)})$ are smoothly defined, and 2. the classical prolongation space of $(P_H^{k-1}(R),H^{(k-1)})$ $$\begin{aligned} P^k_H(R):=P_{H^{(k-1)}}(P_H^{k-1}(R))\end{aligned}$$ is smooth. In this case, we define the [**$k$-prolongation of $H$**]{}: $$\begin{aligned} H^{(k)}:=(H^{(k-1)})^{(1)}\subset TP^k_H(R).\end{aligned}$$ We say that the classical prolongation space $P_H^k(R)$ is [**smoothly defined**]{} if, moreover, $$\begin{aligned} pr: P^k_H(R)\to P_H^{k-1}(R)\end{aligned}$$ is a surjective submersion. 
In this case, $$\begin{aligned} \pi:(P_H^k(R),H^{(k)}){\longrightarrow}M\end{aligned}$$ (a Pfaffian bundle: see proposition \[compare\]) is called the [**classical $k$-prolongation of $(R,H)$.**]{} \[bundle-prop2\] Let $(R,H)$ be a Pfaffian bundle and suppose that $P^k_H(R)$ is smoothly defined. Then, $$\begin{aligned} pr^k_0:{\text{\rm Sol}}(P^k_H(R),H^{(k)}){\longrightarrow}{\text{\rm Sol}}(R,H) \end{aligned}$$ is a bijection with inverse $j^k:{\text{\rm Sol}}(R,H)\to {\text{\rm Sol}}(P^k_H(R),H^{(k)})$. Apply proposition \[bundle-prop\] each time you prolong. For now let us study the smoothness of the prolongations spaces as it is one of the conditions on definition \[fi\]. The following proposition allows us to define higher classical prolongations of $(R,H)$ with no smoothness assumptions on the preceding prolongation. \[jet-prol\] Let $(R,H)$ be a Pfaffian bundle and let $k_0$ be an integer. If the classical $k_0$-prolongation space $P^{k_0}_H(R)$ is smoothly defined, then for any $0\leq k\leq k_0+1$, $$\begin{aligned} \label{yay} P^k_H(R)=J^{k-1}(P_H(R))\cap J^kR\end{aligned}$$ and $H^{(k)}$ is the intersection of the Cartan distribution $C_k\subset TJ^kR$ with $TP^k_H(R)$, the tangent bundle of $P^k_H(R)$. For $k=2$, we have that $P^2_H(R)$ is by definition the set of jets $j^1_xs\in J^1(P_H(R))\subset J^1(J^1(R))$ such that $$\begin{aligned} d_xs(T_xM)\subset H^{(1)}\qquad{\text{and}}\qquad c_1(H^{(1)})(j^1_xs)=0.\end{aligned}$$ As $H^{(1)}\subset C_1$, then $j^1_xs\in P^2_H(R)$ if and only if $j^1_xs\in J^1(P_H(R))$ and $$\begin{aligned} \label{312} d_xs(T_xM)\subset C_1\qquad{\text{and}}\qquad c_1(C_1)(j^1_xs)=0.\end{aligned}$$ By proposition \[prol-jet\], conditions mean that $j^1_xs\in J^2R$. In other words, $$\begin{aligned} P^2_H(R)=J^1(P_H(R))\cap J^2R\end{aligned}$$ The reader may verify with an inductive argument that the equality is valid for $k\geq 2$. Now, the Cartan form $\theta^1\in \Omega^1(J^1(J^{k-1}R), T^\pi J^{k-1}R)$ associated to the fiber bundle $\pi:J^{k-1}R\to M$, restricts to the sub-bundle $J^{k}R\subset J^1(J^{k-1}R)$ to the Cartan form $\theta^k\in\Omega^1(J^kR,T^\pi J^{k-1}R)$, and therefore the associated Pfaffian form $\theta^{(k)}$ of $P^k_H(R)$ is $$\begin{aligned} \theta^1|_{J^1(P^{k-1}_H(R))\cap J^kR}=\theta^k|_{J^1(P^{k-1}_H(R))\cap J^kR}=\theta^k|_{P^k_H(R)}.\end{aligned}$$ Since $H^{(k)}$ is the kernel of $\theta^{(k)}$, by the above equality it follows that $H^{(k)}=C_k\cap TP^k_H(R).$ For what follows let’s keep in mind the exact sequence of vector bundles over $J^kR$ $$\begin{aligned} \label{sesj2} 0{\longrightarrow}S^kT^*\otimes T^\pi R\overset{i}{{\longrightarrow}} T^\pi J^kR\overset{dpr}{{\longrightarrow}} T^\pi J^{k-1}R{\longrightarrow}0.\end{aligned}$$ When working with the first jet $J^1R$ we may consider the elements of $J^1R$ as pairs $(p,\sigma)$, where $p\in R$ and $\sigma:T_{\pi(p)}M\to T_pR$ is a splitting of $d\pi:T_pR\to T_{\pi(p)}M$.\ The next proposition puts together proposition \[integrable1\] and proposition \[compare\], and explains the compatibility between higher prolongations. More precisely, \[integrable\] The following statements are equivalent: 1. ${\mathfrak{g}}^{(1)}(R,H)$ is a smooth vector bundle over $R$ and the map $pr:P_H(R)\to R$ is surjective, 2. $P_H(R)$ is smoothly defined. 
Moreover, if either of the above two conditions holds, then $$\begin{aligned} \pi:(P_H(R),H^{(1)}){\longrightarrow}M\end{aligned}$$ is a Pfaffian bundle, and the first classical prolongation space of $(P_H(R),H^{(1)})$ is equal to the classical $2$-prolongation space of $(R,H)$. See again [@Gold3] for similar results in the setting of partial differential equations. It remains to prove that the first prolongation of $(P_H(R),H^{(1)})$ is equal to the classical $2$-prolongation of $(R,H)$. One has that the first classical prolongation of $\pi:(P_H(R),H^{(1)})\to M$ is the subset of $J^1(P_H(R))\subset J^1(J^1R)$, given by splittings $\xi:T_{\pi(\sigma)}M\to T_\sigma P_H(R)\subset T_\sigma J^1R$ of $d\pi:T_\sigma P_H(R)\to T_{\pi(\sigma)}M$ such that $$\begin{aligned} \xi(T_{\pi(\sigma)}M)\subset H^{(1)}\subset C_1\quad\quad\text{and }\quad\quad c(\xi(X),\xi(Y))=0\end{aligned}$$ for all $X,Y\in T_{\pi^1(\sigma)}M$, where $c:H^{(1)}\times H^{(1)}\to TP_H(R)/H^{(1)}\simeq TJ^1R/C_1$ is the curvature of $H^{(1)}$, which coincides with the curvature of $C_1$ restricted to $H^{(1)}$. These two conditions are equivalent to the fact that $\xi$ is actually an element of $J^2R$ (see e.g. [@Bocharov; @Krasil; @Olver]). Therefore, the first classical prolongation of $\pi:(P_H(R),H^{(1)})\to M$ is defined on $$\begin{aligned} J^1(P_H(R))\cap J^2R=P_H^2(R).\end{aligned}$$ For the next proposition we choose to define the classical prolongation spaces of the Pfaffian bundle $\pi:(R,H)\to M$ by equation \[yay\], where no smoothness assumptions are needed. See also remark \[involutive-properties\] for the definition of ${\mathfrak{g}}^{(k)}$. \[affine\] Let $\pi:(R,H)\to M$ be a Pfaffian bundle and let $l\geq 0$ be an integer. If, for every $0\leq k\leq l$, 1. $P_H^k(R)$ is smoothly defined, 2. $\mathfrak{g}^{(k+1)}(R,H)$ is a vector bundle over $R$, and 3. $pr:P_H^{k+1}(R)\to P_H^k(R)$ is surjective, then, for any $0\leq k\leq l$, 1. $P_H^{k+1}(R)$ is smoothly defined, and 2. the sequence \[sesj2\] restricts to the short exact sequence $$\begin{aligned} \label{sesj3} 0{\longrightarrow}pr^*\mathfrak{g}^{(k+1)}{\longrightarrow}T^\pi P^{k+1}_H(R)\overset{dpr}{{\longrightarrow}}T^\pi P^k_H(R){\longrightarrow}0\end{aligned}$$ of vector bundles over $P^{k+1}_H(R)$. Moreover, $\pi:(P^{k+1}_H(R),H^{(k+1)})\to M$ is a Pfaffian bundle, and its classical $l$-prolongation space is the same as the classical $l+k$-prolongation space of $(R,H)$. We proceed by induction. For $l=0$, the conclusion holds by proposition \[integrable\]. For $l\geq 1$ we apply proposition 7.2 of [@Gold2] to the partial differential equation $R_1=P_H(R)\subset J^1R.$ \[partevertical\]In proposition \[affine\], the vector bundle $pr^*{\mathfrak{g}}^{(k)}$ over $P^{k}_H(R)$ is equal to the vertical part of $H^{(k)}$, i.e. $$\begin{aligned} {\mathfrak{g}}(P^k_H(R), H^{(k)})\simeq pr^*{\mathfrak{g}}^{(k)}.\end{aligned}$$ This is clear since $C_{k}^{\pi}=S^kT^*\otimes T^\pi R$ by example \[cartandist\], and $pr^*{\mathfrak{g}}^{(k)}=(S^kT^*\otimes T^\pi R)\cap TP_H^k(R).$ ### Formal integrability \[fi\] The Pfaffian bundle $(R,H)$ is called [**formally integrable**]{} if all the classical prolongation spaces $$\begin{aligned} P_H(R),P_H^2(R),\ldots,P^k_H(R),\ldots\end{aligned}$$ are smoothly defined. The next corollary follows by the definitions and proposition \[jet-prol\], and is stated here for future use. 
\[337\] If $\pi:(R,H)\to M$ is a formally integrable Pfaffian bundle, then for each $k\geq0$, $pr:P_H^{k+1}(R)\to P_H^k(R)$ is a smooth fiber subbundle of $pr:J^{k+1}R\mid_{P^{k}_H(R)}\to P^k_H(R)$. Hence, for a formally integrable Pfaffian bundle $(R,H)$, we have a sequence of Pfaffian bundles over $M$ $$\begin{aligned} \cdots{\longrightarrow}(P^k_H(R),H^{(k)})\overset{pr}{{\longrightarrow}}\cdots {\longrightarrow}(P_H(R),H^{(1)})\overset{pr}{{\longrightarrow}} (R,H) \end{aligned}$$ called [**the classical resolution of $(R,H)$**]{}. The following existence result in the analytic case shows the importance of formally integrable Pfaffian bundles. See [@Gold3] for the same result in the setting of partial differential equations. \[anal\] Suppose that $\pi:(R,H)\to M$ is a formally integrable analytic Pfaffian bundle. Then, given $p\in P^l_H(R)$ with $\pi(p)=x\in M,$ there exists an analytic solution $s$ of $\pi:(R,H)\to M$ over a neighborhood of $x$ such that $j^l_xs=p.$ We know that if $(R,H)$ is formally integrable, then $P_H(R)\subset J^1R$ is a formally integrable differential equation in the sense of [@Gold2]. So, if $P_H(R)\subset J^1R$ is an analytic submanifold we can apply theorem 9.1 of [@Gold2] to $P_H(R)$ to obtain the result (since any solution of the differential equation $P_H(R)$ is a solution of $\pi:(R,H)\to M$). So, we only need to check that $P_H(R)\subset J^1R$ is an analytic submanifold. This follows from the fact that $P_H(R)$ is the kernel of the analytic morphism $$\begin{aligned} c_1:J^1_HR {\longrightarrow}& {\mathrm{Hom}}_R(\wedge^2TM;TR/H),\quad j^1_xs\mapsto c(d_xs(\cdot),d_xs(\cdot)),\end{aligned}$$ and of course, $J^1_HR$ is itself analytic as it is the kernel of the analytic morphism $$\begin{aligned} J^1R{\longrightarrow}{\mathrm{Hom}}_R(TM;TR/H),\quad j^1_xs\mapsto pr\circ d_xs:T_{x}M\to T_{s(x)}R/H_{s(x)}.\end{aligned}$$ One of the aims of this subsection is to prove the following workable criterion for formal integrability of Pfaffian bundles. This is a slight generalization of results in [@Gold3] in the setting of partial differential equations. \[workable-pfaffian-bundles\] Let $\pi:(R,H)\to M$ be a Pfaffian bundle such that: 1. $pr:P_H(R)\to R$ is surjective, 2. $\mathfrak{g}^{(1)}(R,H)$ is a vector bundle over $R$, and 3. $H^{2,l}({\mathfrak{g}})=0$ for $l\geq0.$ Then, $(R,H)$ is formally integrable. In the previous theorem we are considering the $\partial_H$-Spencer cohomology of ${\mathfrak{g}}^{(1)}$. See definition \[exten\]. ### Higher curvatures Now we concentrate on the proof of theorem \[workable-pfaffian-bundles\]; this will be achieved using induction. One of the main ingredients of the proof is given by proposition \[affine\] together with the following. Let $(R,H)$ be a Pfaffian bundle and let $k$ be an integer. If the classical $k$-prolongation space $P^k_H(R)$ is smoothly defined, then there exists an exact sequence of bundles over $R$: $$\begin{aligned} P^{k+1}_H(R)\overset{pr}{{\longrightarrow}} P^k_H(R)\xrightarrow{\bar c_{k+1}}{}H^{2,k-1}({\mathfrak{g}}).\end{aligned}$$ In the previous proposition, by exactness of the non-linear sequence we mean that $${\text{\rm Im}\,}(pr) = Z(\bar{c}_{k+1}),$$ where $Z(\bar{c}_{k+1})$ is the zero set of $\bar{c}_{k+1}$.\ Assume that for an integer $k_0$, proposition \[affine\] holds. 
Hence we have a sequence of Pfaffian bundles $$\begin{aligned} (P_H^{k_0+1}(R),H^{(k_0+1)})\overset{pr}{{\longrightarrow}}(P_H^{k_0}(R),H^{(k_0)})\overset{pr}{{\longrightarrow}}\cdots {\longrightarrow}(P_H(R),H^{(1)})\overset{pr}{{\longrightarrow}}(R,H) \end{aligned}$$ each consecutive pair satisfying the properties \[key-properties\]. We will construct, for each natural number $0\leq k\leq k_0+1$, the [**$(k+1)$-reduced curvature**]{} map of $\pi:(R,H)\to M$, $$\begin{aligned} \bar c_{k+1}:P^k_H(R)\longrightarrow C^{2,k-1},\end{aligned}$$ where $C^{2,k-1}$ is the family of vector spaces over $R$ $$\begin{aligned} \label{c} C^{2,k-1}:=\frac{\wedge^2T^*\otimes\mathfrak{g}^{(k-1)}}{\partial(T^*\otimes\mathfrak{g}^{(k)})}.\end{aligned}$$ Here we set $P^0_H(R)=R$, $P_H^{-1}(R)=M$ and ${\mathfrak{g}}^{(-1)}=TR/H$. The definition of reduced curvature map is given in [@Gold3] in the setting of partial differential equation, from an algebraic point of view. Similar results regarding them can be also found in [@Gold3]. We begin by constructing $$\begin{aligned} \bar c_{1}:R\longrightarrow C^{2,-1}=\frac{\wedge^2T^*\otimes TR/H}{\partial_H(T^*\otimes {\mathfrak{g}})}\end{aligned}$$ using the bundle map over $R$ $$\begin{aligned} c_1:J^1_HR\longrightarrow \wedge^2T^*\otimes TR/H,\quad j^1_xs\mapsto c(d_xs(\cdot),d_xs(\cdot))\end{aligned}$$ whose kernel is precisely $P_H(R)$. Note that if $j^1_xs,j^1_x{\alpha}\in J^1_HR$ are such that $s(x)=u(x)$, then the image of the map $\Phi:=d_xs-d_xu$ lies in ${\mathfrak{g}}$, and $$\begin{aligned} \begin{split} c_1(j^1_xs)(X,&Y)-c_1(j^1_xu)(X,Y)=c(d_xs(X),d_xs(Y))-c(d_xu(X),d_xu(Y))\\& =c(d_xs(X)-d_xu(X),d_xs(Y))+c(d_xu(X),d_xs(Y)-d_xu(Y))\\& =\partial_H(\Phi(X))(Y)-\partial_H(\Phi(Y))(X). \end{split}\end{aligned}$$ So the map $$\begin{aligned} \label{defi} R\ni s(x)\mapsto c_1(j^1_xs)\; {\operatorname{mod}}\partial_H(T^*\otimes {\mathfrak{g}})\end{aligned}$$ does not depend on the element $j^1_xs\in J^1_HR$. Computing the vertical derivative of $c_1$, i.e. the derivative of $c_1$ restricted to $T^*\otimes\mathfrak{g}$ (see remark \[remark1\]), we see that its value at $\Phi:T_{\pi(p)}M\to \mathfrak{g}_p$ is equal to $\partial_H(\Phi).$ \[curvature-map1\] Let $\pi:(R,H)\to M$ be a Pfaffian fibration. The [**$1$-reduced curvature map**]{} $$\begin{aligned} \bar c_1:R {\longrightarrow}C^{2,-1}\end{aligned}$$ is a well-defined map given by . The sequence $$\begin{aligned} P_H(R)\overset{pr}{\longrightarrow}R\overset{\bar c_1}{\longrightarrow}C^{2,-1}\end{aligned}$$ of bundles over $R$ is exact. If $j^1_xs\in P_H(R)$, of course $\bar c_1(s(x))=0$ as $c_1(j^1_xs)=0$. On the other hand, if $\bar c_1(s(x))=0$ then there exists $j^1_xs\in J^1_HR$ and $\Phi:T_xM\to {\mathfrak{g}}_{s(x)}$ such that $$\begin{aligned} \begin{split} c(d_xs(X),d_xs(Y))&=\partial_H(\Phi(X))(Y)-\partial_H(\Phi(Y))(X)\\&=c(\Phi(X),d_xs(Y))-c(\Phi(Y),d_xs(X)). \end{split}\end{aligned}$$ Then, as ${\mathfrak{g}}$ is involutive, $\sigma=d_xs-\Phi:T_xM\to T_{s(x)}R$ belongs to $P_H(R)$ and $pr(\sigma)=s(x)$. This shows the exactness of the sequence. 
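To keep a concrete picture in mind, the following minimal illustration of the sequence in lemma \[curvature-map1\] uses only notions already introduced (it is a consistency check, not needed in what follows). Suppose that $H$ is an Ehresmann connection. Then ${\mathfrak{g}}=0$, so $\partial_H(T^*\otimes {\mathfrak{g}})=0$ and $$\begin{aligned} C^{2,-1}=\wedge^2T^*\otimes TR/H.\end{aligned}$$ Since $TR=H\oplus T^\pi R$, every point $p\in R_x$ admits a unique splitting $\sigma_p:T_xM\to T_pR$ with image in $H_p$, and $$\begin{aligned} \bar c_1(p)=c(\sigma_p(\cdot),\sigma_p(\cdot))\in\wedge^2T^*_xM\otimes T_pR/H_p\end{aligned}$$ is the usual curvature of the connection $H$ at $p$. The exactness of the sequence of lemma \[curvature-map1\] then says that $p$ lies in the image of $pr:P_H(R)\to R$ precisely when this curvature vanishes at $p$, in agreement with corollary \[corollaryB\].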
For $0\leq k\leq k_0$, the map $$\begin{aligned} \bar c_{k+1}:P^k_H(R)\longrightarrow C^{2,k-1}\end{aligned}$$ is the first reduced curvature map of $(P_H^k(R),H^{(k)})$: we consider the curvature map over $P^k_H(R)$ $$\begin{aligned} \label{aca} c_1(H^{(k)}):J^1_{H^{(k)}}(P^k_H(R))\longrightarrow \wedge^2 T^*\otimes T^\pi P_H^{k-1}(R),\end{aligned}$$ where we are using the exact sequence \[sesj3\] to identify $T^\pi P_H^{k-1}(R)$ with $T^\pi P^k_H(R)/pr^*{\mathfrak{g}}^{(k)}=TP^k_H(R)/H^{(k)}$. \[curv(H)\] For $1\leq k\leq k_0+1$, we define the $(k+1)$[**-reduced curvature map**]{} $$\begin{aligned} \bar c_{k+1}:P_H^k(R){\longrightarrow}C^{2,k-1}\end{aligned}$$ by $$\begin{aligned} \bar c_{k+1}(s(x))=c_1(H^{(k)})(j^1_xs)\; {\operatorname{mod}}\partial(T^*\otimes {\mathfrak{g}}^{(k)}),\end{aligned}$$ where $j^1_xs\in J^1_{H^{(k)}}(P^k_H(R)).$ The following lemma is immediate by construction: \[surjectivity\] The sequence $$\begin{aligned} P_H^{k+1}(R)\overset{pr}{\longrightarrow}P^k_H(R)\overset{\bar c_{k+1}}{\longrightarrow}C^{2,k-1}\end{aligned}$$ of bundles over $R$ is exact. \[lemma-curvature\] For $1\leq k\leq k_0+1$, the image of $c_1(H^{(k)})$ lies in the family of subspaces $$\begin{aligned} \ker\left\{\partial:\wedge^2T^*\otimes\mathfrak{g}^{(k-1)}\longrightarrow \wedge^3T^*\otimes\mathfrak{g}^{(k-2)}\right\}.\end{aligned}$$ Hence, $\bar c_{k+1}$ takes values in $$\begin{aligned} H^{2,k-1}({\mathfrak{g}}):=\frac{\ker\left\{\partial:\wedge^2T^*\otimes\mathfrak{g}^{(k-1)}\to \wedge^3T^*\otimes\mathfrak{g}^{(k-2)}\right\}}{{\text{\rm Im}\,}\{\partial:T^*\otimes {\mathfrak{g}}^{(k)}\to\wedge^2T^*\otimes{\mathfrak{g}}^{(k-1)}\}}.\end{aligned}$$ To prove the previous lemma we will need the following result which will be used in other contexts: \[delta-commutes\] Let $\theta\in\Omega^1(R,E')$ be a one form of constant rank and let $p:P\to R$ be a submersion, then $$\begin{aligned} \delta p^*\theta=p^*\delta \theta.\end{aligned}$$ Explicitly, for any $X,Y\in dp^{-1}\ker\theta$, $$\begin{aligned} \delta p^*\theta(X,Y)=\delta\theta(dp(X),dp(Y)).\end{aligned}$$ It is a straightforward computation using projectable vector fields. For a distribution $H\subset TR$, the analogous version of lemma \[delta-commutes\] says that for a submersion $p:P\to R$, $$\begin{aligned} c_H(dp(X),dp(Y))=c_{dp^{-1}H}(X,Y)\end{aligned}$$ for $X,Y\in dp^{-1}H.$ We will check the case $k=1$. The case $k>1$ follows by an inductive argument. Consider $\theta\in\Omega^1(R,TR/H)$ and $\theta^{(1)}$, the Cartan form on $P_H(R)$. Let’s first see that for any $j^1_xs\in J^1_{H^{(1)}}(P_H(R))$ (i.e. $d_xs:T_xM\to T_{s(x)}P_H(R)$ has its image in $H^{(1)}_{s(x)}$), and any $X,Y\in T_xM$, $$\begin{aligned} c_1(H^{(1)})(j^1_xs)(X,Y)\in{\mathfrak{g}}.\end{aligned}$$ Take $\tilde X,\tilde Y\in {\ensuremath{\mathfrak{X}}}(P_H(R))$ to be extensions of $d_xs(X)$ and $d_xs(Y)$ tangent to $H^{(1)}$, then $$\begin{aligned} \begin{split} \theta&(c_1(H^{(1)})(j^1_xs)(X,Y))=\theta(\delta\theta^{(1)}(d_xs(X),d_xs(Y)))=\theta(\theta^{(1)}[\tilde X,\tilde Y])=\\&\theta(dpr[\tilde X,\tilde Y])=\delta pr^*\theta(d_xs(X),d_xs(Y))=\delta\theta(dpr(d_xs(X)),dpr(d_xs(Y)))=0, \end{split}\end{aligned}$$ where we used the key property 1 of remark \[key-properties\] in the third equality, and lemma \[delta-commutes\] for the last equality. 
To prove that $\partial_H(c_1(j^1_xs))(X,Y,Z)=0$ for any $X,Y,Z\in T_xM$, let $\tilde X,\tilde Y,\tilde Z\in {\ensuremath{\mathfrak{X}}}(J^1_{H^{(1)}}(P_H(R)))$ be projectable vector fields along the submersion $$\begin{aligned} pr\circ pr^1:J^1_{H^{(1)}}(P_H(R)){\longrightarrow}R\end{aligned}$$ such that $$\begin{aligned} &\tilde X_{j^1_xs},\tilde Y_{j^1_xs},\tilde Z_{j^1_xs}\in H^{(2)}_{j^1_xs},\\ &d\pi^2(\tilde X_{j^1_xs})=X,\quad d\pi^2(\tilde Y_{j^1_xs})=Y\quad\text{and}\quad d\pi^2(\tilde Z_{j^1_xs})=Z.\end{aligned}$$ The first line means that $$\begin{aligned} dpr^1(\tilde X_{j^1_xs})=d_xs(X),\quad dpr^1(\tilde Y_{j^1_xs})=d_xs(Y)\quad\text{and}\quad dpr^1(\tilde Z_{j^1_xs})=d_xs(Z).\end{aligned}$$ Therefore, for $\tilde s:=pr(s(x))$, $$\begin{aligned} \begin{split} &-\partial_H(c_1(j^1_xs)(Y,Z))(X)= \delta\theta(dpr\circ dpr^1(\tilde X_{j^1_xs}),\delta\theta^{(1)}(d_xs(Y),d_xs(Z)))\\&=\delta\theta(dpr\circ dpr^1(\tilde X_{j^1_xs}),\theta^{(1)}_{s(x)}(dpr^1[\tilde Y,\tilde Z]))\\&= \delta\theta(dpr\circ dpr^1(\tilde X_{j^1_xs}),[dpr(dpr^1(\tilde Y)),dpr(dpr^1(\tilde Z))]_{\tilde s}-dpr\circ d_xs(d\pi^1[\tilde Y,\tilde Z])\\ &=\delta\theta(dpr\circ dpr^1(\tilde X_{j^1_xs}),[dpr(dpr^1(\tilde Y)),dpr(dpr^1(\tilde Z))]_{\tilde s})\\&\quad-\delta\theta(dpr\circ dpr^1(\tilde X_{j^1_xs}),dpr\circ d_xs(d\pi^1[\tilde Y,\tilde Z])\\ &=\delta\theta(dpr\circ dpr^1(\tilde X_{j^1_xs}),d_{s(x)}pr[dpr^1(\tilde Y),dpr^1(\tilde Z)]), \end{split}\end{aligned}$$ where in the passage to the last line we use the key property 2 for prolongations given in remark \[key-properties\]. Using $\tilde X,\tilde Y,\tilde Z$ for the computations we then have that $$\begin{aligned} \begin{split} \partial_H(c_1(j^1_xs))(X,Y,Z)&=\delta\theta(dpr\circ dpr^1(\tilde Z_{j^1_xs}),[dpr(dpr^1(\tilde X)),dpr(dpr^1(\tilde Y))]_{\tilde s})\\&\quad+\delta\theta(dpr\circ dpr^1(\tilde X_{j^1_xs}),[dpr(dpr^1(\tilde Y)),dpr(dpr^1(\tilde Z))]_{\tilde s})\\ &\quad+\delta\theta(dpr\circ dpr^1(\tilde Y_{j^1_xs}),[dpr(dpr^1(\tilde Z)),dpr(dpr^1(\tilde X))]_{\tilde s}) \end{split}\end{aligned}$$ is zero by the Jacobi identity. Now we have all the ingredients to prove theorem \[workable-pfaffian-bundles\]. This proof is based on the argument given in [@Gold2] for theorem 8.1. $H^{2,l}=0$ for $l\geq 0$ is equivalent to the fact that the sequences in lemma \[exact\] are exact. Since $\mathfrak{g}^{(1)}$ is a vector bundle, lemma \[exact\] applies and therefore $\mathfrak{g}^{(l)}$ is a vector bundle over $R$ for $l\geq 1$. We now proceed by induction on $l$. Assume that for $l\geq0$, $pr:P^{m+1}_H(R)\to P^m_H(R)$ is surjective for $0\leq m\leq l$. Since $\mathfrak{g}^{(m+1)}$ are vector bundles, we are under the hypothesis of proposition \[affine\], and then we can define the curvature map $$\begin{aligned} \bar c_{l+2}:P^{l+1}_H(R){\longrightarrow}H^{2,l}.\end{aligned}$$ By lemma \[surjectivity\] for $k=l$, we get that $$\begin{aligned} P_H^{l+2}(R)\overset{pr}{{\longrightarrow}}P_H^{l+1}(R)\overset{\bar c_{l+2}}{\longrightarrow}H^{2,l}\end{aligned}$$ is exact. By condition 3 in the statement of the theorem, we have that $\bar c_{l+2}$ is identically zero and therefore $pr^{}:P_H^{l+2}(R){\longrightarrow}P_H^{l+1}(R)$ is surjective. ### Abstract resolutions of Pfaffian bundles Let $\pi:(R,H)\to M$ be a Pfaffian bundle. 
A [**standard resolution of $(R,H)$**]{} is a sequence $\{R_k\mid k\geq 0\}$ of fibered manifolds over $M$ with $R_0=R$, together with immersions $i_k:R_k\to J^1R_{k-1}$ for $k\geq1$, with the following properties: - The map $\bar i_k:R_k\to R_{k-1}$ is a surjective submersion for $k\geq 1$, where $\bar i_k$ is the composition $$\begin{aligned} R_k\overset{i_k}{{\longrightarrow}}J^1R_{k-1}\overset{pr}{{\longrightarrow}}R_{k-1}.\end{aligned}$$ - $i_1(R_1)\subset P_H(R)$ and for $k\geq1$, $i_{k+1}(R_{k+1})\subset P^{k+1}_H(R).$ The following result is a consequence of the Cartan-Kähler theorem and theorem 3.2 of [@BC], adapted to our setting of Pfaffian bundles, and will be of use for us in the case of Pfaffian groupoids as we will see later on. \[resolution\] Let $\pi:(R,H)\to M$ be an analytic Pfaffian bundle. If $(R,H)$ admits a standard resolution, then for every point $p\in R$ there exist real analytic solutions of $(R,H)$ passing through $p$. Linear Pfaffian bundles ($=$ linear connections) {#linear Pfaffian bundles} ------------------------------------------------ Linear Pfaffian bundles are Pfaffian bundles where all objects are linear. One of the main advantages of linear Pfaffian bundles is that not only are they easier to handle, but each of them is endowed with a relative connection canonically associated to the linear distribution (or to the linear form), allowing us to apply the whole theory developed in chapter \[Relative connections\].\ ### Definitions and basic properties The aim of this section is to establish the usual duality between distributions and forms in the linear case. We will prove: \[linear-one-form\]Let $\pi:F\to M$ and $E'\to M$ be vector bundles. For any regular linear 1-form $\theta\in\Omega^1(F,\pi^*E')$ $$\begin{aligned} H_\theta:=\ker\theta\subset TF\end{aligned}$$ is a linear distribution. Conversely, any linear distribution arises in this way. We start by explaining the definitions. Recall that the vector bundle structure of $\pi:F\to M$ induces a vector bundle structure on $d\pi:TF\to TM$; its structure maps are the differentials of the structure maps of $F$. \[def: linear distributions\] A [**linear distribution**]{} $H$ on $F$ is any distribution $H\subset TF$ which is also a subbundle of $TF\to TM$ (with the same base $TM$). \[linear-transversal\] Linear distributions are Pfaffian distributions in the sense of definition \[def: pfaffian distributions\]. More precisely, if $H\subset TF$ is linear then it is $\pi$-transversal and $\pi$-involutive. The fact that $H$ is a subbundle of $TF\to TM$ implies that $H$ is $\pi$-transversal. To see this let’s first show that $H^\pi$ is of constant rank equal to ${\text{\rm rk}\,}H-\dim M$. Indeed, as $dz(TM)\subset H$, where $z:M\to F$ is the zero-section, one has that $$\begin{aligned} \dim H^\pi_{z(x)}={\text{\rm rk}\,}H-n\end{aligned}$$ is independent of the point $x\in M$, where $n=\dim M$. Moreover, for $e\in F_x$, the map $$\begin{aligned} T^\pi_{z(x)}F\ni U\mapsto d_{(z(x),e)}a(U,0)\in T_e^\pi F\end{aligned}$$ is an isomorphism sending $H^\pi_{z(x)}$ onto $H^\pi_e$, where $a:F\times_MF\to F$ is the addition and $0\in T_{e}F$ is the zero vector, since $H$ is closed under the differential of the addition. Thus, $H^\pi$ has constant rank. On the other hand, $$\begin{aligned} \dim (H_e+T^\pi_eF)&=\dim H_e+\dim T_e^\pi F-\dim H^\pi_e\\&={\text{\rm rk}\,}H+{\text{\rm rk}\,}F-({\text{\rm rk}\,}H-n)=\dim T_eF, \end{aligned}$$ which shows that $H$ is transversal to the fibers of $F$.
To show that $H$ is $\pi$-involutive notice that $\Gamma({\mathfrak{g}})$ is generated as a $C^{\infty}(F)$-module by vector fields $X\in\Gamma({\mathfrak{g}})$ of $F$, constant along the fibers of $\pi$. Since the Lie bracket of any two such vector fields $X$ and $Y$ is zero (as they are tangent and constant along the fibers of $\pi$), then for $g,f\in C^{\infty}(F)$ $$\begin{aligned} [gX,fY]=gL_X(f)Y-fL_Y(g)X\in\Gamma({\mathfrak{g}})\end{aligned}$$ which completes the proof of our claim. The linearity of $H$ implies that the isomorphism of vector bundles $T:T^\pi F\to \pi^*F$ given by translation of vectors tangent to the fibers of $\pi$ induces an isomorphism $$\begin{aligned} {\mathfrak{g}}(H)\simeq \pi^*{\mathfrak{g}}(H)|_M,\end{aligned}$$ where ${\mathfrak{g}}(H)=H^\pi$. Hence, $T$ descends to the quotient to an isomorphism of $TF/H\simeq T^\pi F/H^\pi$ with the pullback via $\pi$ of the vector bundle over $M$ given by $$\begin{aligned} E:=T^\pi F/H^\pi|_M.\end{aligned}$$ Therefore the associated Pfaffian form $\theta_H$ (see \[quotient-form\]) becomes a one form with values in the pullback bundle $\pi^*E$. In fact, $\theta_H$ is a regular linear form in the following sense: \[lin-form\] Let $E'$ be a vector bundle over $M$. A [**linear form $\theta\in\Omega^k(F,\pi^*E')$**]{} is a differential form such that $$\begin{aligned} a^*\theta=pr_1^*\theta+pr_2^*\theta,\end{aligned}$$ where $a,pr_1,pr_2:F\times_\pi F\to F$ are the fiber-wise addition, the projection to the first component and the projection to the second component respectively. \[points\]If we regard a one form $\theta\in \Omega^1(F,\pi^*E')$ as a vector bundle map $$\begin{aligned} \label{sum} \theta:TF{\longrightarrow}E'\end{aligned}$$ over the map $\pi:F\to M$, saying that $\theta$ is linear is the same as requiring that (\[sum\]) is also a vector bundle map over $TM\to M$, where the vector bundle structure of $d\pi:TF\to TM$ is the one given by the differentials of the structural maps of the vector bundle $\pi:F\to M.$ Note that for a linear form $\theta$, $\theta(dz(X))=0$ where $X\in{\ensuremath{\mathfrak{X}}}(M)$ and $z:M\to F$ is the zero section, as $dz(X)=da(dz(X),dz(X)).$ This fact follows from lemmas \[from-theta-H\] and \[lemma-from-H-to-theta\] regarding $F$ as a Lie groupoid with multiplication given by fiber-wise addition. ### Equivalence between linear Pfaffian bundles and relative connections Another point of view on linear Pfaffian bundles is that of relative connections. More precisely, \[cor: linear one forms\]Let $\pi:F\to M$ and $E\to M$ be two vector bundles. There is a one to one correspondence between 1. point-wise surjective linear one forms $\theta\in\Omega^1(F,\pi^*E)$, and 2. relative connections $(D,l):F\to E$. In this correspondence $$\begin{aligned} \label{for} D(s)=s^*\theta\quad\quad\text{and}\quad\quad l(v)=\theta(v)\end{aligned}$$ where $s\in\Gamma(F)$ and $v\in F_x\simeq T^\pi_{z(x)}F$. As remarked in \[ex: linear forms\] a linear form $\theta$ is a multiplicative form on the Lie groupoid $F\to M$ (multiplication here is given by fiber-wise addition) with values in the trivial representation $E$ of $F$. Applying theorem \[t1\] we get a one to one correspondence between linear one forms and $E$-valued Spencer operators of order $1$. Note that in this case the Lie algebroid of $F\to M$ is $F$ itself with trivial Lie bracket and zero anchor, and therefore the Spencer operator associated to a regular linear one form is nothing more than a relative connection.
Formulas (\[for\]) follow from remark \[points\], the explicit formulas in theorem \[t1\], and the fact that in this case $$\begin{aligned} d\phi^\epsilon_s(\cdot)=da(dz(\cdot),\epsilon ds(\cdot)),\end{aligned}$$ as $\phi^\epsilon_s(x)=z(x)+\epsilon s(x)$. An analogous result regarding multiplicative forms and relative connections is the following: \[t:linear distributions\] Let $F\to M$ be a vector bundle. There is a one to one correspondence between 1. linear distributions $H\subset TF$, 2. subbundles ${\mathfrak{g}}\subset F$ together with connections relative to the quotient map $F\to F/{\mathfrak{g}}$. In this correspondence, ${\mathfrak{g}}$ is the symbol space of $H$ and $$\begin{aligned} D_Xs(x)=[\tilde X,\tilde s]_x\; {\operatorname{mod}}H^\pi_{z(x)},\end{aligned}$$ where $\tilde X\in\Gamma(H)\subset {\ensuremath{\mathfrak{X}}}(F)$ is any vector field $\pi$-projectable to $X$ and extending $dz(X)$, and $\tilde s\in\Gamma(T^\pi F)\subset{\ensuremath{\mathfrak{X}}}(F)$ is the extension of $s$ to the vector field constant on each fiber of $F$. The previous result is a special instance of theorem \[t2\]. We regard $\pi:F\to M$ as the Lie groupoid with source and target maps equal to $\pi$, and multiplication given by fiberwise addition. In this case, the Lie algebroid of $F$ is $\pi:F\to M$ itself with trivial Lie bracket and zero anchor. In this sense, a linear distribution $H\subset TF$ is the same as a multiplicative distribution of the Lie groupoid $F$ in the sense of definition \[def-pf-syst\]. Moreover, regarding $F$ as the Lie algebroid with trivial bracket and zero anchor, a Spencer operator on $F$ relative to a subbundle ${\mathfrak{g}}\subset F$ in the sense of definition \[def1\] is the same as a connection relative to the quotient map $F\to F/{\mathfrak{g}}$, since the remaining compatibility conditions are trivially satisfied ($``0=0"$). In this case, we see then that theorem \[t2\] translates into corollary \[t:linear distributions\]. The next corollary shows us that the relative connection associated to a linear distribution $H$ is equal to the one associated to $\theta_H.$ Let $D_H$ be the relative connection associated to $H$, and $D_{\theta_H}$ the relative connection associated to $\theta_H$. Then $$\begin{aligned} D_H=D_{\theta_H}.\end{aligned}$$ Let $s\in\Gamma(F)$. We must show that for any $X\in{\ensuremath{\mathfrak{X}}}(M)$ $$\begin{aligned} s^*\theta_H(X)=[\tilde X,\tilde s]|_M\; {\operatorname{mod}}{\mathfrak{g}}\end{aligned}$$ or in other words, $$\begin{aligned} \label{eq30009} s^*\theta_H(X)=\theta_H([\tilde X,\tilde s]|_M),\end{aligned}$$ where $\tilde X\in\Gamma(H)\subset {\ensuremath{\mathfrak{X}}}(F)$ is any vector field that is $\pi$-projectable to $X$ and extending $dz(X)$, and $\tilde s\in\Gamma(T^\pi F)\subset{\ensuremath{\mathfrak{X}}}(F)$ is the extension of $s$ to the vector field constant on each fiber of $F$.
Equation follows from lemma \[commutator\] in the following way: one regards $F$ as a Lie groupoid with multiplication given by the fiber-wise addition (and with trivial representation on $E$), and therefore $\tilde s$ becomes a right invariant vector field with flow equal to $$\begin{aligned} \varphi_{\tilde s}^\epsilon:F{\longrightarrow}F,\quad e_x\mapsto e_x+\epsilon s(x).\end{aligned}$$ On the other hand $\tilde X\in\ker\theta$, and therefore $$\begin{aligned} \begin{split} s^*\theta_H(X)=\frac{d}{d\epsilon}|_{\epsilon=0}&(\theta_H(dz(X))+\epsilon \theta_H(ds(X)))\\&=\frac{d}{d\epsilon}|_{\epsilon=0}\theta_H(da(dz(X),\epsilon ds(X)))=(L_{s}\theta_H)(dz(X))\\&=([i_{\tilde X},L_s]\theta_H)|_M=\theta_H([\tilde X,\tilde s]|_M). \end{split}\end{aligned}$$ ### Equivalence between the theory of Linear Pfaffian bundles and that of relative connections Theorems \[cor: linear one forms\] and \[t:linear distributions\] allow us to make an explicit connection between the theory of relative connections (see chapter \[Relative connections\]) and that of linear Pfaffian bundles. Here we explain that the two theories are equivalent. First of all, one of the most relevant notions of both theories is that of a solution; as expected the two notions coincide. More precisely, if the relative connection $D$, the linear 1-form $\theta$ and the linear distribution $H$ are all related by theorems \[cor: linear one forms\] and \[t:linear distributions\], then $$\begin{aligned} {\text{\rm Sol}}(F,D)={\text{\rm Sol}}(F,\theta)={\text{\rm Sol}}(F,H).\end{aligned}$$ This is clear by definition \[definitions\]. As a direct consequence we get that $$\begin{aligned} J^1_HF=J^1_\theta F=J^1_DF.\end{aligned}$$ See definition \[partial prolongation\]. \[compa\]For an integer $k>0$, the associated relative connection associated to the Cartan form $\theta^k\in\Omega^1(J^kF,J^{k-1}F)$, or equivalently to the Cartan distribution $C_k\subset J^kF$, is the classical Spencer operator $$\begin{aligned} D^{\text{\rm clas}}:\Gamma(J^kF){\longrightarrow}\Omega^1(M,J^{k-1}F)\end{aligned}$$ relative to the projection $pr:J^kF\to J^{k-1}F$. See subsection \[example: jet groupoids\], where ${\mathcal{G}}=F$ is interpreted as a Lie groupoid with multiplication given by fiber-wise addition and its Lie algebroid is $F$ with trivial Lie bracket and zero anchor. From the previous example we have the following two results: \[corazon\] Let $H\subset TF$ be a linear distribution with $D$ and $\theta$ the associated relative connection and linear form respectively. Then $$\begin{aligned} \label{equation:star} P_D(F)=P_\theta(F)=P_H(F).\end{aligned}$$ Moreover, if one of the previous spaces are smoothly defined then $$\begin{aligned} H^{(1)}:=C_1\cap TP_H(F)\end{aligned}$$ is a linear distribution with associated relative connection $D^{(1)}:\Gamma(P_D(F))\to \Omega^1(M,F)$ and linear form $\theta^{(1)}=\theta^1|_{P_\theta(F)}$. Equality follows from the next lemma since $P_D(F)=\ker\varkappa_D$ and $P_H(F)=P_\theta(F)=\ker c_1$. That $\theta^{(1)}$ is linear follows from its definition. Since $H^{(1)}=\ker\theta^{(1)}$, the linearity of $H^{(1)}$ follows. As for the correspondence of $H^{(1)}$ with $D^{(1)}$, recall that $D^{(1)}$ is the restriction to $P_D(F)$ of the classical Spencer operator $D^{\text{\rm clas}}:\Gamma(J^1F)\to\Omega^1(M,F)$. The correspondence then is clear from example \[compa\]. 
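To make example \[compa\] concrete, here is a sketch in local coordinates of our choosing, for $k=1$ and the trivial line bundle $E=M\times\mathbb{R}$ with coordinates $x^1,\dots,x^n$ on $M$. A section of $F=J^1E$ is a pair $\xi=(u,u_i)$ of functions on $M$ (with $u_i$ not necessarily equal to $\partial_iu$), and the relative connection associated to the Cartan form $\theta^1$, i.e. the classical Spencer operator relative to $pr:J^1E\to E$, reads $$\begin{aligned} D^{\text{\rm clas}}(\xi)=du-u_i\,dx^i\in\Omega^1(M,E),\qquad pr(\xi)=u.\end{aligned}$$ The Leibniz identity $D^{\text{\rm clas}}(f\xi)=fD^{\text{\rm clas}}(\xi)+df\otimes pr(\xi)$ is just $d(fu)-fu_i\,dx^i=f(du-u_i\,dx^i)+u\,df$, and $D^{\text{\rm clas}}(\xi)=0$ exactly when $\xi=j^1u$ is holonomic, so the solutions of the corresponding linear Pfaffian bundle $(J^1E,C_1)$ are the holonomic sections, as one expects.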
\[high\]As higher prolongations are defined inductively, the previous proposition implies that the same holds for the $k$-classical prolongation of $D$, $\theta$ and $H$, whenever they are smooth. This is, under smoothness assumptions, $$\begin{aligned} P_D^k(F)=P_\theta^k(F)=P_H^k(F)\end{aligned}$$ and the associated relative connection of $H^{(k)}$ (or rather of $\theta^{(k)}$) is $D^{(k)}$. See corollary \[k-prolongations\]. \[curv-map\] The curvature map of $\theta$ (or rather of $H$) $$\begin{aligned} c_1(\theta):J^1_\theta F{\longrightarrow}\pi^*(\wedge^2T^*\otimes E)\end{aligned}$$ coincides with the curvature map of $D$ $$\begin{aligned} \varkappa_D:J^1_DF\to\wedge^2T^*\otimes E.\end{aligned}$$ Explicitly, for any $j^1_xs\in J^1_\theta F=J^1_DF,$ and $X,Y\in T_xM$ $$\begin{aligned} c_1(j^1_xs)(X,Y)=\varkappa_D(X,Y)\in E_x.\end{aligned}$$ This follows from lemma \[lema2\], as $$\begin{aligned} d_\nabla\theta=\delta\theta\end{aligned}$$ when restricting $d_\nabla\theta$ to the $H$, the kernel of $\theta$. \[lin-pf\] - The symbol map $\partial_H:{\mathfrak{g}}\to\pi^*(T^*\otimes E)$ of $H$ is the pullback of the symbol map $\partial_D:{\mathfrak{g}}_M\to T^*\otimes E$, i.e. for $v\in{\mathfrak{g}}_p=({\mathfrak{g}}_M)_{\pi(p)}$ and $X\in T_{\pi(p)}M$ $$\begin{aligned} \partial_H(v)(X)=\partial_D(v)(X)\in E_{\pi(p)}.\end{aligned}$$ This follows directly from the definition of $\partial_H$, and the definition of $D$ in terms of $H$. - We have isomorphisms of vector bundles over $F$ $$\begin{aligned} {\mathfrak{g}}^{(k)}(F,H)\simeq \pi^*{\mathfrak{g}}^{(k)}(F,D),\end{aligned}$$ where ${\mathfrak{g}}^{(k)}(F,H)\subset\pi^*(S^kT^*\otimes E)$ and ${\mathfrak{g}}^{(k)}(F,D)\subset S^kT^*\otimes E$ are the $k$-prolongation of the symbol maps $\partial_H$ and $\partial_D$ respectively. - The pullback of the cohomology groups of the $\partial_D$-Spencer cohomology are isomorphic as bundles over $F$ with the ones of the $\partial_H$- Spencer cohomology: $$\begin{aligned} H^{(p,k)}({\mathfrak{g}}(F,H))\simeq \pi^*H^{(p,k)}({\mathfrak{g}}(F,D)).\end{aligned}$$ See definition \[exten\] - From lemma \[curv-map\] and remark \[high\] it follows that the higher curvatures of $D$ and $H$, whenever they are well-defined, are equal. That is $$\begin{aligned} \kappa^{k+1}(D)=\bar c^{k+1}(H)\end{aligned}$$ for $\kappa^{k+1}(D):P_D^k(R)\to C^{2,k-1}$ and $\bar c^{k+1}(H):P_H^k(R)\to C^{2,k-1}$. See definitions \[curv(D)\] and \[curv(H)\]. Linearization along solutions ----------------------------- ### Linearization of distributions {#linearization of distributions} Let $\pi:R\to M$ be a surjective submersion and let $H\subset TR$ be a $\pi$-transversal distribution with ${\mathfrak{g}}:=H\cap T^\pi R$ the symbol of $H$. See [@Cullen; @Hermann] for related work.\ Out of a solution $\sigma:M\to R$ of $(R,H)$ one constructs a canonical linear Pfaffian bundle over $M$, denoted by $(L_\sigma(R),H^\sigma)$, in the following way. 
Consider $$\begin{aligned} L_\sigma(R):=\sigma^*T^\pi R\end{aligned}$$ together with the connection $$\begin{aligned} D^\sigma:\Gamma(L_\sigma(R)){\longrightarrow}\Omega^1(M,E^\sigma)\end{aligned}$$ relative to the projection map $$\begin{aligned} p^\sigma:L_\sigma(R){\longrightarrow}\sigma^*(T^\pi R/{\mathfrak{g}})=:E^\sigma\end{aligned}$$ given by the formula $$\begin{aligned} \label{check} D^\sigma_X(s):=[X^\sigma,s^\sigma]|_{\sigma(M)}\; {\operatorname{mod}}H,\end{aligned}$$ where $X^\sigma\in\Gamma(H)$ is any vector field $\pi$-projectable to $X$ and extending $d\sigma(X)$, and $s^\sigma\in\Gamma(T^\pi R)$ is any vertical extension of $s\in\Gamma(L_\sigma(R))$. The [**linearization along $\sigma$**]{} is the relative connection (or the associated linear Pfaffian bundle: see theorem \[t:linear distributions\]) $$\begin{aligned} (D^\sigma,p^\sigma):L_\sigma(R){\longrightarrow}E^\sigma.\end{aligned}$$ Formula (\[check\]) is well-defined and it defines a connection relative to the projection $L_\sigma(R)\to E^\sigma$. Let’s first check that $D^\sigma_X(s)$ does not depend on the choice of $s^\sigma\in\Gamma(T^\pi R)$. For this it suffices to verify that if $s=0$, then $D^\sigma_X(s)=0$. Without loss of generality let $s^\sigma\in\Gamma(T^\pi R)$ be of the form $$\begin{aligned} s^\sigma=f\beta\end{aligned}$$ with $\beta\in\Gamma(T^\pi R)$ and $f\in C^{\infty}(R)$ such that $f|_{\sigma(M)}\equiv0$. Then, $$\begin{aligned} \begin{split} [X^\sigma,f\beta]|_{\sigma(M)}&=f|_{\sigma(M)}[X^\sigma,\beta]|_{\sigma(M)}+L_{X^\sigma}(f)|_{\sigma(M)}\beta|_{\sigma(M)}\\&=L_{d\sigma(X)}(f|_{\sigma(M)})\beta|_{\sigma(M)}=0, \end{split}\end{aligned}$$ where in the last equation we used that $X^{\sigma}|_{\sigma(M)}=d\sigma(X)\in{\ensuremath{\mathfrak{X}}}(\sigma(M))$. Let’s now show that $D^\sigma_X(s)$ does not depend on $X^\sigma$. For this it is enough to check $D^\sigma_X(s)=0$ whenever $X=0$. Again, without loss of generality let $X^\sigma$ be of the form $$\begin{aligned} X^{\sigma}=f\beta\end{aligned}$$ with $\beta\in\Gamma({\mathfrak{g}})$ and $f\in C^{\infty}(R)$ such that $f|_{\sigma(M)}\equiv0$. Then, $$\begin{aligned} \begin{split} [f\beta,s^\sigma]|_{\sigma(M)}&=f|_{\sigma(M)}[\beta,s^\sigma]|_{\sigma(M)}-L_{s^\sigma}(f)\beta|_{\sigma(M)} \\&=-L_{s^\sigma}(f)\beta|_{\sigma(M)}=0\; {\operatorname{mod}}H. \end{split}\end{aligned}$$ That $D^\sigma(s)$ is indeed a form (i.e. it is $C^{\infty}(M)$-linear) and that $D^\sigma$ satisfies the Leibniz identity relative to the projection $L_\sigma(R)\to E^\sigma$ follows from the Leibniz identity for vector fields on $R$. This is left to the reader. From the construction, the symbol space ${\mathfrak{g}}_\sigma$ of $D^\sigma$ (see definition \[definitions\]) satisfies the following properties. Let $H\subset TR$ be a $\pi$-transversal distribution and let $\sigma:M\to R$ be a solution of $(R,H)$. - The symbol space ${\mathfrak{g}}_\sigma$ of $D^\sigma$ is given by the pullback $$\begin{aligned} {\mathfrak{g}}_\sigma=\sigma^*{\mathfrak{g}}\simeq {\mathfrak{g}}|_{\sigma(M)}.\end{aligned}$$ - If $H$ is $\pi$-involutive, the symbol map $\partial_{D^\sigma}:{\mathfrak{g}}_\sigma\to T^*\otimes E^\sigma$ of $D^\sigma$ is $$\begin{aligned} \partial_{D^\sigma}=\partial_H|_{\sigma(M)},\end{aligned}$$ where $\partial_H:{\mathfrak{g}}\to T^*\otimes E$ is the symbol map of $H$.
- If $H$ is $\pi$-involutive, $$\begin{aligned} {\mathfrak{g}}_\sigma^{(k)}=\sigma^*{\mathfrak{g}}^{(k)},\end{aligned}$$ where ${\mathfrak{g}}_\sigma^{(k)}\subset S^kT^*\otimes E^\sigma$ and ${\mathfrak{g}}^{(k)}\subset \pi^*(S^kT^*)\otimes E$ are the $k$-prolongations of the symbol maps $\partial_{D^\sigma}$ and $\partial_{H}$ respectively. One can define $$\begin{aligned} L_\xi(R,H)\end{aligned}$$ for any $\xi \in \Gamma(J^1_HR)$ so that for $\xi=j^1\sigma$, $\sigma\in {\text{\rm Sol}}(R,H)$ $$\begin{aligned} L_{\sigma}(R,H)=L_{j^1\sigma}(R,H).\end{aligned}$$ Many of the properties of $L_\sigma(R,H)$ will extend to $L_\xi$ provided $\xi\in \Gamma(J^1_HR)$. For the precise definition, write $\sigma_\xi\in\Gamma(R)$ as the projection of $\xi$, so $\xi$ is viewed as a map $$\begin{aligned} \xi_x:T_xM{\longrightarrow}T_{\sigma_\xi(x)}R.\end{aligned}$$ See remark \[1-when working with jets\]. The condition that $\xi\in \Gamma(J^1_HR)$ means that $\xi_x$ takes values in $H_{\sigma_\xi(x)}$. With this, $D^\xi$ is defined exactly like before, just that, this time, $X^\xi\in\Gamma(H)$ is required to extend $\xi(X)$.\ [**Conclusion:**]{} To compute ${\mathfrak{g}}^{(k)}(R,H)_r$, $r\in R$, choose any $\xi\in J^1_HR$ around $x:=\pi(r)$, with $\sigma_\xi(x)=r$. Consider the linearization of $(L_\xi(R),D^\xi)$, and compute ${\mathfrak{g}}^{(k)}(D^\xi)_x$. The result is: $$\begin{aligned} {\mathfrak{g}}^{(k)}(D^\xi)_x={\mathfrak{g}}^{(k)}(R,H)_r.\end{aligned}$$ ### Heuristics of the linearization of one forms Using one forms $\theta\in\Omega^1(R,E)$ we have an alternative description of the linearization of $(R,\theta)$ along a solution $\sigma\in{\text{\rm Sol}}(R,\theta)$, which will give us more insight into the linearization along solutions and will allow us to treat the slightly more general case of a non-surjective one form $\theta$. We will give an idea of the procedure without proofs.\ The infinite dimensional picture realizes ${\text{\rm Sol}}(R,\theta)$ as zeros of a section: - the base manifold is $\mathcal{M}=\Gamma(R)$, - the vector bundle $\mathcal{E}$ over $\mathcal{M}$ has fiber over $\sigma\in\mathcal{M}$: $$\begin{aligned} \mathcal{E}_\sigma:=\Omega^1(M,\sigma^*E),\end{aligned}$$ - the section $$\begin{aligned} Eq:\mathcal{M}{\longrightarrow}\mathcal{E},\quad\sigma\mapsto \sigma^*\theta.\end{aligned}$$ With this, $$\begin{aligned} {\text{\rm Sol}}(R,\theta):=\{\sigma:Eq(\sigma)=0\}.\end{aligned}$$ So, naturally, the linearization of $(R,\theta)$ along a solution $\sigma$ would be the usual vertical differential of a section at a zero: $$\begin{aligned} d^{\text{v}}_\sigma Eq:T_\sigma\mathcal{M}{\longrightarrow}\mathcal{E}_\sigma.\end{aligned}$$ Now: $$\begin{aligned} \label{star} T_\sigma\mathcal{M}\simeq \Gamma(\sigma^*T^\pi R).\end{aligned}$$ Hence, the linearization appears as an operator $$\begin{aligned} d^{\text{v}}_\sigma Eq:\Gamma(\sigma^*T^\pi R){\longrightarrow}\Omega^1(M,\sigma^*E)\end{aligned}$$ which actually will be a relative connection on $L_\sigma(R):=\sigma^*T^\pi R$ with values on $E^\sigma:=\sigma^*E$. For explicit formulas we have to be more precise with the identifications (e.g. with ). First of all, a section $s\in\Gamma(L_\sigma(R))$ is given by the derivative at $t=0$ of a variation of $\sigma$. 
This is a family $\sigma^t\in\Gamma(R)$ with the property that $\sigma^0=\sigma$ and such that for any $x\in M$ $$\begin{aligned} \frac{d}{dt}\sigma^t(x)|_{t=0}=s(x)\in T^\pi_{\sigma(x)}R.\end{aligned}$$ The associated connection $D^\sigma:\Gamma(L_\sigma(R))\to \Omega^1(M,E^\sigma)$ relative to the not necessarily surjective vector bundle $l^\sigma:L_\sigma(R)\to E^\sigma,U_{\sigma(x)}\mapsto\theta_{\sigma(x)}(U)$ is defined at $D^\sigma_X(s)(x)$ by the projection to the second component of $$\begin{aligned} \label{w-d} \frac{d}{dt}\theta\circ d_x\sigma^t(X)\in T_{z(\sigma(x))}E\simeq T_{\sigma(x)}R\oplus E_{\sigma(x)}.\end{aligned}$$ Note that in we are using that $\sigma$ is a solution of $(R,\theta)$ as $\theta\circ d\sigma=z,$ $z:R\to E$ being the zero-section of $E$.\ To see some properties of this construction let’s examine it further. The derivative at $t=0$ of the variation $$\begin{aligned} j^1\sigma^t:M{\longrightarrow}J^1R\end{aligned}$$ of $j^1\sigma\in{\text{\rm Sol}}(J^1R,\theta^1)$, where $\theta^1$ is the Cartan form is, by construction, a section $\phi:M\to L_{j^1\sigma}(J^1R)$ of the vector bundle over $M$ given by $L_{j^1\sigma}(J^1R):=(j^1\sigma)^*T^\pi(J^1R)$. The section $\phi$ can be identified with the first jet of the section $$\begin{aligned} \frac{d}{dt}\sigma^t|_{t=0}:M{\longrightarrow}L_\sigma(R).\end{aligned}$$ Hence, we have an identification $$\begin{aligned} \label{jetes} L_{j^1\sigma}(J^1R)\simeq J^1(L_\sigma(R))\end{aligned}$$ of vector bundles over $M$. Moreover, under this identification, one has: - The linearization along $j^1\sigma$ of the bundle map over $R$ $$\begin{aligned} J^1R{\longrightarrow}T^*\otimes E,\quad j^1_xb\mapsto \theta\circ d_xb\end{aligned}$$ is equal to the vector bundle map over $M$ given by $$\begin{aligned} J^1(L_\sigma(R)){\longrightarrow}T^*\otimes E^\sigma,\quad j^1_xs\mapsto D^\sigma(s)(x).\end{aligned}$$ - The partial prolongation $J^1_{D^\sigma}(L_\sigma(R))$ of $(L_\sigma(R),D^\sigma)$ has the property that $$\begin{aligned} J^1_{D^\sigma}(L_\sigma(R))\simeq L_{j^1\sigma}(J^1_\theta R).\end{aligned}$$ - The curvature map of $D^\sigma$ $$\begin{aligned} \varkappa_{D^\sigma}:J^1_{D^\sigma}(L_\sigma(R)){\longrightarrow}\wedge^2T^*\otimes E^\sigma\end{aligned}$$ is equal to the linearization along $j^1\sigma$ of the curvature map $$\begin{aligned} c_1(\theta):J^1_\theta R{\longrightarrow}\wedge^2T^*\otimes E.\end{aligned}$$ - The relative connection $D^{j^1\sigma}$ obtained by the linearization of $\theta^1$ along $j^1\sigma$ is such that $$\begin{aligned} D^{j^1\sigma}=D^{\text{\rm clas}},\end{aligned}$$ where $D^{\text{\rm clas}}:\Gamma(J^1(L_\sigma(R)))\to\Omega^1(M,L_\sigma(R))$ is the Spencer operator associated to the vector bundle $L_\sigma(R)\to M$ (see definition \[Spencer operator\]). From the previous discussion we want to state the following lemma, again without proof. Let $\theta\in\Omega^1(R,E)$ be a one form and let $\sigma\in\Gamma(R)$ be a solution of $(R,\theta)$. The identification restricts to an identification $$\begin{aligned} P_{D^\sigma}(L_\sigma(R))\simeq L_{j^1\sigma}(P_\theta(R))\end{aligned}$$ of vector bundles over $M$. 
Moreover, if these spaces are smooth then $$\begin{aligned} D^{(1)}=D^{j^1\sigma},\end{aligned}$$ where $D^{(1)}:\Gamma(P_{D^\sigma}(L_\sigma(R)))\to \Omega^1(M,L_\sigma(R))$ is the restriction of the Spencer operator associated to $L_{\sigma}(R)$, and $D^{j^1\sigma}:\Gamma(L_{j^1\sigma}(P_\theta(R)))\to \Omega^1(M,L_\sigma(R))$ is the linearization along $j^1\sigma$ of $\theta^{(1)}\in \Omega^1(P_\theta(R),T^\pi R).$ The integrability theorem for multiplicative forms {#The integrability theorem for multiplicative forms} ================================================== In the process of understanding Pfaffian bundles in the setting of Lie groupoids, we are brought back to the question of understanding the linearization of multiplicative forms with coefficients and the corresponding integrability problem. We prove an integrability result, in the spirit of Lie, which allows us to pass from the (more interesting) global picture to the (easier-to-handle) linear picture. As an outcome we find that the associated infinitesimal data are certain “connection-like operators", which we call ‘Spencer operators’, and which are the analogue of relative connections in the context of Lie algebroids. The main example is the classical Cartan system (see example \[difeo\]) on the groupoids of jets of diffeomorphisms [@Cartan1904; @Kuranishi; @GuilleminSternberg:deformation; @Olver:MC]. On the infinitesimal side we recover the classical Spencer operator [@Spencer; @KumperaSpencer; @Gold3; @GuilleminSternberg:deformation]. See subsection \[sp-dc\]. Hence, using Lie groupoids, we learn that the classical Cartan forms and the classical Spencer operators are the same thing, modulo the Lie functor. Working with forms of arbitrary degree is natural from Cartan’s point of view (exterior differential systems). However, the main reason for us to allow general forms is the fact that multiplicative $2$-forms are central to Poisson and related geometries. Moreover, while multiplicative $2$-forms with trivial coefficients (!) are well understood, the question of passing from trivial to arbitrary coefficients has been around for a while. Surprisingly enough, even the statement of an integrability theorem for multiplicative forms with non-trivial coefficients was completely unclear. This shows, we believe, that even the case of trivial coefficients was still hiding phenomena that we did not understand. The fact that our work related to Pfaffian groupoids clarifies this point came as a nice surprise to us and, looking back, we can now say what was missing from the existing picture in multiplicative $2$-forms: Spencer operators.\ And here are a few connections with the existing literature. On one hand, there is the literature related to Poisson geometry. Symplectic groupoids were discovered as the global counterparts to Poisson manifolds, thought of as infinitesimal (i.e. Lie algebroid) objects [@Weinstein], while Ping Xu realised the relevance of the multiplicativity condition [@Xu]. Multiplicative $0$-forms (1-cocycles) were studied in the context of quantization [@WeinsteinXu] (see also our subsection \[1-cocyles and relations to the van Est map\]). Motivated by Dirac geometry and the theory of Lie group-valued moment maps, the case of closed multiplicative $2$-forms was analyzed in [@BCWZ]. The case of multiplicative $1$-forms (not necessarily closed) appeared in [@prequantization] in the context of pre-quantization. General multiplicative forms, i.e.
which are not necessarily closed and are of arbitrary degree (but still with trivial coefficients!) were understood in [@BursztynCabrera; @CrainicArias].\ The work in this chapter is largely based on the preprint [@Maria]. Multiplicative forms of degree $k$ ---------------------------------- \[def-mult-form\]Given a Lie groupoid ${\mathcal{G}}$ and a representation $E$ of ${\mathcal{G}}$, an **$E$-valued multiplicative $k$-form on ${\mathcal{G}}$** is any form $\theta \in \Omega^k({\mathcal{G}}, t^{\ast}E)$ satisfying $$\label{eq: multiplicative} (m^{\ast}\theta)_{(g,h)} = {pr}_1^{\ast}\theta + g\cdot({pr}_2^{\ast}\theta)$$ for all $(g,h) \in {\mathcal{G}}_2$, where ${pr}_1,{pr}_2: {\mathcal{G}}_2 \to {\mathcal{G}}$ denote the canonical projections. See subsection \[Lie groupoids\] for the definition of representation. \[ex: linear forms\] A vector bundle $\pi:F \to M$ can be seen as a Lie groupoid with multiplication given by fiberwise addition (a bundle of abelian groups). In this case, any vector bundle $E$ over $M$ can be seen as a trivial representation of $F$ ($e\cdot f= f$). Then a multiplicative form $\theta\in \Omega^k(F, \pi^{\ast}E)$ is called a linear form (see definition \[lin-form\]). Thus, if $a: F\times_{\pi}F \to F$ denotes the addition of $F$, then $\theta$ is linear if and only if $$a^{\ast}\theta = {pr}_1^{\ast}\theta + {pr}_2^{\ast} \theta.$$ An example is the linear Cartan 1-form associated to a vector bundle $E$, $$\theta\in \Omega^1({J}^1E, E).$$ Here $F= {J}^1E\to M$ is the first jet bundle of $E$. The 1-form $\theta$ is the Cartan form given in \[PDEs; the Cartan forms:\] taking $R=E$ a vector bundle, and using the canonical identification of $T^\pi E$ with $E$. Important examples are the classical Cartan forms on jet groupoids. However, they will be treated separately in chapter \[Pfaffian groupoids\]. \[ex: form on base\] Any form $\omega \in \Omega^k(M,E)$ induces a multiplicative form $\delta(\omega)\in \Omega^k({\mathcal{G}}, t^{\ast}E)$: $$\delta(\omega)_g = g\cdot s^{\ast}\omega - t^{\ast}\omega .$$ Forms of this type will be called cohomologically trivial (for indications of the terminology, see also subsection \[1-cocyles and relations to the van Est map\]). Note that the classical Cartan form $\theta\in \Omega^1(\Pi^1(M), TM)$ is of this type: it is $\delta(\iota)$, where $\iota\in \Omega^1(M, TM)$ is the identity of $TM$. For higher jets however, $\theta\in \Omega^1(\Pi^k(M), J^{k-1}TM)$ is not cohomologically trivial. \[bisections\] Here is a point which, at least implicitly, is at the heart of our approach to multiplicative forms: recall from definition \[definition-bisections\] that the set of bisections $\operatorname{Bis}({\mathcal{G}})$ forms a group. Given a multiplicative form $\theta \in \Omega^k({\mathcal{G}}, t^{\ast}E)$, one can talk about $\theta$-holonomic bisections of ${\mathcal{G}}$, i.e. those with the property that $b^{\ast}\theta= 0$. The multiplicativity of $\theta$ ensures that the set $\textrm{Bis}({\mathcal{G}},\theta)$ of $\theta$-holonomic bisections is a subgroup of $\textrm{Bis}({\mathcal{G}})$. Spencer operators of degree $k$ {#Spencer operators of degree $k$} ------------------------------- Passing to the infinitesimal side, let $E$ be a representation of a Lie algebroid $A$ with $A$-connection $$\begin{aligned} \nabla:\Gamma(A)\times \Gamma(E){\longrightarrow}\Gamma(E). \end{aligned}$$ See subsection \[Lie algebroids\].
Each $\alpha\in \Gamma(A)$ induces a Lie derivative operator ${L}_{\alpha}$ acting on $\Omega^k(M, E)$, which acts as ${L}_{\rho(\alpha)}$ on forms and as $\nabla_{\alpha}$ on $\Gamma(E)$: $$({L}_{{\alpha}}\omega)(X_1, \ldots, X_k) = \nabla_{{\alpha}}(\omega(X_1, \ldots, X_k)) - \sum_i\omega(X_1, \ldots, [\rho({\alpha}),X_i], \ldots , X_k).$$ \[def-Spencer-oprt\]Given a Lie algebroid $A$ over $M$ and a representation $E$ of $A$, an **$E$-valued $k$-Spencer operator on $A$** is a linear operator $$D: \Gamma(A) {\longrightarrow}\Omega^k(M, E)$$ together with a vector bundle map $$l:A {\longrightarrow}\wedge^{k-1}T^{\ast}M\otimes E,$$ called the **symbol** of the Spencer operator, satisfying the Leibniz identity $$\label{eq: Leibniz identity} D(f{\alpha}) = fD({\alpha}) + {d}f\wedge l({\alpha}),$$ and the compatibility conditions: $$\begin{aligned} & D([{\alpha},{\beta}] ) = {L}_{{\alpha}}D({\beta}) - {L}_{{\beta}}D({\alpha}) \label{eq: compatibility-1}\\ & l([{\alpha},{\beta}]) = {L}_{{\alpha}}l({\beta}) - i_{\rho({\beta})}D({\alpha}) \label{eq: compatibility-2}\\ & i_{\rho({\alpha})}l({\beta}) = -i_{\rho({\beta})}l({\alpha}), \label{eq: compatibility-3} \end{aligned}$$ for all ${\alpha},{\beta}\in \Gamma(A)$, and $f \in {\ensuremath{\mathrm{C}^{\infty}}}(M)$. When we are not in the “special case” $k= \textrm{dim}(M)+ 1$, the entire information is contained in $D$ and we only have to require (\[eq: compatibility-1\]) and (\[eq: compatibility-3\]). Indeed, in this case $l$ will be unique and (\[eq: compatibility-2\]) follows from (\[eq: compatibility-1\]) and Leibniz identities (plug in the first equation $f\beta$ instead of $\beta$ and expand using Leibniz). The fact that one has to adopt the previous definition so that it includes the “special case” $k= \textrm{dim}(M)+ 1$ is unfortunate because this case is not important for our main motivating purpose (when $k= 1$). Keeping all these in mind, we will simply say that $D$ is a Spencer operator and that $l$ is the symbol of $D$. Again, a large class of examples is the classical Spencer operator on the $k$-jet bundles of a vector bundle. This was treated in example \[Spencer tower\]. \[rk-convenient\] Note that a general $E$-valued Spencer operator of degree $1$ as in the previous definition can be encoded/interpreted in a vector bundle map $$j_{D}: A{\longrightarrow}{J}^1E .$$ in the same way as in remark \[rk-convenient’\]. Again, $D$ itself can be recovered as the composition $D^{\textrm{clas}}\circ j_{D}$. For the classical Spencer operator, it corresponds to $j_{D^{\textrm{clas}}}= \textrm{Id}$. \[ex: form on base-2\]Here is the infinitesimal analogue of the cohomologically trivial forms of example \[ex: form on base\]: for any algebroid $A$ and any representation $E$ of $A$, any form $\omega\in \Omega^k(M, E)$ induces an $E$-valued Spencer operator by $$D({\alpha}) = {L}_{{\alpha}}\omega, \quad l({\alpha}) = i_{\rho({\alpha})}\omega .$$ The integrability theorem for multiplicative forms; Theorem 1 {#The Lie functor} ------------------------------------------------------------- \[t1\] Let $E$ be a representation of a Lie groupoid ${\mathcal{G}}$ and let $A$ be the Lie algebroid of ${\mathcal{G}}$. 
Then any multiplicative form $\theta \in \Omega^k({\mathcal{G}}, t^{\ast}E)$ induces an $E$-valued Spencer operator $D_{\theta}$ of order $k$ on $A$, given by $$\label{eq: explicit formula}\small{ \left\{\begin{aligned} D_{\theta}({\alpha})_x(X_1, \ldots, X_k) &= \frac{{d}}{{d}{\epsilon}}\Big|_{{\epsilon}= 0} \phi^{{\epsilon}}_{{\alpha}}(x)^{-1}\cdot\theta(({d}\phi^{{\epsilon}}_{{\alpha}})_x(X_1), \ldots, ({d}\phi^{{\epsilon}}_{{\alpha}})_x(X_k)), \\ \\l_{\theta}({\alpha}) &= u^{\ast}(i_{{\alpha}}\theta) .\end{aligned}\right.}$$ where $\phi_{\alpha}^\epsilon:M\to {\mathcal{G}}$ is the flow of $\alpha$ (see remark \[flows of sections\]). If ${\mathcal{G}}$ is $s$-simply connected, then this construction defines a 1-1 correspondence between multiplicative $E$-valued $k$-forms on ${\mathcal{G}}$ and $E$-valued $k$-Spencer operators on $A$. \[linear Spencer\] Let us look again at the case when $A= F$ is a Lie algebroid with trivial bracket and anchor; then theorem \[t1\] gives a one-to-one correspondence between linear forms $\theta \in \Omega^k(F,\pi^{\ast}E)$ and operators $D: \Gamma(F) \to \Omega^k(M, E)$, with a symbol map $l:F \to \wedge^{k-1}T^{\ast}M\otimes E$, satisfying Leibniz (all compatibility conditions are automatically satisfied). This actually indicates a possible strategy, in the spirit of [@BursztynCabrera], but which we will not follow here, to prove theorem \[t1\]: given $\theta\in\Omega^k({\mathcal{G}}, t^{\ast}E)$, first “linearize” $\theta$ to a linear form $\theta_{0}\in \Omega^k(A, t^{\ast}E)$ then consider the associated Spencer operator $D$, carefully book-keeping all the equations involved. ### The case $k=0$: 1-cocycles and the van Est map {#1-cocyles and relations to the van Est map} Another interesting case of the main theorem is $k= 0$, when we recover the van Est map relating differentiable and algebroid cohomology [@WeinsteinXu; @Crainic:Van], in degree $1$. Since this case will be used later on and also in order to fix the terminology, we discuss it separately here. In particular, we will provide a simple direct argument, based on Lie’s II theorem [@MoerdijkMrcun; @Mackenzie:General] which says that, if ${\mathcal{G}}$ is $s$-simply connected and ${\mathcal{H}}$ is any Lie groupoid, then for any Lie algebroid morphism $\varphi: A({\mathcal{G}}) \to A({\mathcal{H}})$, there exists a unique Lie groupoid morphism $\Phi: {\mathcal{G}}\to {\mathcal{H}}$ such that $\varphi = {d}\Phi |_{A({\mathcal{G}})}$. Let ${\mathcal{G}}$ be a Lie groupoid over $M$, and $E$ a representation of ${\mathcal{G}}$. We will denote by ${\mathcal{G}}^{(p)}$ the space of strings of $p$ composable arrows on ${\mathcal{G}}$, and by $t: {\mathcal{G}}^{(p)} \to M$ the map which associates to $(g_1, \ldots, g_p)$ the point $t(g_1)$. A differentiable $p$-cochain on ${\mathcal{G}}$ with values in $E$ is a smooth section $c: {\mathcal{G}}^{(p)} \to t^{\ast}E$. We denote the space of all such cochains by $C^p({\mathcal{G}},E)$. The differential $\delta: C^p({\mathcal{G}},E) \to C^{p+1}({\mathcal{G}},E)$ is $$\begin{aligned} \delta c (g_1, \ldots, g_{p+1}&) = g_1c(g_2, \ldots, g_{p+1}) +\\ &+\sum_{i=1}^p(-1)^i c(g_1, \ldots , g_ig_{i+1}, \ldots ,g_{p+1}) +(-1)^{p+1}c(g_1, \ldots, g_p). \end{aligned}$$ For each $p$ we consider the space of $p$-cocycles $$Z^p({\mathcal{G}},E) = \ker (\delta: C^p({\mathcal{G}},E) \to C^{p+1}({\mathcal{G}},E)).$$ We recognize multiplicative forms $c\in \Omega^0({\mathcal{G}}, E)$ as elements of $Z^1({\mathcal{G}},E)$.
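For later use, let us spell out the cocycle condition in the degree relevant for us; this is just an unwinding of the definitions. For $p=1$ and $c\in C^1({\mathcal{G}},E)$, $$\begin{aligned} \delta c(g,h)=g\cdot c(h)-c(gh)+c(g),\end{aligned}$$ so $c\in Z^1({\mathcal{G}},E)$ if and only if $$\begin{aligned} c(gh)=c(g)+g\cdot c(h)\end{aligned}$$ for all composable arrows $g,h$, which is precisely the multiplicativity condition (\[eq: multiplicative\]) for a $0$-form.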
At the infinitesimal level, given a representation $\nabla$ of $A$ on $E$, one defines the de Rham cohomology of $A$ with coefficients in $E$ as the cohomology of the complex $(C^{\ast}(A,E), d)$, where $C^{p}(A,E) = \Gamma(\wedge^pA^{\ast}\otimes E)$ and $$\begin{aligned} d\omega({\alpha}_0, \ldots, {\alpha}_p) = \sum_{i}&(-1)^i\nabla_{{\alpha}_i}\omega({\alpha}_0, \ldots, \widehat{{\alpha}_i},\ldots, {\alpha}_p) +\\ &+\sum_{i<j}(-1)^{i+j}\omega([{\alpha}_i, {\alpha}_j], {\alpha}_0, \ldots, \widehat{{\alpha}_i}, \ldots, \widehat{{\alpha}_j}, \ldots, {\alpha}_p). \end{aligned}$$ As above, the space of $p$-cocycles is denoted by $Z^p(A,E)$ and we recognize the $E$-valued $0$-Spencer operators as $1$-cocycles on ${A}$ with values in $E$. The van Est map is a map of cochain complexes $${\vartheta}: C^{\ast}({\mathcal{G}}, E) {\longrightarrow}C^{\ast}(A,E),$$ which induces an isomorphism in cohomology under certain connectedness conditions on the $s$-fibers [@WeinsteinXu; @Crainic:Van]. We will concentrate on degree $1$. In this case, for $c\in C^{1}({\mathcal{G}}, E)$, ${\vartheta}(c)$ is given by: $$\vartheta_x(c)({\alpha}) = \frac{{d}}{{d}{\epsilon}}\big{|}_{{\epsilon}= 0}g_{{\epsilon}}^{-1}c(g_{{\epsilon}}),$$ where ${\alpha}\in A_x$, and $g_{{\epsilon}}$ is any curve in $s^{-1}(x)$ such that $g_0 = 1_x$, $\frac{{d}}{{d}{\epsilon}}\big{|}_{{\epsilon}= 0}g_{{\epsilon}} = {\alpha}$. Using $g_{\epsilon}= \phi^{\epsilon}_{\alpha}(x)$, we recognize our formula for the Spencer operator from theorem \[t1\]. Hence our main theorem gives: \[prop: Van Est\] If ${\mathcal{G}}$ is $s$-simply connected, then the van Est map induces an isomorphism $${\vartheta}: Z^1({\mathcal{G}}, E) {\longrightarrow}\Omega^1_{\mathrm{cl}}(A,E),$$ where $\Omega_{\mathrm{cl}}^1(A,E)$ denotes the closed elements of $\Omega^1(A,E)$. Consider the semi-direct product ${\mathcal{G}}\ltimes E {\rightrightarrows}M$, whose space of arrows consists of pairs $(g,v) \in {\mathcal{G}}\times E$ with $t(g) = \pi(v)$ and $$s(g,v) = s(g), \quad t(g,v) = t(g) = \pi(v), \quad (g,v)(h,w) = (gh, v+gw) .$$ The condition that $c \in C^1({\mathcal{G}},E)$ is a cocycle is equivalent to the fact that $$\tilde{c}:= (\textrm{Id}, c): {\mathcal{G}}{\longrightarrow}{\mathcal{G}}\ltimes E$$ is a morphism of groupoids. The Lie algebroid of ${\mathcal{G}}\ltimes E$ is $A\ltimes E = A\oplus E$ with $$\rho({\alpha},s) = \rho({\alpha}),\quad\text{and}\quad [({\alpha},s),(\tilde {\alpha},s')] = ([{\alpha},\tilde {\alpha}], \nabla_{{\alpha}}s' - \nabla_{\tilde {\alpha}}s).$$ As before, $\omega \in C^1(A,E)$ is a cocycle if and only if $\tilde{\omega}:= (\textrm{Id}, \omega) : A{\longrightarrow}A \ltimes E$ is a Lie algebroid morphism. It is easy to see that, for $c \in Z^1({\mathcal{G}},E)$, the Lie functor applied to $\tilde{c}$ is $\widetilde{{\vartheta}(c)}$; hence the result follows from Lie II. \[along-s\] Sometimes it is more natural to consider cochains along $s$, i.e. sections of the pull-back of $E$ via $s: {\mathcal{G}}^{(p)} \to M$, $(g_1, \ldots, g_p)\mapsto s(g_p)$. In degree $1$ one deals with $c\in \Gamma({\mathcal{G}}, s^{\ast}E)$ and the cocycle condition becomes $$c(gh)= h^{-1}\cdot c(g)+ c(h) .$$ Of course, one can just pass to cocycles in the previous sense by considering $$\overline{c}(g)= g^{-1}\cdot c(g).$$ Note that the associated algebroid cocycle is simply ${\vartheta}(\overline{c})(\alpha_x)= (dc)_x(\alpha_x)$.
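As a sanity check (a sketch obtained by simply specializing the statements above), consider the case where $M$ is a point, so that ${\mathcal{G}}=G$ is a Lie group, $A={\mathfrak{g}}$ is its Lie algebra and $E=V$ is a $G$-representation. A cocycle $c\in Z^1(G,V)$ is a smooth map satisfying $c(gh)=c(g)+g\cdot c(h)$; in particular $c(1)=0$, and the formula for ${\vartheta}$ reduces to $$\begin{aligned} {\vartheta}(c)({\alpha})=\frac{d}{d\epsilon}\Big|_{\epsilon=0}c(g_\epsilon)=d_1c({\alpha}),\end{aligned}$$ a Lie algebra $1$-cocycle ${\mathfrak{g}}\to V$. Proposition \[prop: Van Est\] then specializes to the classical fact that, for $G$ simply connected, smooth group $1$-cocycles correspond bijectively to Lie algebra $1$-cocycles.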
\[remark-cocycles\] Yet another interpretation of $1$-cocycles is obtained when $E= \mathbb{R}$ is the trivial representation of ${\mathcal{G}}$ (and of $A$). On the groupoid side, any 1-cocycle $c\in C^{1}({\mathcal{G}})$ induces a representation, denoted $\mathbb{R}_c$, of ${\mathcal{G}}$ on the trivial line bundle: $$g\cdot t:= e^{c(g)}t .$$ When the $s$-fibers of ${\mathcal{G}}$ are connected, it is not difficult to see that this gives a 1-1 correspondence. Similarly, one has a 1-1 correspondence between $1$-cocycles on $A$ and structures of representations of $A$ on the trivial line bundle; given $a\in Z^1(A)$, the corresponding representation, denoted $\mathbb{R}_{a}$, is determined by $$\nabla_{{\alpha}}(1)= a({\alpha}) .$$ It is clear that, in this case, the van Est map (and proposition \[prop: Van Est\]) becomes the Lie functor between representations of ${\mathcal{G}}$ and those of $A$. For later use, we give the following: \[prop: appendix\] Let $v \in T_g{\mathcal{G}}$ be a vector tangent to the $t$-fiber of ${\mathcal{G}}$, and let $c \in Z^1({\mathcal{G}},E)$ be a cocycle. Then $${d}_gc(v) = g\cdot {d}_{s(g)} c({d}_gL_{g^{-1}}v).$$ Let $\gamma: I \to {\mathcal{G}}$ be a curve in the $t$-fiber through $g$ whose velocity at ${\epsilon}= 0$ is $v$. Since $c$ is a cocycle, it follows that $$c(g^{-1}\cdot\gamma({\epsilon})) = g^{-1}\cdot c(\gamma({\epsilon})) + c(g^{-1}),$$ for all ${\epsilon}\in I$. Finally, one differentiates w.r.t. ${\epsilon}$. ### Example: trivial coefficients {#sec: trivial coefficients} Another interesting case of the main theorem is when $E$ is the trivial representation. This case was well studied because of its relevance to Poisson geometry. Our Theorem \[t1\] recovers the most general results in this context. The key remark is that, when $E= \mathbb{R}$ with the trivial action, any bundle map $l:A \to \wedge^{k-1}T^*M$ is canonically the symbol of a $E$-Spencer operator, namely ${\alpha}\mapsto d(l({\alpha}))$. We see that any $E$-Spencer operator can be decomposed as $$D({\alpha})= \nu({\alpha})+ d(l({\alpha})),$$ where, this time, $\nu$ is tensorial. Rewriting everything in terms of $\nu$ and ${\alpha}$ we obtain the result of [@CrainicArias; @BursztynCabrera]: \[prop: trivial coefficients\] Let ${\mathcal{G}}$ be an $s$-simply connected Lie groupoid with Lie algebroid $A$. Then there is a one-to-one correspondence between multiplicative forms $\theta \in \Omega^k({\mathcal{G}})$ and pairs $(\nu, l)$ consisting of vector bundle maps $$\left\{\begin{aligned} \nu:A&{\longrightarrow}\wedge^kT^*M\\ l:A&{\longrightarrow}\wedge^{k-1}T^*M \end{aligned}\right.$$ satisfying $$\label{eq: compatibility-tc}\left\{ \begin{aligned} \nu([{\alpha},{\beta}] ) &= {L}_{\rho({\alpha})}\nu({\beta}) - i_{\rho({\beta})}d\nu({\alpha})\\ i_{\rho({\beta})}\nu({\alpha}) &= {L}_{\rho({\alpha})}l({\beta}) - i_{\rho({\beta})}dl({\alpha})- l([{\alpha},{\beta}])\\ i_{\rho({\alpha})}l({\beta}) &= -i_{\rho({\beta})}l({\alpha}), \end{aligned}\right.$$ for all ${\alpha},{\beta}\in \Gamma(A)$. 
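Before writing out the explicit formulas for this correspondence, let us record a short verification (a sketch, using only the Leibniz identity (\[eq: Leibniz identity\]) and the fact that $l$ is a bundle map) of the claim made above that $\nu:=D-d\circ l$ is indeed tensorial: $$\begin{aligned} \nu(f{\alpha})=D(f{\alpha})-d(l(f{\alpha}))=fD({\alpha})+df\wedge l({\alpha})-df\wedge l({\alpha})-f\,d(l({\alpha}))=f\,\nu({\alpha}),\end{aligned}$$ for all $f\in{\ensuremath{\mathrm{C}^{\infty}}}(M)$ and ${\alpha}\in\Gamma(A)$.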
The correspondence $\theta\mapsto (\nu_{\theta}, l_{\theta})$ is given explicitly by $$\nu_{\theta}({\alpha})=u^{\ast}(i_{{\alpha}}{d}\theta), \ \ l_{\theta}({\alpha})=u^{\ast}(i_{\alpha}\theta).$$ For $\phi\in \Omega^{k+1}(M)$, the cohomologically trivial form $\delta(\phi)= s^{\ast}\phi- t^{\ast}\phi$ gives (see examples \[ex: form on base\] and \[ex: form on base-2\]) $$\nu_{\delta(\phi)}({\alpha})= -i_{\rho({\alpha})}(d\phi) ,\qquad l_{\delta(\phi)}({\alpha})= -i_{\rho({\alpha})}(\phi),$$ hence one obtains an infinitesimal characterization for cohomological triviality. Note that in the case of trivial coefficients it makes sense to talk about the de Rham differential of a multiplicative form (itself multiplicative). From the last formulas in the proposition, we have: $$\nu_{d\theta}= 0,\qquad l_{d\theta}= \nu_{\theta},$$ hence one immediately obtains infinitesimal characterizations of multiplicative forms which are closed. More generally, given $\phi\in \Omega^{k+1}(M)$ closed, one says that $\theta\in \Omega^k({\mathcal{G}})$ is $\phi$-closed if $d\theta= s^{\ast}\phi- t^{\ast}\phi$. One obtains for instance the following, which when $k= 2$ gives the main result of [@BCWZ]. \[cor-B-field\] Assume that ${\mathcal{G}}$ is $s$-simply connected, $\phi\in \Omega^{k+1}(M)$ closed. Then there is a bijection between $\phi$-closed, multiplicative $\theta \in \Omega^k({\mathcal{G}})$ and $$l: A {\longrightarrow}\Lambda^{k-1}T^{\ast}M$$ which are vector bundle maps satisfying $$\left\{\begin{aligned} l([{\alpha},{\beta}]) &= {L}_{\rho({\alpha})}l({\beta}) - i_{\rho({\beta})}{d}l ({\alpha})+ i_{\rho({\alpha})\wedge \rho({\beta})}(\phi) \\ i_{\rho({\alpha})} l({\beta}) &= -i_{\rho({\beta})}l({\alpha}). \end{aligned}\right.$$ \[sympl-gpds-from-t1\] We briefly recall here the Poisson case. Let $(M, \pi)$ be a Poisson manifold and let $A= T^{\ast}M$ be endowed with the induced algebroid structure (see e.g. [@Zung]), which is assumed to come from an $s$-simply connected Lie groupoid $\Sigma$. Then proposition \[prop: trivial coefficients\] can be applied to $k= 2$, $\phi= 0$ and $l= \textrm{Id}_{T^{\ast}M}$; this gives rise to $\omega\in \Omega^2(\Sigma)$ closed and multiplicative, and also non-degenerate (because $l$ is an isomorphism), making $\Sigma$ into a symplectic groupoid. ### Proof of Theorem \[t1\] {#sec: linearization} #### Rough idea and some heuristics behind the proof {#Rough idea and some heuristics behind the proof} Let us briefly indicate the intuition behind our approach (the pseudogroup point of view). The main idea is to reinterpret $\theta$ in terms of bisections: it gives rise to (and is determined by) a family of $k$-forms $\{ \theta_b: b\in \textrm{Bis}({\mathcal{G}})\}$; here $\theta_b\in \Omega^k(M, E)$ is obtained by pulling $\theta$ back to $M$ via $b$ and using the action of ${\mathcal{G}}$ on $E$. In other words, $\theta$ is encoded in the map $$\Theta: \textrm{Bis}({\mathcal{G}}){\longrightarrow}\Omega^k(M, E),\ \ b\mapsto \theta_b ;$$ the multiplicativity of $\theta$ translates into a cocycle condition for $\Theta$ on the group $\textrm{Bis}({\mathcal{G}})$. Hence, morally (because we are in infinite dimensions), the infinitesimal counterpart of $\theta$ is encoded in the linearization ${\vartheta}(\Theta)$ of $\Theta$ (as in subsection \[1-cocyles and relations to the van Est map\]). While $\Gamma(A)$ plays the role of the Lie algebra of $\textrm{Bis}({\mathcal{G}})$ (cf. Remark \[flows of sections\]), one arrives at ${\vartheta}(\Theta)= D$ given in the theorem.
However, to prove the theorem, we have to avoid the infinite dimensional problem and work (still in the spirit of Lie pseudogroups) with jet spaces: since $\Theta$ depends only on first order jets of bisections, it can be reinterpreted as a finite dimensional object: a map $$c: {J}^1{\mathcal{G}}\longrightarrow {\mathrm{Hom}}(\wedge^kTM, E),$$ which is a $1$-cocycle for the groupoid ${J}^1{\mathcal{G}}$. Hence, instead of applying the integration of cocycles (as in Proposition \[prop: Van Est\]) to the infinite dimensional $\textrm{Bis}({\mathcal{G}})$, we will apply it to the groupoid ${J}^1{\mathcal{G}}$. Here one encounters a small technical problem: ${J}^1{\mathcal{G}}$ may have $s$-fibers which are not simply connected (not even connected), so we will have to pass to the closely related groupoid $\widetilde{{J}^1{\mathcal{G}}}$, which has the same Lie algebroid ${J}^1A$, but which is $s$-simply connected: $$\tilde{c}: \widetilde{{J}^1{\mathcal{G}}} \longrightarrow {\mathrm{Hom}}(\wedge^kTM, E).$$ Then one has to concentrate on the linearization cocycle $$\eta: {J}^1A \longrightarrow {\mathrm{Hom}}(\wedge^kTM, E),$$ which, keeping the decomposition (\[J-decomposition\]) in mind, is precisely the pair $(D, l)$ consisting of the Spencer operator and its symbol. The rest is about working out the details and finding out the precise equations that $c$ and $\eta$ have to satisfy. Throughout this section, ${\mathcal{G}}$ denotes a Lie groupoid over $M$, $E$ is a representation of ${\mathcal{G}}$, and ${J}^1{\mathcal{G}}$ denotes the Lie groupoid of $1$-jets of bisections of ${\mathcal{G}}$. Each one of the next subsections is devoted to one of the 1-1 correspondences $$\theta \longleftrightarrow c \longleftrightarrow \tilde{c} \longleftrightarrow \eta .$$ #### From multiplicative forms to differentiable cocycles {#sec: step 1} Recall that $\lambda: {J}^1{\mathcal{G}}\to \operatorname{GL}(TM)$ denotes the canonical representation of ${J}^1{\mathcal{G}}$ on $TM$ and ${\text{\rm Ad}\,}: {J}^1{\mathcal{G}}\to \operatorname{GL}(A)$ denotes the adjoint representation of ${J}^1{\mathcal{G}}$ on the Lie algebroid $A$ of ${\mathcal{G}}$ (see subsection \[Jet groupoids and algebroids\]). Combining these two actions, $${\mathrm{Hom}}(\wedge^kTM, E)$$ becomes a representation of ${J}^1{\mathcal{G}}$. Using notation from subsection \[1-cocyles and relations to the van Est map\], we will denote the associated space of $1$-cocycles by $$Z^{1}({J}^1{\mathcal{G}}, {\mathrm{Hom}}(\wedge^kTM, E)).$$ Moreover, as in remark \[when working with jets\], we will view the elements of ${J}^1{\mathcal{G}}$ as splittings $\sigma_g: T_{s(g)}M\to T_g{\mathcal{G}}$ of $ds$.
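For readability in the computations below, let us also recall (a reminder, with the conventions we assume from the subsection on jet groupoids and algebroids) how these structures look in terms of splittings: if $\sigma_g=d_{s(g)}b$ for a (local) bisection $b$ through $g$, then the canonical representation acts by $$\begin{aligned} \lambda_{\sigma_g}=d(t\circ b)=dt\circ\sigma_g:T_{s(g)}M{\longrightarrow}T_{t(g)}M,\end{aligned}$$ while the adjoint action is given, in the form used below, by $$\begin{aligned} {\text{\rm Ad}\,}_{\sigma_g}({\alpha})=R_{g^{-1}}\big({d}_{(g,s(g))}m(\sigma_g(\rho({\alpha})),{\alpha})\big)\in A_{t(g)},\qquad {\alpha}\in A_{s(g)}.\end{aligned}$$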
In particular, for $\sigma_g, \sigma_g'\in {J}^1{\mathcal{G}}$ sitting above the same $g\in {\mathcal{G}}$, $\sigma_g- \sigma_g'$ takes values in $Ker(ds)_g$; identifying the last space with $A_{t(g)}$, we consider the resulting map $$\sigma_g \ominus \sigma_g':= R_{g^{-1}}\circ (\sigma_g - \sigma'_g): T_{s(g)}M{\longrightarrow}A_{t(g)}.$$ \[prop: multiplicative forms as cocycles\] There is a one-to-one correspondence between multiplicative $k$-forms $\theta \in \Omega^k({\mathcal{G}}, t^{\ast}E)$, and pairs $(c, l)$ with $$\left\{\begin{aligned} c\in Z^{1}({J}^1{\mathcal{G}}, {\mathrm{Hom}}(\wedge^kTM, E))\\ l: A \longrightarrow {\mathrm{Hom}}(\wedge^{k-1}TM, E) \end{aligned}\right.$$ satisfying the following equations: $$\label{eq: condition 1}\begin{aligned} c&(\sigma_g)(\lambda_{\sigma_g}v_1, \ldots , \lambda_{\sigma_g}v_k) - c(\sigma_g')(\lambda_{\sigma_g'}v_1, \ldots , \lambda_{\sigma_g'}v_k) = \\ &=\sum_{i = 1}^k(-1)^{i+1}l((\sigma_g \ominus \sigma_g')(v_i))(\lambda_{\sigma_g}v_1,\ldots,\lambda_{\sigma_g}v_{i-1}, \lambda_{\sigma_g'}v_{i+1},\ldots, \lambda_{\sigma_g'}v_k), \end{aligned}$$ $$\label{eq: condition 2} i_{\rho({\alpha})}l({\beta}) = -i_{\rho({\beta})}l({\alpha}),$$ $$\label{eq: condition 3}\begin{aligned} l({\text{\rm Ad}\,}_{\sigma_g}{\alpha})&(\lambda_{\sigma_g}v_1, \ldots , \lambda_{\sigma_g}v_{k-1}) - g\cdot l({\alpha})(v_1,\ldots,v_{k-1}) = \\ &=c(\sigma_g)(\lambda_{\sigma_g}\rho({\alpha}),\lambda_{\sigma_g}v_1, \ldots , \lambda_{\sigma_g}v_{k-1}), \end{aligned}$$ for all splittings $\sigma_g, \sigma_g'\in {J}^1{\mathcal{G}}$, $v_1, \ldots, v_k \in T_{x}M$, and ${\alpha}, {\beta}\in A_x$, where $x= s(g)$. For the proof of the proposition we will need the following lemma: \[lemma: theta vertical\]For any multiplicative form $\theta \in \Omega^k({\mathcal{G}},t^{\ast}E)$, and any ${\alpha}_g \in \ker {d}_gs$, we have that $$\label{eq: theta vertical 1} \theta_g({\alpha}_g, X_1, \ldots, X_{k-1}) = \theta_{t(g)}({\alpha}_{t(g)},{d}_g t(X_1), \ldots, {d}_g t(X_{k-1})),$$ for all $X_1, \ldots, X_{k-1} \in T_g{\mathcal{G}}$, where ${\alpha}_{t(g)} = R_{g^{-1}}({\alpha}_g)$. Notice that we can express ${\alpha}_g$ and $X_i$ as $${\alpha}_g = {d}_{t(g)}R_g({\alpha}_{t(g)}) = {d}_{(t(g),g)}m({\alpha}_{t(g)}, 0_g), \ X_i = {d}_{(t(g),g)}m({d}_gt(X_i), X_i).$$ Equation then follows from the multiplicativity equation (\[eq: multiplicative\]) on the vectors (tangent to ${\mathcal{G}}_2$) $({\alpha}_{t(g)}, 0_g)$, $({d}_gt(X_1), X_1), \ldots$. In one direction, given $\theta \in \Omega^k({\mathcal{G}}, t^{\ast}E)$, $$c(\sigma_g)(w_1, \ldots, w_k) = \theta_g(\sigma_g(\lambda_{\sigma_g}^{-1}(w_1)), \ldots, \sigma_g(\lambda_{\sigma_g}^{-1}(w_k))),$$ for all $\sigma_g\in {J}^1{\mathcal{G}}$ and $w_1, \ldots, w_k \in T_{t(g)}M$, and $$l({\alpha})(v_1, \ldots, v_{k-1}) = \theta_x({\alpha}, v_1, \ldots, v_{k-1}),$$ for all ${\alpha}\in A_x$, and $v_1, \ldots, v_{k-1}\in T_xM$. The desired equations for $(c,l)$ will be proven for $k= 2$, which reveals all the necessary arguments but keeps the notation simpler. Note that in this case lemma \[lemma: theta vertical\] translates into $$\label{eq: theta vertical} \theta_g({\alpha}_g, X) = l({\alpha}_{t(g)})({d}_gt(X)),$$ for all $X \in T_g{\mathcal{G}}$, ${\alpha}_g \in \ker {d}_gs$, where ${\alpha}_{t(g)} = R_{g^{-1}}({\alpha}_g)$. 
To prove \eqref{eq: condition 1} (for $k= 2$), using the definition of $c$, we find that the left hand side of the equation is $$\begin{aligned} \theta_g(\sigma_g(v_1), & \sigma_g(v_2)) - \theta_g(\sigma_g'(v_1),\sigma_g'(v_2)) = \\ & =\theta_g(\sigma_g(v_1) - \sigma'_g(v_1), \sigma_g(v_2)) + \theta_g(\sigma_g'(v_1), \sigma_g(v_2) - \sigma_g'(v_2)). \end{aligned}$$ Applying \eqref{eq: theta vertical} with ${\alpha}_{g}= \sigma_g(v_i) - \sigma_g'(v_i)$, $i\in \{1, 2\}$, we obtain \eqref{eq: condition 1}. The same equation \eqref{eq: theta vertical}, combined with the skew symmetry of $\theta$, gives \eqref{eq: condition 2}: $$l({\alpha})(\rho({\beta})) = \theta({\alpha},{\beta}) = -\theta({\beta}, {\alpha}) = -l({\beta})(\rho({\alpha})).$$ Next we prove \eqref{eq: condition 3}. Using the formula for the adjoint representation, $$l({\text{\rm Ad}\,}_{\sigma_g}{\alpha})(\lambda_{\sigma_g}v) = \theta_{t(g)}(R_{g^{-1}} \circ {d}_{(g, s(g))}m(\sigma_g(\rho({\alpha})), {\alpha}),\lambda_{\sigma_g}v).$$ Using again the equation \eqref{eq: theta vertical}, the last expression is $$\theta_g({d}_{(g, s(g))}m(\sigma_g(\rho({\alpha})), {\alpha}),\sigma_g(v)).$$ Combining with $\sigma_g(v) = {d}_{(g,s(g))}m(\sigma_g(v), v)$ and then applying the multiplicativity equation for $\theta$, we arrive at the right hand side of \eqref{eq: condition 3}. We are left with proving the cocycle equation $\delta c = 0$. Let $(\sigma_g, \sigma_h) \in {J}^1{\mathcal{G}}^{(2)}$ be a pair of composable arrows. Then $\delta c(\sigma_g, \sigma_h)$ is the map $\wedge^2T_{t(g)}M \to E_{t(g)}$, $$\delta c(\sigma_g, \sigma_h)(w_1, w_2) = c(\sigma_g)(w_1, w_2) + g\cdot(c(\sigma_h)(\lambda_{\sigma_g}^{-1}w_1, \lambda_{\sigma_g}^{-1}w_2)) - c(\sigma_g\cdot\sigma_h)(w_1, w_2).$$ Let $v_i = \lambda_{\sigma_g}^{-1}w_i \in T_{t(h)}M$. For the sum of the first two terms in the right hand side, after applying the definition of $c$, we find $$\theta_g(\sigma_g(v_1), \sigma_g(v_2))+ g\cdot\theta_h(\sigma_h(\lambda_{\sigma_h}^{-1}v_1), \sigma_h(\lambda_{\sigma_h}^{-1}v_2)).$$ For the last term, using the description (\[mult-i-J1\]) for $\sigma_g\cdot\sigma_h$, we find $$\theta_{gh}({d}_{(g,h)}m(\sigma_g(v_1),\sigma_h(\lambda_{\sigma_h}^{-1}v_1)), {d}_{(g,h)}m(\sigma_g(v_2),\sigma_h(\lambda_{\sigma_h}^{-1}v_2))).$$ Finally, the multiplicativity of $\theta$ implies that the last two expressions coincide. For the reverse direction, let $c$ and $l$ be given, and we construct $\theta$. In order to avoid clumsier notation, we extend $l$ to the entire $\ker ds$: for $\alpha_g\in \ker(ds)_g$: $$l(\alpha_g):= l(R_{g^{-1}}\alpha_g).$$ Given $g$, choose $\sigma_g \in {J}^1{\mathcal{G}}$ and use it to split a vector $X \in T_g{\mathcal{G}}$ into $$X = \sigma_g(v) + {\alpha}_X,$$ where $v = {d}_gs(X) \in T_{s(g)}M$, and ${\alpha}_X = X - \sigma_g(v) \in \ker{d}_gs$. We define $$\begin{aligned} \begin{split} &\theta_g(X_1, \ldots, X_k) = c(\sigma_g)(\lambda_{\sigma_g}v_1, \ldots, \lambda_{\sigma_g}v_k) + \\ &\sum_{\substack{p+q=k \\ \tau \in S(p,q)}}(-1)^{|\tau|}l({\alpha}_{X_{\tau(1)}})(\rho({\alpha}_{X_{\tau(2)}}),\ldots, \rho({\alpha}_{X_{\tau(p)}}), \lambda_{\sigma_g}v_{\tau(p+1)}, \ldots, \lambda_{\sigma_g}v_{\tau(k)}), \end{split}\end{aligned}$$ where $p \geq 1$ and the second summation is taken over all $(p,q)$-shuffles of ${\left\{1,\ldots,k\right\}}$. The rest of the proof will be given again in the case $k= 2$, when the previous formula becomes $$\begin{aligned} \theta_g(X_1, X_2) = c(\sigma_g)&(\lambda_{\sigma_g}v_1,\lambda_{\sigma_g}v_2) - l({\alpha}_{X_2})(\lambda_{\sigma_g}v_1) +\\ &+ l({\alpha}_{X_1})(\lambda_{\sigma_g}v_2) + l({\alpha}_{X_1})(\rho({\alpha}_{X_2})). \end{aligned}$$ Note that \eqref{eq: condition 2} implies that $\theta_g$ is skew-symmetric.
Let us show that it does not depend on the choice of $\sigma_g$. Choose another splitting $\sigma_g'$ and write $\tilde {\alpha}_X = X-\sigma_g'(v)$. Let $\theta'_g$ be the form obtained by using the splitting $\sigma_g'$. Then $$\begin{aligned} (\theta_g - \theta_g')(X_1, X_2) = c(&\sigma_g)(\lambda_{\sigma_g}v_1, \lambda_{\sigma_g}v_2) - c(\sigma_g')(\lambda_{\sigma_g'}v_1, \lambda_{\sigma_g'}v_2) +\\ & - l({\alpha}_{X_2})(\lambda_{\sigma_g}v_1) + l(\tilde {\alpha}_{X_2})(\lambda_{\sigma_g'}v_1)\\ &+ l({\alpha}_{X_1})(\lambda_{\sigma_g}v_2) - l(\tilde {\alpha}_{X_1})(\lambda_{\sigma'_g}v_2) + \\ & + l({\alpha}_{X_1})(\rho({\alpha}_{X_2})) - l(\tilde {\alpha}_{X_1})(\rho(\tilde {\alpha}_{X_2})). \end{aligned}$$ Let us denote by ${\alpha}_{v_i} = \sigma_g(v_i) - \sigma'_g(v_i)$ and notice that $$\label{eq: alpha - alpha'} {\alpha}_{X_i} - \tilde {\alpha}_{X_i} = - {\alpha}_{v_i}, \text{ and } \lambda_{\sigma_g}v_i - \lambda_{\sigma'_g}v_i = \rho({\alpha}_v).$$ By using and the polarization formula for $\theta$, it follows that $$\begin{aligned} (\theta_g - \theta_g')(X_1, X_2) &= l({\alpha}_{v_1})(\lambda_{\sigma_g}v_2) - l({\alpha}_{v_2})(\lambda_{\sigma_g'}v_1) \\ &\quad + l({\alpha}_{v_2})(\lambda_{\sigma_g}v_1) - l(\tilde {\alpha}_{v_2})(\rho({\alpha}_{v_1}))\\ &\quad- l({\alpha}_{v_1})(\lambda_{\sigma_g}v_2) + l(\tilde {\alpha}_{X_1})(\rho({\alpha}_{v_2})) \\ &\quad - l({\alpha}_{v_1})(\rho({\alpha}_{X_2})) - l(\tilde {\alpha}_{X_1})(\rho({\alpha}_{v_2})). \end{aligned}$$ Thus, if we substitute into this expression the consequence $$l({\alpha}_{v_2})(\lambda_{\sigma_g'}v_1) = l({\alpha}_{v_2})(\lambda_{\sigma_g}v_1) - l({\alpha}_{v_2})(\rho({\alpha}_{v_1}))$$ of , almost all of the terms cancel out two-by-two and we are left with $$\begin{aligned} (\theta_g - \theta_g')(X_1, X_2) &= l({\alpha}_{v_2})(\rho({\alpha}_{v_1})) - l(\tilde {\alpha}_{X_2})(\rho({\alpha}_{v_1})) - l({\alpha}_{v_1})\rho({\alpha}_{X_2})\\ &= l({\alpha}_{v_2} - \tilde {\alpha}_{X_2})(\rho({\alpha}_{v_1})) - l({\alpha}_{v_1})(\rho({\alpha}_{X_2}))\\ &= -l({\alpha}_{X_2})(\rho({\alpha}_{v_1})) - l({\alpha}_{v_1})(\rho({\alpha}_{X_2})). \end{aligned}$$ Because of , this expression vanishes and $\theta$ is well defined. We are left with verifying that $\theta$ is multiplicative. Let $g,h \in {\mathcal{G}}$ be composable arrows, and let $(X_i,Y_i) \in T_{(g,h)}{\mathcal{G}}^{(2)}$ so that ${d}_ht(Y_i) = {d}_gs(X_i)$. 
We fix splittings $\sigma_g,\sigma_h \in {J}^1{\mathcal{G}}$ and use them to write $$X_i = {\alpha}_i + \sigma_g(v_i),\quad \text{and}\quad Y_i = {\beta}_i +\sigma_h(w_i).$$ From $(X_i,Y_i) \in T_{(g,h)}{\mathcal{G}}^{(2)}$ it follows that $v_i = \rho({\beta}_i) + \lambda_{\sigma_h}(w_i)$, hence $$X_i = {\alpha}_i +\sigma_g(\rho(\beta_i))+ \sigma_g(\lambda_{\sigma_h}w_i).$$ Decomposing $${d}m(X_i,Y_i) = {d}m({\alpha}_i,0) + {d}m(\sigma_g(\rho(\beta_i)), {\beta}_i)+ {d}m(\sigma_g(\lambda_{\sigma_h}w_i) ,\sigma_h(w_i)).$$ $m^{\ast}\theta_{(g,h)}((X_1,Y_1), (X_2, Y_2))$ gives six types of terms (where $1\leq i \neq j \leq2$): Type 1: : $\theta_{gh}({d}m({\alpha}_1,0), {d}m({\alpha}_2, 0)),$ Type 2: : $\theta_{gh}({d}m({\alpha}_i,0),{d}m(\sigma_g(\rho(\beta_j)), {\beta}_j)),$ Type 3: : $\theta_{gh}({d}m({\alpha}_i,0), {d}m(\sigma_g(\lambda_{\sigma_h}w_j) ,\sigma_h(w_j))),$ Type 4: : $\theta_{gh}({d}m(\sigma_g(\rho(\beta_1)), {\beta}_1),{d}m(\sigma_g(\rho(\beta_2)), {\beta}_2)),$ Type 5: : $\theta_{gh}({d}m(\sigma_g(\rho(\beta_i)), {\beta}_i), {d}m(\sigma_g(\lambda_{\sigma_h}w_j) ,\sigma_h(w_j)))$ Type 6: : $\theta_{gh}({d}m(\sigma_g(\lambda_{\sigma_h}w_1) ,\sigma_h(w_1)), {d}m(\sigma_g(\lambda_{\sigma_h}w_2) ,\sigma_h(w_2)))$ In order to simplify the terms of type 1,2, and 3, we note that $$\begin{aligned} {d}_{(g,h)}m({\alpha}_g,0_h) = R_h({\alpha}_g)\end{aligned}$$ for all $\alpha_g\in \ker(ds)_g$ and thus, by the definition of $\theta$ $$\theta_{gh}({d}m({\alpha}_1,0), {d}m({\alpha}_2, 0)) = l({\alpha}_1)(\rho({\alpha}_2)),$$$$\theta_{gh}({d}m({\alpha}_i,0),{d}m(\sigma_g(\rho(\beta_j)), {\beta}_j)) = l({\alpha}_i)(\lambda_{\sigma_g}\rho({\beta}_j)),$$$$\theta_{gh}({d}m({\alpha}_i,0), {d}m(\sigma_g(\lambda_{\sigma_h}w_j) ,\sigma_h(w_j))) = l({\alpha}_i)(\lambda_{\sigma_g\sigma_h}w_j).$$ On the other hand, using again the formula for the adjoint action, as well as condition , we simplify the terms of type 4 and 5 into $$\begin{aligned} \theta_{gh}({d}m(\sigma_g(\rho(\beta_1)), {\beta}_1),&{d}m(\sigma_g(\rho(\beta_2)), {\beta}_2)) = l({\text{\rm Ad}\,}_{\sigma_g}{\beta}_1)(\lambda_{\sigma_g}(\rho({\beta}_2))), \\ &=c(\sigma_g)(\lambda_{\sigma_g}\rho({\beta}_1), \lambda_{\sigma_g}\rho({\beta}_2)) +g\cdot l({\beta}_1)(\rho({\beta}_2)) \end{aligned}$$$$\begin{aligned} \theta_{gh}({d}m(\sigma_g(\rho(\beta_i)), {\beta}_i), & {d}m(\sigma_g(\lambda_{\sigma_h}w_j) ,\sigma_h(w_j))) = l({\text{\rm Ad}\,}_{\sigma_g}{\beta}_i)(\lambda_{\sigma_g\sigma_h}w_j) \\ &= c(\sigma_g)(\lambda_{\sigma_g}\rho({\beta}_i),\lambda_{\sigma_g\sigma_h}w_j) + g\cdot l({\beta}_i)(\lambda_{\sigma_h}w_j). \end{aligned}$$ Finally, we use condition to express $$\begin{aligned} \theta_{gh}({d}m(\sigma_g(\lambda_{\sigma_h}w_1) &,\sigma_h(w_1)), {d}m(\sigma_g(\lambda_{\sigma_h}w_2) ,\sigma_h(w_2))) = \\&=\theta_g(\sigma_g(\lambda_{\sigma_h}w_1),\sigma_g(\lambda_{\sigma_h}w_2)) + g\cdot \theta_h(\sigma_h(w_1),\sigma_h(w_2)). \end{aligned}$$ Adding everything up, we recognize $\theta_g(X_1,X_2) +g\cdot \theta_h(Y_1,Y_2)$, thus concluding the proof of the proposition. #### Realizing source-simply connectedness {#sec: step 2} As mentioned at the start of the section, we need to pass from ${J}^1{\mathcal{G}}$ to $\widetilde{{J}^1{\mathcal{G}}}$ – the source simply connected Lie groupoid with the same Lie algebroid ${J}^1A$ as ${J}^1{\mathcal{G}}$. See subsection \[Lie functor\] for the construction. 
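Although the details of the construction will not be needed here, let us briefly recall one standard model for it: $\widetilde{{J}^1{\mathcal{G}}}$ can be described as the space of homotopy classes, relative to the end points, of paths that stay inside a single $s$-fiber of ${J}^1{\mathcal{G}}$ and start at a unit, $$\widetilde{{J}^1{\mathcal{G}}}= \left\{ [\gamma]\ :\ \gamma: [0,1]{\longrightarrow}s^{-1}(x),\ \gamma(0)= 1_x,\ x\in M \right\},$$ with source $[\gamma]\mapsto x$, target $[\gamma]\mapsto t(\gamma(1))$, and multiplication induced by concatenation of (right translated) paths.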
Recall that it comes equipped with a groupoid map $$p: \widetilde{{J}^1{\mathcal{G}}} {\longrightarrow}{J}^1{\mathcal{G}},$$ whose image is the subgroupoid $({J}^1{\mathcal{G}})^{0}$ made of the connected component of the identities in ${J}^1{\mathcal{G}}$. For elements in $\widetilde{{J}^1{\mathcal{G}}}$ we will use the notation $\sigma_g$ whenever we want to indicate the point $g\in {\mathcal{G}}$ onto which $\sigma_g$ projects. For $X\in T_{s(g)}{\mathcal{G}}$ we will use the notation $$\sigma_g(X):= p(\sigma_g)(X)$$ and, for $\sigma_g, \sigma_g'\in \widetilde{{J}^1{\mathcal{G}}}$, consider $$\sigma_{g}\ominus \sigma_g':= p(\sigma_g)\ominus p(\sigma_g'): T_{s(g)}M{\longrightarrow}A_{t(g)}.$$ We will use the map $p$ to pull-back structures from ${J}^1{\mathcal{G}}$ to $\widetilde{{J}^1{\mathcal{G}}}$. For instance, any representation of ${J}^1{\mathcal{G}}$ can also be seen as a representation of $\widetilde{{J}^1{\mathcal{G}}}$ and there is an induced pull-back map at the level of the resulting cocycles. In particular, we will consider $$\label{p-ast} p^{\ast}: Z^1({J}^1{\mathcal{G}},{\mathrm{Hom}}(\wedge^kTM,E)) {\longrightarrow}Z^{1}(\widetilde{{J}^1{\mathcal{G}}}, {\mathrm{Hom}}(\wedge^kTM,E)).$$ It is clear that the pairs $(c, l)$ of Proposition \[prop: multiplicative forms as cocycles\] and the equations that they satisfy have an analogue with ${J}^1{\mathcal{G}}$ replaced by $\widetilde{{J}^1{\mathcal{G}}}$, giving rise to pairs $(\tilde{c}, l)$ satisfying identical equations. \[corol: passage to cover\] Let ${\mathcal{G}}$ be an $s$-simply connected Lie groupoid. Then $(c, l) \mapsto (p^{\ast}(c), l)$ defines a 1-1 correspondence between pairs $(c, l)$ satisfying the conditions from proposition \[prop: multiplicative forms as cocycles\] and pairs $(\tilde{c}, l)$ consisting of $$\left\{\begin{aligned} \tilde{c}\in Z^{1}(\widetilde{{J}^1{\mathcal{G}}}, {\mathrm{Hom}}(\wedge^kTM, E))\\ l: A \longrightarrow {\mathrm{Hom}}(\wedge^{k-1}TM, E) \end{aligned}\right.$$ satisfying the conditions from proposition \[prop: multiplicative forms as cocycles\] but with ${J}^1{\mathcal{G}}$ replaced by $\widetilde{{J}^1{\mathcal{G}}}$. Of course, the statement is about $c\mapsto p^{\ast}c$, i.e. we can fix $l$. We begin by showing that $p^{\ast}$ is injective when restricted to the set of $c$ for which holds. To do so we first show that $c$ is determined by its value on the the Lie groupoid $({J}^1{\mathcal{G}})^0$ whose $s$-fibers are the connected component of the identity in the $s$-fibers of ${J}^1{\mathcal{G}}$. Observe that for any $g \in {\mathcal{G}}$, there exists $\sigma_g \in ({J}^1{\mathcal{G}})^0$. In fact, since $({J}^1{\mathcal{G}})^0 \to {\mathcal{G}}$ is a submersion, and ${\mathcal{G}}$ is $s$-connected, we can lift any path in $s^{-1}(s(g))$, starting at the identity and ending at $g$, to a path in $({J}^1{\mathcal{G}})^0$ starting at the identity and ending over $g$. The corresponding end point is an element $\sigma_g \in ({J}^1{\mathcal{G}})^0$. It follows from that for any other $\sigma'_g \in {J}^1{\mathcal{G}}$, $c(\sigma'_g)$ is determined by $c(\sigma_g)$, and $l$. Next, we note that if $p^{\ast}c = p^{\ast}c'$, then $c$ and $c'$ coincide on $({J}^1{\mathcal{G}})^0$. In fact, for any $\sigma_g \in ({J}^1{\mathcal{G}})^0$ we can find a path inside the $s$-fiber of $\sigma_g$, joining the identity ${d}_{s(g)}u$ of $({J}^1{\mathcal{G}})^0$, and $\sigma_g$. 
Taking its homotopy class, this path gives rise to an element $\xi_g$ of $\widetilde{{J}^1{\mathcal{G}}}$ which projects to $\sigma_g$. But then, $$c(\sigma_g) = c(p(\xi_g)) = c'(p(\xi_g)) = c'(\sigma_g).$$ Finally, we prove that if $(\tilde{c},l)$ satisfies , then $\tilde{c}$ lies in the image of $p^{\ast}$. For this, note that if $p(\xi_g) = p(\xi'_g)$, then the actions of $\xi_g$ and $\xi'_g$ on $TM$ coincide. Moreover, they induce the same splittings of ${d}_gs$. Thus, the right hand side of vanishes, which implies that $\tilde{c}(\xi_g) = \tilde{c}(\xi'_g)$. It follows that $\tilde{c}$ induces a map $c: {J}^1{\mathcal{G}}\to t^{\ast}{\mathrm{Hom}}(\wedge^kTM,E)$ such that $p^{\ast}c = \tilde{c}$. Thus, we have just proven that $p^{\ast}$ determines a one-to-one correspondence $$\left\{ \begin{split} c \in Z^1&({J}^1{\mathcal{G}},{\mathrm{Hom}}(\wedge^kTM,E)) \\ &(c,l) \text{ satisfies \eqref{eq: condition 1}} \end{split} \right\} \longleftrightarrow \left\{ \begin{split} \tilde{c} \in Z^1&(\widetilde{{J}^1{\mathcal{G}}},{\mathrm{Hom}}(\wedge^kTM,E)) \\ &(\tilde{c},l) \text{ satisfies \eqref{eq: condition 1}} \end{split} \right\}.$$ A simple verification shows that $(c,l)$ satisfies and if and only if $(\tilde{c},l)$ satisfies and . This concludes the proof. #### Passing to algebroid cocycles {#sec: algebroid cocycles} \[prop: algebroid cocycles\] Let ${\mathcal{G}}$ be $s$-simply connected. Then there is a one-to-one correspondence between pairs $(\tilde{c}, l)$ as in Proposition \[corol: passage to cover\] and pairs $(\eta,l)$ with $$\left\{\begin{aligned} \eta \in Z^1({J}^1A, {\mathrm{Hom}}(\wedge^kTM, E))\\ l: A \longrightarrow {\mathrm{Hom}}(\wedge^{k-1}TM, E) \end{aligned}\right.$$ satisfying the equations: $$\label{eq: condition 1''} \eta({d}f \otimes {\alpha}) = {d}f \wedge l({\alpha}),$$ $$\label{eq: condition 2''} i_{\rho({\alpha})}l({\beta}) = - i_{\rho({\beta})}l({\alpha}),$$ $$\label{eq: condition 3''} l([{\alpha},{\beta}]) - {L}_{{\alpha}}l({\beta}) = i_{\rho({\alpha})}\eta({j}^1{\beta}),$$ for all ${\alpha}, {\beta}\in \Gamma(A)$, and all $f \in \mathrm{C}^{\infty}(M)$. For we are using the inclusion $i$ from the exact sequence $$0{\longrightarrow}\textrm{Hom}(TM, A)\stackrel{i}{{\longrightarrow}} {J}^1A \stackrel{{pr}}{{\longrightarrow}} A{\longrightarrow}0$$ We use the isomorphism $${\vartheta}: Z^1(\widetilde{{J}^1{\mathcal{G}}},{\mathrm{Hom}}(\wedge^kTM, E)) {\longrightarrow}Z^1({J}^1A, {\mathrm{Hom}}(\wedge^kTM, E))$$ induced by the Van Est map (Proposition \[prop: Van Est\]); of course, $\eta= {\vartheta}(\tilde{c})$. Fix $(\tilde{c},l)$ and $x\in M$. We prove that and for $(\tilde{c},l)$ are equivalent to and for $({\vartheta}(\tilde{c}),l)$. We start with the equivalence of with . We interpret $l$ as $$l\in \Gamma(A^{\ast}\otimes\wedge^{k-1}T^{\ast}M\otimes E)= C^0(\widetilde{{J}^1{\mathcal{G}}}, A^{\ast}\otimes\wedge^{k-1}T^{\ast}M\otimes E),$$ a zero-cochain on $\widetilde{{J}^1{\mathcal{G}}}$. Of course, ${\vartheta}(l)= l$, with $l$ interpreted as a $0$-cochain on ${J}^1A$. On the other hand, the anchor $\rho$ induces a morphism of representations $$\rho^{\ast}: \wedge^kT^{\ast}M\otimes E {\longrightarrow}A^{\ast}\otimes\wedge^{k-1}T^{\ast}M\otimes E$$ hence also a map of complexes $$\rho^{\ast}: C(\widetilde{{J}^1{\mathcal{G}}}, \wedge^kT^{\ast}M\otimes E) {\longrightarrow}C({J}^1A, A^{\ast}\otimes\wedge^{k-1}T^{\ast}M\otimes E),$$ and similarly on algebroid cohomology, compatible with ${\vartheta}$. 
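Explicitly, $\rho^{\ast}$ is given by contraction with the anchor: under the identification ${\mathrm{Hom}}(\wedge^kTM, E)\simeq \wedge^kT^{\ast}M\otimes E$, it sends $\omega\in \wedge^kT^{\ast}_xM\otimes E_x$ to $$\rho^{\ast}(\omega)({\alpha})= i_{\rho({\alpha})}\omega\in \wedge^{k-1}T^{\ast}_xM\otimes E_x, \qquad {\alpha}\in A_x;$$ this is the only description of $\rho^{\ast}$ that will be used below.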
In particular, $${\vartheta}(d(l)- \rho^{\ast}(\tilde{c}))= d({\vartheta}(l))- \rho^{\ast}({\vartheta}(\tilde{c}))= d(l)- \rho^{\ast}(\eta).$$ Finally, note that is just the explicit form of the equation $d(l)= \rho^{\ast}(\tilde{c})$, while is just $d(l)= \rho^{\ast}(\eta)$; hence one just uses the injectivity of ${\vartheta}$. We are left with proving the equivalence of with . We fix $x\in M$ and we show that is satisfied at $x$ if and only if is satisfied for all $g$ that start at $x$. In the sequence $$\widetilde{{J}^1({\mathcal{G}})}\stackrel{p}{{\longrightarrow}} {J}^1({\mathcal{G}}) \stackrel{{pr}}{{\longrightarrow}} {\mathcal{G}},$$ we consider the $s$-fibers of the three groupoids above $x$, denoted by $$\tilde{P}{\longrightarrow}P{\longrightarrow}B .$$ Both $P$ and $\tilde{P}$ become principal bundles over $B$, with structure groups $$K= {pr}^{-1}(1_x),\ \hat{K}= ({pr}\circ p)^{-1}(1_x),$$ respectively (the action is the one induced by the groupoid multiplication). Note also that the map $p: \tilde{P}\to P$ has as image the connected component $P^0$ of $P$ containing the unit at $x$ and $p: \tilde{P}\to P^0$ is a covering projection. The following is immediate. \[lemma: K\] $\hat{K}$ is connected. Assume now that $({\vartheta}(\tilde{c}),l)$ satisfies for all $g\in B$. Let $\eta = {\vartheta}(\tilde{c})$ and ${d}f \otimes {\alpha}\in T^{\ast}M\otimes A$. Using $p : \tilde{P} \to P$ to identify a neighborhood of the identity in $\tilde{P}$, with a neighborhood of the identity in $P$, we can view, for ${\epsilon}$ small enough, $$\gamma_x({\epsilon}) = {d}_xu + {\epsilon}({d}_x f \otimes {\alpha}_x)$$ as a path in $\tilde{P}$ such that $$\gamma(0) = {d}_x u, \quad \frac{{d}}{{d}{\epsilon}}\big{|}_{{\epsilon}= 0} \gamma_x = {d}_x f \otimes {\alpha}_x.$$ Since for each ${\epsilon}$, $\gamma_x({\epsilon})$ acts trivially on $E$, it follows that $$\label{eq: around the identity} \eta({d}_x f \otimes {\alpha}_x) = \frac{{d}}{{d}{\epsilon}}\big{|}_{{\epsilon}= 0} \tilde{c}(\gamma_x({\epsilon})).$$ However, since $\tilde{c}$ is a cocycle, it follows that $\tilde{c}({d}_xu) = 0$, thus implies that $$\tilde{c}(\gamma_x({\epsilon}))(v_1, \ldots, v_k) = {\epsilon}({d}_xf\wedge l({\alpha}_x))(v_1,\ldots, v_k),$$ for all ${\epsilon}$ and all $v_i \in T_xM$. By differentiating at ${\epsilon}= 0$ one obtains at $x$. We now prove the converse, namely that at $x$ implies at all $g\in B$. Fix $g$ and let $\xi_g$ and $\xi_g'$ be two elements of $\widetilde{{J}^1{\mathcal{G}}}$ which lie over $g$. Let $\gamma_1$ be any path in $\tilde{P}$ joining ${d}_xu$ to $\xi_g$, and let $\gamma_2$ be any path fiber of $\tilde{P} \to B$ joining $\xi_g$ to $\xi'_g$, which exists because of Lemma \[lemma: K\]. Furthermore, we may assume that $$\gamma_1({\epsilon}) = \xi_g \text{ for all } \frac{1}{2}\leq{\epsilon}\leq 1,\ \ \gamma_2({\epsilon}) = \xi_g \text{ for all } 0\leq{\epsilon}\leq\frac{1}{2}.$$\ Thus, we obtain two smooth paths $$\gamma_{\xi_g}({\epsilon}) = \left\{\begin{aligned}&\gamma_1(2{\epsilon}) &\text{ if } 0 \leq {\epsilon}\leq\frac{1}{2}\\ &\xi_g &\text{ if } \frac{1}{2}\leq {\epsilon}\leq 1, \end{aligned}\right. 
,\ \ \gamma_{\xi_g'}({\epsilon}) = \left\{\begin{aligned}&\gamma_1(2{\epsilon}) &\text{ if } 0 \leq {\epsilon}\leq\frac{1}{2}\\ &\gamma_2(2{\epsilon}- 1) &\text{ if } \frac{1}{2}\leq {\epsilon}\leq 1 \end{aligned}\right.$$ Finally, we consider the path $f: [0,1] \to E$ given by $$\begin{aligned} &f({\epsilon}) = \tilde{c}(\gamma_{\xi_g}({\epsilon}))(\lambda_{\gamma_{\xi_g}({\epsilon})}v_1, \ldots , \lambda_{\gamma_{\xi_g}({\epsilon})}v_k) - \tilde{c}(\gamma_{\xi_g'}({\epsilon}))(\lambda_{\gamma_{\xi_g'}({\epsilon})}v_1, \ldots , \lambda_{\gamma_{\xi_g'}({\epsilon})}v_k) + \\ &-\sum_{i = 1}^k(-1)^{i+1}l((\gamma_{\xi_g} \ominus \gamma_{\xi_g'})({\epsilon})(v_i))(\lambda_{\gamma_{\xi_g}({\epsilon})}v_1,\ldots,\lambda_{\gamma_{\xi_g}({\epsilon})}v_{i-1}, \lambda_{\gamma_{\xi_g'}({\epsilon})}v_{i+1},\ldots, \lambda_{\gamma_{\xi_g'}({\epsilon})}v_k) ,\end{aligned}$$ where $v_i \in T_{s(g)}M$. We must show that $f({\epsilon})$ is contained in the zero section of $E$ for all ${\epsilon}\in [0,1]$. It is clear that this is true for ${\epsilon}\in [0, 1/2]$. On the other hand, for ${\epsilon}\in [1/2, 1]$, the path $f({\epsilon})$ lies inside the fiber $E_{t(g)}$. Thus, it suffices to show that the derivative of $f$ at ${\epsilon}$ vanishes for all ${\epsilon}\in [1/2,1]$. However, by proposition \[prop: appendix\] this is reduced to the computation performed at the identity ${d}_{s(g)}u$, which vanishes by virtue of \eqref{eq: condition 1''}. We put propositions \[prop: multiplicative forms as cocycles\], \[corol: passage to cover\] and \[prop: algebroid cocycles\] together. Of course, we recognize the $\eta$’s from the last proposition as the bundle maps $j_{D}$ associated to Spencer operators (cf. remark \[rk-convenient\]); hence the relation between $\eta$ and $D$ is: $$\eta({j}^1{\alpha})=: D(\alpha) .$$ Using that $${L}_{{\alpha}}(D({\beta})) = \nabla_{{j}^1{\alpha}}D({\beta}),$$ it is immediate that the equations on $(\eta, l)$ from the previous proposition are equivalent to the equations that ensure that $(D,l)$ is a Spencer operator. Spencer Operators {#Relative connections on Lie algebroids} ================= Spencer operators are the infinitesimal counterpart of point-wise surjective multiplicative 1-forms (theorem \[t1\] in the case $k=1$). They also arise as the linearization of Pfaffian groupoids along the unit map and therefore they are the natural notion of relative connection in the setting of Lie algebroids (see conditions \eqref{horizontal} and \eqref{horizontal2}). It is remarkable that the Pfaffian groupoid itself can be recovered from its Spencer operator (theorem \[t2\]). Of course, the main example is the classical Spencer operator associated to a Lie algebroid (see \[higher jets on algebroids\]). We will show that in this setting all the notions developed in chapter \[Relative connections\] become Lie theoretic.\ Throughout this chapter $A$ is a Lie algebroid with Lie bracket denoted by $[\cdot,\cdot]$ and anchor map $\rho:A\to TM$, $(D,l):A\to E$ is a relative connection with ${\mathfrak{g}}\subset A$ the symbol space of $D$. For ease of notation, $T$ and $T^*$ will denote the tangent and cotangent bundle of $M$, respectively. We will make use of the notions given in subsection \[Lie algebroids\] and of the Lie algebroid structure on $J^1A$ described in subsection \[Jet groupoids and algebroids\]. Spencer operators {#spencer-operators} ----------------- In this section we study relative connections compatible with the Lie algebroid structures.
### Definitions and basic properties Let $A$ be a Lie algebroid over $M$, $E$ a vector bundle over $M$, and $l:A\to E$ a surjective vector bundle map. \[def1\] A [Spencer operator]{} relative to $l$ is an $l$-connection $$\begin{aligned} D:\Gamma(A){\longrightarrow}\Omega^1(M,E)\end{aligned}$$ satisfying the following two compatibility conditions: $$\begin{aligned} \label{horizontal} D_{\rho(\beta)}\alpha=-l[\alpha, \beta],\end{aligned}$$ $$\begin{aligned} \label{horizontal2} D_X[\alpha,\alpha']=\nabla_\alpha(D_X\alpha')-D_{[\rho(\alpha),X]}\alpha'-\nabla_{\alpha'}(D_X\alpha)+D_{[\rho(\alpha'),X]}\alpha\end{aligned}$$ for all $\alpha, \alpha'\in \Gamma(A)$, ${\beta}\in \Gamma({\mathfrak{g}})$ and $X\in\mathfrak{X}(M)$, where $$\label{coneccion} \nabla:\Gamma(A)\times \Gamma(E)\to \Gamma(E), \quad \nabla_\alpha(l(\alpha'))=D_{\rho(\alpha')}\alpha+l[\alpha,\alpha'].$$ Condition \eqref{horizontal} implies that $\nabla$ given by \eqref{coneccion} is indeed well-defined. \[action-lema\] $\nabla$ makes $E$ into a representation of $A$. We need to show that $$\begin{aligned} \label{flat} \nabla_{\alpha}\nabla_{\beta}(l(\tilde {\alpha}))-\nabla_{\beta}\nabla_{\alpha}(l(\tilde {\alpha}))=\nabla_{[{\alpha},{\beta}]}(l(\tilde {\alpha}))\end{aligned}$$ for any ${\alpha},{\beta},\tilde {\alpha}\in\Gamma(A)$. This will follow from the compatibility condition \eqref{horizontal2} applied to $X=\rho(\tilde {\alpha})$. On the one hand, one has that $$\begin{aligned} \nabla_{[{\alpha},\beta]}(l(\tilde {\alpha}))=D_{\rho(\tilde {\alpha})}[{\alpha},\beta]+l[[{\alpha},{\beta}],\tilde {\alpha}].\end{aligned}$$ On the other, $$\begin{aligned} \nabla_{\alpha}\nabla_{\beta}(l(\tilde {\alpha}))=\nabla_{\alpha}(D_{\rho(\tilde {\alpha})}{\beta})+D_{\rho[{\beta},\tilde {\alpha}]}{\alpha}+l[{\alpha},[{\beta},\tilde {\alpha}]]\end{aligned}$$ and $$\begin{aligned} \nabla_{\beta}\nabla_{\alpha}(l(\tilde {\alpha}))=\nabla_{\beta}(D_{\rho(\tilde {\alpha})}{\alpha})+D_{\rho[{\alpha},\tilde {\alpha}]}{\beta}+l[{\beta},[{\alpha},\tilde {\alpha}]].\end{aligned}$$ Using the Jacobi identity for $[[{\alpha},{\beta}],\tilde {\alpha}]$ and the fact that $\rho$ commutes with the Lie bracket, and replacing the above equations into \eqref{flat}, one sees that \eqref{flat} becomes the compatibility condition \eqref{horizontal2} for $X=\rho(\tilde {\alpha}).$ \[remark-dual\]We see that definition \[def1\] is just a reformulation of the notion of $1$-Spencer operator of section \[Spencer operators of degree $k$\] in the case when the symbol map is surjective: the compatibility conditions above correspond to the conditions from the previous definition specialized to $k= 1$. In analogy with theorems \[t:linear distributions\] and \[cor: linear one forms\], there is a 1-1 correspondence between Spencer operators $D$ on $A$ and - distributions $H\subset TA$ with the property that $H\to TM$ is a Lie subalgebroid of the tangent algebroid $TA\to TM$, and - point-wise surjective linear $1$-forms $\theta\in\Omega^1(A,E)$, with the property that the map $$\begin{aligned} \theta:TA{\longrightarrow}A\ltimes E, \ \ \ T_aA\ni v\mapsto(a,\theta(v))\end{aligned}$$ is a Lie algebroid morphism. For an idea of the proof we refer to [@BursztynCabrera], where the case in which $E={\mathbb R}_M$ is the trivial representation is treated. The general case can be treated in a similar way. For any Spencer operator $D$, ${\text{\rm Sol}}(A,D)\subset\Gamma(A)$ is a Lie sub-algebra.
This is immediate from the compatibility condition \eqref{horizontal2}: if $D({\alpha})= D({\beta})= 0$, then the right hand side of \eqref{horizontal2} vanishes, hence so does $D_X[{\alpha},{\beta}]$ for all $X\in{\ensuremath{\mathfrak{X}}}(M)$. Another interpretation of a Spencer operator is in terms of Lie algebroid cocycles on $J^1A$ with values in $T^*\otimes E\in{\text{\rm Rep}\,}(J^1A)$. Here we use that $TM\in{\text{\rm Rep}\,}(J^1A)$ by the adjoint action (see subsection \[Jet groupoids and algebroids\]), $E\in{\text{\rm Rep}\,}(J^1A)$ by pulling back the action of lemma \[action-lema\] via $pr:J^1A\to A$, and we consider $T^*\otimes E\in{\text{\rm Rep}\,}(J^1A)$ the tensor product representation. Hence the action is $$\label{equation:nabla'} \begin{split} &\tilde \nabla:\Gamma(J^1A)\times \Gamma(T^*\otimes E){\longrightarrow}\Gamma(T^*\otimes E) \\ &(\tilde \nabla_\xi\omega)(X)=\nabla_{pr(\xi)}(\omega(X))-\omega({\text{\rm ad}\,}_\xi(X)) \end{split}$$ for $X\in {\ensuremath{\mathfrak{X}}}(M)$ and $\omega\in\Omega^1(M,E)$. \[J\^1\_DA\] An $l$-connection $D$ is a Spencer operator relative to $l$ if and only if the vector bundle map $$\begin{aligned} \label{a-map} a:J^1A\to T^*M\otimes E,\quad a(j^1_xb)=D(b)(x) \end{aligned}$$ is an algebroid cocycle. Using the Spencer decomposition, recall from remark \[cocyclea\] that $a$ is given at the level of sections by $$\begin{aligned} a({\alpha},\omega)=D({\alpha})-l\omega,\end{aligned}$$ where $({\alpha},\omega)\in\Gamma(A)\oplus\Omega^1(M,A)$. What we must show is that equations \eqref{horizontal} and \eqref{horizontal2} are fulfilled if and only if $$\begin{aligned} \tilde \nabla_\xi a(\eta)-\tilde \nabla_\eta a(\xi)-a([\xi,\eta])=0\end{aligned}$$ for any $\xi,\eta\in\Gamma(J^1A)$. Let ${\alpha},\beta\in\Gamma(A)$ and $\omega,\theta\in\Omega^1(M,A)$ be such that $$\begin{aligned} \xi=({\alpha},\omega),\qquad\eta=(\beta,\theta).\end{aligned}$$ By the formula for the bracket of $J^1A$ (see subsection \[Jet groupoids and algebroids\]) and the Spencer decomposition, the Lie bracket of $\xi$ and $\eta$ is given by $$\begin{aligned} \label{1} [\xi,\eta]=([{\alpha},\beta],L_\xi(\theta)-L_\eta(\omega)).\end{aligned}$$ Therefore, using \eqref{1} and the above formula for $a$, $$\begin{aligned} \begin{aligned} a([\xi,\eta])(X)=&D_X[{\alpha},\beta]-l[{\alpha},\theta(X)]-l\omega(\rho(\theta(X)))+l\theta([\rho({\alpha}),X])\\&+l[{\beta},\omega(X)]+l\theta(\rho(\omega(X)))-l\omega([\rho({\beta}),X]) \end{aligned}\end{aligned}$$ for any $X\in{\ensuremath{\mathfrak{X}}}(M).$ On the other hand, using formula \eqref{equation:nabla'} and the formula for the adjoint action, $$\begin{aligned} \begin{aligned} \tilde \nabla_\xi a(\eta)(X)=&\nabla_{{\alpha}}(D_X\beta-l\theta(X))-(D\beta-l\theta)({\text{\rm ad}\,}_\xi(X))\\ =&\nabla_{{\alpha}}(D_X\beta-l\theta(X))-D_{[\rho({\alpha}),X]}\beta-D_{\rho(\omega(X))}\beta+l\theta([\rho({\alpha}),X])\\ &+l\theta(\rho(\omega(X))), \end{aligned}\end{aligned}$$ and similarly $$\begin{aligned} \begin{aligned} \tilde \nabla_\eta a(\xi)(X)=&\nabla_{{\beta}}(D_X{\alpha}-l\omega(X))-(D{\alpha}-l\omega)({\text{\rm ad}\,}_\eta(X))\\ =&\nabla_{{\beta}}(D_X{\alpha}-l\omega(X))-D_{[\rho({\beta}),X]}{\alpha}-D_{\rho(\theta(X))}{\alpha}+l\omega([\rho({\beta}),X])\\ &+l\omega(\rho(\theta(X))).
\end{aligned}\end{aligned}$$ Replacing the above equations, we get that the right hand side of is equal to $$\begin{aligned} \begin{aligned} &\nabla_{\alpha}(D_X{\beta})-\nabla_{\alpha}(l\theta(X))-D_{[\rho({\alpha}),X]}{\beta}-D_{\rho(\omega(X))}{\beta}-\nabla_{\beta}(D_X{\alpha})\\&+\nabla_{\beta}(l\omega(X))+D_{[\rho({\beta}),X]}{\alpha}+D_{\rho(\theta(X))}{\alpha}-D_X[{\alpha},{\beta}]+l[{\alpha},\theta(X)]-l[{\beta},\omega(X)], \end{aligned}\end{aligned}$$ but this equation is identically zero for any ${\alpha},\beta\in\Gamma(A)$, $\omega,\theta\in\Omega^1(M,A)$ and $X\in{\ensuremath{\mathfrak{X}}}(M)$ if and only if $$\begin{aligned} \nabla_\alpha(D_X{\beta})-D_{[\rho(\alpha),X]}{\beta}-\nabla_{{\beta}}(D_X\alpha)+D_{[\rho({\beta}),X]}\alpha-D_X[\alpha,{\beta}]=0,\end{aligned}$$ and for any section ${\gamma}\in\Gamma(A)$ $$\begin{aligned} \nabla_\alpha(l({\gamma}))=D_{\rho({\gamma})}\alpha+l[\alpha,{\gamma}]\end{aligned}$$ which completes our proof :-) ### Lie-Spencer operators {#Lie-Spencer operators} The notion of Spencer operators from the previous subsection has more remarkable properties when the surjective vector bundle map is a Lie algebroid morphism $$\begin{aligned} l:A{\longrightarrow}E.\end{aligned}$$ To emphasize that we are in such a setting (i.e. $E$ is a Lie algebroid and $l$ is a Lie algebroid morphism) we will say that $D$ is a [**Lie-Spencer operator relative to $l$**]{} (although the definition remains unchanged). Also, in this case we will use the letter $L$ instead of $E$ to denote the target of $l$. By abuse of notation we will denote the anchor of $L$ by $\rho$ and the Lie bracket by $[\cdot,\cdot]$, as we do with $A$. Remark that, in this case $$\begin{aligned} {\mathfrak{g}}\subset \ker(\rho);\end{aligned}$$ hence the condition is superfluous, the definition of $\nabla$ becomes more direct, and therefore equation can be expressed without reference to $\nabla.$ Hence definition \[def1\] becomes: \[Lie-Spencer\] Let $l:A\to L$ be a surjective Lie algebroid morphism. A [**Lie-Spencer operator relative to $l$**]{} is an $l$-connection $$\begin{aligned} D:\Gamma(A){\longrightarrow}\Omega^1(M,L)\end{aligned}$$ with the property that $$\begin{aligned} \label{L-S} \begin{split} D_X[{\alpha},{\beta}]-[D_X{\alpha},l({\beta}&)]-[l({\alpha}),D_X{\beta}]\\&=D_{\rho D_X({\beta})+[\rho({\beta}),X]}{\alpha}-D_{\rho D_X({\alpha})+[\rho({\alpha}),X]}{\beta}\end{split}\end{aligned}$$ for ${\alpha},\beta\in\Gamma(A)$ and $X\in{\ensuremath{\mathfrak{X}}}(M)$. Intuitively, measures the compatibility of $D$ with the Lie bracket and $l$. This is clear from the left hand side. But the left hand side of is not $C^\infty(M)$-linear, and the right hand side can be seen as making the expression of the left hand side $C^\infty(M)$-linear. The rather complicated right hand side has some more conceptual meaning, as we shall explain later on. #### Symbols From the general theory of $l$-connections (see definitions \[definitions\]), one associates to $D$ the symbol bundle $$\begin{aligned} {\mathfrak{g}}=\ker(l)\subset A\end{aligned}$$ and the symbol map $$\begin{aligned} \partial_D:{\mathfrak{g}}{\longrightarrow}{\mathrm{Hom}}(TM,L),\qquad \partial_D(v)(X)=D_X(v).\end{aligned}$$ In the Lie-Spencer case, all these objects become Lie theoretic. First of all: For any surjective Lie algebroid map $l:A\to L,$ ${\mathfrak{g}}:=\ker(l)$ has a natural structure of bundle of Lie algebras, uniquely determined by the condition that $\Gamma({\mathfrak{g}})$ becomes a Lie sub-algebra of $\Gamma(A)$. 
For $a,b\in {\mathfrak{g}}_x$ its Lie bracket is defined by $$\begin{aligned} \label{lie} [a,b]_x:=[{\alpha},{\beta}](x)\end{aligned}$$ where ${\alpha},{\beta}\in\Gamma(A)$ are any sections such that ${\alpha}(x)=a$ and ${\beta}(x)=b$. That the previous formula does define a Lie bracket on ${\mathfrak{g}}_x$ follows from the fact that ${\mathfrak{g}}_x \subset \ker\rho$, where $\rho$ is the anchor of $A$. Indeed, $$\begin{aligned} \rho(a)=\rho(l(a))=0,\end{aligned}$$ where in the left hand side $\rho$ refers to the anchor of $L$. Hence, that is well-defined and it is bilinear is a consequence of the formula $$\begin{aligned} [{\alpha},f{\beta}](x)=f(x)[{\alpha},{\beta}](x)+L_{\rho({\alpha}(x))}(f){\beta}(x)=f(x)[{\alpha},{\beta}](x)\end{aligned}$$ for any $f\in C^{\infty}(M).$ \[\[\]\] For any vector bundle map $\rho:L\to T$, the bracket $[\cdot,\cdot]_\rho$ on ${\mathrm{Hom}}(T,L)$ given by $$\begin{aligned} [\xi,\eta]_\rho=\xi\circ\rho\circ \eta-\eta\circ\rho\circ\xi\end{aligned}$$ makes ${\mathrm{Hom}}(T,L)$ into a bundle of Lie algebras. The Jacobi identity for $[\cdot,\cdot]_\rho$ is a straightforward computation. Finally, \[int\] For any Lie-Spencer operator $D$, $$\begin{aligned} \partial_D:{\mathfrak{g}}{\longrightarrow}T^*\otimes L\end{aligned}$$ is a morphism of bundles of Lie algebras. Let ${\alpha},{\beta}\in\Gamma({\mathfrak{g}})$. By the condition and the facts that ${\alpha},{\beta}\in\ker (l)$, and thus ${\alpha},{\beta}\in\ker(\rho)$, $$\begin{aligned} \begin{split} \partial_D([{\alpha},{\beta}])(X)&=D_X[{\alpha},{\beta}]=D_{\rho D_X{\beta}}{\alpha}-D_{\rho D_X{\alpha}}{\beta}\\&=\partial_D({\alpha})\circ\rho\circ \partial_D({\beta})(X)-\partial_D({\beta})\circ\rho\circ \partial_D({\alpha})(X)\\&=[\partial_D({\alpha}),\partial_D({\beta})]_\rho(X) \end{split}\end{aligned}$$ for any $X\in {\ensuremath{\mathfrak{X}}}(M).$ The previous lemma is a direct consequence of the fact that $\partial_D$ is part of a Lie algebroid map. Keeping in mind that - ${\mathfrak{g}}=\ker(l:A\to L)$, - ${\mathrm{Hom}}(TM,L)=\ker(pr:J^1L\to L)$, - The relative connection $D$ can be interpreted as a vector bundle map $$\begin{aligned} j_D:A{\longrightarrow}J^1L,\end{aligned}$$ lemma \[int\] can be “improved” as $j_D$ is a Lie algebroid morphism as follows. \[J\^1E\]Let $l:A\to L$ be a surjective Lie algebroid morphism. An $l$-connection $D$ is a Lie-Spencer operator if and only if $$\begin{aligned} j_D:A{\longrightarrow}J^1L\end{aligned}$$ is a Lie algebroid map. Using the Spencer decomposition , recall that $j_D({\alpha})=(l({\alpha}),D({\alpha}))$ for ${\alpha}\in\Gamma(A)$. So, if ${\beta}\in\Gamma(A)$ then, as in equation , we have that $$\begin{aligned} \begin{split} [j_D({\alpha}),j_D(\beta)]_{J^1L}&=[(l({\alpha}),D({\alpha})),(l({\beta}),D({\beta}))]_{J^1L}\\&=([l({\alpha}),l({\beta})],L_{j_D({\alpha})}(D({\beta}))-L_{j_D({\beta})}(D({\alpha}))). \end{split}\end{aligned}$$ On the other hand, $$\begin{aligned} j_D[{\alpha},{\beta}]=(l[{\alpha},{\beta}],D[{\alpha},{\beta}]).\end{aligned}$$ Hence, $j_D$ is a Lie algebroid morphism if and only if $$\begin{aligned} \label{eqn:A} [l({\alpha}),l({\beta})]=l[{\alpha},{\beta}]\end{aligned}$$ and $$\begin{aligned} \label{eqn:B} L_{j_D({\alpha})}(D({\beta}))-L_{j_D({\beta})}(D({\alpha}))-D[{\alpha},{\beta}]=0.\end{aligned}$$ As $l:A\to L$ is surjective, equation is equivalent to the fact that $l$ is a Lie algebroid morphism. Indeed, the only thing to check is that $\rho\circ l=\rho$ whenever is satisfied. 
From Leibniz we have that, for any $f\in C^{\infty}(M)$, $$\begin{aligned} \begin{aligned} fl[{\alpha},\beta]+L_{\rho({\alpha})}(f)l({\beta})=l[{\alpha},f{\beta}]=[l({\alpha}),fl({\beta})]=f[l({\alpha}),l({\beta})]+L_{\rho(l({\alpha}))}l({\beta}). \end{aligned}\end{aligned}$$ Since this equation is valid for any ${\alpha},{\beta}\in\Gamma(A)$ and $l$ is surjective, then $\rho({\alpha})=\rho(l({\alpha}))$. Now, if we assume that $l$ is a Lie algebroid morphism, then for $X\in{\ensuremath{\mathfrak{X}}}(M)$ $$\begin{aligned} \begin{split} L_{j_D({\alpha})}(D({\beta}&))(X)-L_{j_D({\beta})}(D({\alpha}))(X)-D_X[{\alpha},{\beta}]\\ =&[l({\alpha}),D_X{\beta}]+D_{\rho D_X{\beta}}{\alpha}-D_{[\rho(l({\alpha})),X]}{\beta}-[l({\beta}),D_X{\alpha}]\\&-D_{\rho D_X{\alpha}}{\beta}+D_{[\rho(l({\beta})),X]}{\alpha}-D_X[{\alpha},{\beta}]\\ =&[l({\alpha}),D_X{\beta}]+D_{\rho D_X{\beta}}{\alpha}-D_{[\rho({\alpha}),X]}{\beta}-[l({\beta}),D_X{\alpha}]\\&-D_{\rho D_X{\alpha}}{\beta}+D_{[\rho({\beta}),X]}{\alpha}-D_X[{\alpha},{\beta}]\\=& \nabla_\alpha^{L}(D_X\beta)-D_{[\rho(\alpha),X]}\beta-\nabla_{\beta}^{L}(D_X\alpha)+D_{[\rho(\beta),X]}\alpha-D_X[\alpha,\beta]\\= &R_D({\alpha},{\beta})(X) \end{split}\end{aligned}$$ Therefore, equation holds if and only if $D$ is an Lie-Spencer operator relative to $l:A\to L$. #### The associated representations Some of the expressions appearing in and can be put in the form of two operators ($A$-connections): $$\begin{aligned} \label{equation:A} \nabla^{T}:\Gamma(A)\times \Gamma(TM){\longrightarrow}\Gamma(TM), \quad\nabla^T_{\alpha}(X)=\rho D_X({\alpha})+[\rho({\alpha}),X]\end{aligned}$$ and $$\begin{aligned} \label{equation:B} \nabla^{L}:\Gamma(A)\times \Gamma(L){\longrightarrow}\Gamma(L), \quad\nabla^{L}_{\alpha}(\lambda)= D_{\rho(\lambda)}({\alpha})+[l({\alpha}),\lambda],\end{aligned}$$ where ${\alpha}\in\Gamma(A)$, $X\in\Gamma(TM)$ and $\lambda\in\Gamma(L)$. We call the previous $A$-connections [**the basic connections on $TM$ and $L$**]{} respectively. \[rep\] For any Lie-Spencer operator $D$ relative to $l:A\to L$, $\nabla^T$ and $\nabla^L$ make $TM$ and $L$ into representations of $A$. Let ${\alpha},\beta\in\Gamma(A)$ and $X\in{\ensuremath{\mathfrak{X}}}(M)$. Using that $\rho$ commutes with the Lie bracket and that $\rho\circ l=\rho$, where in the right hand side we refer to the anchor of $A$, we have that $$\begin{aligned} \begin{aligned} \nabla&^{T}_{\alpha}\nabla^T_{\beta}(X)-\nabla_{{\beta}}^{T}\nabla^T_{\alpha}(X)-\nabla_{[{\alpha},{\beta}]}^{T}(X)\\=& \rho D_{\nabla^T_{\beta}(X)}{\alpha}+[\rho({\alpha}),\nabla^T_{\beta}(X)]-\rho D_{\nabla^T_{\alpha}(X)}{\beta}-[\rho({\beta}),\nabla^T_{\alpha}(X)]\\&-\rho D_X[{\alpha},{\beta}]-[\rho[{\alpha},{\beta}],X]\\=& \rho D_{\nabla^T_{\beta}(X)}{\alpha}+[\rho({\alpha}),\rho D_X{\beta}+[\rho({\beta}),X]]-\rho D_{\nabla^T_{\alpha}(X)}{\beta}\\&-[\rho({\beta}),\rho D_X{\alpha}+[\rho({\alpha}),X]]-\rho D_X[{\alpha},{\beta}]-[\rho[{\alpha},{\beta}],X]\\=& \rho D_{\nabla^T_{\beta}(X)}{\alpha}-\rho D_{\nabla_{\alpha}^{T}(X)}{\beta}+[\rho D_X{\alpha},\rho({\beta})]+[\rho({\alpha}),\rho D_X{\beta}]-\rho D_X[{\alpha},{\beta}]\\=& \rho(D_{\nabla^T_{\beta}(X)}{\alpha}-D_{\nabla_{\alpha}^{T}(X)}{\beta}+[D_X{\alpha},l({\beta})]+[l({\alpha}),D_X{\beta}]-D_X[{\alpha},{\beta}])\\=& \rho(0)=0, \end{aligned}\end{aligned}$$ where in the passage from the third to the four line we used the Jacobi identity for $[[\rho({\alpha}),\rho({\beta})],X]$, and from the fifth to the sixth line we used the condition . 
On the other hand, for $\lambda\in\Gamma(L)$, $$\begin{aligned} \begin{aligned} \nabla&^{L}_{\alpha}\nabla^L_{\beta}(\lambda)-\nabla_{{\beta}}^{L}\nabla^L_{\alpha}(\lambda)-\nabla_{[{\alpha},{\beta}]}^{L}(\lambda)\\=& \nabla^L_{\alpha}(D_{\rho(\lambda)}{\beta}+[l({\beta}),\lambda])-\nabla^L_{\beta}(D_{\rho(\lambda)}{\alpha}+[l({\alpha}),\lambda])-D_{\rho(\lambda)}[{\alpha},{\beta}]-[l[{\alpha},{\beta}],\lambda]\\=& \nabla^L_{\alpha}(D_{\rho(\lambda)}{\beta})-\nabla^L_{\beta}(D_{\rho(\lambda)}{\alpha})+D_{\rho[l({\beta}),\lambda]}{\alpha}+[l({\alpha}),[l({\beta}),\lambda]]-D_{\rho[l({\alpha}),\lambda]}{\beta}\\&-[l({\beta}),[l({\alpha}),\lambda]]-D_{\rho(\lambda)}[{\alpha},{\beta}]-[l[{\alpha},{\beta}],\lambda]\\=& \nabla^L_{\alpha}(D_{\rho(\lambda)}{\beta})-D_{[\rho({\alpha}),\rho(\lambda)]}{\beta}-\nabla^L_{\beta}(D_{\rho(\lambda)}{\alpha})+D_{[\rho({\beta}),\rho(\lambda)]}{\alpha}-D_{\rho(\lambda)}[{\alpha},{\beta}]\\=&0, \end{aligned}\end{aligned}$$ where we used that $l$ commutes with the Lie bracket and the Jacobi identity for $[[l({\alpha}),l({\beta})],\lambda]$, and condition . Let $l:A\to L$ be a Lie algebroid morphism. For any $l$-connection $$\begin{aligned} D:\Gamma(A){\longrightarrow}\Omega^1(M,L)\end{aligned}$$ one defines $\nabla^T$ and $\nabla^L$ given by and respectively, and also [**the basic curvature of $D$**]{} $$\begin{aligned} R_D\in{\mathrm{Hom}}(\wedge^2A,{\mathrm{Hom}}(TM,L))\end{aligned}$$ given by $$\begin{aligned} \label{intermsofE} \begin{aligned} R_D({\alpha},\beta)(X)=\nabla_\alpha^{L}(D_X\beta)-D_{[\rho(\alpha),X]}&\beta-\nabla_{\beta}^{L}(D_X\alpha)\\&+D_{[\rho(\beta),X]}\alpha-D_X[\alpha,\beta], \end{aligned}\end{aligned}$$ where ${\alpha},\beta\in\Gamma(A)$ and $X\in{\ensuremath{\mathfrak{X}}}(M)$. The proof of lemma \[rep\] shows that $$\begin{aligned} R_{\nabla^T}=\rho\circ R_D,\qquad R_{\nabla^L}=\rho^*R_D,\end{aligned}$$ where $$\begin{aligned} R_{\nabla^T}\in{\mathrm{Hom}}(\wedge^2A,{\mathrm{Hom}}(TM,TM)),\qquad R_{\nabla^L}\in{\mathrm{Hom}}(\wedge^2A,{\mathrm{Hom}}(L,L))\end{aligned}$$ are the basic curvatures of $\nabla^T$ and $\nabla^L$ respectively. We find ourselves in the setting of representations up to homotopy of [@homotopy]: $$\begin{aligned} \rho:L{\longrightarrow}TM\end{aligned}$$ becomes a representation up to homotopy of $A$. Let us state one of the conclusions of this discussion: A relative connection $D$ is a Lie-Spencer operator if and only if its basic curvature $R_D$ vanishes. ### Examples \[higher jets on algebroids\]Of course in the setting of Lie algebroids one has the classical Spencer operator of $A$, denoted by $$\begin{aligned} D^{{\text{\rm clas}}}:\Gamma(J^1A){\longrightarrow}\Omega^1(M,A).\end{aligned}$$ The construction and properties of $D^{\text{\rm clas}}$ in the setting of vector bundles were explained in example \[Spencer tower\]. As one may expect, $D^{\text{\rm clas}}$ is compatible with the Lie algebroid structure of $J^1A$. More precisely, recall that $J^1A$ acts on $A$ and $TM$ with the adjoint actions (see subsection \[Jet groupoids and algebroids\] and definition ). For a Lie algebroid $A$, the classical Spencer operator $$\begin{aligned} D^{{\text{\rm clas}}}:\Gamma(J^1A){\longrightarrow}\Omega^1(M,A)\end{aligned}$$ is a Lie-Spencer operator relative to the projection $pr:J^1A\to A$, and the induced action on $A$ and $M$ are the adjoint actions. We should check that the curvature map $$R_{D^{\text{\rm clas}}}\in {\mathrm{Hom}}(\wedge^2J^1A,{\mathrm{Hom}}(TM,A))$$ vanishes. 
Note that it is enough to check that it does on sections of $\Gamma(J^1A)$ of the form $j^1{\alpha}$, ${\alpha}\in\Gamma(A)$, as $\Gamma(J^1A)$ is (locally) generated by sections of this form as a $C^\infty(M)$-module (see subsection \[Jet groupoids and algebroids\]), and $R_{D^{\text{\rm clas}}}$ is $C^{\infty}(M)$-bilinear. Let ${\alpha},{\beta}\in \Gamma(A)$. Then, for any $X\in {\ensuremath{\mathfrak{X}}}(M)$, $$\begin{aligned} \begin{split} R_{D^{\text{\rm clas}}}(j^1{\alpha},j^1{\beta})(X)&=D^{\text{\rm clas}}_X[j^1{\alpha},j^1{\beta}]-[D^{\text{\rm clas}}_Xj^1{\alpha},pr({\beta})]-[pr({\alpha}),D^{\text{\rm clas}}_Xj^1{\beta}]\\&\quad-D^{\text{\rm clas}}_{\rho D^{\text{\rm clas}}_X(j^1{\beta})+[\rho(j^1{\beta}),X]}j^1{\alpha}+D^{\text{\rm clas}}_{\rho D^{\text{\rm clas}}_X(j^1{\alpha})+[\rho(j^1{\alpha}),X]}j^1{\beta}\\ &=D_X^{\text{\rm clas}}j^1[{\alpha},\beta]=0, \end{split}\end{aligned}$$ where we used that $D^{\text{\rm clas}}$ vanishes on holonomic sections and that $[j^1{\alpha},j^1{\beta}]=j^1[{\alpha},{\beta}]$ is again holonomic. It is straightforward to check, using the formulas , that the induced representations on $A$ and $TM$ are indeed the adjoint actions. The above proposition has, of course, a version for higher jets (see subsection \[Jet groupoids and algebroids\]). More precisely, following the construction of example \[Spencer tower\], the classical Spencer operator on $J^kA$ $$\begin{aligned} D^{{\text{\rm clas}}}:=D^{k\text{-{\text{\rm clas}}}}:\Gamma(J^kA){\longrightarrow}\Omega^1(M,J^{k-1}A)\end{aligned}$$ is a Lie-Spencer operator relative to the projection $pr:J^kA\to J^{k-1}A$. There are a few things to remark here: - The structure of bundle of Lie algebras ${\mathrm{Hom}}(TM,A)\subset J^1A$ as a Lie subalgebroid of $J^1A$, is given at the point $x\in A$ by $$\begin{aligned} [\phi,\psi]_x(X)=\phi(\rho(\psi(X)))-\psi(\rho(\phi(X)))\end{aligned}$$ for $\phi,\psi\in{\mathrm{Hom}}(T_xM,A_x)$ and $X\in T_xM.$ Note that this structure coincides with that of ${\mathrm{Hom}}(TM,A)$ as the symbol space of the Lie-Spencer operator $D^{\text{\rm clas}}.$ In the same way, the structure of bundle of Lie algebras of $S^kT^*\otimes A\subset J^kA$ (see example \[Spencer tower\] ) coincides with that of the symbol space of $D^{k\text{-{\text{\rm clas}}}}$, which, for $k\geq2$, is trivial. - From example \[classicalSpencertower\], it follows that $(J^kA,D^{{\text{\rm clas}}})$ is an example of a Lie prolongation of $(J^{k-1}A,D^{{\text{\rm clas}}})$ (to be defined in definition \[algb-prol\]) for $k\geq 2$. Moreover, the tower $$\begin{aligned} \cdots {\longrightarrow}J^{k+1}A\xrightarrow{D^{{\text{\rm clas}}}}{}J^kA\xrightarrow{D^{{\text{\rm clas}}}}{} \cdots{\longrightarrow}J^1A\xrightarrow{D^{\text{\rm clas}}}{}A \end{aligned}$$ is the classical example of a standard Spencer tower to be defined in subsection \[algebroid prolongation and spencer towers\]. 
For a smooth generalized Lie pseudogroup $\Gamma\subset \operatorname{Bis}({\mathcal{G}})$, one gets a sequence of Lie algebroids together with Lie-Spencer operators $$\begin{aligned} \cdots{\longrightarrow}A^{(k)}(\Gamma)\xrightarrow{D^{(k)}}{}\cdots{\longrightarrow}A^{(1)}(\Gamma)\xrightarrow{D^{(1)}}{} A^{(0)}(\Gamma)\end{aligned}$$ where $D^{(k)}$ is the restriction of the classical Lie-Spencer operator $D^{\text{\rm clas}}:J^kA\to J^{k-1}A$, for $A=Lie({\mathcal{G}})$; see subsection \[Lie pseudogroups\]. Let $$\begin{aligned} \tilde A\overset{\tilde D,\tilde l}{{\longrightarrow}}A\overset{D,l}{{\longrightarrow}}E\end{aligned}$$ be two Spencer operators where $\tilde l:\tilde A\to A$ is a Lie algebroid map. We saw in subsection \[first examples and operations\] that we can compose the above relative connections in two different ways to get connections $$\begin{aligned} D^{i}:\Gamma(\tilde A){\longrightarrow}\Omega^1(M,E),\quad i=1,2\end{aligned}$$ relative to the vector bundle map $l\circ \tilde l:\tilde A\to E$, and defined by $$\begin{aligned} D^1({\alpha})=l\circ \tilde D({\alpha})\quad\text{and}\quad D^2({\alpha})=D(\tilde l({\alpha})),\end{aligned}$$ where ${\alpha}\in\Gamma(\tilde A)$. In the setting of Lie algebroids we have: - $D^2$ is a Spencer operator, - $D^1$ is a Spencer operator if $$\begin{aligned} \label{comp} l\circ \tilde D_{\rho_A(\gamma)}({\alpha})=D_{\rho_A(\gamma)}(\tilde l({\alpha})),\end{aligned}$$ or equivalently $$\begin{aligned} \label{compcon} l\circ\nabla^{A}_{\alpha}(\gamma)=\nabla^E_{\tilde l({\alpha})}l(\gamma)\end{aligned}$$ for ${\alpha}\in \Gamma(\tilde A)$ and $\gamma\in\Gamma(A)$. Let’s check that $D^2$ is a Spencer operator relative to $l\circ \tilde l$, i.e. it satisfies equations \eqref{horizontal} and \eqref{horizontal2}. Let $\beta\in\Gamma(\tilde A)$ be such that $l(\tilde l({\beta}))=0$, or equivalently $\tilde l({\beta})\in\ker l$. Then, as $\rho_{\tilde A}({\beta})=\rho_A(\tilde l({\beta}))$, $$\begin{aligned} D^2_{\rho_{\tilde A}({\beta})}({\alpha})=D_{\rho_{\tilde A}({\beta})}(\tilde l({\alpha}))=D_{\rho_A(\tilde l({\beta}))}(\tilde l({\alpha}))=-l[\tilde l({\alpha}),\tilde l({\beta})]_A=-l\circ \tilde l[{\alpha},{\beta}]_{\tilde A},\end{aligned}$$ which shows that equation \eqref{horizontal} holds. To check that equation \eqref{horizontal2} holds, we take the well-defined $\tilde A$-connection $\nabla:\Gamma(\tilde A)\times \Gamma(E)\to \Gamma(E)$ given by $$\begin{aligned} \label{eq:3}\begin{split} \nabla_{\alpha}(l\circ \tilde l(\tilde {\alpha}))&=D^2_{\rho_{\tilde A}(\tilde {\alpha})}({\alpha})+l\circ \tilde l[{\alpha},\tilde {\alpha}]_{\tilde A}\\&=D_{\rho_{\tilde A}(\tilde {\alpha})}(\tilde l({\alpha}))+l\circ \tilde l[{\alpha},\tilde {\alpha}]_{\tilde A}\\ &=D_{\rho_A(\tilde l(\tilde {\alpha}))}(\tilde l({\alpha}))+l[\tilde l({\alpha}),\tilde l(\tilde {\alpha})]_{A}=\nabla^E_{\tilde l({\alpha})}(l\circ \tilde l(\tilde {\alpha})), \end{split}\end{aligned}$$ where $\tilde {\alpha}\in \Gamma(\tilde A)$, and $\nabla^E:\Gamma(A)\times \Gamma(E)\to \Gamma(E)$ is the flat $A$-connection associated to $D$. As $D$ satisfies \eqref{horizontal2}, we have that $$\begin{aligned} \label{conexion}\begin{aligned} \nabla^E_{\tilde l(\alpha)}(D_X\tilde l(\alpha'))-D&_{[\rho_{A}(\tilde l(\alpha)),X]}\tilde l(\alpha')-\nabla^E_{\tilde l(\alpha')}(D_X\tilde l(\alpha))\\&+D_{[\rho_{A}(\tilde l(\alpha')),X]}\tilde l(\alpha)-D_X[\tilde l(\alpha),\tilde l(\alpha')]_A=0.
\end{aligned}\end{aligned}$$ Using a splitting $\Psi:E\to \tilde A$ of $l\circ \tilde l$ and equation we can rewrite $$\begin{aligned} \label{prim} \begin{aligned} \nabla^E_{\tilde l(\alpha)}(D_X\tilde l(\alpha'))&=\nabla^E_{\tilde l(\alpha)}(l\circ \tilde l(\Psi D_X\tilde l(\alpha')))\\&=\nabla_{\alpha}(l\circ \tilde l(\Psi D_X\tilde l(\alpha')))=\nabla_{\alpha}(D_X\tilde l(\alpha')); \end{aligned}\end{aligned}$$ therefore, since $\tilde l$ is a Lie algebroid map, equation becomes $$\begin{aligned} \begin{aligned} \nabla_{\alpha}(D^2_X(\alpha'))-D^2_{[\rho_{\tilde A}(\alpha),X]}&(\alpha')-\nabla_{\alpha'}(D^2_X(\alpha))\\&+D_{[\rho_{\tilde A}(\alpha'),X]}\alpha-D^2_X[\alpha,\alpha']_{\tilde A}=0. \end{aligned}\end{aligned}$$ Let’s check that whenever equation holds, $D^1$ is a Spencer operator. Let ${\beta}\in\Gamma(\tilde A)$ be such that $l(\tilde l({\beta}))=0$, or equivalently $\tilde l({\beta})\in\ker l$. Then, $$\begin{aligned} \begin{split} D^1_{\rho_{\tilde A}({\beta})}({\alpha})=l\circ \tilde D_{\rho_{\tilde A}({\beta})}({\alpha})&=l\circ \tilde D_{\rho_{A}(\tilde l({\beta}))}({\alpha})=D_{\rho_A(\tilde l({\beta}))}(\tilde l({\alpha}))\\&=\nabla^E_{\tilde l({\alpha})}(l(\tilde l{\beta}))-l[\tilde l({\alpha}),\tilde l({\beta})]_A=-l\circ \tilde l[{\alpha},{\beta}]_{\tilde A}.\end{split}\end{aligned}$$ In order to check that $D^1$ satisfies the compatibility condition , we consider the well-defined $\tilde A$-connection $\nabla:\Gamma(\tilde A)\times \Gamma(E)\to \Gamma(E)$ $$\begin{aligned} \begin{split} \nabla_{\alpha}(l\circ \tilde l(\tilde {\alpha}))&=D^1_{\rho_{\tilde A}(\tilde {\alpha})}({\alpha})+l\circ \tilde l[{\alpha},\tilde {\alpha}]_{\tilde A}\\&=l(\tilde D_{\rho_{\tilde A}(\tilde {\alpha})}(\tilde l({\alpha}))+ \tilde l[{\alpha},\tilde {\alpha}]_{\tilde A})\\ &=l(\tilde D_{\rho_A(\tilde l(\tilde {\alpha}))}(\tilde l({\alpha}))+[\tilde l({\alpha}),\tilde l(\tilde {\alpha})]_{A})=l\circ\nabla^{A}_{{\alpha}}(\tilde l(\tilde {\alpha})). \end{split}\end{aligned}$$ Moreover, as $\tilde D$ satisfies the compatibility condition , then $$\begin{aligned} \label{conexion2}\begin{aligned} l(\nabla^{A}_{\alpha}(\tilde D_X(\alpha'))-\tilde D&_{[\rho_{\tilde A}(\alpha),X]}(\alpha')-\nabla^{A}_{\alpha'}(\tilde D_X(\alpha))\\&+\tilde D_{[\rho_{\tilde A}(\alpha'),X]}\tilde l(\alpha)-D_X[\alpha,\alpha']_{\tilde A})=0. \end{aligned}\end{aligned}$$ Using again a splitting of vector bundle maps $\Phi:A\to \tilde A$ of $\tilde l:\tilde A\to A$, we rewrite $$\begin{aligned} \label{seg}\begin{aligned} l\circ\nabla^{A}_{\alpha}(\tilde D_X(\alpha'))&=l\circ\nabla^{A}_{\alpha}(\tilde l(\Psi \tilde D_X(\alpha')))\\&=\nabla_{{\alpha}}(l\circ \tilde l(\Psi \tilde D_X(\alpha')))=\nabla_{{\alpha}}(l\circ \tilde D_X(\alpha'))). \end{aligned}\end{aligned}$$ Plugging in equation into equation , we get the compatibility condition for $D^1$. Note that by , $\nabla$ is the same $\tilde A$-connection associated to $D^2$. Prolongations of Spencer operators ---------------------------------- As a continuation of section \[section:prolongation\], we will discuss prolongations of Spencer operators. We will see how in this setting the objects are “Lie theoretic”, for example the classical prolongation becomes now a Lie subalgebroid, and the curvature map becomes a cocycle. Throughout this section $(D,l):A\to E$ is a Spencer operator relative to the vector bundle map $l:A\to E$, with associated flat $A$-connection $\nabla:\Gamma(A)\times\Gamma(E)\to \Gamma(E)$. 
### The classical Lie prolongation From the general theory of relative connections, recall from subsection \[basic notions\] that one has the inclusions $$\begin{aligned} P_D(A)\subset J^1_DA\subset J^1A. \end{aligned}$$ In the setting of Spencer operators these spaces are all Lie theoretic: \[prop:jet\_subalgebroid\] For any Spencer operator $D$ on $A$, $J^1_DA$ is a Lie subalgebroid of $J^1A.$ From lemma \[a-map\] one has that $J^1_DA$ is the kernel of a cocycle on $J^1A$, and therefore it is a subalgebroid. Indeed, for any 1-cocycle $c:B\to E$ of constant rank on a Lie algebroid $B$ with coefficients in some representation $E\in{\text{\rm Rep}\,}(B)$, its kernel is a Lie subalgebroid: for ${\alpha},{\beta}\in\ker c$, the cocycle condition for $c$ implies that $$\begin{aligned} 0=\nabla_{\alpha}c({\beta})-\nabla_{\beta}c({\alpha})-c[{\alpha},{\beta}]=-c[{\alpha},{\beta}], \end{aligned}$$ where $\nabla$ is the $B$-connection associated to $E$. Hence $[{\alpha},{\beta}]\in\ker c$. \[522\] $P_D(A)$, whenever smooth, is a Lie subalgebroid of $J^1A$. The proof uses the same idea as the proof of proposition \[prop:jet\_subalgebroid\]. In particular, we use the description of $P_D(A)$ as the kernel of the curvature map $$\begin{aligned} \label{varkappa-map} \varkappa_D:J^1_DA{\longrightarrow}{\mathrm{Hom}}(\wedge^2TM,E)\end{aligned}$$ introduced in definition \[def:curvature\]. In this setting the curvature map becomes a cocycle with values in ${\mathrm{Hom}}(\wedge^2TM,E)\in{\text{\rm Rep}\,}(J^1_DA)$, where here we used the representation $TM\in{\text{\rm Rep}\,}(J^1_DA)$ given by the restriction of the adjoint action of $J^1A$ on $TM$ (see subsection \[Jet groupoids and algebroids\]), $E\in {\text{\rm Rep}\,}(J^1_DA)$ by the pullback of the representation $E\in{\text{\rm Rep}\,}(A)$ via $pr:J^1_DA\to A$, and ${\mathrm{Hom}}(\wedge^2TM,E)$ is endowed with the tensor product representation. Hence the action is given by the flat $J^1_DA$-connection $$\begin{aligned} \tilde \nabla:\Gamma(J^1_DA)\times \Omega^2(M,E){\longrightarrow}\Omega^2(M,E)\end{aligned}$$ defined by $$\begin{aligned} \label{nabla'} \tilde \nabla_\xi\omega(X,Y)=\nabla_{pr(\xi)}(\omega(X,Y))-\omega({\text{\rm ad}\,}_\xi(X),Y)-\omega(X,{\text{\rm ad}\,}_\xi(Y)),\end{aligned}$$ where $\omega\in\Omega^2(M,E)$, $\xi\in\Gamma(J^1_DA)$ and $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$. \[aja\] The curvature of the Spencer operator $D$ $$\begin{aligned} \varkappa_D:J^1_DA{\longrightarrow}{\mathrm{Hom}}(\wedge^2TM,E)\end{aligned}$$ defined in definition \[def:curvature\] is an algebroid cocycle. Let $\xi,\eta\in\Gamma(J^1A)$ be two sections of $J^1_DA$; i.e., writing $\xi=({\alpha},\omega)$ and $\eta=({\beta},\theta)$ with ${\alpha},\beta\in\Gamma(A)$ and $\omega,\theta\in\Omega^1(M,A)$, we have $$\begin{aligned} D({\alpha})=l\circ\omega\quad\text{and}\quad D({\beta})=l\circ\theta.\end{aligned}$$ Here we are using the Spencer decomposition.
We must show that $$\begin{aligned} \label{A} \tilde \nabla_\xi\varkappa_D(\eta)-\tilde \nabla_\eta\varkappa_D(\xi)-\varkappa_D[\xi,\eta]=0.\end{aligned}$$ Expanding formula given in the proof of lemma \[J\^1\_DA\], we have that $$\begin{aligned} \label{B}\begin{split} \varkappa_D[\xi,\eta](X,Y)=&T(X,Y,{\alpha},\theta,\omega)-T(X,Y,{\beta},\omega,\theta)-T(Y,X,{\alpha},\theta,\omega)\\&+T(Y,X,{\beta},\omega,\theta)-L(X,Y,{\alpha},\theta,\omega)+L(X,Y,{\beta},\omega,\theta), \end{split}\end{aligned}$$ where for $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$ $$\begin{aligned} T(X,Y,{\alpha},\theta,\omega)=D_X([{\alpha},\theta(Y)]+\omega(\rho(\theta(Y))))-\theta[\rho({\alpha}),Y]),\end{aligned}$$ and $$\begin{aligned} L(X,Y,{\alpha},\theta,\omega)=l([{\alpha},\theta[X,Y]]+\omega(\rho(\theta[X,Y]))-\theta[\rho({\alpha}),[X,Y]]).\end{aligned}$$ On the other hand, using the definition of the flat $J^1_DA$-connection $\tilde \nabla$ (see equation ), we have that $$\begin{aligned} \label{C}\begin{split} (\tilde \nabla_\xi\varkappa_D(\eta))(X,Y)=&\nabla_{\alpha}(D_X\theta(Y)-D_Y\theta(X)-l\theta[X,Y])-N(X,Y,{\alpha},\theta,\omega)\\&+N(Y,X,{\alpha},\theta,\omega), \end{split}\end{aligned}$$ where $$\begin{aligned} \begin{split} N(X,Y,{\alpha},\theta,\omega)=&D_X(\theta([\rho({\alpha}),Y]+\rho(\omega(Y))))-D_{[\rho({\alpha}),Y]}\theta(X)\\&-D_{\rho(\omega(Y))}\theta(X) -l\theta([X,[\rho({\alpha}),Y]+\rho(\omega(Y))]).\end{split}\end{aligned}$$ Similarly one finds that $$\begin{aligned} \label{D}\begin{split} (\tilde \nabla_\eta\varkappa_D(\xi))(X,Y)=&\nabla_{\beta}(D_X\omega(Y)-D_Y\omega(X)-l\omega[X,Y])-N(X,Y,{\beta},\omega,\theta)\\&+N(Y,X,{\beta},\omega,\theta). \end{split}\end{aligned}$$ Using formulas , , , and the Jacobi identity for $[[\rho({\alpha}),X],Y]$ and for $[[\rho({\beta}),X],Y]$, the left hand side of equation becomes $$\begin{aligned} \label{equation:final} \begin{split} V(X,Y,{\alpha},\theta&,\omega)-V(Y,X,{\alpha},\theta,\omega)-V(X,Y,\beta,\omega,\theta)+V(Y,X,\beta,\omega,\theta)\\ -\nabla_{\alpha}(l\theta&[X,Y])+l[{\alpha},\theta[X,Y]])+l\omega(\rho(\theta[X,Y]))\\&+\nabla_\beta(l\omega[X,Y]-l[{\beta},\omega[X,Y]])-l\theta(\rho(\omega[X,Y])), \end{split}\end{aligned}$$ where $$\begin{aligned} \label{into}\begin{split} V(X,Y,{\alpha},\theta,\omega)=&\nabla_{\alpha}(D_X\theta(Y))-D_{[\rho({\alpha}),X]}\theta(Y)-l\omega[X,\rho(\theta(Y))]\\&-D_X[{\alpha},\omega(Y)]-D_{\rho(\omega(X))}\theta(Y). \end{split}\end{aligned}$$ Since $D{\alpha}=l\omega$, by the compatibility condition we have that $$\begin{aligned} D_{\rho(\omega(X))}\theta(Y)=\nabla_{\theta(Y)}-l[\theta(Y),\omega(X)];\end{aligned}$$ replacing this equation into we get by the compatibility condition , that $$\begin{aligned} V(X,Y,{\alpha},\theta,\omega)=l[\theta(Y),\omega(X)].\end{aligned}$$ Moreover, by the compatibility condition we have that $$\begin{aligned} -\nabla_{\alpha}(l\theta[X,Y])+l[{\alpha},\theta[X,Y]])+l\omega(\rho(\theta[X,Y]))=0\end{aligned}$$ and $$\begin{aligned} -\nabla_{\beta}(l\omega[X,Y])+l[{\beta},\omega[X,Y]])+l\theta(\rho(\omega[X,Y]))=0\end{aligned}$$ since $D{\alpha}=l\omega$ and $D{\beta}=l\theta$; therefore becomes $$\begin{aligned} l[\theta(Y),\omega(X)]-l[\theta(X),\omega(Y)]-l[\omega(Y),\theta(X)]+l[\omega(X),\theta(Y)]=0.\end{aligned}$$ This finally shows equation . 
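Summing up propositions \[prop:jet\_subalgebroid\] and \[522\]: whenever the spaces involved are smooth, one has inclusions of Lie subalgebroids $$P_D(A)= \ker\big(\varkappa_D: {J}^1_DA{\longrightarrow}{\mathrm{Hom}}(\wedge^2TM,E)\big)\ \subset\ {J}^1_DA= \ker\big(a: {J}^1A{\longrightarrow}T^{\ast}M\otimes E\big)\ \subset\ {J}^1A.$$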
If $P_D(A)\subset J^1A$ is smoothly defined in the sense of \[smoothly-defined\], then the restriction of the classical Lie-Spencer operator to $P_D(A)$ $$\begin{aligned} D^{(1)}:\Gamma(P_D(A)){\longrightarrow}\Omega^1(M,A)\end{aligned}$$ is a Lie-Spencer operator relative to $pr:P_D(A)\to A$. The fact that $D^{(1)}$ satisfies the compatibility condition of a Lie-Spencer operator follows from example \[higher jets on algebroids\]. In analogy with the discussion of relative connections (see section \[compatible connections\]), we can talk about compatible Spencer operators and the universal nature of $P_D(A).$ \[algb-prol\] Let $$\begin{aligned} \tilde A\overset{(\tilde D,\tilde l)}{{\longrightarrow}}A\overset{(D,l)}{{\longrightarrow}} E\end{aligned}$$ be Spencer operators. We say that $(D,\tilde D)$ are [**compatible Spencer operators**]{} if 1. $D$ and $\tilde D$ are compatible connections, and 2. $\tilde D$ is a Lie-Spencer operator. We say that $\tilde D$ is a [**Lie prolongation of**]{} $(A,D)$ if, moreover, $\tilde D$ is standard. The condition for compatible Spencer operators implies that the representations $\tilde \nabla$ and $\nabla$ are [**compatible**]{}, in the sense that for any $\tilde{\alpha}\in\Gamma(\tilde A)$ and any ${\alpha}\in\Gamma(A)$, $$\begin{aligned} \label{compatible-rep} l(\tilde \nabla_{\tilde{\alpha}}{\alpha})=\nabla_{\tilde l(\tilde{\alpha})}l({\alpha}).\end{aligned}$$ This is true as equation is equivalent to $$\begin{aligned} l\circ \tilde D_{\rho_A(\gamma)}(\tilde{\alpha})=D_{\rho_A(\gamma)}(\tilde l(\tilde{\alpha}))\end{aligned}$$ for $\gamma\in\Gamma(A)$ and $\tilde{\alpha}\in\Gamma(\tilde A)$. The classical Lie prolongation $P_D(A)$ of a Spencer operator $(D,l):A\to E$ is also characterized by a universal property, stated in this setting as follows. $(D^{(1)},pr):P_D(A)\to A$ is a Lie prolongation of $(A,D)$ which is universal in the sense that for any other Lie prolongation $$\begin{aligned} \tilde A\overset{(\tilde D,\tilde l)}{{\longrightarrow}} A\overset{(D,l)}{{\longrightarrow}} E\end{aligned}$$ there exists a unique Lie algebroid map $j:\tilde A\to P_D(A)$ such that $$\begin{aligned} \tilde D=D^{(1)}\circ j.\end{aligned}$$ Moreover, $j$ is injective. Note that $j=j_D$ is a Lie algebroid map by lemma \[J\^1E\]. The classical Lie prolongation of the Lie-Spencer operator $(J^kA,D^{\text{\rm clas}})$ is $$\begin{aligned} (J^{k+1}A,D^{\text{\rm clas}}).\end{aligned}$$ ### Higher prolongations; formal integrability All the notions and results given in chapter \[Relative connections\] remain valid in the setting of Spencer operators. Of course, in this context the objects are “Lie theoretic”. For example, whenever smooth, the classical $k$-prolongation space of a Spencer operator $(A,D)$ (definition \[k-prolongation-space\]) $$\begin{aligned} P_D^k(A)\subset J^kA\end{aligned}$$ is a Lie subalgebroid thanks to proposition \[522\]. If, moreover, it is smoothly defined (definition \[smoothly-defined\]) then the relative connection $$\begin{aligned} D^{(k)}:\Gamma(P^k_D(A)){\longrightarrow}\Omega^1(M,P^{k-1}_D(A))\end{aligned}$$ is a Lie-Spencer operator relative to the Lie algebroid map $pr:P_D^k(A)\to P_D^{k-1}(A)$. This is a consequence of example \[higher jets on algebroids\]. In this case we call $$\begin{aligned} (P_D^k(A),D^{(k)}): P_{D}^k(A)\xrightarrow{D^{\text{\rm clas}},pr}{}P^{k-1}_D(A)\end{aligned}$$ the [**classical $k$-Lie prolongation of $(A,D)$**]{}. 
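In other words (this merely unwinds the notation in the display above), the classical $k$-Lie prolongation consists of the Lie subalgebroid $P^k_D(A)\subset J^kA$ together with the restriction of the classical Spencer operator of $J^kA$, viewed as a relative connection over the projection onto the previous prolongation space: $$\begin{aligned} D^{(k)}=D^{\text{\rm clas}}|_{\Gamma(P^k_D(A))}:\Gamma(P^k_D(A)){\longrightarrow}\Omega^1(M,P^{k-1}_D(A)),\qquad pr:P^k_D(A){\longrightarrow}P^{k-1}_D(A).\end{aligned}$$ For $k=1$ this is precisely the classical Lie prolongation $(P_D(A),D^{(1)})$ discussed above. 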
If $(A,D)$ is formally integrable in the sense of definition \[fi\], then we obtain the tower of Spencer operators $$\begin{aligned}\ (P^\infty_D(A),D^{(\infty)}): \cdots{\longrightarrow}P_D^{k}(A)\overset{D^{(k)}}{{\longrightarrow}}\cdots{\longrightarrow}P_D(A)\overset{D^{(1)}}{{\longrightarrow}}A\overset{D}{{\longrightarrow}}E \end{aligned}$$ called the [**classical Spencer resolution**]{}. Here are some results on relative connections that we want to recall in this setting: \[ultimo-porfavor\] If the classical $k$-prolongation space $P^{k}_D(A)$ is smoothly defined then the symbol space of the classical $k$-prolongation of $(A,D)$ is equal to the $k$-prolongation of the symbol map $\partial_D$ $$\begin{aligned} {\mathfrak{g}}(P_D^{k}(A),D^{(k)})={\mathfrak{g}}^{(k)}(A,D).\end{aligned}$$ As explained in subsection \[Lie-Spencer operators\], the symbol space of a Lie-Spencer operator is a bundle of Lie algebras. Hence, if $P^k_D(A)$ is smoothly defined then for $k>1$, $$\begin{aligned} {\mathfrak{g}}(P_D^{k}(A),D^{(k)})\subset P_D^k(A)\end{aligned}$$ is a bundle of trivial Lie algebras. In the case $k=1$, $$\begin{aligned} {\mathfrak{g}}(P_D(A),D^{(1)})={\mathfrak{g}}^{(1)}(A,D)\subset P_D(A)\end{aligned}$$ is the bundle of Lie algebras with bracket given by $[\cdot,\cdot]_\rho$ where $\rho:A\to TM$ is the anchor map (see lemma \[\[\]\]). See example \[higher jets on algebroids\]. If $P_D^{k_0}(A)$ is smoothly defined, then for any integer $0\leq k\leq k_0$, there is a one to one correspondence between the Lie algebras $\text{\rm Sol}(P^k_D(A),D^{(k)})$ and $\text{\rm Sol}(A,D)$ given by the surjective Lie algebroid map $$\begin{aligned} pr^k_0:P^k_D(A){\longrightarrow}A,\end{aligned}$$ where $pr^k_0$ is the composition $pr\circ pr^2\circ\cdots\circ pr^k$. Recall that the main importance of formally integrable Spencer operators is the following existence result in the analytic case: Let $(A,D)$ be an analytic Spencer operator which is formally integrable. Then, given $p\in P^k_D(A)$ with $\pi(p)=x\in M$, there exists an analytic solution $s\in\text{\rm Sol}(A,D)$ over a neighborhood of $x$ such that $j^k_xs=p.$ And of course, we again have the same workable criteria for formal integrability: If 1. $pr:P_D(A)\to A$ is surjective, 2. ${\mathfrak{g}}^{(1)}(A,D)$ is a smooth vector bundle over $M$, and 3. $H^{2,k}({\mathfrak{g}})=0$ for $k\geq0$, then $(A,D)$ is formally integrable. ### Abstract prolongations and Spencer towers {#algebroid prolongation and spencer towers} So as not to make this subsection redundant, we simply remark that all the definitions and results of subsection \[subsection:prolongation and Spencer towers\] make sense in the setting of Lie algebroids and Spencer operators. For instance: - A [**Spencer tower**]{} $(A^\infty,D^\infty,l^\infty)$ is a tower of relative connections $$\begin{aligned} \cdots{\longrightarrow}A^k \overset{(D^k,l)}{{\longrightarrow}}\cdots {\longrightarrow}A^2\overset{(D^2,l)}{{\longrightarrow}} A^1\overset{(D^1,l)}{{\longrightarrow}} A\end{aligned}$$ in which all $A^k$ are Lie algebroids, and all $D^k$ are Lie-Spencer operators. We say that $(A^\infty,D^\infty,l^\infty)$ is a [**standard Spencer tower**]{} when all the connections are standard. - A [**standard Spencer resolution**]{} of a Spencer operator $D$ is a standard Spencer tower $(A^{\infty},D^\infty)$ with the property that $(D,D^1)$ are compatible. 
- A [**morphism $\Psi$ of Spencer towers**]{} is a morphism of towers $$\begin{aligned} \Psi: (A^\infty,D^\infty){\longrightarrow}(\tilde A^\infty,\tilde D^\infty)\end{aligned}$$ such that each $\Psi_k:A_k\to \tilde A_k$ is a Lie algebroid morphism and so on... The sequence $$\begin{aligned} (J^\infty A,D^{\infty\text{-{\text{\rm clas}}}}):\cdots {\longrightarrow}J^{k+1}A\overset{D^{\text{\rm clas}}}{{\longrightarrow}}J^kA\overset{D^{\text{\rm clas}}}{{\longrightarrow}} \cdots{\longrightarrow}J^1A\overset{D^{\text{\rm clas}}}{{\longrightarrow}}A \end{aligned}$$ is an example of a standard Spencer tower. If $(D,l):A\to E$ is formally integrable, we obtain the classical Lie-Spencer resolution $$\begin{aligned}\label{pro2} (P^\infty_D(A),D^{(\infty)}): \cdots{\longrightarrow}P_D^{k}(A)\overset{D^{(k)}}{{\longrightarrow}}\cdots{\longrightarrow}P_D(A)\overset{D^{(1)}}{{\longrightarrow}}A\overset{D}{{\longrightarrow}}E. \end{aligned}$$ Note that is a subtower of $(J^\infty A,D^{\infty\text{-{\text{\rm clas}}}})$. Using corollary \[k-prolongations\] and remark \[extender\], this sequence still makes sense even if $(A,D)$ is not formally integrable, but this involves non-smooth subbundles. Going back to the notion of a Lie pseudogroup $\Gamma\subset \operatorname{Bis}({\mathcal{G}})$ (see subsection \[Lie pseudogroups\]), one has an induced standard Spencer tower $$\begin{aligned} (A^{\infty}(\Gamma),D^{\infty}): \cdots{\longrightarrow}A^{(k)}(\Gamma)\overset{D^{\text{\rm clas}}}{{\longrightarrow}}\cdots{\longrightarrow}A^{(1)}(\Gamma)\overset{D^{{\text{\rm clas}}}}{{\longrightarrow}}A\overset{D^{{\text{\rm clas}}}}{{\longrightarrow}}A^{(0)}(\Gamma) \end{aligned}$$ In the setting of Spencer operators the classical Spencer resolution is also universal in the following sense. Let $D$ be not necessarily formally integrable Spencer operator relative to the vector bundle map $l:A\to E$. The possibly non-smooth subtower $(P_D^\infty(A),D^{(\infty)})$ of $(J^\infty A,D^\infty)$ is universal among the Spencer resolution of $(A,D)$ in the sense that for any other Spencer resolution $$\begin{aligned} A^\infty\xrightarrow{(D^\infty,l^\infty)}{}A\xrightarrow{(D,l)}E\end{aligned}$$ there exists a unique morphism $\Psi:(A^\infty,D^{\infty})\to (J^\infty A,D^{\infty\text{-{\text{\rm clas}}}})$ of Spencer towers such that $\Psi^0:A\to A$ is the identity map, and for $k\geq 1$ $$\begin{aligned} \Psi^k(A_k)\subset P_D^k(A).\end{aligned}$$ Moreover, if $(A^\infty,D^\infty)$ is a standard Spencer resolution of $(A,D)$ then $\Psi$ is injective. It remains to check that the maps $\Psi_k:A_k\to J^kA$ constructed in the proof of theorem \[J\^1E\] are Lie algebroid morphisms. We will give an inductive argument. By construction $\Psi_1$ is equal to $$\begin{aligned} j_{D^1}.:A_1\to J^1A\end{aligned}$$ which is a Lie algebroid morphism by lemma \[J\^1E\]. Let’s assume now that for a fixed $k$, $\Psi_k:A_k\to J^kA$ is a Lie algebroid morphism. By definition $\Psi_{k+1}:A_{k+1}\to J^{k+1}A$ is given by the composition $$\begin{aligned} A_{k+1}\xrightarrow{j_{D^{k+1}}} J^1(A_k)\xrightarrow{prol(\Psi_k)}J^1(J^kA).\end{aligned}$$ As $\Psi_k:A_k\to J^kA$ is a Lie algebroid morphism then it follows that $prol(\Psi_k):J^1(A_k)\to J^1(J^kA)$ is a Lie algebroid morphism, and therefore $\Psi_{k+1}$ is a Lie algebroid morphism since it is the composition of two of them. 
Indeed, as $\rho_{J^kA}\circ\Psi_k=\rho_{A_k}$, where $\rho_{A_k}$ and $\rho_{J^kA}$ are the anchor maps of $A_k$ and $J^kA$ respectively, and the anchor of the first jet of a Lie algebroid $A$ is the composition of the projection $J^1A\to A$ with the anchor of $A$, then $$\begin{aligned} \begin{aligned} \rho_{J^1(J^kA)}\circ prol(\Psi_k)({\alpha},\omega)=&\rho_{J^1(J^kA)}(\Psi_k\circ{\alpha},\Psi_k\circ\omega)\\&=\rho_{J^kA}(\Psi_k\circ{\alpha})=\rho_{A_k}({\alpha})=\rho_{J^1(A_k)}({\alpha},\omega) \end{aligned}\end{aligned}$$ for $({\alpha},\omega)\in\Gamma(A_k)\oplus \Omega^1(M,A_k)$, where here we are using the Spencer decomposition . On the other hand, as $[j^1{\alpha},j^1{\beta}]_{J^1A_k}=j^1([{\alpha},{\beta}]_{A_k})$ for ${\alpha},{\beta}\in\Gamma(A_k)$, and as $\Psi_k$ is a Lie algebroid morphism, then $$\begin{aligned} \begin{split} [prol(\Psi_k)(j^1{\alpha}),&prol(\Psi_k)(j^1{\beta})]_{J^1(J^kA)}=[j^1(\Psi_k\circ{\alpha}),j^1(\Psi_k\circ{\beta})]_{J^1(J^kA)}\\&=j^1[\Psi_k\circ{\alpha},\Psi_k\circ{\beta}]_{J^kA}=j^1(\Psi_k\circ[{\alpha},{\beta}]_{A_k})=prol(\Psi_k)([j^1{\alpha},j^1{\beta}]_{J^1(A_k)}). \end{split}\end{aligned}$$ As the space of sections of $J^1(A_k)$ is generated by the holonomic sections as a $C^{\infty}(M)$-module, then by the above equation and the Leibniz identity it follows that $prol(\Psi_k)$ preserves the brackets, i.e. $$\begin{aligned} prol(\Psi_k)\circ[\cdot,\cdot]_{J^1(A_k)}=[\cdot,\cdot]_{J^1(J^kA)}\circ(prol(\Psi_k)\times prol(\Psi_k)).\end{aligned}$$ Hence, $\Psi_{k+1}$ is a Lie algebroid morphism. Particular cases {#Cartan algebroids} ---------------- ### Cartan algebroids Recalling example \[inf-act\], let $$\begin{aligned} \rho:\mathfrak{h}{\longrightarrow}{\ensuremath{\mathfrak{X}}}(M)\end{aligned}$$ be an infinitesimal action of a Lie algebra $\mathfrak{h}$ on $M$. The associated action algebroid $\mathfrak{h}\ltimes M$ comes equipped with the canonical flat linear connection $$\begin{aligned} \nabla^{\text{flat}}:{\ensuremath{\mathfrak{X}}}(M)\times C^\infty(M,\mathfrak{h}){\longrightarrow}C^\infty(M,\mathfrak{h}).\end{aligned}$$ This is the main example of a Cartan algebroid defined in [@Blaom], and in fact it is the only flat Cartan algebroid, up to isomorphism, when $M$ is simply connected, as we will see in proposition \[only\]. Cartan algebroids were defined as a Lie algebroid $A$ together with a linear connection $$\begin{aligned} \nabla: {\ensuremath{\mathfrak{X}}}(M)\times \Gamma(A){\longrightarrow}\Gamma(A)\end{aligned}$$ such that its basic curvature $R_\nabla$ vanishes. By definition \[Lie-Spencer\] and lemma \[J\^1E\], two equivalent interpretations of Cartan algebroids are: - as a Lie-Spencer operator $D$ relative to the identity map $id:A\to A$, - as a Lie algebroid splitting $$\begin{aligned} j_D:A{\longrightarrow}J^1A\end{aligned}$$ of the projection $pr:J^1A\to A.$ \[only\] Let $(A,\nabla)$ be a Cartan algebroid over a simply connected manifold $M$. If $\nabla$ is flat then there exists an infinitesimal action $\rho:\mathfrak{h}\to {\ensuremath{\mathfrak{X}}}(M)$ and a Lie algebroid isomorphism $$\begin{aligned} \phi:\mathfrak{h}\ltimes M{\longrightarrow}A\end{aligned}$$ such that under this identification, $\nabla$ becomes the canonical flat linear connection $\nabla^{\text{\rm flat}}.$ It is a well-known fact that a vector bundle $A$ over a simply connected manifold $M$, with a flat linear connection $\nabla$, is isomorphic to the trivial bundle $V_M$ with fiber $V\subset\Gamma(A)$, the finite dimensional vector space of parallel sections. The isomorphism is given by $$\begin{aligned} \label{lam}\begin{aligned} V_M&\overset{\phi}{{\longrightarrow}} A\\ (s,x)&\mapsto s(x). 
\end{aligned}\end{aligned}$$ Moreover, under this isomorphism, the canonical flat connection $\nabla^\text{flat}$ of $V_M$ is equal to $\nabla$. Now, from the compatibility condition we have that the Lie bracket of $\Gamma(A)$ restricts to a Lie bracket $[\cdot,\cdot]$ on $V$. Let $\mathfrak{h}:=V$ be the Lie algebra with Lie bracket $[\cdot,\cdot]_{\mathfrak{h}}:=[\cdot,\cdot]$. We have an obvious action $$\begin{aligned} \rho':\mathfrak{h}{\longrightarrow}{\ensuremath{\mathfrak{X}}}(M)\end{aligned}$$ given by $\rho'({\alpha})(x)=\rho({\alpha}(x))$, where $\rho$ is the anchor of $A$. It is clear from the definition of $\rho'$ and the Lie algebroid structure of $\mathfrak{h}\ltimes M$ that $\phi$ is a Lie algebroid isomorphism. ### Spencer operators of finite type Let $D$ be a Spencer operator of finite type (see section \[finite type\]) relative to the vector bundle map $l:A\to E$. We will prove the following stronger version of theorem \[finitecase\]. \[F-T\] Suppose that $D$ is a Spencer operator over a simply connected manifold $M$, and let $k\geq 1$ be the order of $D$. If 1. $P^k_D(A)$ is smoothly defined, and 2. $pr:P^{k+1}_D(A)\to P^{k}_D(A)$ is surjective, then there exists an infinitesimal action $$\begin{aligned} \psi:\mathfrak{h}{\longrightarrow}{\ensuremath{\mathfrak{X}}}(M)\end{aligned}$$ of a Lie algebra $\mathfrak{h}$ on $M$ and a surjective morphism $p:(\mathfrak{h}\ltimes M,\nabla^{\text{\rm flat}})\to (A,D)$ of Spencer operators, inducing a bijection in the space of solutions. Moreover, ${\text{\rm Sol}}(A,D)\subset\Gamma(A)$ is a finite dimensional Lie algebra of dimension $$\begin{aligned} r={\text{\rm rk}\,}A+{\text{\rm rk}\,}{\mathfrak{g}}^{(1)}+{\text{\rm rk}\,}{\mathfrak{g}}^{(2)}+\cdots+{\text{\rm rk}\,}{\mathfrak{g}}^{(k-1)}.\end{aligned}$$ In theorem \[F-T\], a morphism of Spencer operators is a morphism of relative connections $p:(\mathfrak{h}\ltimes M,\nabla^{\text{flat}})\to (A,D)$ with $p:\mathfrak{h}\ltimes M\to A$ a Lie algebroid map. The message is clear here: a Spencer operator satisfying the hypotheses of theorem \[F-T\] is the quotient of an action algebroid with the canonical flat connection $\nabla^{\text{\rm flat}}$. Intuitively the situation is as follows: $$\begin{aligned} \xymatrix{ \mathfrak{h}\ltimes M \ar[r]^{\nabla^{\text{flat}}} \ar[d]_p & \mathfrak{h}\ltimes M \ar[d]^{l\circ p}\\ A \ar[r]^{D} & E. }\end{aligned}$$ Although the above diagram is not precise, it helps to illustrate our situation. \[lemma:auxiliary\] With the hypotheses of the previous theorem, one has that 1. the Lie algebroid $P^{k}_D(A)$ is isomorphic to the action algebroid $\mathfrak{h}\ltimes M$, 2. $pr:P^{k}_D(A)\to P^{k-1}_D(A)$ is a Lie algebroid isomorphism. Moreover, under the identification $pr$, $D^{(k)}$ becomes the trivial connection $\nabla^{\text{\rm flat}}.$ From lemma \[lema\] one has that under the Lie algebroid isomorphism $pr:P^{k}_D(A)\to P^{k-1}_D(A)$, $(P^{k}_D(A),D^{(k)})$ is a flat Cartan algebroid and therefore, by proposition \[only\], it is isomorphic to the action algebroid $\mathfrak{h}\ltimes M$ where $\mathfrak{h}$ is the Lie algebra of parallel sections of $D^{(k)}.$ That $p:\mathfrak{h}\ltimes M\to A$ is a Lie algebroid morphism follows from the fact that it is the composition of Lie algebroid morphisms as one can see in the proof of theorem \[finitecase\]. From this, theorem \[finitecase\], and lemma \[lema\] the results follow. Suppose that $D$ is a Spencer operator over a simply connected manifold $M$, and let $k\geq 1$ be the order of $D$. If 1. 
$pr:P_D(A)\to A$ is surjective, 2. ${\mathfrak{g}}^{(1)}$ is a smooth vector bundle over $M$, and 3. $H^{2,l}({\mathfrak{g}})=0$ for $0\leq l\leq k-1$, then there exists an infinitesimal action $$\begin{aligned} \psi:\mathfrak{h}{\longrightarrow}{\ensuremath{\mathfrak{X}}}(M)\end{aligned}$$ of a Lie algebra $\mathfrak{h}$ over $M$ and a morphism $p:(\mathfrak{h}\ltimes M,\nabla^{\text{\rm flat}})\to (A,D)$ of Spencer operators, inducing a bijection in the space of solutions. Moreover, ${\text{\rm Sol}}(A,D)\subset\Gamma(A) $ is a finite dimensional Lie algebra of dimension $$\begin{aligned} r={\text{\rm rk}\,}A+{\text{\rm rk}\,}{\mathfrak{g}}^{(1)}+{\text{\rm rk}\,}{\mathfrak{g}}^{(2)}+\cdots+{\text{\rm rk}\,}{\mathfrak{g}}^{(k-1)}.\end{aligned}$$ Our hypotheses imply that $(A,D)$ is formally integrable and therefore we are left in the situation of theorem \[F-T\]. Pfaffian groupoids {#Pfaffian groupoids} ================== Pfaffian groupoids are Lie groupoids ${\mathcal{G}}{\rightrightarrows}M$ endowed with a multiplicative distribution ${\mathcal{H}}$ of ${\mathcal{G}}$ (see definition \[def-pf-syst\]) compatible with the groupoid structure and satisfying some extra conditions (see definition \[defn:pfaffian\_grpds\]). For such a groupoid, the source map $s:({\mathcal{G}},{\mathcal{H}})\to M$ is a Pfaffian bundle, to which we can apply the theory of chapter \[Pfaffian bundles\]. In the dual picture we deal with a Pfaffian form $\theta\in\Omega^1({\mathcal{G}},t^*E)$, where in this case $E$ is a representation of ${\mathcal{G}}$ and $\theta$ is multiplicative, i.e. it is compatible with the multiplication of ${\mathcal{G}}$. In this setting all objects become Lie theoretic and therefore their linearizations are of Spencer type on the Lie algebroid $A$ of ${\mathcal{G}}$ (see chapter \[Relative connections on Lie algebroids\]). Again, one advantage of working with forms is that it is slightly more general (it allows $\theta$ to have non-constant rank). An advantage of the point of view of distributions is that some integrability conditions are more natural and easier to handle globally.\ Here are some connections with the existing literature. Multiplicative distributions in a sense more general than ours, but which are required to be involutive, were studied in [@Hawkins] in the context of geometric quantization and, more recently, in [@JotzOrtiz]. Moving towards Cartan’s ideas, our Cartan connections from subsection \[Cartan connections\] are the global counterpart of Blaom’s Cartan algebroids [@Blaom]. The flat Cartan connections are the same “flat connections on groupoids” used by Behrend in the context of equivariant cohomology [@behrend]. On the other hand, due to our approach, there is long list of literature on Lie pseudogroups and the geometry of PDE that serves as inspiration for this thesis [@Cartan1904; @Cartan1905; @Cartan1937; @Spencer; @KumperaSpencer; @GuilleminSternberg:deformation; @Gold1; @Gold2; @Gold3; @Olver:MC; @Kamran; @BC; @Seiler]. Of course, the appearance of the classical Cartan form and Spencer operator is an indication of this relationship with the theory of Lie pseudogroups. Pfaffian groupoids {#pfaffian-groupoids} ------------------ ### Multiplicative distributions To discuss multiplicativity of distributions, recall that one has a Lie groupoid $T{\mathcal{G}}{\rightrightarrows}TM$ associated to any Lie groupoid ${\mathcal{G}}{\rightrightarrows}M$; its structure maps are just the differentials of the structure maps of ${\mathcal{G}}$. 
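To fix notation for what follows, let us spell out the previous sentence: the structure maps of $T{\mathcal{G}}{\rightrightarrows}TM$ are $$\begin{aligned} {d}s,\,{d}t:T{\mathcal{G}}{\longrightarrow}TM,\qquad {d}m:T{\mathcal{G}}\times_{TM}T{\mathcal{G}}{\longrightarrow}T{\mathcal{G}},\qquad {d}u:TM{\longrightarrow}T{\mathcal{G}},\qquad {d}i:T{\mathcal{G}}{\longrightarrow}T{\mathcal{G}},\end{aligned}$$ where $u:M\to{\mathcal{G}}$ and $i:{\mathcal{G}}\to{\mathcal{G}}$ denote the unit and inversion maps of ${\mathcal{G}}$, and ${d}m$ is defined on pairs $(X_g,Y_h)$ with ${d}s(X_g)={d}t(Y_h)$. 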
\[def-pf-syst\] A [**multiplicative distribution on ${\mathcal{G}}$**]{} is any distribution ${\mathcal{H}}\subset T{\mathcal{G}}$ which is also a Lie subgroupoid of $T{\mathcal{G}}{\rightrightarrows}TM$ (with the same base $TM$). Here are some remarks about the previous definition. \[mult-equiv\]Given a distribution ${\mathcal{H}}\subset T{\mathcal{G}}$, the multiplicativity of ${\mathcal{H}}$ is equivalent to: 1. ${\mathcal{H}}$ is closed under ${d}m$, i.e. for any $X_g\in {\mathcal{H}}_g$, $Y_h\in {\mathcal{H}}_h$ for which ${d}s(X_g)={d}t(Y_h)$, ${d}_{(g,h)}m(X_g,Y_h)\in {\mathcal{H}}_{gh}.$ 2. ${\mathcal{H}}$ is closed under ${d}i$, i.e. ${d}i({\mathcal{H}}_g)={\mathcal{H}}_{g^{-1}}.$ 3. At units $x=1_x$, ${\mathcal{H}}_x$ contains $T_xM$. 4. ${\mathcal{H}}$ is $s$-transversal, i.e. $T{\mathcal{G}}={\mathcal{H}}+T^s{\mathcal{G}}$. The proof of the last condition is analogous to the part of the proof of lemma \[linear-transversal\] where one shows that a linear distribution is $\pi$-transversal. In this case the roles of the zero section $z:M\to F$ and the fiber-wise addition $a:F\times_MF\to F$ are played by the unit map $u:M\to{\mathcal{G}}$ and the multiplication map $m:{\mathcal{G}}_2\to {\mathcal{G}}$, respectively. Note that the last condition is equivalent to the surjectivity of $ds: {\mathcal{H}}\to TM$; it actually implies that the last map is not only point-wise surjective, but also a submersion (which is necessary for ${\mathcal{H}}$ to be a Lie groupoid over $TM$), as it is a vector bundle map over $s:{\mathcal{G}}\to M$. The dual objects of multiplicative distributions are the point-wise surjective multiplicative one forms. This will be treated in subsection \[the dual point of view\]. \[defn:pfaffian\_grpds\] - A [**Pfaffian groupoid**]{} is a Lie groupoid ${\mathcal{G}}{\rightrightarrows}M$ endowed with a multiplicative distribution ${\mathcal{H}}\subset T{\mathcal{G}}$ which is also a Pfaffian distribution with respect to the source map. - A Pfaffian groupoid $({\mathcal{G}},{\mathcal{H}})$ is said to be of **Lie type** (or a **Lie-Pfaffian** groupoid) if $$\begin{aligned} {\mathcal{H}}\cap\ker ds={\mathcal{H}}\cap\ker dt.\end{aligned}$$ In other words, a Pfaffian groupoid is a Lie groupoid endowed with an $s$-involutive multiplicative distribution. See remark \[mult-equiv\]. \[transversalidad\]Equivalently, one could define a Pfaffian groupoid as a Lie groupoid ${\mathcal{G}}$ endowed with a multiplicative distribution ${\mathcal{H}}$ with the property that ${\mathcal{H}}$ is a Pfaffian distribution with respect to the target map (see chapter \[Pfaffian bundles\]). That both notions are equivalent comes from the fact that the differential of the inverse map $i:{\mathcal{G}}\to {\mathcal{G}}$ maps ${\mathcal{H}}$ isomorphically to ${\mathcal{H}}$, and $T^s{\mathcal{G}}$ to $T^t{\mathcal{G}}$. From this one has that the multiplicative distribution ${\mathcal{H}}$ is Pfaffian with respect to the source map $s:{\mathcal{G}}\to M$ if and only if it is Pfaffian with respect to the target map $t:{\mathcal{G}}\to M$. As we shall see, via the linearization along the unit map, Pfaffian groupoids correspond to Spencer operators in the sense of chapter \[Relative connections on Lie algebroids\]. In this correspondence, Lie-Pfaffian groupoids correspond to Lie-Spencer operators (see definition \[Lie-Spencer\]). This also motivates the terminology of “Lie-Pfaffian”. See theorem \[t2\] and proposition \[in-dis\]. Let $({\mathcal{G}},{\mathcal{H}})$ be a Lie-Pfaffian groupoid. 
There exists a Lie algebroid structure on $$E:= A/({\mathcal{H}}^s|_M)$$ such that the projection $A\to E$ is a Lie algebroid map. Notice that the condition ${\mathcal{H}}^s={\mathcal{H}}^t$ implies that the anchor map $\rho:A\to TM$ descends to the quotient $E\to M$ as ${\mathcal{H}}^s|_M\subset \ker\rho$. On the other hand, we define the bracket on $E$ on the equivalence class of ${\alpha},{\beta}\in\Gamma(A)$ by $$\begin{aligned} \label{bra-quo} [\bar{{\alpha}},\bar{{\beta}}]=[{\alpha},{\beta}]{\operatorname{mod}}{\mathcal{H}}^s|_M.\end{aligned}$$ To see that the bracket is well-defined, take ${\beta}\in \Gamma({\mathcal{H}}^s|_M)$ and ${\alpha}\in\Gamma(A)$. Then for the associated right invariant vector fields ${\beta}^r\in\Gamma({\mathcal{H}}^s)(=\Gamma({\mathcal{H}}^t))$ and ${\alpha}^r\in{\ensuremath{\mathfrak{X}}}^{\text{inv}}({\mathcal{G}})$, $$\begin{aligned} \begin{split} [{\alpha}^r,{\beta}^r](g)&=\frac{d}{d{\epsilon}}|_{{\epsilon}=0}d_{\varphi_{{\alpha}^r}^{\epsilon}(g)}\varphi^{-{\epsilon}}_{{\alpha}^r}({\beta}^r_{\varphi_{{\alpha}^r}^{\epsilon}(g)})\\&=\frac{d}{d{\epsilon}}|_{{\epsilon}=0}dm(\phi^{-{\epsilon}}_{{\alpha}}(dt({\beta}^r_{\varphi_{{\alpha}^r}^{\epsilon}(g)})),{\beta}^r_{\varphi_{{\alpha}^r}^{\epsilon}(g)})\\&=\frac{d}{d{\epsilon}}|_{{\epsilon}=0}dm(0_{t(\varphi_{{\alpha}^r}^{-{\epsilon}}(g))},{\beta}^r_{\varphi_{{\alpha}^r}^{\epsilon}(g)})=\frac{d}{d{\epsilon}}|_{{\epsilon}=0}L_{t(\varphi_{{\alpha}^r}^{-{\epsilon}}(g))}({\beta}^r_{\varphi_{{\alpha}^r}^{\epsilon}(g)}), \end{split}\end{aligned}$$ where $\varphi_{{\alpha}^r}^{\epsilon}(g)=\phi_{\alpha}^{{\epsilon}}(t(g))g$ is the flow of ${\alpha}^r$ (see remark \[flows of sections\]), and $L:T^t{\mathcal{G}}\to T^t{\mathcal{G}}$, left multiplication, is defined by $L(v_g)= dm(0_{g^{-1}},v_g)$. As ${\mathcal{H}}^t$ is closed under left multiplication then $[{\alpha}^r,{\beta}^r]\in\Gamma({\mathcal{H}}^t)(=\Gamma({\mathcal{H}}^s))$. This implies that $[{\alpha},{\beta}]\in\Gamma({\mathcal{H}}^s|_M)$ and therefore formula is well-defined. The previous proof shows that any multiplicative distribution ${\mathcal{H}}$ with the property that ${\mathcal{H}}^s={\mathcal{H}}^t$ is of Pfaffian-type, i.e. ${\mathcal{H}}$ is $s$-involutive. \[618\]There is a natural action of a Lie-Pfaffian groupoid $({\mathcal{G}},{\mathcal{H}})$ on $TM$, the tangent space of the base manifold: for $g\in{\mathcal{G}}$, $$\begin{aligned} g:T_{s(g)}M{\longrightarrow}T_{t(g)}M,\quad g\cdot X=dt(\tilde X_g),\end{aligned}$$ where $\tilde X_g\in {\mathcal{H}}_g$ is any vector that $s$-projects to $X$, i.e. $ds(\tilde X_g)=X$. Note that this does not depend on the choice of $\tilde X_g$. Indeed, if $X'_g\in {\mathcal{H}}_g$ is any other such vector then $$\begin{aligned} \tilde X_g-X'_g\in{\mathcal{H}}_g^s={\mathcal{H}}^t_g\quad\Longrightarrow\quad dt(\tilde X_g)=dt(X'_g).\end{aligned}$$ To check the axioms of the action notice that $TM\hookrightarrow {\mathcal{H}}$ by remark \[mult-equiv\], and therefore $1_x:T_xM\to T_xM$ is the identity. Now, if $(h,g)$ are composable and $X\in T_{s(g)}M$, let $Y_h\in {\mathcal{H}}_h$ be such that $$\begin{aligned} ds(Y_h)=dt(\tilde X_g)=g\cdot X.\end{aligned}$$ Take $ Z_{hg}=dm(Y_h,\tilde X_g)\in {\mathcal{H}}_{hg}. $ Then $ds(Z_{hg})=ds(\tilde X_g)=X$ and therefore $$\begin{aligned} (hg)\cdot X=dt(Z_{hg})=dt(Y_h)=h\cdot(g\cdot X). 
\end{aligned}$$ Since a Pfaffian groupoid is a Pfaffian bundle $s:({\mathcal{G}},{\mathcal{H}})\to M$, we can apply the notions of Pfaffian bundles as: - the symbol space ${\mathfrak{g}}({\mathcal{H}})$ of ${\mathcal{H}}$ given by the vector bundle over ${\mathcal{G}}$ $$\begin{aligned} {\mathfrak{g}}({\mathcal{H}}):={\mathcal{H}}^s,\end{aligned}$$ - the symbol map given by the vector bundle map over ${\mathcal{G}}$ $$\begin{aligned} \partial_{\mathcal{H}}:{\mathfrak{g}}({\mathcal{H}}){\longrightarrow}{\mathrm{Hom}}(s^*TM,T{\mathcal{G}}/{\mathcal{H}}),\end{aligned}$$ - the prolongations of ${\mathfrak{g}}({\mathcal{H}})$ $$\begin{aligned} {\mathfrak{g}}^{(k)}({\mathcal{G}},{\mathcal{H}})\subset s^*(S^kT^*)\otimes (T{\mathcal{G}}/{\mathcal{H}}).\end{aligned}$$ In the case of groupoids, we will see that these objects over ${\mathcal{G}}$ satisfy some invariance conditions and that they actually come from $M$. For instance, for the symbol space, consider the induced vector bundle over $M$ given by $$\begin{aligned} {\mathfrak{g}}_M({\mathcal{H}}):={\mathfrak{g}}({\mathcal{H}})|_M\subset T^s{\mathcal{G}}|_M=A.\end{aligned}$$ \[isom\]Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distributions. Then $$\begin{aligned} {\mathfrak{g}}({\mathcal{H}})\simeq t^*{\mathfrak{g}}_M({\mathcal{H}}).\end{aligned}$$ While right translations induce an isomorphism of vector bundles over $M$, $R: T^{s}{\mathcal{G}}\stackrel{\sim}{{\longrightarrow}} t^{\ast}A$, $R(X_g)= R_{g^{-1}}(X_g)$, they restrict to an isomorphism ${\mathcal{H}}^s\cong t^{\ast} \mathfrak{g}$: this is a consequence of ${\mathcal{H}}$ being closed under the differential of the multiplication, and that $R(X_g)=dm(X_g,0_{g^{-1}})$ for any $X_g\in T^s_g{\mathcal{G}}.$ In this setting the natural objects to consider as solutions and partial integral elements of the pair $({\mathcal{G}},{\mathcal{H}})$ are the following: Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. - A [**solution of $({\mathcal{G}},{\mathcal{H}})$**]{} is a bisection $b\in\operatorname{Bis}({\mathcal{G}})$ which is also a solution of $({\mathcal{G}},{\mathcal{H}})$ in the sense of definition \[def: pfaffian distributions\]. the set of solutions of $({\mathcal{G}},{\mathcal{H}})$ is denoted by $$\begin{aligned} \operatorname{Bis}({\mathcal{G}},{\mathcal{H}}).\end{aligned}$$ - A [**partial integral element of ${\mathcal{H}}$**]{} is a linear subspace $V\subset T_g{\mathcal{G}}$ with the property that $V\subset {\mathcal{H}}_g$ and $$\begin{aligned} T_g{\mathcal{G}}=V\oplus T^s_g{\mathcal{G}}\qquad{\text{and}}\qquad T_g{\mathcal{G}}=V\oplus T_g^t{\mathcal{G}}.\end{aligned}$$ ### The dual point of view {#the dual point of view} In general, multiplicative distributions arise as kernels of point-wise surjective multiplicative one forms and this provides a dual point of view on Pfaffian groupoids, which generalizes the following 1-1 correspondence for bundles $R\to M$ 1. point-wise surjective $1$-forms $\theta\in \Omega^1(R, E')$, where $E'$ is some vector bundle over $R$. 2. distributions ${\mathcal{H}}$ on $R$, i.e. vector sub-bundles ${\mathcal{H}}\subset TR$. In one direction, ${\mathcal{H}}= \textrm{Ker}(\theta)$; conversely, $E= TR/{\mathcal{H}}$ and $\theta$ is the canonical projection. It is clear that the kernel ${\mathcal{H}}$ of any point-wise surjective multiplicative one form $\theta\in\Omega^1({\mathcal{G}},E)$ (with coefficients in some representation $E$) is multiplicative. We explain here the converse. 
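Before explaining the converse, let us record what multiplicativity means for such a form, in the convention used in the proofs below (cf. lemma \[lemma-from-H-to-theta\] and proposition \[cartan form\]): a form $\theta\in\Omega^1({\mathcal{G}},t^*E)$ with coefficients in a representation $E$ of ${\mathcal{G}}$ is multiplicative if for every pair of composable arrows $(g,h)$ and every $X_g\in T_g{\mathcal{G}}$, $Y_h\in T_h{\mathcal{G}}$ with ${d}s(X_g)={d}t(Y_h)$, $$\begin{aligned} \theta_{gh}({d}m(X_g,Y_h))=\theta_g(X_g)+g\cdot\theta_h(Y_h),\end{aligned}$$ where $g\cdot$ denotes the action of $g$ on $E$. In particular, ${\mathcal{H}}=\textrm{Ker}(\theta)$ is closed under ${d}m$. 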
\[from-theta-H\] Let ${\mathcal{G}}$ be a Lie groupoid. Then for any representation $E$ of ${\mathcal{G}}$ and any point-wise surjective $E$-valued multiplicative form $\theta\in \Omega^1({\mathcal{G}}, t^{\ast}E)$, $${\mathcal{H}}_{\theta}:= \textrm{Ker}(\theta)\subset T{\mathcal{G}}$$ is a multiplicative distribution on ${\mathcal{G}}$. Moreover, any multiplicative distribution arises in this way. We use $$\mathfrak{g}:= {\mathcal{H}}^s|_{M}\subset A$$ from the previous subsection, where $A$ is the Lie algebroid of ${\mathcal{G}}$. As coefficients we take $$E:= A/\mathfrak{g}.$$ As we saw in lemma \[isom\], right translations induce an isomorphism of vector bundles $R: T^{s}{\mathcal{G}}\stackrel{\sim}{{\longrightarrow}} t^{\ast}A$, $R(X_g)= R_{g^{-1}}(X_g)$, which restricts to an isomorphism ${\mathcal{H}}^s\cong t^{\ast} \mathfrak{g}$; hence it induces an isomorphism of vector bundles over ${\mathcal{G}}$ $$T{\mathcal{G}}/{\mathcal{H}}\simeq T^s{\mathcal{G}}/{\mathcal{H}}^s \overset{R}{{\longrightarrow}} t^{\ast}(E) .$$ Hence the canonical projection $T{\mathcal{G}}\to T{\mathcal{G}}/{\mathcal{H}}$ can be interpreted as a form $$\theta_{{\mathcal{H}}}\in \Omega^1({\mathcal{G}}, t^*E).$$ Finally, there is an induced “adjoint action” of ${\mathcal{G}}$ on $E$: for $g\in {\mathcal{G}}$, $${\text{\rm Ad}\,}^{\mathcal{H}}_g: E_{s(g)}{\longrightarrow}E_{t(g)} ,\ \ {\text{\rm Ad}\,}^{\mathcal{H}}_g({\alpha}{\operatorname{mod}}{\mathfrak{g}}) = ({\text{\rm Ad}\,}_{\sigma_g}{\alpha}){\operatorname{mod}}{\mathfrak{g}},$$ where $\sigma_g: T_{s(g)}M \to {\mathcal{H}}_g\subset T_g{\mathcal{G}}$ is any splitting of $d_gs$ and where ${\text{\rm Ad}\,}$ is the adjoint representation of ${J}^1{\mathcal{G}}$ on $A$ (see section \[Jet groupoids and algebroids\]). With this, \[lemma-from-H-to-theta\] $E$ is a representation of ${\mathcal{G}}$ and $\theta_{{\mathcal{H}}}\in \Omega^1({\mathcal{G}}, t^*E)$ is multiplicative. Throughout this proof we will use that the canonical Cartan form $$\begin{aligned} \theta_{\mathrm{can}}\in \Omega^1(J^1{\mathcal{G}},t^*A)\end{aligned}$$ is multiplicative with respect to the adjoint action. This will be proved in subsection \[example: jet groupoids\]. To see that ${\text{\rm Ad}\,}^{\mathcal{H}}$ is well-defined we note that, if ${\beta}\in \mathfrak{g}$, then $${\text{\rm Ad}\,}_{\sigma_g}{\beta}= R_{g^{-1}}{d}m (\sigma_g(\rho({\beta})), {\beta}),$$ which belongs to $\mathfrak{g}$ due to ${\mathcal{H}}$ being multiplicative. Moreover, if $\sigma'_g$ is another splitting of ${d}s$ whose image lies in ${\mathcal{H}}$, then $$\begin{aligned} {\text{\rm Ad}\,}_{\sigma_g}{\alpha}- {\text{\rm Ad}\,}_{\sigma'_g}{\alpha}& =R_{g^{-1}}({d}m (\sigma_g(\rho({\alpha})), {\alpha}) - {d}m (\sigma'_g(\rho({\alpha})), {\alpha}))\\ &= R_{g^{-1}}{d}m (\sigma_g(\rho({\alpha})) - \sigma'_g(\rho({\alpha})), 0_{s(g)}), \end{aligned}$$ which also belongs to $\mathfrak{g}$, for all ${\alpha}\in \Gamma(A)$. It follows that ${\text{\rm Ad}\,}^{\mathcal{H}}_g$ is independent of the choice of splitting $\sigma_g$. We now show that $\theta_{{\mathcal{H}}}$ is multiplicative for this representation. Observe that for $\xi \in T_g{\mathcal{G}}$, if $\tilde{\xi}$ is any lift of $\xi$ to $T_{\sigma_g}{J}^1{\mathcal{G}}$, then $$\theta_g(\xi) = \theta_{\mathrm{can},\sigma_g}(\tilde{\xi}){\operatorname{mod}}{\mathfrak{g}},$$ where, again, $\sigma_g$ is any splitting of ${d}s$ whose image lies in ${\mathcal{H}}$. 
Also, since ${\mathcal{H}}$ is multiplicative, if $\sigma_g$ and $\sigma_h$ are splittings of ${d}s$ whose images lie in ${\mathcal{H}}$, then also the image of $\sigma_g\cdot \sigma_h$ lies in ${\mathcal{H}}$ (whenever the product is defined). It follows that $$\begin{aligned} \theta_{gh}({d}m(\xi_1, \xi_2)) & = (\theta_{\mathrm{can}, \sigma_g\sigma_h}({d}m(\tilde{\xi_1},\tilde{\xi_2}))){\operatorname{mod}}{\mathfrak{g}}\\ &=(\theta_{\mathrm{can},\sigma_g}(\tilde{\xi_1}) + {\text{\rm Ad}\,}_{\sigma_g}\theta_{\mathrm{can},\sigma_h}(\tilde{\xi_2})){\operatorname{mod}}{\mathfrak{g}}\\ &=\theta_g(\xi_1) + {\text{\rm Ad}\,}^{\mathcal{H}}_{g}\theta_h(\xi_2). \end{aligned}$$ Recall from remark \[618\] that for a Lie-Pfaffian groupoid there is a canonical action of ${\mathcal{G}}$ on $TM$. \[6113\] With the above notation, let $({\mathcal{G}},{\mathcal{H}})$ be a Lie-Pfaffian groupoid. There exists an action of ${\mathcal{G}}$ on $TM$ which makes $\rho:E\to TM$ ${\mathcal{G}}$-equivariant with respect to the adjoint action ${\text{\rm Ad}\,}^{\mathcal{H}}$ of ${\mathcal{G}}$ on $E$. Consider the action of ${\mathcal{G}}$ on $TM$ described in remark \[618\]. Let $g\in{\mathcal{G}}$ and $[v]\in E_{s(g)}$, and take $\sigma_g\in J^1{\mathcal{G}}$ any element such that $\sigma_g(T_{s(g)}M)\subset {\mathcal{H}}_g$. Then $$\begin{aligned} \rho({\text{\rm Ad}\,}^{\mathcal{H}}_g[v])=\rho(R_{g^{-1}}dm(\sigma_g(\rho(v)),v))=dt(\sigma_g(\rho(v)))=g\cdot(\rho([v])).\end{aligned}$$ ### Examples \[example: jet groupoids\] Our motivating class of examples of Lie-Pfaffian groupoids comes from the Cartan forms on jet groupoids (and their subgroupoids). They are, of course, the multiplicative version of the Cartan forms for jet bundles (see example \[cartandist\]). In the case $k=1$, we talk about the multiplicative Cartan form on $J^1{\mathcal{G}}$ $$\begin{aligned} \theta_{\mathrm{can}}\in \Omega^1({J}^1{\mathcal{G}}, t^{\ast}A).\end{aligned}$$ The Cartan form is the multiplicative form with values in the adjoint representation (see subsection \[Jet groupoids and algebroids\]). Recall that $\theta_{\mathrm{can}}$ is described as follows. Let ${pr}:{J}^1{\mathcal{G}}\to {\mathcal{G}}$ be the canonical projection, let $\xi$ be a vector tangent to ${J}^1{\mathcal{G}}$ at some point ${j}^1_xb\in {J}^1{\mathcal{G}}$, and set $g:=b(x)$. Then the difference $${d}_{{j}^1_xb}pr(\xi)-{d}_x b\circ{d}_{{j}^1_xb}s(\xi) \in T_g{\mathcal{G}}$$ lies in $ \ker {d}s$, hence it comes from an element in $A_{t(g)}$: $$\theta_{\mathrm{can}}(\xi)=R_{b(x)^{-1}}({d}_{{j}^1_xb}pr(\xi)-{d}_x b\circ{d}_{{j}^1_xb}s(\xi))\in A_{t(g)}.$$ \[cartan form\] Let ${\mathcal{G}}$ be a Lie groupoid and $A$ a Lie algebroid over $M$. Then: 1. The Cartan form $\theta_{\mathrm{can}}\in \Omega^1({J}^1{\mathcal{G}}, t^{\ast}A)$ is a multiplicative form with values in the adjoint representation. 2. If $A= Lie({\mathcal{G}})$, the Lie-Spencer operator of $\theta_{\mathrm{can}}$ (cf. theorem \[t1\] and example \[higher jets on algebroids\]) is $D^{\text{\rm clas}}$. We first show that $\theta_{\mathrm{can}}$ is multiplicative, i.e. that: $$(m^{\ast}\theta_{\mathrm{can}})|_{(\sigma_g,\sigma_h)} = {pr}_1^{\ast}\theta_{\mathrm{can}} + {\text{\rm Ad}\,}_{\sigma_g}{pr}_2^{\ast}\theta_{\mathrm{can}}.$$ We use the description from remark \[when working with jets\]. Let $\xi_1\in T_{\sigma_g}{J}^1{\mathcal{G}}$ and $\xi_2\in T_{\sigma_h}{J}^1{\mathcal{G}}$ be such that ${d}s(\xi_1)={d}t(\xi_2)$. 
Denote by $X_1 = {d}{pr}(\xi_1) \in T_g{\mathcal{G}}$ and $v_1 = {d}s(X_1) = {d}s(\xi_1) \in T_{s(g)}{\mathcal{G}}$. Similarly, let $X_2 = {d}{pr}(\xi_2) \in T_h{\mathcal{G}}$ and $v_2 = {d}s(X_2) = {d}s(\xi_2)\in T_{s(h)}M$. Computing $\theta_{\mathrm{can}}({d}m (\xi_1, \xi_2))$ we find $$\begin{aligned} &\ R_{(gh)^{-1}}({d}{pr}({d}m(\xi_1,\xi_2)) - (\sigma_g\cdot\sigma_h)({d}s ({d}m(\xi_1,\xi_2))))= \\ &=R_{(gh)^{-1}}({d}m(X_1,X_2) - (\sigma_g\cdot\sigma_h)(v_2))\\ &=R_{(gh)^{-1}}({d}m(X_1,X_2) - {d}m(\sigma_g(\lambda_{\sigma_2}(v_2)), \sigma_h(v_2)))\\ &=R_{(gh)^{-1}}({d}m(X_1-\sigma_g(\lambda_{\sigma_2}(v_2)), X_2-\sigma_h(v_2)))\\ &=R_{g^{-1}}({d}m(X_1-\sigma_g(\lambda_{\sigma_2}(v_2)),R_{h^{-1}}( X_2-\sigma_h(v_2))))\\ &=R_{g^{-1}}({d}m(X_1- \sigma_g(v_1), 0_{s(g)}) + {d}m(\sigma_g(v_1)-\sigma_g(\lambda_{\sigma_2}(v_2)),R_{h^{-1}}( X_2-\sigma_h(v_2))))\\ &=R_{g^{-1}}(X_1- \sigma_g(v_1) + {d}m(\sigma_g(v_1)-\sigma_g(\lambda_{\sigma_2}(v_2)),R_{h^{-1}}( X_2-\sigma_h(v_2))))\\ &=R_{g^{-1}}(X_1- \sigma_g(v_1)) + {\text{\rm Ad}\,}_{\sigma_g}(R_{h^{-1}}( X_2-\sigma_h(v_2)))\\ &=\theta_{\mathrm{can}}(\xi_1) + {\text{\rm Ad}\,}_{\sigma_g}\theta_{\mathrm{can}}(\xi_2), \end{aligned}$$ where we have used the fact that ${pr}:{J}^1{\mathcal{G}}\to {\mathcal{G}}$ is a Lie groupoid morphism. Let $(D,l)$ denote the Spencer operator of $\theta_{\mathrm{can}}$. It is clear from the definition of $l$ that $l= {pr}$ and it suffices to prove that $D$ satisfies the holonomicity condition $D({j}^1{\alpha})=0$, for all ${\alpha}\in \Gamma(A)$. Let $\zeta={j}^1{\alpha}$. In the explicit formula for $D$, we remark that $\phi_\zeta^\epsilon(x)={d}_x\phi^\epsilon_{\alpha}$, hence $$\begin{aligned} \theta_{\mathrm{can}}({d}_x\phi^\epsilon_\zeta(X_x))&=R_{(\phi^{\epsilon}_{\alpha}(x))^{-1}}({d}{pr}({d}_x\phi_{\zeta}^{{\epsilon}}(X_x)) - {d}_x \phi^{{\epsilon}}_{\alpha}({d}s({d}_x\phi^\epsilon_\zeta(X_x))))\\ &=R_{(\phi^{\epsilon}_{\alpha}(x))^{-1}}({d}_x \phi^{{\epsilon}}_{\alpha}(X_x) - {d}_x \phi^{{\epsilon}}_{\alpha}(X_x)) = 0, \end{aligned}$$ Hence $D({j}^1{\alpha})(X)= 0$. Of course proposition \[cartan form\] holds for all the Cartan forms $$\theta^k \in \Omega^1(J^k{\mathcal{G}}, t^*{J}^{k-1}A),$$ on higher jet groupoids. This is easily seen as $\theta^k$ is given by the restriction to $J^k{\mathcal{G}}\subset J^1(J^{k-1}{\mathcal{G}})$ of the Cartan form associated to the $(k-1)$-jet groupoid $J^{k-1}{\mathcal{G}}$. On the dual side one recovers the so called Cartan multiplicative distributions on $\mathcal{C}_k\subset TJ^k{\mathcal{G}}$ defined to be the kernel of the Cartan form $\theta^k$. Alternatively, the Cartan multiplicative distributions on $J^k{\mathcal{G}}$ can by described as the intersection $$\begin{aligned} \label{alter} C_k\cap TJ^k{\mathcal{G}},\end{aligned}$$ where $C_k$ is the Cartan distribution of the $k$-jet bundle associated to the source map $s:{\mathcal{G}}\to M$. The Lie groupoid $J^k{\mathcal{G}}$ endowed with the Cartan multiplicative distribution $\mathcal{C}_k\subset TJ^k{\mathcal{G}}$ is Lie-Pfaffian. That $\mathcal{C}_k$ is multiplicative is a consequence of lemma \[from-theta-H\] and the fact that it is defined by the kernel the multiplicative form $\theta^k$. That it is $s$-involutive follows from description . To see that $\mathcal{C}_k^s=\mathcal{C}_k^t$, let $X\in\mathcal{C}_k^s$. 
Then $dpr(X)=0$ follows by an easy computation from $\theta^k(X)=0$, and therefore $$\begin{aligned} dt(X)=dt(dpr(X))=0.\end{aligned}$$ This means that $X\in\mathcal{C}_k^t$, which implies that $\mathcal{C}_k^s\subset\mathcal{C}_k^t$. By dimension counting we conclude that $\mathcal{C}_k^s=\mathcal{C}_k^t$. As a curiosity: the kernel of the Lie groupoid map $pr:J^1{\mathcal{G}}\to{\mathcal{G}}$ is equal to the bundle of Lie groups $\mathcal{K}$ over $M$, with fiber at $x$ given by $$\mathcal{K}_x:=\{\phi:T_{x}M\to A_{x}\mid \rho\circ\phi+ id\text{ is an isomorphism}\}$$ with multiplication of two linear maps $\phi,\psi\in \mathcal{K}_x$ given by $$\begin{aligned} \phi\cdot\psi:=\phi\circ\rho\circ\psi+\psi+\phi.\end{aligned}$$ Hence, we get an exact sequence of Lie groupoids $$\begin{aligned} \mathcal{K}\hookrightarrow J^1{\mathcal{G}}\overset{pr}{{\longrightarrow}}{\mathcal{G}}.\end{aligned}$$ For $k\geq2$, the situation is even simpler: the kernel of $pr:J^k{\mathcal{G}}\to J^{k-1}{\mathcal{G}}$ is isomorphic to the bundle of abelian Lie groups $S^kT^*\otimes A$. With this we have an exact sequence of Lie groupoids $$\begin{aligned} S^kT^*\otimes A\hookrightarrow J^k{\mathcal{G}}\overset{pr}{{\longrightarrow}}J^{k-1}{\mathcal{G}}.\end{aligned}$$ For later use we state the following result: consider the exact sequence of vector bundles over $J^k{\mathcal{G}}$ $$\begin{aligned} 0{\longrightarrow}S^kT^*\otimes T^s {\mathcal{G}}\overset{i}{{\longrightarrow}} T^sJ^k{\mathcal{G}}\overset{dpr}{{\longrightarrow}} T^sJ^{k-1}{\mathcal{G}}{\longrightarrow}0.\end{aligned}$$ \[multi\] Let $\gamma,b\in J^k{\mathcal{G}}$ be a pair of composable arrows. Then right translating by $b$ a vector $\Psi\in S^kT_{s(\gamma)}^*\otimes T^s_\gamma {\mathcal{G}}$ one obtains $$\begin{aligned} R_b(\Psi)\in S^kT_{s(b)}^*\otimes T^s_{\gamma b} {\mathcal{G}}\end{aligned}$$ defined on $X_1,\ldots, X_k\in T_{s(b)}M$ by $$\begin{aligned} R_b(\Psi)(X_1,\ldots,X_{k})&=R_{pr(b)}(\Psi(\lambda_b^{-1}(X_{1}),\ldots,\lambda_b^{-1}(X_{k}))),\end{aligned}$$ where $R_{pr(b)}:T^s_{pr(\gamma)}{\mathcal{G}}\to T^s_{pr(\gamma b)}{\mathcal{G}}$ is right translation by $pr(b)$. \[ex: G-structures\]Recall from example \[gauge-groupoids\] that any principal $G$-bundle $\pi: P \to M$ has an associated Lie groupoid, called the gauge groupoid, and denoted by ${\mathcal{G}}\text{auge}(P){\rightrightarrows}M$. In particular, if $G$ is a subgroup of $\operatorname{GL}_n$, and $$P = \mathrm{F}_G(M)$$ is a $G$-structure on $M$, i.e. a principal $G$ sub-bundle of the frame bundle of $M$ (the bundle whose fiber over $x$ consists of all linear isomorphisms $p: {\mathbb R}^n \to T_xM$), one obtains ${\mathcal{G}}(\mathrm{F}_G(M)):={\mathcal{G}}\text{\rm auge}(\mathrm{F}_G(M))$. Note that in this case, ${\mathcal{G}}(\mathrm{F}_G(M))$ is a Lie subgroupoid of $\operatorname{GL}(TM)$ (see example \[difeo\]). Thus, we obtain a point-wise surjective multiplicative form $\tau$ on ${\mathcal{G}}(\mathrm{F}_G(M)) \subset \operatorname{GL}(TM)$ with values in $TM$, called the **tautological form**, defined by $$\begin{aligned} \tau=\theta^1|_{{\mathcal{G}}(\mathrm{F}_G(M))},\end{aligned}$$ where $\theta^1\in\Omega^1(\operatorname{GL}(TM),TM)$ is the multiplicative Cartan form associated to the pair groupoid $\Pi(M).$ Of course the symbol space ${\mathfrak{g}}(\tau)|_M\subset T^*M\otimes TM$ of $\tau$ restricted to $M$ is given by the kernel of $$\begin{aligned} dpr:T{\mathcal{G}}(\mathrm{F}_G(M)){\longrightarrow}TM\end{aligned}$$ at the units. 
By the construction of ${\mathcal{G}}(\mathrm{F}_G(M))$, we see that this is just $$\begin{aligned} {\mathfrak{g}}(\tau)|_M=T^*\otimes {\mathfrak{g}}, \quad {\mathfrak{g}}:=Lie(G)\subset \operatorname{\mathfrak{gl}}_n.\end{aligned}$$ The main importance of the tautological form is that it detects solutions of the $G$-structure in the following sense: recall that bisections of $\operatorname{GL}(TM)$ are simply automorphisms of $TM$ (covering a diffeomorphism of $M$). The tautological form satisfies the following important property: \[prop: tautological form\] Let $\tau \in \Omega^1({\mathcal{G}}(\mathrm{F}_G(M)), t^{\ast}TM)$ denote the tautological form. A bisection $\Phi \in \operatorname{Bis}({\mathcal{G}}(\mathrm{F}_G(M)))$ is of the form $\Phi = {d}\phi$, where $\phi = t \circ \Phi : M \to M$, if and only if $\Phi^{\ast}\tau = 0$. This of course follows from proposition \[cartan form\] applied to $J^1\Pi(M) \cong \operatorname{GL}(TM)$.\ On the infinitesimal side, the Lie algebroid of $\operatorname{GL}(TM)$ is ${\mathfrak{gl}}(TM)$ (for the definition of this Lie algebroid see remark \[rk-algbds\]), and thus the Lie algebroid of ${\mathcal{G}}(\mathrm{F}_G(M))$ is a transitive subalgebroid $A(\mathrm{F}_G(M)) \subset {\mathfrak{gl}}(TM)$. A direct application of proposition \[cartan form\] shows that the Spencer operator associated to the tautological form is $$D_X(\partial) = [\sigma_{\partial},X] - \partial(X), \quad l(\partial) = -\sigma_{\partial},$$ for all $\partial \in \Gamma(A(\mathrm{F}_G(M)))$ and $X\in {\ensuremath{\mathfrak{X}}}(M)$. ### The integrability theorem for Pfaffian groupoids; Theorem 2 Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. The multiplicativity of ${\mathcal{H}}\subset T{\mathcal{G}}$ implies that the unit map $u:M\to {\mathcal{G}}$ is a solution of $({\mathcal{G}},{\mathcal{H}})$. The linearization of $({\mathcal{G}},{\mathcal{H}})$ along $u$, in the sense of subsection \[linearization of distributions\], consists of the Lie algebroid $A$ of ${\mathcal{G}}$ $$\begin{aligned} A=L_u({\mathcal{G}}):=u^*T^s{\mathcal{G}}\end{aligned}$$ endowed with the connection $$\begin{aligned} D:{\ensuremath{\mathfrak{X}}}(M)\times\Gamma(A){\longrightarrow}\Gamma(E)\end{aligned}$$ relative to the projection $A\to E$, where $E$ is the vector bundle over $M$ defined to be $$\begin{aligned} E:=A/{\mathfrak{g}}\end{aligned}$$ for ${\mathfrak{g}}=u^*({\mathcal{H}}^s)$. Actually, in the Lie groupoid setting, $E$ becomes a representation of ${\mathcal{G}}$ and $D$ is now a Spencer operator in the sense of chapter \[Relative connections on Lie algebroids\]. Moreover, if ${\mathcal{G}}$ is a source simply connected Lie groupoid, ${\mathcal{H}}$ is determined by $D$ and ${\mathfrak{g}}$.\ In the rest of this subsection we explain and prove the following integrability result (which is actually a consequence of theorem \[t1\]). This can also be found in [@Maria]. \[t2\] Let ${\mathcal{G}}\rightrightarrows M$ be an $s$-simply connected Lie groupoid with Lie algebroid $A\to M$. There is a one to one correspondence between 1. multiplicative distributions ${\mathcal{H}}\subset T{\mathcal{G}}$, 2. sub-bundles ${\mathfrak{g}}\subset A$ together with a Spencer operator $D$ on $A$ relative to the projection $A\to A/{\mathfrak{g}}$. 
In this correspondence, ${\mathfrak{g}}$ is the symbol space of ${\mathcal{H}}$ and $$\begin{aligned} D_X\alpha(x)=[\tilde X,\alpha^r]_x{\operatorname{mod}}{\mathcal{H}}^s_{1_x} ,\end{aligned}$$ where $\tilde X\in \Gamma({\mathcal{H}})\subset\mathfrak{X}({\mathcal{G}})$ is any vector field which is $s$-projectable to $X$ and extends $u_*(X)$ (for $\alpha^r$, see remark \[flows of sections\]). Let ${\mathcal{G}}$ be a Lie groupoid whose source fibers are not necessarily simply connected. As noticed before, out of a multiplicative distribution ${\mathcal{H}}\subset T{\mathcal{G}}$ one has a Spencer operator as in theorem \[t2\]. Of course in this case the correspondence may fail to hold. Before going to the proof of theorem \[t2\], we extract some properties of ${\mathcal{H}}$ out of its Spencer operator. \[in-dis\] Let ${\mathcal{G}}{\rightrightarrows}M$ be a (not necessarily $s$-simply connected) Lie groupoid. Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution with $D:\Gamma(A)\to \Omega^1(M,A/{\mathfrak{g}})$ the associated Spencer operator as in theorem \[t2\]. Then, 1. ${\mathcal{H}}$ is of Pfaffian type (i.e. ${\mathcal{H}}$ is $s$-involutive) if and only if ${\mathfrak{g}}\subset A$ is a Lie subalgebroid of $A$. 2. ${\mathcal{H}}$ is of Lie-type (i.e. ${\mathcal{H}}^s={\mathcal{H}}^t$) if and only if $D$ is a Lie-Spencer operator. For item 1 recall that right translation $$\begin{aligned} \label{r} R:(\Gamma(A),[,]_A){\longrightarrow}(\Gamma(T^s{\mathcal{G}}),[,]_{\mathcal{G}}),\quad{\alpha}\mapsto R({\alpha})_g=R_g({\alpha}_{t(g)})\end{aligned}$$ is an isomorphism of Lie algebras. As ${\mathcal{H}}$ is multiplicative, $R$ restricts to an isomorphism $R:\Gamma({\mathfrak{g}})\to \Gamma({\mathcal{H}}^s)$, and therefore $\Gamma({\mathfrak{g}})\subset\Gamma(A)$ is a Lie subalgebra (i.e. ${\mathfrak{g}}\subset A$ is a Lie subalgebroid) if and only if its image (i.e. $\Gamma({\mathcal{H}}^s)$) is a Lie subalgebra of $\Gamma(T^s{\mathcal{G}})$. For item 2 suppose first that ${\mathcal{H}}^s={\mathcal{H}}^t$. Then by corollary \[6113\] the quotient map $pr:A\to A/{\mathfrak{g}}$ is a Lie algebroid morphism and therefore the Spencer operator $D$ becomes a Lie-Spencer operator. For the converse notice that ${\mathfrak{g}}\subset \ker\rho$ as $\rho\circ pr=\rho$. This means that ${\mathfrak{g}}={\mathcal{H}}^s|_M\subset {\mathcal{H}}^t|_{M}$ and by dimension counting ${\mathcal{H}}^s|_M={\mathcal{H}}^t|_M$. Now, fixing $g\in{\mathcal{G}}$ with $x=s(g)$, $R_{g^{-1}}:{\mathcal{H}}^s_{1_x}\to {\mathcal{H}}^s_{g^{-1}}$ is an isomorphism. As ${\mathcal{H}}^t_{1_x}={\mathcal{H}}^s_{1_x}$ and $dt\circ R_{g^{-1}}=dt$ one has that $\textrm{im } R_{g^{-1}} \subset {\mathcal{H}}^t_{g^{-1}}\cap {\mathcal{H}}^s_{g^{-1}}$. Again by dimension counting we conclude that ${\mathcal{H}}^t_{g^{-1}}={\mathcal{H}}^s_{g^{-1}}$, and since $g$ was arbitrary, ${\mathcal{H}}^s={\mathcal{H}}^t$. The next proposition states that we obtain (local) solutions of a non-linear system of PDEs out of a solution of its linearization. Of course, the immediate advantage is that one can solve the easier infinitesimal (linear) problem in order to obtain a solution of the more complicated and interesting global problem. Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution and let $D$ be the associated Spencer operator. 
If ${\alpha}\in\Gamma(A)$ is a solution of $(A,D)$, then for every $x\in M$, $$\begin{aligned} \phi^{\epsilon}_{\alpha}\in\operatorname{Bis}_{\text{\rm loc}}({\mathcal{G}},{\mathcal{H}})\end{aligned}$$ is a solution around $x$, for ${\epsilon}$ such that $\phi^{\epsilon}_{\alpha}$ is defined (around $x$). See definition \[flows of sections\]. As we will see in the proof of theorem \[t2\], the Spencer operator associated to ${\mathcal{H}}$ is the one given in theorem \[t1\] for $\theta:=\theta_{\mathcal{H}}$. Let $x\in M$ and let $U\subset M$ be a neighborhood of $x$. Take ${\epsilon}_0$ to be such that $\phi^{{\epsilon}_0}_{\alpha}$ is defined on $U$. Consider the curve $$\begin{aligned} {\epsilon}\mapsto c({\epsilon})\in\Omega^1(U,E|_U),\quad c({\epsilon})(X_y)=\phi_{\alpha}^{{\epsilon}}(y)^{-1}\cdot\theta(d_y\phi_{\alpha}^{\epsilon}(X_y)).\end{aligned}$$ By the definition of $D$ (see theorem \[t2\]), $$\begin{aligned} \frac{d}{d{\epsilon}}c({\epsilon})|_{{\epsilon}=0}=D({\alpha}). \end{aligned}$$ We will prove that $\frac{d}{d{\epsilon}}c({\epsilon})|_{{\epsilon}=r}=0$ for any $0<r<{\epsilon}_0$. This, together with the fact that $c(0)=0$, implies, of course, that $c({\epsilon}_0)=0$. Indeed, $$\begin{aligned} \phi^{{\epsilon}+r}_{\alpha}(y)=\phi^{\epsilon}_{\alpha}(t(\phi^r_{\alpha}(y)))\cdot \phi^{\epsilon}_{\alpha}(y).\end{aligned}$$ Hence, the differential of $\phi^{{\epsilon}+r}_{\alpha}$ at $y$ satisfies a similar equation involving the differential of the functions. Therefore, by multiplicativity of $\theta$, $$\begin{aligned} \theta(d_y\phi^{{\epsilon}+r}_{\alpha}(X))=\theta(d\phi^{\epsilon}_{\alpha}(dt(d\phi_{\alpha}^r(X))))+ \phi^{\epsilon}_{\alpha}(t(\phi^r_{\alpha}(y)))\cdot \theta(d\phi^r_{\alpha}(X)),\end{aligned}$$ and from this, $$\begin{aligned} \begin{split} \frac{d}{d{\epsilon}}c({\epsilon}+r)|_{{\epsilon}=0}&=\frac{d}{d{\epsilon}}\phi^r_{\alpha}(y)^{-1}\cdot(\phi_{\alpha}^{{\epsilon}}(t(\phi^r_{\alpha}(y)))^{-1}\cdot\theta(d\phi^{\epsilon}_{\alpha}(dt(d\phi_{\alpha}^r(X))))+\theta(d\phi^r_{\alpha}(X)))\\&=\phi^r_{\alpha}(y)^{-1}\cdot D_{dt(d\phi_{\alpha}^r(X)))}{\alpha}=0. \end{split}\end{aligned}$$ Of course, theorem \[t2\] follows from theorem \[t1\] applied to $\theta_{{\mathcal{H}}}$, combined with the reformulation of Spencer operators (remark \[remark-dual\]). What we still have to prove is that the explicit formula (\[eq: explicit formula\]) for $D$ (from theorem \[t1\]) gives the explicit formula (\[D\]) (from theorem \[t2\]). With the right hand side of (\[eq: explicit formula\]) in mind, we consider more general expressions of type: $$({L}_{{\alpha}}\omega)_g:=\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\varphi^{{\epsilon}}_{{\alpha}^r}(g))^{-1}\cdot(\varphi^\epsilon_{{\alpha}^r})^*\omega|_{\varphi^\epsilon_{{\alpha}^r}(g)}$$ for $\alpha\in \Gamma(A)$ and $\omega\in \Omega^k({\mathcal{G}},t^*E)$ (see also remark \[flows of sections\]). This defines $${L}_{{\alpha}}:\Omega^k({\mathcal{G}},t^*E){\longrightarrow}\Omega^k({\mathcal{G}},s^*E).$$ \[commutator\] For any vector field $\xi\in {\ensuremath{\mathfrak{X}}}({\mathcal{G}})$, $\omega \in \Omega^k({\mathcal{G}}, t^{\ast}E)$, $g\in {\mathcal{G}}$: $$[i_\xi,{L}_{{\alpha}}](\omega)_g=g^{-1}\cdot\omega_g([\xi,{\alpha}^r]).$$ Note that this implies theorem \[t2\]. 
Indeed, if $\tilde{X}$ is as in the statement of theorem \[t2\], using the above lemma for $g= 1_x= x$, $\omega= \theta$, since $D(\alpha)(x)= {L}_{{\alpha}}(\theta)(x)$ and $\theta(\tilde{X}_x)= 0$, $$D_{X}({\alpha})(x)=[i_{\tilde{X}}, {L}_{\alpha}](\theta)_x= \theta_{x}([\tilde X,{\alpha}^r])= \text{[}\tilde{X},{\alpha}^r]_{x} {\operatorname{mod}}\mathfrak{g}.$$ We apply the chain rule to the composition $$\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\varphi^{-{\epsilon}}_{{\alpha}^r}(g))^{-1}\cdot\omega_{\varphi^{-{\epsilon}}_{{\alpha}^r}(g)}({d}_g\varphi^{-{\epsilon}}_{{\alpha}^r}(\xi_g))= f_1\circ f_2,$$ where $$f_1: I\times s^{-1}(s(g)) {\longrightarrow}E_{s(g)}, \qquad f_1({\epsilon},h)= h^{-1}\cdot\omega_h({d}_{\varphi^{{\epsilon}}_{{\alpha}^r}(h)}\varphi^{-{\epsilon}}_{{\alpha}^r}(\xi_{\varphi^{{\epsilon}}_{{\alpha}^r}(h)})),$$ and $$f_2: I {\longrightarrow}s^{-1}(s(g)), \qquad f_2({\epsilon}) = \varphi^{-{\epsilon}}_{{\alpha}^r}(g).$$ We obtain, $$\begin{aligned} \frac{d}{d{\epsilon}}&\big{|}_{{\epsilon}=0}(\varphi^{-{\epsilon}}_{{\alpha}^r}(g))^{-1}\cdot\omega_{\varphi^{-{\epsilon}}_{{\alpha}^r}(g)}({d}_g\varphi^{-{\epsilon}}_{{\alpha}^r}(\xi_g))=\\ =&\frac{d}{d\epsilon}\big{|}_{{\epsilon}=0}g^{-1}\cdot\omega_{g}({d}_{\varphi^{{\epsilon}}_{{\alpha}^r}(g)}\varphi^{-{\epsilon}}_{{\alpha}^r}(\xi_{\varphi^{{\epsilon}}_{{\alpha}^r}(g)}))+ \frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\varphi^{-{\epsilon}}_{{\alpha}^r}(g))^{-1}\cdot\omega_{\varphi^{-{\epsilon}}_{{\alpha}^r}(g)}(\xi_{\varphi^{-{\epsilon}}_{{\alpha}^r}(g)}), \end{aligned}$$ or in other words, $$-({L}_{{\alpha}}\omega)(\xi)(g)=g^{-1}\cdot\omega([{\alpha}^r,\xi])-{L}_{{\alpha}}(\omega(\xi))(g).$$ i.e. the equation in the statement. Lie-Prolongations of Pfaffian groupoids --------------------------------------- In this chapter we will show that the notions introduced in section \[prolongations of Pfaffian bundles\] become Lie theoretic. We will also introduce the notions of morphism and abstract Lie prolongations of Pfaffian groupoids, notions closely related to the Maurer-Cartan equation! (see subsection \[Lie-prolongations and the Maurer-Cartan equation\]). We will see the advantages of passing from the more general but “wild” picture of Pfaffian bundles to the easier to handle picture of Pfaffian groupoids, whose rich structures will allow us to have a better understanding of the process of prolonging. Throughout the whole exposition we will point out what the analogous notions are in the setting of Spencer operators (see chapter \[Relative connections on Lie algebroids\]) when taking the Lie functor. This can be slightly generalized to the case where ${\mathcal{G}}$ is endowed with a multiplicative form $\theta$. We will point out which results are still valid in this setting. ### Lie prolongations (abstract prolongations); Theorem 3 Throughout this section let ${\tilde{{\mathcal{G}}}}$ and ${\mathcal{G}}$ be two Lie groupoids over $M$; $${\tilde{\theta}}\in\Omega^1({\tilde{{\mathcal{G}}}},t^*\tilde E) \quad \text{ and } \quad \theta\in\Omega^1({\mathcal{G}},t^*E),$$ are multiplicative point-wise surjective $1$-forms with $${\tilde{{\mathcal{H}}}}:=\ker{\tilde{\theta}}\subset T{\tilde{{\mathcal{G}}}}\quad\text{ and }\quad{\mathcal{H}}:=\ker\theta\subset T{\mathcal{G}}$$ being the multiplicative distributions given by the kernel of ${\tilde{\theta}}$ and $\theta$ respectively. 
Assume that $$(\tilde D,\tilde l):\tilde A\to \tilde E \quad \text{ and } \quad (D,l):A\to E$$ are the associated Spencer operators of ${\tilde{\theta}}$ and $\theta$ respectively.\ Suppose that we have a Lie groupoid map $p:{\tilde{{\mathcal{G}}}}\to {\mathcal{G}}$ with the property that $$\begin{aligned} \label{con1} dp({\tilde{{\mathcal{H}}}})\subset {\mathcal{H}}.\end{aligned}$$ Then there is an induced vector bundle map $p_0:\tilde E\to E$ making the diagram $$\begin{aligned} \label{la} \xymatrix{ \tilde A \ar[r]^{\tilde l} \ar[d]_{Lie(p)} & \tilde E \ar@{.>}[d]^{p_0}\\ A \ar[r]^l & E }\end{aligned}$$ commute. Actually, $p_0$ is determined by the formula $$p_0(\tilde l (v))=l(Lie(p)(v)),\quad v\in \tilde A$$ which is well-defined thanks to condition \eqref{con1} and the definitions of $\tilde l,l$ ($\tilde l={\tilde{\theta}}|_{\tilde A}, l=\theta|_{A}$). Let $({\tilde{{\mathcal{G}}}},{\tilde{\theta}})$ and $({\mathcal{G}},\theta)$ be Pfaffian groupoids, with ${\tilde{\theta}}\in\Omega^1({\tilde{{\mathcal{G}}}},t^*\tilde E)$ and $\theta\in\Omega^1({\mathcal{G}},t^*E)$. A [**morphism of Pfaffian groupoids**]{}, denoted by $$\begin{aligned} p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}}){\longrightarrow}({\mathcal{G}},\theta),\end{aligned}$$ is a Lie groupoid map $p:{\tilde{{\mathcal{G}}}}\to{\mathcal{G}}$ with the following two properties: 1. $dp(\tilde {\mathcal{H}})\subset {\mathcal{H}}$, and 2. for any $X,Y\in {\tilde{{\mathcal{H}}}}$, $p_0(\delta{\tilde{\theta}}(X,Y))=\delta\theta(dp(X),dp(Y)).$ The following corollary is immediate from the definition. Let $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to({\mathcal{G}},\theta)$ be a morphism of Pfaffian groupoids. Then the morphism of groups $p:\operatorname{Bis}({\tilde{{\mathcal{G}}}})\to\operatorname{Bis}({\mathcal{G}})$ restricts to $$\begin{aligned} p:\operatorname{Bis}({\tilde{{\mathcal{G}}}},{\tilde{\theta}}){\longrightarrow}\operatorname{Bis}({\mathcal{G}},\theta).\end{aligned}$$ \[canonical-with-dist\]For a multiplicative point-wise surjective form $\theta\in\Omega^1({\mathcal{G}},t^*E)$, with ${\mathcal{H}}=\ker\theta\subset T{\mathcal{G}}$, the identity map $$\begin{aligned} id:({\mathcal{G}},\theta_{\mathcal{H}}){\longrightarrow}({\mathcal{G}},\theta)\end{aligned}$$ is an isomorphism of Pfaffian groupoids, $\theta_{\mathcal{H}}\in \Omega^1({\mathcal{G}},t^*A/{\mathfrak{g}})$ being the multiplicative form associated to ${\mathcal{H}}$ (see subsection \[the dual point of view\]). Assume now that our Lie groupoid map $p:{\tilde{{\mathcal{G}}}}\to {\mathcal{G}}$, satisfying condition \eqref{con1}, has the extra property that $$\begin{aligned} dp({\tilde{{\mathcal{H}}}}^s)=0.\end{aligned}$$ Hence $\tilde{\mathfrak{g}}:={\tilde{{\mathcal{H}}}}^s|_M\subset \ker Lie(p)$ and we have an induced vector bundle map $L:\tilde E\to A$, making the diagram $$\begin{aligned} \xymatrix{ \tilde A \ar[r]^{\tilde l} \ar[d]_{Lie(p)} & \tilde E \ar@{.>}[ld]_{L} \ar[d]^{p_0}\\ A \ar[r]^l & E }\end{aligned}$$ commute. Actually, $L$ is determined by the well-defined formula $$\begin{aligned} L(\tilde l(v))=Lie(p)(v),\quad v\in \tilde A.\end{aligned}$$ The following definition is motivated by the fact that the infinitesimal counterparts of the objects defined below are compatible Spencer operators. See definition \[algb-prol\] and theorem \[prol-comp\]. Let $({\mathcal{G}},\theta)$ be a Pfaffian groupoid.
A [**Lie-prolongation**]{} of $({\mathcal{G}},\theta)$ is a Pfaffian groupoid $({\tilde{{\mathcal{G}}}},{\tilde{\theta}})$ together with a morphism of Pfaffian groupoids $$\begin{aligned} p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}}){\longrightarrow}({\mathcal{G}},\theta),\end{aligned}$$ which is a surjective submersion satisfying: 1. $dp( {\tilde{{\mathcal{H}}}}^s)=0$, 2. for any $X,Y\in {\tilde{{\mathcal{H}}}}$, $\delta\theta(dp(X),dp(Y))=0$, and 3. \[cond4\] $L:\tilde E\to A$ is an isomorphism. Under the identification $L:\tilde E\simeq A$, $Lie(p):\tilde A\to A$ and $p_0:\tilde E\to E$ become $\tilde l:\tilde A\to A$ and $l:A\to E$ respectively. Therefore we are left in the situation where $$p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}}){\longrightarrow}({\mathcal{G}},\theta)$$ is a surjective submersion, with the property that ${\tilde{\theta}}$ is of Lie-type taking values in the Lie algebroid $A$ of ${\mathcal{G}}$, and satisfying: 1. $Lie(p)=\tilde l$, 2. $dp(\tilde {\mathcal{H}})\subset {\mathcal{H}}$, and 3. for any $X,Y\in {\tilde{{\mathcal{H}}}}$, $\delta\theta(dp(X),dp(Y))=0.$ We will make this more precise in the results below. Let $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to({\mathcal{G}},\theta)$ be a Lie-prolongation of $({\mathcal{G}},\theta)$. Then $({\tilde{{\mathcal{G}}}},{\tilde{\theta}})$ is a Lie-Pfaffian groupoid and $$\begin{aligned} L:A\simeq \tilde E\end{aligned}$$ is an isomorphism of Lie algebroids. The previous corollary follows from the next lemma and proposition \[in-dis\]. Let $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to({\mathcal{G}},\theta)$ be an abstract Lie prolongation of $({\mathcal{G}},\theta)$. Then there exists a unique Lie algebroid structure on $\tilde E$ with the property that $$\tilde l:\tilde A{\longrightarrow}\tilde E$$ is a Lie algebroid map. Moreover, the vector bundle map $L:\tilde E\to A$ becomes an isomorphism of Lie algebroids. We use the identification $L:\tilde E\to A$ to put an algebroid structure on $\tilde E$. Now, as $Lie(p):\tilde A\to A$ is a Lie algebroid map and as $L\circ \tilde l=Lie(p)$, it is clear that with this structure $\tilde l:\tilde A\to \tilde E$ is a Lie algebroid map. We say that $$p:({\tilde{{\mathcal{G}}}},{\tilde{{\mathcal{H}}}}){\longrightarrow}({\mathcal{G}},{\mathcal{H}})$$ is a [**Lie prolongation**]{} of $({\mathcal{G}},{\mathcal{H}})$ if $p:({\tilde{{\mathcal{G}}}},\theta_{{\tilde{{\mathcal{H}}}}})\to ({\mathcal{G}},\theta_{{\mathcal{H}}})$ is a Lie prolongation of $({\mathcal{G}},\theta_{{\mathcal{H}}})$. See subsection \[the dual point of view\]. Of course in the previous definitions and results we don’t need that our multiplicative distribution ${\mathcal{H}}\subset T{\mathcal{G}}$ is $s$-involutive. Actually all the following results are still valid without this condition. One result that shows that the notion of Lie-prolongations is the global counterpart of compatible Spencer operators is the following integrability theorem. \[prol-comp\] If $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to ({\mathcal{G}},\theta)$ is a Lie-prolongation of $({\mathcal{G}},\theta)$ with ${\tilde{\theta}}$ taking values in the Lie algebroid $A$ of ${\mathcal{G}}$, then the associated Spencer operators $$\begin{aligned} \tilde A\xrightarrow{\tilde D,\tilde l}{}A\xrightarrow{D,l} E\end{aligned}$$ are compatible. 
If ${\tilde{{\mathcal{G}}}}$ is source-connected and $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to ({\mathcal{G}},\theta)$ is a surjective submersion with the property that $Lie(p)={\tilde{\theta}}|_{\tilde A}$, then the converse also holds. See subsection \[Proofs\] for the proof of theorem \[prol-comp\].\ Actually, as a consequence of theorems \[t1\] and \[prol-comp\], theorem 3 says that under some topological conditions, there is a correspondence between Lie-prolongations and compatible Spencer operators. More precisely, \[siguiente1\] Let ${\tilde{{\mathcal{G}}}}$ and ${\mathcal{G}}$ be two Lie groupoids over $M$ with ${\tilde{{\mathcal{G}}}}$ $s$-simply connected and ${\mathcal{G}}$ $s$-connected, with Lie algebroids $\tilde A$ and $A$ respectively. Let $\theta\in\Omega^1({\mathcal{G}},t^*E)$ be a point-wise surjective multiplicative form and denote by $(D,l):A\to E$ the Spencer operator associated to $\theta$. There is a 1-1 correspondence between: 1. \[a1\] Lie-prolongations $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to ({\mathcal{G}},\theta)$ of $({\mathcal{G}},\theta)$, and 2. \[a2\] Lie-Spencer operators $(\tilde D,\tilde l):\tilde A\to A$ compatible with $(D,l):A\to E$. In this correspondence, $\tilde D$ is the associated Lie-Spencer operator of ${\tilde{\theta}}$. Theorem \[prol-comp\] establishes the correspondence going from \[a1\] to \[a2\]. Conversely, applying theorem \[t1\] we get a point-wise surjective multiplicative form ${\tilde{\theta}}\in\Omega^1({\tilde{{\mathcal{G}}}},t^*A)$ with the property that ${\tilde{\theta}}|_{\tilde A}=\tilde l$. Moreover, integrating the Lie algebroid map $\tilde l:\tilde A\to A$ we get a Lie groupoid map $p:{\tilde{{\mathcal{G}}}}\to {\mathcal{G}}$. Since $\tilde l$ is surjective, $p$ is a surjective submersion onto the $s$-connected component of ${\mathcal{G}}$ (which is actually ${\mathcal{G}}$). As $Lie(p)={\tilde{\theta}}|_{\tilde A}$ we can apply theorem \[prol-comp\] to conclude that the compatibility of $(\tilde D,D)$ implies that $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to ({\mathcal{G}},\theta)$ is a Lie-prolongation. For $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to ({\mathcal{G}},\theta)$ a Lie-prolongation of $({\mathcal{G}},\theta)$, we have a canonical Lie groupoid morphism $$\begin{aligned} j_{{\tilde{\theta}}}:{\tilde{{\mathcal{G}}}}{\longrightarrow}J^1{\mathcal{G}},\end{aligned}$$ where $j_{{\tilde{\theta}}}(g):T_{s(g)}M\to T_{p(g)}{\mathcal{G}}$ is defined by the equation $$\begin{aligned} j_{{\tilde{\theta}}}(g)(X)=dp(\tilde X_g),\end{aligned}$$ for any vector $\tilde X_g\in {\tilde{{\mathcal{H}}}}_g$ with the property that $ds(\tilde X_g)=X$. \[628\]Let $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to ({\mathcal{G}},\theta)$ be a Lie-prolongation of $({\mathcal{G}},\theta)$. Then there exists a unique Lie groupoid map $$\begin{aligned} j:{\tilde{{\mathcal{G}}}}{\longrightarrow}J^1{\mathcal{G}}\end{aligned}$$ with the properties that $pr\circ j=p$ and $$\begin{aligned} j^*\theta^1={\tilde{\theta}},\end{aligned}$$ where $\theta^1\in \Omega^1(J^1{\mathcal{G}},t^*A)$ is the Cartan form. Take $j=j_{{\tilde{\theta}}}$. Let’s first show that $j$ is well-defined, i.e. $j(g)(X)$ does not depend on the choice of $\tilde X_g$.
Indeed, if $X'\in{\tilde{{\mathcal{H}}}}_g$ is any other such vector, then $\tilde X-X'\in {\tilde{{\mathcal{H}}}}^s$ and therefore $$\begin{aligned} dp(\tilde X-X')={\tilde{\theta}}(\tilde X-X')=0.\end{aligned}$$ On the other hand, $$\begin{aligned} dt(j(g)(X))=dt(dp(\tilde X_g))=dt(\tilde X_g)=g\cdot X,\end{aligned}$$ where $g:T_{s(g)}M\to T_{t(g)}M$ is the action of ${\tilde{{\mathcal{G}}}}$ on $TM$ (see remark \[618\]), which shows that $dt\circ j(g)$ is an isomorphism and therefore $j(g)$ is indeed an element of $J^1{\mathcal{G}}.$ It’s left to the reader to show that $j$ is a Lie groupoid morphism. Let’s see now that $j^*\theta^1={\tilde{\theta}}$. For $X\in T_g{\tilde{{\mathcal{G}}}}$, we have $$\begin{aligned} \label{eq11}\begin{split} j^*\theta^1(X)&=r_{p(g)^{-1}}(dpr(dj(X))-j(g)(ds(dj(X))))\\ &=r_{p(g)^{-1}}(dpr(X)-j(g)(ds(X))), \end{split}\end{aligned}$$ where we used that $pr\circ j=p$. Now, for $\tilde X\in{\tilde{{\mathcal{H}}}}_g=\ker{\tilde{\theta}}_g$ $s$-projectable to $ds(X)$, $$\begin{aligned} \label{eq2} j(g)(ds(X)):=dp(\tilde X)=dp(X)-r_{p(g)}{\tilde{\theta}}(X)\end{aligned}$$ as $dp(\tilde X-X)=r_{p(g)}{\tilde{\theta}}(\tilde X-X)=-r_{p(g)}{\tilde{\theta}}(X)$. Plugging equation \eqref{eq2} into equation \eqref{eq11} we get that $j^*\theta^1(X)={\tilde{\theta}}_g(X)$. For the uniqueness, notice that if $j':{\tilde{{\mathcal{G}}}}\to J^1{\mathcal{G}}$ is any other such Lie groupoid map, then for any $X\in T_g{\tilde{{\mathcal{G}}}}$, $$\begin{aligned} 0=j'^*\theta^1(X_g)-j^*\theta^1(X_g)=j'(g)(ds(X))-j(g)(ds(X)).\end{aligned}$$ \[generali\]The previous proof shows $j=j_{{\tilde{\theta}}}$ and that for any $X\in T_{s(g)}M$ $$\begin{aligned} j_{{\tilde{\theta}}}(g)(X)=dp(\tilde X_g)-r_{p(g)^{-1}}{\tilde{\theta}}(\tilde X_g),\end{aligned}$$ where $\tilde X_g\in T_g{\tilde{{\mathcal{G}}}}$ is any vector (not necessarily in ${\tilde{{\mathcal{H}}}}_g$) $s$-projectable to $X$. ### Lie prolongations and the Maurer-Cartan equation; Theorem 4 {#Lie-prolongations and the Maurer-Cartan equation} Throughout this section let ${\mathcal{G}}$ be a Lie groupoid over $M$ and let $\theta\in\Omega^1({\mathcal{G}},t^*E)$ be a multiplicative form. Denote by $$\begin{aligned} D:\Gamma(A){\longrightarrow}\Omega^1(M,E),\end{aligned}$$ the Spencer operator associated to $\theta$, relative to the vector bundle map $l:A\to E$. Out of the $l$-Spencer operator $D$, one can construct an antisymmetric, $C^{\infty}(M)$-bilinear map $$\begin{aligned} \{\cdot,\cdot\}_D:A\times A{\longrightarrow}E\end{aligned}$$ defined at the level of sections by $$\begin{aligned} \frac{1}{2}\{{\alpha},{\beta}\}_D:=D_{\rho({\alpha})}({\beta})-D_{\rho({\beta})}({\alpha})-l[{\alpha},{\beta}]\end{aligned}$$ where ${\alpha},{\beta}\in\Gamma(A)$. On the other hand, one has a differential operator relative to $D$ $$\begin{aligned} d_D:\Omega^*({\tilde{{\mathcal{G}}}},t^*A){\longrightarrow}\Omega^{*+1}({\tilde{{\mathcal{G}}}},t^*E)\end{aligned}$$ explicitly defined by $$\begin{aligned} d_D\omega(X_0,\ldots,&X_k)=\sum_{i}(-1)^{i}D^t_{X_i}(\omega(X_0,\ldots,\hat{X_i},\ldots,X_k))\\&+\sum_{i<j}(-1)^{i+j}l(\omega([X_i,X_j],X_0,\ldots,\hat{X_i},\ldots,\hat{X_j},\ldots,X_k)), \end{aligned}$$ where $D^t:\Gamma(t^*A)\to \Omega^1({\tilde{{\mathcal{G}}}},t^*E)$ is the pullback of $D$ via the target map $t:{\tilde{{\mathcal{G}}}}\to M$ (see subsection \[first examples and operations\] for the precise definition of $D^t$). Let ${\tilde{{\mathcal{G}}}}$ be a Lie groupoid over $M$ and ${\tilde{\theta}}\in\Omega^1({\tilde{{\mathcal{G}}}},t^*A)$ a multiplicative form.
We denote by $MC({\tilde{\theta}},\theta)\in\Omega^2({\tilde{{\mathcal{G}}}},t^*E)$ the two form given by $$\begin{aligned} \label{MC} MC({\tilde{\theta}},\theta):=d_{D}{\tilde{\theta}}-\frac{1}{2}\{{\tilde{\theta}},{\tilde{\theta}}\}_{D}.\end{aligned}$$ We say that the pair $({\tilde{\theta}},\theta)$ satisfies the [**Maurer-Cartan equation**]{} if $$\begin{aligned} MC({\tilde{\theta}},\theta)=0.\end{aligned}$$ In general, in order to obtain a two form as in \eqref{MC}, one only needs a one form ${\tilde{\theta}}\in\Omega^1(R,t^*A)$, with $t:R\to M$ a surjective submersion, $A\to M$ a Lie algebroid, and $D:\Gamma(A)\to \Omega^1(M,E)$ a relative connection. Theorem 4 shows the relation between Lie-prolongations and the Maurer-Cartan equation. Roughly speaking, they are the same thing: \[theorem-mc\] Let $p:{\tilde{{\mathcal{G}}}}\to{\mathcal{G}}$ be a Lie groupoid map which is also a surjective submersion and let ${\tilde{\theta}}\in\Omega^1({\tilde{{\mathcal{G}}}},t^*A)$ be a multiplicative form with $A=Lie({\mathcal{G}})$. If $$p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}}){\longrightarrow}({\mathcal{G}},\theta)$$ is a Lie prolongation of the Pfaffian groupoid $({\mathcal{G}},\theta)$ then the pair $(\theta,{\tilde{\theta}})$ satisfies the Maurer-Cartan equation. If ${\tilde{{\mathcal{G}}}}$ is source connected and $Lie(p)={\tilde{\theta}}|_{\tilde A}$, then the converse also holds. See subsection \[Proofs\] for the proof. ### The partial Lie prolongation Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. The partial Lie prolongation of $({\mathcal{G}},{\mathcal{H}})$, denoted by $J^1_{\mathcal{H}}{\mathcal{G}}$, is our first attempt to construct a Lie-prolongation of the Pfaffian groupoid $({\mathcal{G}},{\mathcal{H}})$. However, the compatibility conditions for a Lie-prolongation will hold for a ‘smaller’ subgroupoid of $J^1_{\mathcal{H}}{\mathcal{G}}$ as we will see in the next subsection. Explicitly, $J^1_{\mathcal{H}}{\mathcal{G}}$ is the subspace of the jet groupoid $J^1{\mathcal{G}}$ given by $$\begin{aligned} \label{J^1_HG}{J}^1_{\mathcal{H}}{\mathcal{G}}=\{\sigma_g\in {J}^1{\mathcal{G}}\mid \sigma_g:T_{s(g)}M\to {\mathcal{H}}_g\subset T_g{\mathcal{G}}\}.\end{aligned}$$ \[prop421\] Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. Then, 1. $J^1_{\mathcal{H}}{\mathcal{G}}\subset J^1{\mathcal{G}}$ is a Lie subgroupoid and $pr:J^1_{\mathcal{H}}{\mathcal{G}}\to{\mathcal{G}}$ is a surjective submersion, 2. the Lie algebroid of $J^1_{\mathcal{H}}{\mathcal{G}}$ is the partial prolongation $J^1_DA\subset J^1A$, where $D:\Gamma(A)\to \Omega^1(M,E)$ is the Spencer operator associated to $\theta$ (see definition \[partial prolongation\] and theorem \[t2\]). The proof is a consequence of the following lemma. Let $E$ be a representation of ${\mathcal{G}}$ and consider $$\begin{aligned} {\mathrm{Hom}}(TM,E)\in{\text{\rm Rep}\,}(J^1{\mathcal{G}})\end{aligned}$$ with the tensor product action of $J^1{\mathcal{G}}$ on $TM$ by the canonical adjoint action, and on $E$ by the pullback via $pr:J^1{\mathcal{G}}\to {\mathcal{G}}$ of the action of ${\mathcal{G}}$ on $E$. \[s\] Let $\theta\in\Omega^1({\mathcal{G}},t^*E)$ be a multiplicative form. Then the map $$\begin{aligned} \label{ev} ev:J^1{\mathcal{G}}{\longrightarrow}s^*{\mathrm{Hom}}(TM,E),\quad j^1_xb\mapsto b(x)^{-1}\cdot (b^*\theta)_x\end{aligned}$$ is a cocycle and its linearization is equal to the vector bundle map $a:J^1A\to {\mathrm{Hom}}(TM,E)$ defined in lemma \[J\^1\_DA\].
More precisely, $$\begin{aligned} ev(j^1_yb_2\cdot j^1_xb_1)=(j^1_xb_1)^{-1}\cdot ev(j^1_yb_2)+ev(j^1_xb_1)\end{aligned}$$ for any pair of composable arrows $j^1_xb_1,j^1_yb_2\in J^1{\mathcal{G}}$, and $$\begin{aligned} \label{Lie(ev)} Lie(ev)(j^1_x{\alpha})(X)=D_X({\alpha})\end{aligned}$$ for $j^1_x{\alpha}\in J^1A$ and $X\in T_xM.$ We use the description of $J^1{\mathcal{G}}$ given in remark \[when working with jets\]. Let $\sigma_h,\sigma_g\in J^1{\mathcal{G}}$ be a pair of composable arrows. Then, $$\begin{aligned} \begin{split} ev(\sigma_h\cdot\sigma_g)(X)&=(hg)^{-1}\cdot \theta(dm(\sigma_h(\lambda_{\sigma_g}(X)),\sigma_g(X)))\\ &=(hg)^{-1}\cdot(\theta(\sigma_h(\lambda_{\sigma_g}(X)))+h\cdot\theta(\sigma_g(X)))\\ &=g^{-1}\cdot h^{-1}\theta(\sigma_h(\lambda_{\sigma_g}(X)))+g^{-1}\cdot\theta(\sigma_g(X))\\ &=\sigma_g\cdot ev(\sigma_h)(X)+ev(\sigma_g)(X) \end{split}\end{aligned}$$ for any $X\in T_{s(g)}M$. Let’s compute $Lie (ev)(j^1_x{\alpha})$. Consider the flow $\phi^\epsilon_{\alpha^r}:{\mathcal{G}}\to{\mathcal{G}}$ of the right invariant vector field $\alpha^r$ induced by $\alpha$. Denote by $\phi^\epsilon_\alpha:M\to {\mathcal{G}}$ the bisection defined by $\phi^\epsilon_\alpha:=\phi^\epsilon_{\alpha^r}\circ{u}$. Then, $\epsilon\mapsto {d}_x\phi_\alpha^\epsilon :T_xM\to T_{\phi_\alpha^\epsilon(x)}{\mathcal{G}}$ is a curve on ${J}^1{\mathcal{G}}$ lying inside the source fiber at $x$ such that $$\begin{aligned} {j}^1_x\alpha=\frac{d}{d\epsilon}{d}_x\phi_\alpha^\epsilon|_{\epsilon=0}.\end{aligned}$$ By definition of $Lie(ev)$ we get that $Lie(ev)({j}_x^1\alpha)(X)$ is given by $$\begin{aligned} \frac{d}{d\epsilon}ev({d}_x\phi_\alpha^\epsilon)(X)|_{\epsilon=0}=\frac{d}{d\epsilon}(\phi_\alpha^\epsilon(x))^{-1}\cdot\theta_{\phi_\alpha^\epsilon(x)}({d}_x\phi_\alpha^\epsilon(X))|_{\epsilon=0}=D_X(\alpha).\end{aligned}$$ Since $J^1_{\mathcal{H}}{\mathcal{G}}\subset J^1{\mathcal{G}}$ is the kernel of $ev$, a cocycle on $J^1{\mathcal{G}}$, it is a subgroupoid. The smoothness of $J^1_{\mathcal{H}}{\mathcal{G}}$ follows from the fact that it is the intersection of a smooth submanifold of the first jet bundle of the source map $s:{\mathcal{G}}\to M$, namely the partial prolongation of ${\mathcal{H}}$ with respect to the source map (see definition \[part-prol\] and lemma \[smothness\]), with $J^1{\mathcal{G}}$, which is an open subset of the first jet bundle of the source map. Now, that $pr:J^1_{\mathcal{H}}{\mathcal{G}}\to {\mathcal{G}}$ is surjective is clear. That it is a submersion follows from the fact that $J^1_{\mathcal{H}}{\mathcal{G}}$ is an open set of the partial prolongation of ${\mathcal{H}}$ with respect to the source map and lemma \[smothness\]. That $J^1_DA\subset J^1A$ is the algebroid of $J^1_{\mathcal{H}}{\mathcal{G}}$ follows from the fact that $J^1_DA$ is the kernel of $Lie(ev)$ (see remark \[cocyclea\]). For a general multiplicative one form $\theta\in\Omega^1({\mathcal{G}},t^*E)$, the subspace $J^1_\theta{\mathcal{G}}\subset J^1{\mathcal{G}}$ defined by $$\begin{aligned} J^1_\theta{\mathcal{G}}=\{j^1_xb\in J^1{\mathcal{G}}\mid (b^*\theta)_x=0\}\end{aligned}$$ still makes sense and it is again the kernel of the cocycle $ev$ defined exactly in the same way as in \eqref{ev}.
Therefore, $J^1_\theta{\mathcal{G}}\subset J^1{\mathcal{G}}$ is again a subgroupoid (but it may fail to be smooth), and the linearization $Lie(ev)$ of $ev$ is again given by formula \eqref{Lie(ev)}, where in this case, $D:\Gamma(A)\to\Omega^1(M,E)$ is defined to be the Spencer operator associated to $\theta$ given in theorem \[t1\]. Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. The [**partial Lie prolongation of $({\mathcal{G}},{\mathcal{H}})$**]{} is the Lie groupoid $J^1_{\mathcal{H}}{\mathcal{G}}$ endowed with the Lie-Pfaffian multiplicative distribution ${\mathcal{H}}^{(1)}\subset TJ^1_{\mathcal{H}}{\mathcal{G}}$ given by $$\begin{aligned} {\mathcal{H}}^{(1)}:=\mathcal{C}_1\cap TJ^1_{\mathcal{H}}{\mathcal{G}},\end{aligned}$$ where $\mathcal{C}_1\subset TJ^1{\mathcal{G}}$ is the multiplicative Cartan distribution (see example \[example: jet groupoids\]). That ${\mathcal{H}}^{(1)}$ is indeed a Pfaffian multiplicative distribution is a consequence of proposition \[proposition1\] and subsection \[Jet groupoids and algebroids\], as it follows that $J^1_{\mathcal{H}}{\mathcal{G}}$ is an open subset of the partial prolongation of $s:({\mathcal{G}},H)\to M$ where $H:={\mathcal{H}}$ as a Pfaffian distribution. As ${\mathcal{H}}^{(1)}=H^{(1)}|_{J^1_{\mathcal{H}}{\mathcal{G}}}$ and $H^{(1)}$ is a Pfaffian distribution, it follows that ${\mathcal{H}}^{(1)}$ is itself a Pfaffian distribution. That ${\mathcal{H}}^{(1)}$ is multiplicative of Lie-type follows from the definition. One obtains one of the main properties of $J^1_{\mathcal{H}}{\mathcal{G}}$ (see proposition \[proposition1\]): For a multiplicative distribution ${\mathcal{H}}\subset T{\mathcal{G}}$, the morphism of groups $$\begin{aligned} pr:\operatorname{Bis}(J^1_{\mathcal{H}}{\mathcal{G}},{\mathcal{H}}^{(1)}){\longrightarrow}\operatorname{Bis}({\mathcal{G}}, {\mathcal{H}})\end{aligned}$$ is a bijection with inverse $j^1:\operatorname{Bis}({\mathcal{G}}, {\mathcal{H}})\to\operatorname{Bis}(J^1_{\mathcal{H}}{\mathcal{G}},{\mathcal{H}}^{(1)})$. Related to the abstract notion of Lie-prolongation, one has that if $p:({\tilde{{\mathcal{G}}}},{\tilde{{\mathcal{H}}}})\to ({\mathcal{G}},{\mathcal{H}})$ is a Lie-prolongation of $({\mathcal{G}},{\mathcal{H}})$, then there is an induced Lie groupoid map $$\begin{aligned} j_{{\tilde{{\mathcal{H}}}}}:{\tilde{{\mathcal{G}}}}{\longrightarrow}J^1_{{\mathcal{H}}}{\mathcal{G}}\end{aligned}$$ such that $j_{{\tilde{{\mathcal{H}}}}}^*\theta^{(1)}=\theta_{{\tilde{{\mathcal{H}}}}}$. See corollary \[628\] to conclude that ${\text{\rm Im}\,}j_{{\tilde{{\mathcal{H}}}}}\subset J^1_{{\mathcal{H}}}{\mathcal{G}}$. ### The classical Lie prolongation: its groupoid structure The classical Lie prolongation of a Pfaffian groupoid $({\mathcal{G}},{\mathcal{H}})$ is, under some smoothness assumptions, a Lie-prolongation of $({\mathcal{G}},{\mathcal{H}})$. Again, it may be thought of as the complete infinitesimal data of solutions of ${\mathcal{H}}$, and it also satisfies the universal property stated in proposition \[prop-un\]. Also we can apply all the theory and notions developed in subsection \[the-classical-prolongation\] for Pfaffian bundles, but once again, the objects become Lie theoretic and simpler thanks to the multiplicative structure.
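Before going into the details, it may help to keep in mind the following unwinding of the definitions given below (a paraphrase, not a new statement): a jet $j^1_xb\in J^1{\mathcal{G}}$ of a local bisection $b$ lies in the classical Lie-prolongation space precisely when $$\begin{aligned} d_xb(T_xM)\subset {\mathcal{H}}_{b(x)}\qquad\text{and}\qquad \theta_{\mathcal{H}}\big([db(X),db(Y)]\big)_{b(x)}=0\ \text{ for all }X,Y\in T_xM,\end{aligned}$$ where the bracket is computed using local extensions of $d_xb(X)$ and $d_xb(Y)$ to sections of ${\mathcal{H}}$; the result does not depend on the choice of extensions, since $\theta_{\mathcal{H}}$ vanishes on ${\mathcal{H}}$. In other words, the two conditions are the tangency of $b$ to ${\mathcal{H}}$ at $x$ together with the vanishing of the curvature map $c_{\mathcal{H}}$, defined below, on the image of $d_xb$.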
See for example [@Gold3] for the concept of prolongation in the setting of partial differential equations.\ Closely related to the notion of Lie-prolongation and the involutivity of ${\mathcal{H}}$, we recall that, using $\theta_{\mathcal{H}}$ to identify $T{\mathcal{G}}/{\mathcal{H}}$ with $t^*E$, we get the bracket modulo ${\mathcal{H}}$: Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. The [**curvature map**]{} is defined by $$\label{bra} c_{{\mathcal{H}}}: {\mathcal{H}}\times {\mathcal{H}}{\longrightarrow}t^*E, \ c_{{\mathcal{H}}}(X, Y)= \theta_{\mathcal{H}}([X, Y]).$$ See subsection \[the dual point of view\] and definition \[curvature-map\] in the case of Pfaffian bundles. The [**classical Lie-prolongation space of ${\mathcal{H}}$**]{}, denoted by $$\begin{aligned} P_{{\mathcal{H}}}({\mathcal{G}})\subset J^1_{\mathcal{H}}{\mathcal{G}}\subset J^1{\mathcal{G}}\end{aligned}$$ is defined by $$\begin{aligned} P_{\mathcal{H}}({\mathcal{G}}):=\{j^1_xb\in J^1{\mathcal{G}}\mid d_xb(T_xM)\subset {\mathcal{H}}_{b(x)}\text{ and }c_{\mathcal{H}}(d_xb(\cdot),d_xb(\cdot))=0\}. \end{aligned}$$ We say that $P_{\mathcal{H}}({\mathcal{G}})$ is [**smooth**]{} if it is a smooth submanifold of $J^1{\mathcal{G}}$, and that it is [**smoothly defined**]{} if, moreover, $pr:P_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$ is a surjective submersion. \[open2\]By subsection \[Jet groupoids and algebroids\] and the previous definition one has that $P_{\mathcal{H}}({\mathcal{G}})$ is an open set of $P_H({\mathcal{G}}),$ the partial prolongation of $H:={\mathcal{H}}$ with respect to the source map $s:{\mathcal{G}}\to M$, given by the intersection of $P_H({\mathcal{G}})$ with $J^1{\mathcal{G}}.$ \[yase\] Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. Then $P_{\mathcal{H}}({\mathcal{G}})$ is a subgroupoid of $J^1{\mathcal{G}}$. Moreover, if $P_{\mathcal{H}}({\mathcal{G}})$ is smooth then $$\begin{aligned} Lie(P_{\mathcal{H}}({\mathcal{G}}))=P_D(A),\end{aligned}$$ where $P_D(A)$ is the classical prolongation space of the Spencer operator $D$ associated to ${\mathcal{H}}$ (see theorem \[t2\] and subsection \[basic notions\]). To prove proposition \[yase\] we will need to introduce the [**$1$-curvature map**]{}, which in this setting becomes a $1$-cocycle. This, of course, comes from the $1$-curvature map in the setting of Pfaffian bundles (see definition \[primer\]), but in this setting all the spaces involved are simpler. Of course there is a price to pay: some formulas become more complicated due to the identifications of the spaces. Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. The [**$1$-curvature map**]{} $$c_1: {J}^{1}_{{\mathcal{H}}}{\mathcal{G}}{\longrightarrow}s^{\ast} {\mathrm{Hom}}(\Lambda^2TM, E)$$ is given by $$c_1(\sigma_g)(X_x, Y_x)= {\text{\rm Ad}\,}^{{\mathcal{H}}}_{g^{-1}}\, c_{{\mathcal{H}}}(\sigma_g(X_x), \sigma_g(Y_{x}))$$ for $\sigma_g\in {J}^{1}_{{\mathcal{H}}}{\mathcal{G}}$ with $s(g)= x$, $X_x, Y_x\in T_xM$. The following lemma follows from the definitions. \[lemma:102984\]Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. Then $$\begin{aligned} P_{\mathcal{H}}({\mathcal{G}})=\ker c_1.\end{aligned}$$ In the statement of proposition \[yase\], the fact that $P_{{\mathcal{H}}}({\mathcal{G}})$ is a subgroupoid of ${J}^1_{{\mathcal{H}}}{\mathcal{G}}$ follows from $P_{\mathcal{H}}({\mathcal{G}})$ being the kernel of a cocycle (see lemmas \[lemma:102984\] and \[cocycle\]).
Hence it remains to prove that $c_1$ is indeed a $1$-cocycle on ${J}^{1}_{{\mathcal{H}}}{\mathcal{G}}$. The action of ${J}^1_{\mathcal{H}}{\mathcal{G}}$ on ${\mathrm{Hom}}(\wedge^2TM, E)$ is the one induced by the representations $\lambda$, and ${\text{\rm Ad}\,}^{{\mathcal{H}}} \circ {pr}$ on $TM$ and on $E$ respectively: for $\sigma_{g}\in {J}^1_{\mathcal{H}}{\mathcal{G}}$, with $s(g)= x$, $t(g)= y$ and for $T_x\in {\mathrm{Hom}}(\Lambda^{2}T_{x}M, E_x)$, $$g(T_x)\in {\mathrm{Hom}}(\Lambda^{2}T_{y}M, E_y),\ \ g(T_x)(X_y, Y_y)= {\text{\rm Ad}\,}^{{\mathcal{H}}}_g T_x(\lambda_{g}^{-1}X_y, \lambda_{g}^{-1}Y_y).$$ \[cocycle\] The map $c_1: {J}^1_{\mathcal{H}}{\mathcal{G}}\to s^{\ast}{\mathrm{Hom}}(\wedge^2TM, E)$ is a cocycle. For the proof we need the following lemma: \[lemma: delta theta is multiplicative\] Let ${\mathcal{H}}\subset T{\mathcal{G}}$ by a multiplicative distribution. The curvature map $c_{{\mathcal{H}}}: {\mathcal{H}}\times{\mathcal{H}}\to t^{\ast}E$ satisfies $$c_{{\mathcal{H}}}({d}m(\xi_1, \xi_2), {d}m(\xi_1',\xi_2')) = c_{{\mathcal{H}}}(\xi_1,\xi_1') + {\text{\rm Ad}\,}^{\mathcal{H}}_g c_{{\mathcal{H}}}(\xi_2,\xi_2'),$$ where $\xi_1,\xi_1'\in{\mathcal{H}}_g$, and $\xi_2,\xi_2'\in {\mathcal{H}}_h$ are such that ${d}t(\xi_2) = {d}s(\xi_1)$, and similarly ${d}t(\xi_2') = {d}s(\xi_1')$ In general, for a fiber-wise surjective $1$-form $u\in \Omega^1(P, F)$, denote by $I_{u}$ the resulting bilinear form $c_u$ on $K_u:= \textrm{Ker}(u)$. If $f: Q\to P$ is a submersion, then $c_{f^{\ast}u}= f^{\ast}(c_u)$, i.e. $$c_{f^*u}(X, Y)= c_u (df(X), df(Y))\ \ \ \ (X, Y\in K_{f^*u}= (df)^{-1}(K_u)).$$ This is the content of lemma \[delta-commutes\]. In particular, $c_{m^{\ast}\theta}= m^{\ast}c_{\theta}$, $c_{{pr}_{1}^{\ast}\theta}= {pr}_{1}^{\ast}c_{\theta}$. A variation of this argument also gives $c_{g^{-1}{pr}_{2}^{\ast}\theta}= g^{-1}{pr}_{2}^{\ast}c_{\theta}$, where $g^{-1}$ refers to the multiplication by the inverse of the first component on $E$. Another general remark is that, for $u, v\in \Omega^1(P, E)$, $c_{u+v}= c_u+ c_v$ on $K_u\cap K_v$. Putting everything together we find that $$m^{\ast}(c_{\theta}) = {pr}_{1}^{\ast}(c_{\theta})+ g^{-1} {pr}_{2}^{\ast}(c_{\theta})$$ on all pairs $(U, V)$ of vectors tangent to ${\mathcal{G}}_{2}$ with $$U, V\in (d{pr}_1)^{-1}({\mathcal{H}})\cap (d{pr}_2)^{-1}({\mathcal{H}})$$ from which the lemma follows. Let $\sigma_g, \sigma_h\in {J}^{1}_{{\mathcal{H}}}{\mathcal{G}}$ composable, $X, Y\in T_{s(h)}M$. Using the formula describing the composition $\sigma_g\cdot\sigma_h$, i.e. applying (\[mult-i-J1\]) for $u= X$ and $u= Y$ and then applying the multiplicativity of $c_{{\mathcal{H}}}$ from Lemma \[lemma: delta theta is multiplicative\], we find $$\begin{aligned} c_{gh}(\sigma_g\cdot\sigma_h(X), \sigma_g\cdot\sigma_h(Y)) & = c_g(\sigma_g(\lambda_h(X)), \sigma_g(\lambda_h(Y))) + \\ & + {\text{\rm Ad}\,}^{\mathcal{H}}_g c_h(\sigma_h(X), \sigma_h(Y)). \end{aligned}$$ Rewriting this in terms of $c_1$, we obtain, after also applying ${\text{\rm Ad}\,}^H_{h^{-1}g^{-1}}$, $$c_1(\sigma_g\cdot\sigma_h)(X, Y)= {\text{\rm Ad}\,}^{\mathcal{H}}_{h^{-1}} c_1(\sigma_g)(\lambda_h(X), \lambda_h(Y))+ c_1(\sigma_h)(X, Y),$$ i.e. the cocycle condition $c_1(\sigma_g\cdot\sigma_h)= {\text{\rm Ad}\,}^{\mathcal{H}}_{h^{-1}}(c_1(\sigma_g))+ c_1(\sigma_h)$. Of course, the next step is to linearize $c_1$. 
Hence we pass to the Lie algebroid ${J}^{1}_{D}A$ of ${J}^{1}_{{\mathcal{H}}}{\mathcal{G}}$ where $D:\Gamma(A)\to\Omega^1(M,E)$ is the Spencer operator associated to ${\mathcal{H}}$. See theorem \[t2\]. \[curvatura\] The linearization of the cocycle $c_1$ is equal to the curvature map $$\varkappa_D: {J}^{1}_{D}A {\longrightarrow}{\mathrm{Hom}}(\Lambda^2TM, E).$$ See definition \[varkappa-map\]. To conclude the proof of proposition \[yase\], it suffices to check that $Lie(c_1)=\varkappa_D$, as then $$Lie(P_{{\mathcal{H}}}({\mathcal{G}})) = Lie(\ker c_1) = \ker(Lie(c_1)) = \ker\varkappa_D = P_{D}(A).$$ This is precisely the content of lemma \[curvatura\]. For the proof, it is better to consider $$\tilde\theta_g = {\text{\rm Ad}\,}^{\mathcal{H}}_{g^{-1}}(\theta_{\mathcal{H}})_g \in \Omega^1({\mathcal{G}},s^{\ast}E).$$ We claim that, for any connection $\nabla$ on $E$, using the induced derivative operator $d_{\nabla}$, one has: $$\label{help-I} c_1(\sigma_g)(X, Y)= d_{\nabla}\tilde\theta (\sigma_g(X), \sigma_g(Y)).$$ Indeed, using the definition of $c_1$ and $\tilde\theta$, this reduces to $d_{\nabla}\tilde\theta (X, Y)= \tilde\theta ([X, Y])$ for all $X, Y\in \textrm{Ker}(\theta)$, which is clear. We now compute the linearization $\varkappa_D$. Let $(\alpha, \omega)$ represent a section $\zeta$ of ${J}^{1}_{D}A$. From the definition of $\varkappa_D$, $$\varkappa_D(\alpha, \omega)(x) = (dc_1)_{1_x}(\zeta_x).$$ Note that, thinking of elements of ${J}^{1}_{D}A$ in terms of splittings, $$\zeta_x= {d}_x{\alpha}-\omega_x: T_xM{\longrightarrow}T_{\alpha(x)}A, \ X\mapsto ({d}\alpha)_{x}(X)- \omega_x(X),$$ where $\omega_x(X)\in A_x$ is viewed inside $T_{\alpha(x)}A$ by the natural inclusion $$A_x\hookrightarrow T_{\alpha(x)}A, \ v\mapsto \frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\alpha(x)+ \epsilon v).$$ To compute $(dc_1)_{1_x}({d}_x{\alpha}- \omega_x)$, we will consider the curve $\sigma^{\epsilon}: I \to s^{-1}(x) \subset {J}^1{\mathcal{G}}$ given by $$\sigma^{\epsilon}(X_x) = {d}_x\phi^{\epsilon}_{{\alpha}}(X_x - {\epsilon}\omega(X_x)),$$ for all $X_x \in T_xM$ and ${\epsilon}$ small enough (for $\phi_{\alpha}^{\epsilon}$ see remark \[flows of sections\]). Note that $\sigma^{\epsilon}$ is a curve in the $s$-fiber of ${J}^1{\mathcal{G}}$ (not necessarily in ${J}^1_{\mathcal{H}}{\mathcal{G}}$), whose derivative at ${\epsilon}= 0$ is ${d}_x{\alpha}- \omega_x$. In fact, one has that $$\begin{aligned} \frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}\sigma^{\epsilon}(X_x) &= \frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}( {d}_x\phi^{\epsilon}_{{\alpha}}(X_x) - {\epsilon}\cdot {d}_x \phi^{\epsilon}_{{\alpha}}(\omega(X_x))) \\ &= {d}_x{\alpha}(X_x) -{d}_x\phi^0_{{\alpha}}\omega(X_x)\\ & = ({d}_x{\alpha}- \omega_x)(X_x). \end{aligned}$$ Next, we fix a splitting of ${d}s: {\mathcal{H}}\to s^{\ast}TM$, and for each $X\in{\ensuremath{\mathfrak{X}}}(M)$ we denote by $\tilde{X}\in \Gamma({\mathcal{H}})$ the corresponding horizontal lift. Then $$\tilde\sigma^{\epsilon}(X)_g = {d}_{\varphi_{{\alpha}^r}^{-{\epsilon}}(g)}\varphi^{\epsilon}_{{\alpha}^r}(\tilde X - {\epsilon}\omega(X)^r)$$ defines an extension of $\sigma^{\epsilon}(X_x)$ to ${\ensuremath{\mathfrak{X}}}({\mathcal{G}})$. From the equation (\[help-I\]) we deduce that $$\begin{aligned} \label{derivada} \varkappa_D(\alpha, \omega)(X_x,Y_x)= \frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}{d}_\nabla\tilde\theta_{g_\epsilon}(\tilde\sigma^\epsilon(X),\tilde\sigma^\epsilon(Y))(x),\end{aligned}$$ for all $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$.
Finally, to perform the computation, we let $\nabla$ be the pull-back via $s$ of a connection on $E$ (which we also denote by $\nabla$). We have: $$\begin{aligned} {d}_{\nabla^s}\tilde\theta_{g_{\epsilon}}(\tilde\sigma^{\epsilon}(X),\tilde\sigma^{\epsilon}(Y))&=\nabla_{\tilde\sigma^{\epsilon}(X)}\tilde\theta(\tilde\sigma^{\epsilon}(Y))-\nabla_{\tilde\sigma^{\epsilon}(Y)}\tilde\theta(\tilde\sigma^{\epsilon}(X))-\tilde\theta([\tilde\sigma^{\epsilon}(X),\tilde\sigma^{\epsilon}(Y)])\\ &=-{\epsilon}\nabla_{\tilde\sigma^{\epsilon}(X)}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}(\omega(Y)^r))+\nabla_{\tilde\sigma^{\epsilon}(X)}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}(\tilde Y))\\ &\quad+{\epsilon}\nabla_{\tilde\sigma^{\epsilon}(Y)}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}(\omega(X)^r))-\nabla_{\tilde\sigma^{\epsilon}(Y)}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}(\tilde X))\\ &\quad-{\epsilon}^2\tilde\theta([{d}\varphi^{\epsilon}_{\alpha^r}(\omega(X)^r),{d}\varphi^{\epsilon}_{\alpha^r}(\omega(Y)^r)])+{\epsilon}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}[\omega(X)^r,\tilde Y])\\ &\quad-{\epsilon}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}[\omega(Y)^r,\tilde X])-\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}[\tilde X,\tilde Y]). \end{aligned}$$ We now take the derivative when ${\epsilon}=0$ and evaluate the expression at $x$. Using the fact that $\nabla$ is the pull-back of a connection on $E$, the first term of the right hand side of the second equality gives us $$\begin{aligned} -\nabla_{\tilde\sigma^0(X)}\tilde\theta(\omega(Y)^r)(x) = -\nabla_X\tilde\theta(\omega(Y))(x)=-\nabla_Xl(\omega(Y))(x), \end{aligned}$$ while the second term gives $$\begin{aligned}\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}\nabla_{\tilde\sigma^{\epsilon}(X)}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}(\tilde Y))(x) &=\nabla_{\frac{d}{d{\epsilon}}\tilde\sigma^{\epsilon}(X)}\tilde\theta(\tilde Y)(x)+\nabla_{\tilde X}\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\varphi^{\epsilon}_{\alpha^r})^*\tilde\theta(\tilde Y)(x)\\ &=\nabla_X\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\phi^{\epsilon}_{\alpha})^*\tilde\theta(Y)(x)\\ &=\nabla_XD_Y\alpha, \end{aligned}$$ where in the passage from the first to the second line we have used that $\tilde\theta(\tilde Y) = 0$ because $\tilde Y\in \Gamma({\mathcal{H}})$. It then follows from $D_Y({\alpha}) = l(\omega(Y))$ that the the first line of the expression vanishes. The same argument shows also that the second line of the expression is equal to zero. So we are left with calculating the last three terms of the expression. We obtain: $$\begin{aligned} \tilde\theta([\omega(X)^r,\tilde Y])(x) - \tilde\theta([\omega(Y)^r,\tilde X])(x) - \frac{{d}}{{d}{\epsilon}}\big{|}_{{\epsilon}= 0}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}[\tilde X,\tilde Y])(x). 
\end{aligned}$$ From the first two terms we obtain $$D_Y(\omega(X))-D_X(\omega(Y)).$$ Finally, for the last term we use the fact that $\tilde X$ and $\tilde Y$ are projectable extensions of $X$ and $Y$ to obtain $$\begin{aligned} \frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}[\tilde X,\tilde Y])(x)=\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}\tilde\theta({d}\varphi^{\epsilon}_{\alpha^r}\circ{d}u([X,Y]_x))=\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}\tilde\theta({d}\phi^{\epsilon}_{\alpha}[X,Y]_x)\\=\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}({\text{\rm Ad}\,}^{\mathcal{H}}_{\phi^{\epsilon}_{\alpha}})^{-1}\theta({d}\phi^{\epsilon}_{\alpha}[X,Y]_x)=D_{[X,Y]}\alpha(x)=l(\omega([X,Y]_x)).\end{aligned}$$ Putting these pieces together concludes the proof. Going back to jet groupoids (see subsections \[example: jet groupoids\] and \[Jet groupoids and algebroids\]) one finds that the multiplicative Cartan distributions $\mathcal{C}_k\subset J^k{\mathcal{G}}$ are “compatible under prolongations”. That is, \[claslie\]Let ${\mathcal{G}}{\rightrightarrows}M$ be a Lie groupoid and $k>0$ an integer. Then, $$\begin{aligned} P_{\mathcal{C}_k}(J^k{\mathcal{G}})=J^{k+1}{\mathcal{G}}\qquad\text{and}\qquad\mathcal{C}_k^{(1)}=\mathcal{C}_{k+1}.\end{aligned}$$ This follows from proposition \[prol-jet\], the definition of $\mathcal{C}_k$, and the fact that $J^k{\mathcal{G}}$ and $J^{k+1}{\mathcal{G}}$ are open submanifolds of the corresponding jet bundles of $s:{\mathcal{G}}\to M$. \[el-caso-forma\]For a multiplicative one form $\theta\in\Omega^1({\mathcal{G}},t^*E)$ of constant rank, one can describe the $1$-curvature map directly in terms of $\theta$: $$c_1: {J}^{1}_{\theta}{\mathcal{G}}{\longrightarrow}s^{\ast} {\mathrm{Hom}}(\Lambda^2TM, E)$$ given by $$c_1(\sigma_g)(X_x, Y_x)= g^{-1}\cdot\delta\theta_{g}(\sigma_g(X_x), \sigma_g(Y_{x}))$$ for $\sigma_g\in {J}^{1}_{\theta}{\mathcal{G}}$ with $s(g)= x$, $X_x, Y_x\in T_xM$, where $\delta\theta$ is as defined earlier. This follows from the fact that in this case, $\delta\theta$ satisfies a similar multiplicativity condition as in lemma \[lemma: delta theta is multiplicative\], where in this case we replace the action ${\text{\rm Ad}\,}^{\mathcal{H}}$ by the action with respect to which $\theta$ is multiplicative. The classical Lie prolongation, denoted by $$\begin{aligned} P_\theta({\mathcal{G}})\subset J^1_\theta{\mathcal{G}}\subset J^1{\mathcal{G}}\end{aligned}$$ is set to be the kernel of $c_1$. The linearization of $c_1$ is again given by the same formula, where in this case $D:\Gamma(A)\to \Omega^1(M,E)$ is the Spencer operator associated to $\theta$ (see theorem \[t1\]). ### The classical Lie-prolongation: smoothness In order to define a canonical Lie-prolongation of a multiplicative distribution ${\mathcal{H}}\subset T{\mathcal{G}}$ on the space $P_{\mathcal{H}}({\mathcal{G}})$ we need to require the smoothness of $P_{\mathcal{H}}({\mathcal{G}})$. Thus, we now concentrate on the smoothness of $P_{\mathcal{H}}({\mathcal{G}})$.\ For the following results let $D:\Gamma(A)\to \Omega^1(M,E)$ be the Spencer operator associated to ${\mathcal{H}}$ (see theorem \[t2\]) and let ${\mathfrak{g}}^{(1)}(A,D)\subset T^*M\otimes A$ be the prolongation of the symbol map $\partial_D:{\mathfrak{g}}\to {\mathrm{Hom}}(TM, E)$ (see definition \[definitions\]). \[prop4213\] Assume that ${\mathcal{G}}$ has connected $s$-fibers and that $pr:P_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$ is surjective. Then the following are equivalent: 1.
$P_{\mathcal{H}}({\mathcal{G}})\subset J^1{\mathcal{G}}$ is smooth and $pr:P_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$ is a submersion. 2. ${\mathfrak{g}}^{(1)}(A,D)$ has constant rank and $pr:P_D(A)\to A$ is surjective. The following two lemmas, which are interesting in their own right, are going to be used together with results on relative connections in the proof of proposition \[prop4213\]. \[4217\] $P_{\mathcal{H}}({\mathcal{G}})\subset J^1{\mathcal{G}}$ is a Lie subgroupoid if and only if $P_D(A)\subset J^1A$ is a Lie subalgebroid. By proposition \[yase\] we know that if $P_{\mathcal{H}}({\mathcal{G}})$ is smooth then $P_D(A)$ is its Lie algebroid. Conversely, as $P_D(A)$ is the kernel of $\varkappa_D$, then $P_D(A)$ is smooth if and only if $\varkappa_D$ has constant rank. By definition of the linearization of $c_1$ and lemma \[curvatura\], one has that $\varkappa_D=dc_1|_A$ and this implies that $$\begin{aligned} \label{c=l+m} {\text{\rm rk}\,}d_{\sigma_g}c_1={\text{\rm rk}\,}(\varkappa_D|_{t(g)})+\dim M\end{aligned}$$ for any $\sigma_g\in J^1_{\mathcal{H}}{\mathcal{G}}$. From this one has that $c_1:J^1_{\mathcal{H}}{\mathcal{G}}\to {\mathrm{Hom}}(\wedge^2TM,E)$ has constant rank and therefore $P_{\mathcal{H}}({\mathcal{G}})=\ker c_1$ is smooth (see proposition 2.1 in [@Gold2]). Indeed, to show \eqref{c=l+m}, consider a $1$-cocycle $c$ on a Lie groupoid $\Sigma{\rightrightarrows}M$ with values in a representation $F\in {\text{\rm Rep}\,}(\Sigma)$. Choose any $g\in\Sigma$, and let $b\in\operatorname{Bis}(\Sigma)$ be such that $b(x)=g$ for $x=s(g)$. Using $b$ we split $T_g\Sigma$ as the direct sum $$\begin{aligned} \label{dec} T_g\Sigma\simeq T_xM\oplus T^s_g\Sigma,\end{aligned}$$ where a vector $X\in T_xM$ corresponds to $d_xb(X)$. As the composition $c\circ b:M\to F$ is a section of $F$, then $d_gc$ is injective in the $T_xM$ component of the decomposition \eqref{dec}. On the other hand, the cocycle condition for $c\in Z^1(\Sigma,s^*F)$ implies that $$\begin{aligned} \label{c-l} d_gc(v)=g^{-1}\cdot d_{t(g)}c(r_{g^{-1}}v)=g^{-1}\cdot Lin(c)(r_{g^{-1}}v)\end{aligned}$$ and therefore ${\text{\rm rk}\,}(d_gc|_{T_g^s\Sigma})={\text{\rm rk}\,}Lin(c)_{t(g)}$ since $r_{g^{-1}}:T^s_g\Sigma\to A_{t(g)}$ is an isomorphism. The argument to prove equation \eqref{c-l} is completely analogous to the one given in proposition \[prop: appendix\]. From this one has that $$\begin{aligned} {\text{\rm rk}\,}d_{g}c={\text{\rm rk}\,}(Lin(c)|_{t(g)})+\dim M.\end{aligned}$$ \[con\] Assume that ${\mathcal{G}}$ has connected $s$-fibers. If ${\mathfrak{g}}^{(1)}(A,D)$ has constant rank and $pr:P_D(A)\to A$ is surjective then $P_{\mathcal{H}}({\mathcal{G}})$ is smooth and $pr:P_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$ is a submersion. From lemma \[workable\] one obtains that $P_D(A)$ is smooth, and therefore by the previous lemma, $P_{\mathcal{H}}({\mathcal{G}})\subset J^1{\mathcal{G}}$ is smooth. That $pr:P_{\mathcal{H}}({\mathcal{G}})\to{\mathcal{G}}$ is a submersion is an instance of the following general case: let $pr:\Sigma\to {\mathcal{G}}$ be a Lie groupoid map over the identity map of $M$ with the property that ${\mathcal{G}}$ has connected $s$-fibers. If $Lie(pr):A(\Sigma)\to A({\mathcal{G}})$ is surjective then $pr:\Sigma\to {\mathcal{G}}$ is a surjective submersion. To see this we will show that $pr$ is a submersion onto its image and therefore ${\text{\rm Im}\,}(pr)\subset {\mathcal{G}}$ is an open set.
As ${\mathcal{G}}$ has connected $s$-fibers and $M\subset {\text{\rm Im}\,}(pr)$, then ${\text{\rm Im}\,}(pr)={\mathcal{G}}$. Let’s prove then that $pr$ is a submersion. First of all, notice that $pr$ is a submersion at the units of $\Sigma$. Indeed, for any $x\in M$, $$\begin{aligned} T_{1_x}\Sigma\simeq T_xM\oplus A(\Sigma)_x,\end{aligned}$$ where a vector $X\in T_xM$ corresponds to $d_xu(X)\in T_{1_x}\Sigma$. Similarly one can decompose $T_{1_x}{\mathcal{G}}$ as $T_xM\oplus A({\mathcal{G}})_x$. As $pr$ is the identity on the units, $dpr$ is the identity on the $T_xM$ component. Now, as $$\begin{aligned} dpr|_{A(\Sigma)_x}=Lie(pr)\end{aligned}$$ and $Lie(pr)$ is surjective by assumption, then $d_{1_x}pr$ is surjective. Secondly, for any $g\in \Sigma$, let $b\in\operatorname{Bis}(\Sigma)$ be such that $b(s(g))=g$. If $L_b:\Sigma\to \Sigma, h\mapsto b(t(h))\cdot h$ denotes the diffeomorphism given by left multiplication by $b$, then the fact that $pr$ respects multiplication implies that we have the commutative diagram where the columns are diffeomorphisms $$\begin{aligned} \xymatrix{ \Sigma \ar[r]^{pr} \ar[d]_{L_b} & {\mathcal{G}}\ar[d]^{L_{pr\circ b}} \\ \Sigma \ar[r]^{pr} & {\mathcal{G}}. }\end{aligned}$$ Taking differentials we get that $$\begin{aligned} \xymatrix{ T_{s(g)}\Sigma \ar[r]^{dpr} \ar[d]_{dL_b} &T_{s(g)} {\mathcal{G}}\ar[d]^{dL_{pr\circ b}} \\ T_g\Sigma \ar[r]^{dpr} & T_{pr(g)}{\mathcal{G}}. }\end{aligned}$$ As the columns are isomorphisms and the top arrow is surjective, by the commutativity we have that the bottom arrow is also surjective. That statement 1 implies 2 is clear from remark \[smooth-prol\], and the fact that the Lie functor of the Lie groupoid map $pr:P_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$ is $pr:P_D(A)\to A$. Lemma \[con\] gives the converse. Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution. If $P_{\mathcal{H}}({\mathcal{G}})$ is smoothly defined we call $$\begin{aligned} pr:(P_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(1)}){\longrightarrow}({\mathcal{G}},{\mathcal{H}})\end{aligned}$$ [**the classical Lie prolongation of $({\mathcal{G}},{\mathcal{H}})$**]{}, where $${\mathcal{H}}^{(1)}:=\mathcal{C}_1\cap TP_{\mathcal{H}}({\mathcal{G}})$$ with $\mathcal{C}_1\subset TJ^1{\mathcal{G}}$ being the multiplicative Cartan distribution. For ${\mathcal{H}}\subset T{\mathcal{G}}$ as in the previous definition, the fact that ${\mathcal{H}}^{(1)}\subset T P_{{\mathcal{H}}}({\mathcal{G}})$ is indeed a Pfaffian distribution with respect to the source map can be proven using the ideas of lemma \[lemaese\]. The Lie-multiplicativity condition follows by definition. \[yase2\]If $D$ is the associated Spencer operator of ${\mathcal{H}}$, the induced Spencer operator associated to ${\mathcal{H}}^{(1)}\subset TP_{\mathcal{H}}({\mathcal{G}})$ is $$\begin{aligned} D^{(1)}:\Gamma(P_D(A)){\longrightarrow}\Omega^1(M,A).\end{aligned}$$ This follows from propositions \[yase\] and \[cartan form\]. \[si-es\] If $P_{\mathcal{H}}({\mathcal{G}})$ is smoothly defined, then $$\begin{aligned} pr:(P_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(1)})\overset{}{{\longrightarrow}}({\mathcal{G}},{\mathcal{H}})\end{aligned}$$ is a Lie-prolongation of $({\mathcal{G}},{\mathcal{H}}).$ This is a consequence of the key properties \[key-properties\] of $pr:P_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$.
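Summarizing the discussion of this subsection (a recap of proposition \[prop4213\] and the definitions above, not a new statement): if ${\mathcal{G}}$ has connected $s$-fibers and $pr:P_{\mathcal{H}}({\mathcal{G}})\to{\mathcal{G}}$ is surjective, then $$\begin{aligned} P_{\mathcal{H}}({\mathcal{G}})\ \text{is smoothly defined}\quad\Longleftrightarrow\quad {\mathfrak{g}}^{(1)}(A,D)\ \text{has constant rank and}\ pr:P_D(A)\to A\ \text{is surjective},\end{aligned}$$ so that smooth definedness of the classical Lie-prolongation can be checked entirely on the side of the Spencer operator $D$; and in that case $pr:(P_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(1)})\to({\mathcal{G}},{\mathcal{H}})$ is a Lie-prolongation of $({\mathcal{G}},{\mathcal{H}})$ by proposition \[si-es\].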
### The classical Lie-prolongation: the universal property The classical Lie-prolongation of $({\mathcal{G}},{\mathcal{H}})$ is actually universal among Lie-prolongations of $({\mathcal{G}},{\mathcal{H}})$ in the following sense: \[prop-un\]The Lie prolongation space $P_{\mathcal{H}}({\mathcal{G}})$ of $({\mathcal{G}},{\mathcal{H}})$ is universal among Lie-prolongations of $({\mathcal{G}},{\mathcal{H}})$. More precisely, if $$\begin{aligned} p:({\tilde{{\mathcal{G}}}},{\tilde{{\mathcal{H}}}}){\longrightarrow}({\mathcal{G}},{\mathcal{H}})\end{aligned}$$ is a Lie prolongation of $({\mathcal{G}},{\mathcal{H}})$, then there exists a unique Lie groupoid map $j:{\tilde{{\mathcal{G}}}}\to P_{\mathcal{H}}({\mathcal{G}})$ with the property that $p\circ j=pr$ and $$\begin{aligned} \label{cond3} j^*\theta^{(1)}=\theta_{{\tilde{{\mathcal{H}}}}},\qquad j^*\delta\theta^{(1)}=\delta\theta_{{\tilde{{\mathcal{H}}}}},\end{aligned}$$ where $\theta_{{\tilde{{\mathcal{H}}}}}$ is the multiplicative form associated to ${\tilde{{\mathcal{H}}}}$. One can write condition \eqref{cond3} directly in terms of the multiplicative distributions as follows. By assumption $\tilde A/\tilde{\mathfrak{g}}\simeq A$. On the other hand, recall that right translation induces an isomorphism $$\begin{aligned} T{\tilde{{\mathcal{G}}}}/{\tilde{{\mathcal{H}}}}\simeq T^s{\tilde{{\mathcal{G}}}}/{\tilde{{\mathcal{H}}}}^s\overset{R}{\simeq} t^*(\tilde A/\tilde{\mathfrak{g}})\simeq t^*A.\end{aligned}$$ Similarly, $$\begin{aligned} TP_{\mathcal{H}}({\mathcal{G}})/{\mathcal{H}}^{(1)}\simeq t^*A.\end{aligned}$$ Condition \eqref{cond3} means that, under this identification, $$\begin{aligned} X{\operatorname{mod}}{\tilde{{\mathcal{H}}}}=dj_{{\tilde{\theta}}}(X){\operatorname{mod}}{\mathcal{H}}^{(1)},\qquad c_{{\tilde{{\mathcal{H}}}}}(Y,Z)=c_{{\mathcal{H}}^{(1)}}(dj_{{\tilde{\theta}}}(Y),dj_{{\tilde{\theta}}}(Z)) \end{aligned}$$ for any vectors $X,Y,Z$ of ${\tilde{{\mathcal{G}}}}$ with the property that $Y,Z$ belong to ${\tilde{{\mathcal{H}}}}$. Take $j=j_{\theta_{{\tilde{{\mathcal{H}}}}}}$ of corollary \[628\]. By definition of $P_{\mathcal{H}}({\mathcal{G}})$ and proposition \[410\] it follows that ${\text{\rm Im}\,}j\subset P_{{\mathcal{H}}}({\mathcal{G}})$. That $j^*\theta^{(1)}=\theta_{{\tilde{{\mathcal{H}}}}}$ follows from corollary \[628\]. To prove that $j^*\delta\theta^{(1)}=\delta\theta_{{\tilde{{\mathcal{H}}}}}$ notice that $j^*\theta^{(1)}=\theta_{{\tilde{{\mathcal{H}}}}}$ implies that $dj:T{\tilde{{\mathcal{G}}}}\to TJ^1{\mathcal{G}}$ maps ${\tilde{{\mathcal{H}}}}$ to ${\mathcal{H}}^{(1)}$. Take now $X,Y\in \Gamma({\tilde{{\mathcal{H}}}})$ $s$-projectable and let $g\in{\tilde{{\mathcal{G}}}}$. Consider $s$-projectable extensions $\widetilde{dj(X)},\widetilde{dj(Y)}\in\Gamma(\mathcal{H}^{(1)})$ of $ds(X)$ and $ds(Y)$, respectively, with the property that $$\begin{aligned} \widetilde{dj(X)}_g=d_gj(X)\quad\text{and}\quad \widetilde{dj(Y)}_g=d_gj(Y).\end{aligned}$$ Then $$\begin{aligned} \begin{split} j^*\delta\theta^{(1)}(X,Y)_g&=\theta^1_{j_{{\tilde{\theta}}}(g)}[\widetilde{dj(X)},\widetilde{dj(Y)}]\\ &=r_{p(g)^{-1}}(dpr[\widetilde{dj(X)},\widetilde{dj(Y)}]-j(g)(ds[\widetilde{dj(X)},\widetilde{dj(Y)}])).
\end{split}\end{aligned}$$ But applying lemma \[delta-commutes\] (or the remark below) to the submersion $pr:P_{{\mathcal{H}}}({\mathcal{G}})\to {\mathcal{G}}$, and recalling that $dpr({\mathcal{H}}^{(1)})\subset {\mathcal{H}}$ by remark \[key-properties\], one has that $$\begin{aligned} dpr[\widetilde{dj(X)},\widetilde{dj(Y)}]=c_{\mathcal{H}}(dpr(dj(X)),dpr(dj(Y)))=0.\end{aligned}$$ On the other hand, as $\widetilde{dj(X)}$ and $\widetilde{dj(Y)}$ are $s$-projectable to $ds(X)$ and $ds(Y)$ respectively, using remark \[generali\], $$\begin{aligned} \begin{split} j(g)(ds[\widetilde{dj(X)},\widetilde{dj(Y)}])&=j(g)[ds(X),ds(Y)]=j(g)(ds[X,Y])\\ &=d_gp([X,Y])-r_{p(g)}\theta_{{\tilde{{\mathcal{H}}}}}([X,Y]_g). \end{split}\end{aligned}$$ Using again lemma \[delta-commutes\], one gets that $$\begin{aligned} d_gp([X,Y])=c_{{\mathcal{H}}}(dp(X),dp(Y))=0,\end{aligned}$$ and therefore $$\begin{aligned} j^*\delta\theta^{(1)}(X,Y)_g=\theta_{{\tilde{{\mathcal{H}}}}}([X,Y]_g)=\delta\theta_g(X,Y).\end{aligned}$$ \[compatibility-prolongations\] If $pr:P_{\mathcal{H}}({\mathcal{G}})\to{\mathcal{G}}$ is smoothly defined, then $$\begin{aligned} \label{mc} MC(\theta^{(1)},\theta_{{\mathcal{H}}})=0.\end{aligned}$$ This is a consequence of theorem \[theorem-mc\] and proposition \[si-es\]. Going back to jet groupoids (see subsection \[example: jet groupoids\]), we have the main example of Lie-prolongations. \[cor:1349\] For $k\geq 1$ a natural number, each jet groupoid endowed with the Cartan form $$\begin{aligned} pr:(J^{k+1}{\mathcal{G}},\theta^{k+1})\overset{}{{\longrightarrow}}(J^k{\mathcal{G}},\theta^{k})\end{aligned}$$ is the classical Lie-prolongation of $(J^k{\mathcal{G}},\theta^k)$. Hence, $$\begin{aligned} MC(\theta^{k+1},\theta^k)=0.\end{aligned}$$ This is a consequence of proposition \[claslie\] and theorem \[theorem-mc\]. ### Proofs of the compatibility theorems \[prol-comp\] and \[theorem-mc\] {#Proofs} Throughout this subsection we assume that: - ${\tilde{{\mathcal{G}}}}$ and ${\mathcal{G}}$ are Lie groupoids over $M$ with Lie algebroids $\tilde A$ and $A$ respectively, - the Lie groupoid map $p:{\tilde{{\mathcal{G}}}}\to {\mathcal{G}}$ is a surjective submersion, - ${\tilde{\theta}}\in\Omega^1({\tilde{{\mathcal{G}}}},t^*A)$ and $\theta\in\Omega^1({\mathcal{G}},t^*E)$ are point-wise surjective multiplicative forms, - $(\tilde D,\tilde l):\tilde A\to A$ and $(D,l):A\to E$ are the associated Spencer operators of ${\tilde{\theta}}$ and $\theta$, - $Lie(p)=\tilde l:\tilde A\to A$, and - we denote by ${\tilde{{\mathcal{H}}}}:=\ker{\tilde{\theta}}\subset T{\tilde{{\mathcal{G}}}}$ and by ${\mathcal{H}}:=\ker\theta\subset T{\mathcal{G}}$ the associated multiplicative distributions. We will prove the following two propositions: \[410\] If the differential $dp:T{\tilde{{\mathcal{G}}}}\to T{\mathcal{G}}$ is such that $$\begin{aligned} \label{cont} dp({\tilde{{\mathcal{H}}}})\subset {\mathcal{H}},\end{aligned}$$ then $$\begin{aligned} \label{co} D\circ\tilde l-l\circ\tilde D=0.\end{aligned}$$ Moreover, if $dp({\tilde{{\mathcal{H}}}})\subset {\mathcal{H}}$ holds, then $$\begin{aligned} \label{b1} \delta\theta(dp(\tilde X),dp(\tilde Y))=0,\quad\text{for any }\tilde X,\tilde Y\in{\tilde{{\mathcal{H}}}}\end{aligned}$$ implies that $$\begin{aligned} \label{b2} D_X\tilde D_Y-D_Y\tilde D_X-l\tilde D_{[X,Y]}=0,\quad\text{for any }X,Y\in{\ensuremath{\mathfrak{X}}}(M).\end{aligned}$$ If ${\tilde{{\mathcal{G}}}}$ has connected $s$-fibers, then conditions \eqref{cont} and \eqref{b1} are equivalent to \eqref{co} and \eqref{b2}, respectively.
\[523\]Note that, as $T{\tilde{{\mathcal{G}}}}={\tilde{{\mathcal{H}}}}+T^s{\tilde{{\mathcal{G}}}}$ and $dp|_{T^s({\tilde{{\mathcal{G}}}})}={\tilde{\theta}}|_{T^s({\tilde{{\mathcal{G}}}})}$, condition \eqref{cont} is equivalent to the equation $$\begin{aligned} \theta\circ dp=\theta\circ {\tilde{\theta}}=l\circ{\tilde{\theta}}.\end{aligned}$$ It is clear that proposition \[410\] implies theorem \[prol-comp\]; in order to achieve a proof of the former, we will first prove two key lemmas (see lemmas \[lemma:3459724\] and \[4418\]). However, before proceeding to a proof, we introduce two cocycles. The first one takes care of condition \eqref{cont}. Explicitly, let $$\begin{aligned} ev_{{\tilde{{\mathcal{H}}}}}:J^1_{{\tilde{{\mathcal{H}}}}} {\tilde{{\mathcal{G}}}}{\longrightarrow}s^*{\mathrm{Hom}}(TM,E)\end{aligned}$$ be defined at $\sigma_g\in J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}$ and $X\in T_{s(g)}M$ by $$\begin{aligned} ev_{{\tilde{{\mathcal{H}}}}}(\sigma_g)(X)=p(g)^{-1}\cdot\theta_{p(g)}(dp(\sigma_g(X))).\end{aligned}$$ As $dp({\tilde{{\mathcal{H}}}}^s)={\tilde{\theta}}({\tilde{{\mathcal{H}}}}^s)=0$ since $Lie(p)={\tilde{\theta}}|_{\tilde A}$, it is clear that \eqref{cont} holds if and only if $ev_{{\tilde{{\mathcal{H}}}}}$ is identically zero. Under the assumption that $dp({\tilde{{\mathcal{H}}}})\subset {\mathcal{H}}$ (see also remark \[523\]), we introduce the second $1$-cocycle, which takes care of condition \eqref{b1}. More precisely, consider the well-defined map $$\begin{aligned} \tilde c_1:J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}{\longrightarrow}s^*{\mathrm{Hom}}(\wedge^2TM,E)\end{aligned}$$ defined at $\sigma_g\in J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}$ and $X,Y\in T_{s(g)}M$ by $$\begin{aligned} \tilde c_1(\sigma_g)(X,Y)=p(g)^{-1}\delta\theta_g(dp(X),dp(Y)).\end{aligned}$$ \[ayuda\]Of course, if equation \eqref{b1} holds then $\tilde c_1$ vanishes. For the converse, let $g\in{\tilde{{\mathcal{G}}}}$ and let $\sigma_g:T_{s(g)}M\to T_g{\tilde{{\mathcal{G}}}}$ be any element of $J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}$. As $$\begin{aligned} \label{eseno} {\tilde{{\mathcal{H}}}}_g=\sigma_g(T_{s(g)}M)\oplus {\tilde{{\mathcal{H}}}}^s_g,\end{aligned}$$ we can write $$\begin{aligned} X=\sigma_g(U)+u,\qquad Y=\sigma_g(V)+v,\end{aligned}$$ where $u,v\in {\tilde{{\mathcal{H}}}}^s_g$. Then $$\begin{aligned} \begin{split} \delta\theta(dp(X),dp(Y))&=\delta\theta(dp(\sigma_g(U)),dp(\sigma_g(V)))+\delta\theta(dp(u),dp(\sigma_g(V)))\\&\quad+\delta\theta(dp(\sigma_g(U)),dp(v))+\delta\theta(dp(u),dp(v))\\ &=\delta\theta(dp(\sigma_g(U)),dp(\sigma_g(V)))=p(g)\tilde c_1(\sigma_g)(U,V), \end{split}\end{aligned}$$ where we use that $dp({\tilde{{\mathcal{H}}}}^s)=0$. This shows the equivalence. The previous computation also shows that if ${\tilde{{\mathcal{G}}}}$ has connected $s$-fibers, then condition \eqref{b1} holds if and only if $$\begin{aligned} \tilde c_1:(J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}})^0{\longrightarrow}s^*{\mathrm{Hom}}(\wedge^2TM,E)\end{aligned}$$ vanishes, where $(J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}})^0$ is the connected component of the $s$-fibers. This is true since $pr:J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}\to{\tilde{{\mathcal{G}}}}$ is a surjective submersion and therefore $pr:(J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}})^0\to{\tilde{{\mathcal{G}}}}$ is still surjective. Hence, for any $g\in {\tilde{{\mathcal{G}}}}$, we can choose our $\sigma_g$ as in \eqref{eseno} belonging to $(J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}})^0$.
The action of $J^1_{{\tilde{{\mathcal{H}}}}} {\tilde{{\mathcal{G}}}}$ on ${\mathrm{Hom}}(TM,E)$ (and on ${\mathrm{Hom}}(\wedge^2TM,E)$) is the one induced by the representation $\lambda$ on $TM$ and the pullback of the action of ${\mathcal{G}}$ on $E$ via $p\circ pr:J^1_{{\tilde{{\mathcal{H}}}}} {\tilde{{\mathcal{G}}}}\to {\mathcal{G}}$: for $\sigma_g\in J^1_{{\tilde{{\mathcal{H}}}}} {\tilde{{\mathcal{G}}}}$ with $s(g)=x$, $t(g)=y$ and $T\in {\mathrm{Hom}}(T_xM,E_x)$, $$\begin{aligned} \sigma_g\cdot T\in {\mathrm{Hom}}(T_yM,E_y),\qquad \sigma_g\cdot T(X_y)=p(g)\cdot T(\lambda_{\sigma_g}^{-1}(X_y)).\end{aligned}$$ For the following lemma recall that $J^1_{\tilde D}\tilde A\subset J^1\tilde A$ is the Lie algebroid of $J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}$, whose sections are given by the Spencer decomposition by pairs $({\alpha},\omega)\in\Gamma(\tilde A)\oplus \Omega^1(M,\tilde A)$ with the property that $$\begin{aligned} \label{eq} \tilde D({\alpha})=\tilde l\circ\omega.\end{aligned}$$ See remark \[cocyclea\]. \[lemma:3459724\] The map $ev_{{\tilde{{\mathcal{H}}}}}:J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}\to s^*{\mathrm{Hom}}(TM,E)$ is a cocycle with linearization $$\begin{aligned} Lie(ev_{{\tilde{{\mathcal{H}}}}}):J^1_{\tilde D}\tilde A{\longrightarrow}{\mathrm{Hom}}(TM,E)\end{aligned}$$ given on sections $({\alpha},\omega)\in \Gamma(J^1_{\tilde D}\tilde A)$ by $$\begin{aligned} D(\tilde l({\alpha}))-l(\tilde D({\alpha})).\end{aligned}$$ Note that $ev_{{\tilde{{\mathcal{H}}}}}$ is the pullback of the cocycle $ev:J^1{\mathcal{G}}\to s^*{\mathrm{Hom}}(TM,E)$ in lemma \[s\], along the Lie groupoid map $$\begin{aligned} j^1p:J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}\to J^1{\mathcal{G}},\quad\sigma_g\mapsto dp\circ \sigma_g.\end{aligned}$$ Therefore $ev_{{\tilde{{\mathcal{H}}}}}$ is a cocycle and its linearization is given by the composition $$\begin{aligned} J^1_{\tilde D}\tilde A\overset{Lie(j^1p)}{{\longrightarrow}}J^1A\overset{a}{{\longrightarrow}}{\mathrm{Hom}}(TM,E),\end{aligned}$$ where $a=Lie(ev)$. Explicitly, for $({\alpha},\omega)\in\Gamma(J^1_{\tilde D}\tilde A)$ as in (\[eq\]), $$\begin{aligned} \begin{split} Lie(ev_{{\tilde{{\mathcal{H}}}}})({\alpha},\omega)&=a(Lie(j^1p)({\alpha},\omega))=a(Lie(p)\circ{\alpha},Lie(p)\circ\omega)\\&=a(\tilde l({\alpha}),\tilde l(\omega))=a(\tilde l({\alpha}), \tilde D({\alpha}))=D(\tilde l({\alpha}))-l(\tilde D({\alpha})). \end{split}\end{aligned}$$ \[4418\]Assume that $dp({\tilde{{\mathcal{H}}}})\subset {\mathcal{H}}$. The map $\tilde c_1:J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}\to s^*{\mathrm{Hom}}(\wedge^2TM,E)$ is a cocycle with linearization $$\begin{aligned} Lie(\tilde c_1):J^1_{\tilde D}\tilde A{\longrightarrow}{\mathrm{Hom}}(\wedge^2TM,E)\end{aligned}$$ given on sections $({\alpha},\omega)\in \Gamma(J^1_{\tilde D}\tilde A)$ as in (\[eq\]) by $$\begin{aligned} D_X\tilde D_Y({\alpha})-D_Y\tilde D_X({\alpha})-l\tilde D_{[X,Y]}({\alpha})\end{aligned}$$ for $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$.
Notice that $\tilde c_1$ is the pullback of the $1$-cocycle $$\begin{aligned} c_1:J^1_\theta{\mathcal{G}}{\longrightarrow}s^*{\mathrm{Hom}}(\wedge^2TM,E)\end{aligned}$$ defined in remark \[el-caso-forma\], along the Lie groupoid map $$\begin{aligned} j^1p:J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}{\longrightarrow}J^1_\theta{\mathcal{G}},\quad\sigma_g\mapsto dp\circ\sigma_g.\end{aligned}$$ Hence $\tilde c_1$ is a cocycle with linearization given by the composition $$\begin{aligned} J^1_{\tilde D}\tilde A\overset{Lie(j^1p)}{{\longrightarrow}}J^1_DA\overset{Lie(c_1)}{{\longrightarrow}}{\mathrm{Hom}}(\wedge^2TM,E).\end{aligned}$$ Explicitly, for $({\alpha},\omega)\in\Gamma(J^1_{\tilde D}\tilde A)$ as in (\[eq\]) and $X,Y\in{\ensuremath{\mathfrak{X}}}(M)$, $$\begin{aligned} \begin{split} Lie(\tilde c_1)({\alpha},\omega)(X,Y)&=Lie(c_1)(\tilde l({\alpha}),\tilde l(\omega))(X,Y)=Lie(c_1)(\tilde l({\alpha}),\tilde D({\alpha}))(X,Y)\\ &=D_X\tilde D_Y({\alpha})-D_Y\tilde D_X({\alpha})-l\tilde D_{[X,Y]}({\alpha}). \end{split}\end{aligned}$$ Lemmas \[lemma:3459724\] and \[4418\] yield proposition \[410\] as shown below. As mentioned before, conditions (\[cont\]) and (\[b1\]) are equivalent to the fact that $ev$ and $\tilde c_1$ vanish respectively. From this it is clear that if conditions (\[cont\]) and (\[b1\]) hold then so do (\[co\]) and (\[b2\]) respectively. For the converse, assume that ${\tilde{{\mathcal{G}}}}$ has connected $s$-fibers. As $ev$ and $\tilde c_1$ are 1-cocycles with $Lie(ev)=Lie(\tilde c_1)=0$, then $dev$ and $d\tilde c_1$ vanish on vectors tangent to the $s$-fibers (see the corresponding equation in the proof of lemma \[4217\]). This implies that $ev$ vanishes on $(J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}})^0$, the connected component of the $s$-fibers. On the other hand, as the map $pr:J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}}\to {\tilde{{\mathcal{G}}}}$ is a surjective submersion (see proposition \[prop421\]), then $(J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}})^0$ is mapped onto ${\tilde{{\mathcal{G}}}}^0$, i.e. the image equals ${\tilde{{\mathcal{G}}}}$ since ${\tilde{{\mathcal{G}}}}$ is $s$-connected. This means that for any $g\in{\tilde{{\mathcal{G}}}}$, one can choose a splitting $\sigma_g:T_{s(g)}M\to T_g{\tilde{{\mathcal{G}}}}$ of the source map, whose image lies in ${\tilde{{\mathcal{H}}}}$, with the property that $$\begin{aligned} \theta(dp\circ\sigma_g(T_{s(g)}M))=0.\end{aligned}$$ As the vector space ${\tilde{{\mathcal{H}}}}_g$ can be decomposed as the direct sum $$\begin{aligned} {\tilde{{\mathcal{H}}}}_g=\sigma_g(T_{s(g)}M)\oplus {\tilde{{\mathcal{H}}}}^s_g\end{aligned}$$ and $dp({\tilde{{\mathcal{H}}}}^s_g)=R_{p(g)}dp({\tilde{{\mathcal{H}}}}^s_{t(g)})=R_{p(g)}{\tilde{\theta}}({\tilde{{\mathcal{H}}}}^s_{t(g)})=0$, then we can conclude that $\theta(dp({\tilde{{\mathcal{H}}}}_g))=0$ for any $g\in{\tilde{{\mathcal{G}}}}$, i.e. condition (\[cont\]) holds. A similar computation shows that if $\tilde c_1$ vanishes on $(J^1_{{\tilde{{\mathcal{H}}}}}{\tilde{{\mathcal{G}}}})^0$, then it vanishes everywhere (see remark \[ayuda\]). Now we proceed with the proof of theorem \[theorem-mc\]. As ${\tilde{{\mathcal{H}}}}$ is $s$-transversal, any vector on ${\tilde{{\mathcal{G}}}}$ can be written as a linear combination (with coefficients in $C^{\infty}({\tilde{{\mathcal{G}}}})$) of vector fields tangent to ${\tilde{{\mathcal{H}}}}$ together with right invariant vector fields on ${\tilde{{\mathcal{G}}}}.$ For this reason, to compute $MC({\tilde{\theta}},\theta)$, it suffices to calculate it on three types of pairs of vectors $(X,Y)$, namely, 1.
$X={\alpha}^r\in{\ensuremath{\mathfrak{X}}}^r({\tilde{{\mathcal{G}}}})$ is a right invariant vector field and $Y$ is tangent to ${\tilde{{\mathcal{H}}}}$, 2. $X={\alpha}^r$ and $Y={\beta}^r$ are both right invariant vector fields, and 3. $X,Y$ are both tangent to ${\tilde{{\mathcal{H}}}}$. In the first case, we have that for any $g\in{\tilde{{\mathcal{G}}}}$ $$\begin{aligned} \label{Mc} \begin{split} MC({\tilde{\theta}},\theta)({\alpha}^r_{g},Y_{g})&=-D^t_Y({\tilde{\theta}}({\alpha}^r))(g)-l\circ{\tilde{\theta}}[{\alpha}^r,Y](g)\\ &=-D^t_Y({\tilde{\theta}}({\alpha})^r)(g)+l\circ{\tilde{\theta}}[{\alpha}^r,Y](g)\\ &=-D^t_Y(\tilde l({\alpha})^r)(g)+l\circ{\tilde{\theta}}[{\alpha}^r,Y](g), \end{split}\end{aligned}$$ where in the second equality we use that the multiplicativity of ${\tilde{\theta}}$ implies that $$\begin{aligned} {\tilde{\theta}}({\alpha}^r)={\tilde{\theta}}({\alpha})^r=\tilde l({\alpha})^r.\end{aligned}$$ On the one hand, one can easily show that $D^t_Y(\tilde l({\alpha})^r)=D_{dt(Y)}(\tilde l({\alpha}))\circ t$. On the other, lemma \[commutator\] implies that $$\begin{aligned} \begin{split} {\tilde{\theta}}([{\alpha}^r,Y]_{g})&=g\cdot (L_{\alpha}{\tilde{\theta}})(Y)\\ &=g\cdot\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\varphi^{{\epsilon}}_{{\alpha}^r}(g))^{-1}\cdot(\varphi^\epsilon_{{\alpha}^r})^*{\tilde{\theta}}|_{\varphi^\epsilon_{{\alpha}^r}(g)}\\ &=g\cdot\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\varphi_{{\alpha}}^{\epsilon}(t(g))\cdot g)^{-1}\cdot {\tilde{\theta}}(dm(d\varphi^\epsilon_{\alpha}(dt(Y)),Y))\\ &=g\cdot\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}g^{-1}\cdot(\varphi^{{\epsilon}}_{{\alpha}}(t(g)))^{-1}\cdot {\tilde{\theta}}(dm(d\varphi^\epsilon_{\alpha}(dt(Y)),Y))\\ &=\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\varphi^{{\epsilon}}_{{\alpha}}(t(g)))^{-1}\cdot ({\tilde{\theta}}(d\varphi^\epsilon_{\alpha}(dt(Y)))-\varphi_{{\alpha}}^{\epsilon}(t(g))\cdot{\tilde{\theta}}(Y))\\ &=\frac{d}{d{\epsilon}}\big{|}_{{\epsilon}=0}(\varphi^{{\epsilon}}_{{\alpha}}(t(g)))^{-1}\cdot {\tilde{\theta}}(d\varphi^\epsilon_{\alpha}(dt(Y)))=\tilde D_{dt(Y)}({\alpha})(t(g)), \end{split}\end{aligned}$$ where we used that the flow of a right invariant vector field ${\alpha}^r$ is given by $\varphi_{{\alpha}^r}^{\epsilon}(g)=\varphi_{{\alpha}}^{\epsilon}(t(g))\cdot g$ and therefore for a fixed ${\epsilon}$, $d\varphi_{{\alpha}^r}^{\epsilon}=dm(d\varphi_{{\alpha}}^{\epsilon}\circ dt,id)$. Expression (\[Mc\]) then becomes $$\begin{aligned} MC({\tilde{\theta}},\theta)({\alpha}^r,Y)=D_{dt(Y)}(\tilde l({\alpha}))(t(g))-l(\tilde D_{dt(Y)}({\alpha})(t(g))).\end{aligned}$$ As ${\tilde{{\mathcal{H}}}}$ is $t$-transversal (see remarks \[mult-equiv\] and \[transversalidad\]), then the above equation implies that $MC({\tilde{\theta}},\theta)=0$ if and only if $D\circ \tilde l-l\circ \tilde D=0.$ For the second case, $$\begin{aligned} \label{mC} \begin{split} MC({\tilde{\theta}},\theta)({\alpha}^r,{\beta}^r)&=D^t_{{\alpha}^r}({\tilde{\theta}}({\beta}^r))-D^t_{{\beta}^r}({\tilde{\theta}}({\alpha}^r))\\&\quad-l\circ{\tilde{\theta}}[{\alpha}^r,{\beta}^r]-\{{\tilde{\theta}}({\alpha}^r),{\tilde{\theta}}({\beta}^r)\}_D\\& =D^t_{{\alpha}^r}({\tilde{\theta}}({\beta})^r)-D^t_{{\beta}^r}({\tilde{\theta}}({\alpha})^r)\\&\quad-l\circ dp([{\alpha}^r,{\beta}^r])-\{{\tilde{\theta}}({\alpha})^r,{\tilde{\theta}}({\beta})^r\}_D, \end{split}\end{aligned}$$ where we use that $dp|_{T^s{\tilde{{\mathcal{G}}}}}={\tilde{\theta}}|_{T^s{\tilde{{\mathcal{G}}}}}$.
As $d_{g}t({\alpha}^r)=\rho({\alpha}_{t(g)})$ for any $g\in {\tilde{{\mathcal{G}}}}$, then it is easy to see that $$\begin{aligned} D^t_{{\alpha}^r}({\tilde{\theta}}({\beta})^r)=D_{\rho({\alpha})}({\tilde{\theta}}({\beta}))\circ t\quad\text{and}\quad D^t_{{\beta}^r}({\tilde{\theta}}({\alpha})^r)=D_{\rho({\beta})}({\tilde{\theta}}({\alpha}))\circ t.\end{aligned}$$ Also, $$\begin{aligned} \begin{split} T({\tilde{\theta}}({\alpha})^r,{\tilde{\theta}}({\beta})^r)&=T({\tilde{\theta}}({\alpha}),{\tilde{\theta}}({\beta}))\circ t\\ &=(D_{\rho({\alpha})}({\tilde{\theta}}({\beta}))-D_{\rho({\beta})}({\tilde{\theta}}({\alpha}))-l[{\tilde{\theta}}({\alpha}),{\tilde{\theta}}({\beta})])\circ t; \end{split}\end{aligned}$$ hence, (\[mC\]) becomes $$\begin{aligned} \begin{split} -l&\circ dp[{\alpha}^r,{\beta}^r]+l[{\tilde{\theta}}({\alpha}),{\tilde{\theta}}({\beta})]\circ t=-l[dp({\alpha}^r),dp({\beta}^r)]\\&+l[dp({\alpha}),dp({\beta})]\circ t =-l[dp({\alpha}),dp({\beta})]^r+l[dp({\alpha}),dp({\beta})]\circ t=0. \end{split}\end{aligned}$$ where we used that $dp|_{\tilde A}={\tilde{\theta}}.$ For the third case, $$\begin{aligned} MC({\tilde{\theta}},\theta)(X,Y)=-l({\tilde{\theta}}[X,Y])=-\theta(dp[X,Y])=-\delta\theta(dp(X),dp(Y))\end{aligned}$$ where in the last equality we use that $p$ is a submersion and therefore $p^*\delta\theta=\delta p^*\theta$ (see lemma \[delta-commutes\]). From the above three cases it is clear that if $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to({\mathcal{G}},\theta)$ is a Lie-prolongation, then $MC({\tilde{\theta}},\theta)=0$. Conversely, if ${\tilde{{\mathcal{G}}}}$ is source-connected then we saw from the first and third case that $({\tilde{\theta}},\theta)$ satisfies the Maurer-Cartan equation if and only if conditions (\[co\]) and (\[b1\]) hold, which in this case, by proposition \[410\], is equivalent to the fact that $p:({\tilde{{\mathcal{G}}}},{\tilde{\theta}})\to({\mathcal{G}},\theta)$ is a Lie-prolongation. Higher Lie-prolongations ------------------------ ### Cartan towers; Corollary 1 Let $$\begin{aligned} \label{eq: Cartan tower} \cdots {\longrightarrow}({\mathcal{G}}^{k+1},{\mathcal{H}}^{k+1})\overset{p^{k+1}}{{\longrightarrow}}({\mathcal{G}}^{k},{\mathcal{H}}^{k})\overset{p^{k}}{{\longrightarrow}}\cdots {\longrightarrow}({\mathcal{G}}^{2},{\mathcal{H}}^{2})\overset{p^{2}}{{\longrightarrow}}({\mathcal{G}}^1,{\mathcal{H}}^1) \end{aligned}$$ be an infinite sequence of Pfaffian groupoids. For ease of notation, when there is no risk of confusion, we omit the upper index of the map $p^{k}:{\mathcal{G}}^k\to {\mathcal{G}}^{k-1}$. - A [**Cartan tower**]{} $({\mathcal{G}}^{\infty}, {\mathcal{H}}^{\infty},p^{\infty})$ is a sequence as in (\[eq: Cartan tower\]) where each Pfaffian groupoid is a Lie-prolongation of the previous one. - A [**Cartan resolution**]{} of a Pfaffian groupoid $({\mathcal{G}},{\mathcal{H}})$ is a Cartan tower with the property that there exists a Lie groupoid map $p:{\mathcal{G}}^{1}\to {\mathcal{G}}$ such that $$\begin{aligned} p:({\mathcal{G}}^{1},{\mathcal{H}}^1){\longrightarrow}({\mathcal{G}},{\mathcal{H}})\end{aligned}$$ is a Lie-prolongation of $({\mathcal{G}},{\mathcal{H}})$. Corollary 1 says that (under the usual conditions) there is a one-to-one correspondence between Cartan towers and Spencer resolutions. More precisely, given a tower of $s$-simply connected Lie groupoids $$\begin{aligned} \label{to1} \cdots {\longrightarrow}{\mathcal{G}}^{3}\overset{p^3}{{\longrightarrow}}{\mathcal{G}}^{2}\overset{p^{2}}{{\longrightarrow}}{\mathcal{G}}^1 \end{aligned}$$ (i.e.
all the maps are Lie groupoid morphisms and surjective submersions), and the induced tower of Lie algebroids $$\begin{aligned} \label{to2} \cdots {\longrightarrow}A_{3}\overset{l_3}{{\longrightarrow}}A_{2}\overset{l_{2}}{{\longrightarrow}}A_1, \end{aligned}$$ there is a 1-1 correspondence between: 1. multiplicative forms $\theta^k\in\Omega^1({\mathcal{G}}^k,t^*A_{k-1})$ making (\[to1\]) into a Cartan tower, 2. Spencer operators $D^k$ relative to $l_k$ making (\[to2\]) into a Spencer tower. This, of course, is a consequence of theorem \[prol-comp\]. The sequence $$\begin{aligned} (J^{\infty}{\mathcal{G}}, \mathcal{C}^{\infty}):\cdots {\longrightarrow}(J^{k+1}{\mathcal{G}},\mathcal{C}^{k+1})\overset{pr}{{\longrightarrow}}(J^{k}{\mathcal{G}},\mathcal{C}^{k})\overset{pr}{{\longrightarrow}}\cdots{\longrightarrow}(J^1{\mathcal{G}},\mathcal{C}^1) \end{aligned}$$ is an example of a Cartan tower. Taking the Lie functor we get the classical Spencer tower $(J^{\infty}A, D^{\infty\text{-}{\text{\rm clas}}},pr^{\infty})$ on the Lie algebroid $A$ of ${\mathcal{G}}$. See corollary \[cor:1349\]. Going back to the theory of Lie pseudogroups (see subsection \[Lie pseudogroups\]), for any smooth Lie pseudogroup $\Gamma$, one has a Cartan tower $$\begin{aligned} (\Gamma^{\infty}(\Gamma),\mathcal{C}^{\infty}): \cdots{\longrightarrow}(\Gamma^{(k)},\mathcal{C}_k){\longrightarrow}\cdots{\longrightarrow}(\Gamma^{(2)},\mathcal{C}_2){\longrightarrow}(\Gamma^{(1)},\mathcal{C}_1). \end{aligned}$$ See [@Ngo] where the author discusses the universal property (in the sense of proposition \[prop-un\]) of the tower above for $\Gamma\subset\operatorname{Diff}(M)$. ### The classical $k$-Lie prolongation Now we proceed to define the classical $k$-Lie prolongation space inductively, as in the case of Pfaffian bundles (see definition \[higher-prol\]). We remark that the smoothness results of section \[sec:class-prol-pfaff\] are still valid in this setting, for example propositions \[integrable\] and \[affine\], and corollary \[337\]. Let $({\mathcal{G}},{\mathcal{H}})$ be a Pfaffian groupoid. We say that the [**classical $k$-Lie prolongation space $P^k_{\mathcal{H}}({\mathcal{G}})$**]{} is [**smooth**]{} if 1. $(P_{\mathcal{H}}({\mathcal{G}}), {\mathcal{H}}^{(1)}),\dots,(P_{\mathcal{H}}^{k-1}({\mathcal{G}}),{\mathcal{H}}^{(k-1)})$ are smoothly defined, and 2. the classical prolongation space of $(P_{\mathcal{H}}^{k-1}({\mathcal{G}}),{\mathcal{H}}^{(k-1)})$ $$\begin{aligned} P^k_{\mathcal{H}}({\mathcal{G}}):=P_{{\mathcal{H}}^{(k-1)}}(P_{\mathcal{H}}^{k-1}({\mathcal{G}}))\end{aligned}$$ is smooth. In this case, we define the [**$k$-Lie prolongation of ${\mathcal{H}}$**]{}: $$\begin{aligned} {\mathcal{H}}^{(k)}:=({\mathcal{H}}^{(k-1)})^{(1)}\subset TP^k_{\mathcal{H}}({\mathcal{G}}).\end{aligned}$$ We say that the classical Lie prolongation space $P_{\mathcal{H}}^k({\mathcal{G}})$ is [**smoothly defined**]{} if, moreover, $$\begin{aligned} pr: P^k_{\mathcal{H}}({\mathcal{G}}){\longrightarrow}P_{\mathcal{H}}^{k-1}({\mathcal{G}})\end{aligned}$$ is a surjective submersion. In this case, $$\begin{aligned} \pi:(P_{\mathcal{H}}^k({\mathcal{G}}),{\mathcal{H}}^{(k)}){\longrightarrow}M\end{aligned}$$ (a Lie-Pfaffian groupoid: see proposition \[si-es\]) is called the [**classical $k$-Lie prolongation of $({\mathcal{G}},{\mathcal{H}})$.**]{} \[groupoid-prop2\] Let $({\mathcal{G}},{\mathcal{H}})$ be a Pfaffian groupoid and suppose that $P^k_{\mathcal{H}}({\mathcal{G}})$ is smoothly defined.
Then, $$\begin{aligned} pr^k_0:\operatorname{Bis}(P^k_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(k)}){\longrightarrow}\operatorname{Bis}({\mathcal{G}},{\mathcal{H}}) \end{aligned}$$ is a bijection of groups with inverse $j^k:\operatorname{Bis}({\mathcal{G}},{\mathcal{H}})\to \operatorname{Bis}(P^k_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(k)})$. The proof is completely analogous to that of proposition \[bundle-prop2\]. \[non-smooth-prolongations\]Of course as in the case of Pfaffian bundles one can define the $k$-prolongation of $({\mathcal{G}},{\mathcal{H}})$ without smoothness assumptions as $$\begin{aligned} P^k_{\mathcal{H}}({\mathcal{G}})=J^{k-1}(P_{\mathcal{H}}({\mathcal{G}}))\cap J^k{\mathcal{G}},\end{aligned}$$ where one takes jets of bisections. Note that proposition \[jet-prol\] also holds in this setting. Let $D:\Gamma(A)\to\Omega^1(M,E)$ be the Spencer operator associated to ${\mathcal{H}}$. \[corollary:auxiliar\]Let $k>0$ be an integer. If $P^k_{\mathcal{H}}({\mathcal{G}})\subset J^k{\mathcal{G}}$ is smoothly defined then it is a Lie subgroupoid and $$\begin{aligned} Lie(P^k_{\mathcal{H}}({\mathcal{G}}))=P^k_D(A).\end{aligned}$$ Moreover, $D^{(k)}:\Gamma(P^k_D(A))\to \Omega^1(M,A)$ is the Lie-Spencer operator associated to ${\mathcal{H}}^{(k)}.$ Apply proposition \[yase\] and remark \[yase2\] inductively. \[corollary:auxiliar2\] Let $k>0$ be an integer. If $P^k_{\mathcal{H}}({\mathcal{G}})\subset J^k{\mathcal{G}}$ is smoothly defined then $$\begin{aligned} {\mathfrak{g}}(P^k_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(k)})|_M={\mathfrak{g}}^{(k)}(A,D).\end{aligned}$$ From corollary \[ultimo-porfavor\], we have that $$\begin{aligned} {\mathfrak{g}}(P^k_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(k)})|_M={\mathfrak{g}}(P^k_D(A),D^{(k)})={\mathfrak{g}}^{(k)}(A,D)\end{aligned}$$ by corollary \[corollary:auxiliar\] and proposition \[inv-prol\]. ### Formally integrable Pfaffian groupoids A Pfaffian groupoid $({\mathcal{G}},{\mathcal{H}})$ is called [**formally integrable**]{} if all the classical $k$-Lie prolongations $$\begin{aligned} P_{\mathcal{H}}({\mathcal{G}}),P^1_{\mathcal{H}}({\mathcal{G}}),\ldots, P^k_{\mathcal{H}}({\mathcal{G}}),\ldots\end{aligned}$$ are smoothly defined. If $({\mathcal{G}},{\mathcal{H}})$ is formally integrable, we obtain the [**classical Cartan resolution**]{}, $$\begin{aligned} (P^\infty_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(\infty)}):\cdots {\longrightarrow}(P_{\mathcal{H}}^{k}({\mathcal{G}}),{\mathcal{H}}^{(k)})\overset{pr}{{\longrightarrow}}\cdots{\longrightarrow}(P_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(1)})\overset{pr}{{\longrightarrow}}({\mathcal{G}},{\mathcal{H}}).\end{aligned}$$ Taking the Lie functor of $(P^\infty_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(\infty)})$ we obtain the classical Spencer resolution $$\begin{aligned} (P_D^{\infty}(A),D^{(\infty)}).\end{aligned}$$ The main importance of formally integrable Pfaffian groupoids is the following existence result for analytic Pfaffian groupoids, which is a consequence of theorem \[resolution\]. Let $({\mathcal{G}},{\mathcal{H}})$ be an analytic Pfaffian groupoid. If $({\mathcal{G}},{\mathcal{H}})$ is formally integrable, then for every point $g\in{\mathcal{G}}$ there exist real analytic solutions of $({\mathcal{G}},{\mathcal{H}})$ passing through $g$. In the spirit of theorem \[workable-pfaffian-bundles\], we will prove the following workable criteria for formally integrable Pfaffian groupoids.
\[workable-pfaffian-groupoids\] Let $({\mathcal{G}},{\mathcal{H}})$ be a Pfaffian groupoid and let $D:\Gamma(A)\to \Omega^1(M,E)$ be the Spencer operator associated to ${\mathcal{H}}$. If 1. $pr:P_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$ is surjective, 2. $\mathfrak{g}^{(1)}(A,D)$ is a vector bundle over $M$, and 3. $H^{(l,2)}({\mathfrak{g}}(A,D))=0$ for $l\geq0$, then $({\mathcal{G}},{\mathcal{H}})$ is formally integrable. In the previous theorem we are considering the $\partial_D$-Spencer cohomology of ${\mathfrak{g}}^{(1)}(A,D)$. See definition \[exten\]. The main difference between the previous result and theorem \[workable-pfaffian-bundles\] is that of replacing the global symbol space ${\mathfrak{g}}({{\mathcal{H}}})$ and its cohomology (which are bundles over ${\mathcal{G}}$), by the easier-to-handle cohomology of the symbol space ${\mathfrak{g}}(A,D)$ (both the cohomology and ${\mathfrak{g}}(A,D)$ are bundles over $M$!). This situation reflects again the phenomenon occurring in Lie groupoids: their multiplicative structure allows us to obtain global information out of infinitesimal data. In this case the result will follow from the invariance of the objects under consideration, such as the symbol map $$\begin{aligned} \partial_{\mathcal{H}}:{\mathfrak{g}}({\mathcal{H}}){\longrightarrow}{\mathrm{Hom}}(s^*TM,T{\mathcal{G}}/{\mathcal{H}})\end{aligned}$$ and the $\partial_{\mathcal{H}}$-Spencer cohomology. #### Invariance of the $\partial_{\mathcal{H}}$-Spencer cohomology In this section we fix a multiplicative Pfaffian distribution ${\mathcal{H}}\subset T{\mathcal{G}}$, with $D:\Gamma(A)\to \Omega^1(M,E)$ the associated Spencer operator given in theorem \[t2\].\ Recall that the symbol map of ${\mathcal{H}}\subset T{\mathcal{G}}$ is given by $$\begin{aligned} \partial_{\mathcal{H}}:{\mathfrak{g}}({\mathcal{H}}){\longrightarrow}{\mathrm{Hom}}(s^*TM,T{\mathcal{G}}/{\mathcal{H}}), \quad \partial_{\mathcal{H}}(v_g)(X_{s(g)})=r_gc_{\mathcal{H}}(v_g,\tilde X_g),\end{aligned}$$ where $\tilde X_g\in {\mathcal{H}}_g$ is any vector $s$-projectable to $X\in T_{s(g)}M$. Notice that by definition of $\partial_D$, the symbol map of $D$, $$\begin{aligned} \label{pb} \partial_D(r_{g^{-1}}(v))(\lambda_{\sigma_g}(X))=c_{\mathcal{H}}(r_{g^{-1}}(v),\lambda_{\sigma_g}(X))\end{aligned}$$ for any $v\in {\mathfrak{g}}({\mathcal{H}})_g$, $X\in T_{s(g)}M$ and $\sigma_g\in J^1_{\mathcal{H}}{\mathcal{G}}.$ \[label\] For a fixed $g\in{\mathcal{G}}$, $v\in {\mathfrak{g}}({\mathcal{H}})_g$, and $X\in T_{s(g)}M,$ the expression (\[pb\]) does not depend on the choice of $\sigma_g\in J^1_{\mathcal{H}}{\mathcal{G}}$. Moreover, $$\begin{aligned} r_{g^{-1}}\partial_{\mathcal{H}}(v)(X)=\partial_D(r_{g^{-1}}(v))(\lambda_{\sigma_g}(X)).\end{aligned}$$ This is a consequence of lemma \[lemma: delta theta is multiplicative\] as follows. Writing $r_{g^{-1}}(v)$ and $\lambda_{\sigma_g}(X)$ as $$\begin{aligned} r_{g^{-1}}(v)=dm(v,0_{g^{-1}}),\qquad \lambda_{\sigma_g}(X)=dm(\sigma_g(X),di\circ\sigma_g(X)),\end{aligned}$$ one has that $$\begin{aligned} \begin{split} \partial_D(r_{g^{-1}}(v))(\lambda_{\sigma_g}(X))&=c_{\mathcal{H}}(r_{g^{-1}}(v),\lambda_{\sigma_g}(X))\\&=c_{\mathcal{H}}(v,\sigma_g(X))+{\text{\rm Ad}\,}^{\mathcal{H}}_{g^{-1}}c_{\mathcal{H}}(0,di\circ\sigma_g(X))\\&=c_{\mathcal{H}}(v,\sigma_g(X))=r_{g^{-1}}\partial_{\mathcal{H}}(v)(X). \end{split}\end{aligned}$$ \[inv-prol\]Let $k\geq0$ be a natural number. Then the following statements are equivalent: 1.
${\mathfrak{g}}^{(k)}(A,D)\subset S^kT^*\otimes {\mathfrak{g}}$, the $k$-prolongation of $\partial_D$, is a vector bundle over $M$. 2. ${\mathfrak{g}}^{(k)}({\mathcal{H}})\subset s^*S^kT^*\otimes {\mathcal{H}}^s$, the $k$-prolongation of $\partial_{\mathcal{H}}$, is a vector bundle over ${\mathcal{G}}$. Moreover, for any $g\in {\mathcal{G}}$ there is an isomorphism of vector spaces $$\begin{aligned} {\mathfrak{g}}^{(k)}({\mathcal{H}})_g\simeq{\mathfrak{g}}^{(k)}(A,D)_{t(g)}.\end{aligned}$$ \[expli\]The isomorphism of $$\begin{aligned} {\mathfrak{g}}^{(k)}({\mathcal{H}})_g\subset S^kT^*_{s(g)}\otimes {\mathcal{H}}^s_g\end{aligned}$$ with $$\begin{aligned} {\mathfrak{g}}^{(k)}(A,D)_{t(g)}\subset S^kT^*_{t(g)}\otimes {\mathfrak{g}}(A,D)_{t(g)}\end{aligned}$$ is given on $S^kT^*M$ by the action $\lambda_{\sigma_g}:T_{s(g)}M\to T_{t(g)}M$ where $\sigma_g\in J^1_{\mathcal{H}}{\mathcal{G}}$ is any element, and on ${\mathcal{H}}^s$ by right translation $R_{g^{-1}}:{\mathcal{H}}^s_g\to {\mathfrak{g}}(A,D)_{t(g)}$. For $k=0$, the statement is true by lemma \[s\] as ${\mathfrak{g}}_M({\mathcal{H}})={\mathfrak{g}}(A,D)$, where the isomorphism $t^*{\mathfrak{g}}(A,D)\simeq {\mathfrak{g}}({\mathcal{H}})$ is given by right translation. For $k=1$, let $g\in{\mathcal{G}}$ and choose any $\sigma_g\in J^1_{\mathcal{H}}{\mathcal{G}}$. For a linear map $\varphi:T_{s(g)}M\to {\mathcal{H}}^s_g$, let $\bar\varphi:T_{t(g)}M\to {\mathfrak{g}}_{t(g)}$ be the unique linear map such that the diagram $$\begin{aligned} \xymatrix{ T_{t(g)}M \ar[r]^{\bar\varphi} & {\mathfrak{g}}_{t(g)} \\ T_{s(g)}M \ar[u]^{\lambda_{\sigma_g}}_{\simeq} \ar[r]_{\varphi} &{\mathcal{H}}^s_g \ar[u]^{\simeq}_{r_{g^{-1}}} }\end{aligned}$$ commutes. It is clear that this correspondence gives an isomorphism $$\begin{aligned} {\mathrm{Hom}}(T_{s(g)}M,{\mathcal{H}}^s_g)\simeq{\mathrm{Hom}}(T_{t(g)}M,{\mathfrak{g}}_{t(g)}).\end{aligned}$$ Now, $\varphi\in {\mathfrak{g}}^{(1)}({\mathcal{H}})_g$ if and only if $\partial_{\mathcal{H}}(\varphi(X))(Y)-\partial_{\mathcal{H}}(\varphi(Y))(X)=0$ for any $X,Y\in T_{s(g)}M$. By lemma \[label\], $$\begin{aligned} \begin{split} \partial_{\mathcal{H}}(\varphi(X))(Y)-&\partial_{\mathcal{H}}(\varphi(Y))(X)=\\&=r_g\partial_D(\bar\varphi(\lambda_{\sigma_g}(X)))(\lambda_{\sigma_g}(Y))-r_g\partial_D(\bar\varphi(\lambda_{\sigma_g}(Y)))(\lambda_{\sigma_g}(X)). \end{split}\end{aligned}$$ Hence, ${\mathfrak{g}}^{(1)}(A,D)_{t(g)}\simeq{\mathfrak{g}}^{(1)}({\mathcal{H}})_g$. Using the isomorphisms $\lambda_{\sigma_g}:T_{s(g)}M\to T_{t(g)}M$ and $R_{g^{-1}}:{\mathcal{H}}^s_g\to {\mathfrak{g}}_{t(g)}$ and the case $k=1$, the general case follows. \[final\] For any $g\in{\mathcal{G}}$, and any $p,k\geq 0$ integers, there is an isomorphism of vector spaces $$\begin{aligned} H^{(p,k)}({\mathfrak{g}}(A,D))_{t(g)}\simeq H^{(p,k)}({\mathfrak{g}}({\mathcal{H}}))_g,\end{aligned}$$ where $H^{(p,k)}({\mathfrak{g}}(A,D))$ and $H^{(p,k)}({\mathfrak{g}}({\mathcal{H}}))$ are the cohomology groups of the $\partial_D$ and $\partial_{\mathcal{H}}$-Spencer cohomology respectively (see definition \[exten\]). Let $g\in {\mathcal{G}}$, and fix $k\geq 1$ an integer.
We want to show that there is an isomorphism $$\begin{aligned} l_k:\wedge^pT_{t(g)}^*\otimes{\mathfrak{g}}^{(k)}(A,D)_{t(g)}\simeq \wedge^pT_{s(g)}^*\otimes{\mathfrak{g}}^{(k)}({\mathcal{H}})_{g}\end{aligned}$$ which makes the diagram $$\begin{aligned} \label{partial-commutes} \xymatrix{ \wedge^pT_{t(g)}^*\otimes{\mathfrak{g}}^{(k)}(A,D)_{t(g)} \ar[r]^{\partial} \ar[d]_{l_k} & \wedge^{p+1}T_{t(g)}^*\otimes{\mathfrak{g}}^{(k-1)}(A,D)_{t(g)} \ar[d]^{l_{k-1}} \\ \wedge^pT_{s(g)}^*\otimes{\mathfrak{g}}^{(k)}({\mathcal{H}})_{g} \ar[r]^{\partial} & \wedge^{p+1}T_{s(g)}^*\otimes{\mathfrak{g}}^{(k-1)}({\mathcal{H}})_{g} }\end{aligned}$$ commutative, where $\partial$ is the formal differentiation (see definition \[fdo\]). We will carry out the proof for $p=0$, the general case can be proven analogously. Throughout we will be using the exact sequence of vector bundles over $J^k{\mathcal{G}}$ $$\begin{aligned} 0{\longrightarrow}S^kT^*\otimes T^s {\mathcal{G}}\overset{i}{{\longrightarrow}} TJ^k{\mathcal{G}}\overset{dpr}{{\longrightarrow}} TJ^{k-1}{\mathcal{G}}{\longrightarrow}0.\end{aligned}$$ Choose an element $\sigma_g\in J^1_{\mathcal{H}}{\mathcal{G}}$ and let $(b,\xi)\in J^{k+1}{\mathcal{G}}$ be any element with the property that it projects to $\sigma_g$. Regarding $(b,\xi)$ as a splitting $\xi:T_{s(g)}M\to T_bJ^{k}{\mathcal{G}}$, then it has the property that $\xi\subset C_k$ (see proposition \[prol-jet\]). By example \[cartandist\], we have that for an element $v\in S^kT^*_{s(g)}\otimes T^s_g{\mathcal{G}}=\mathcal{C}_k^s|_g$ $$\begin{aligned} \partial(v)(X)=\partial_k(v)(X),\end{aligned}$$ where $\partial_k$ is the symbol map of $\mathcal{C}_k$. By lemma \[label\], and taking into account that the associated Spencer operator of $\mathcal{C}_k$ is $D^{k\text{-}{\text{\rm clas}}}$, we have that $$\begin{aligned} \partial_k(v)(X)=R_{b}\partial_{D^{k\text{-}{\text{\rm clas}}}}(R_{b^{-1}}v)(\lambda_\xi(X))=R_b\partial(R_{b^{-1}}v)(\lambda_{\sigma_g}(X)),\end{aligned}$$ where in the last equality we used that the actions $\lambda_\xi,\lambda_{\sigma_g}:T_{s(g)}M\to T_{t(g)}M$ are equal ($\xi$ projects to $\sigma_g$), and $\partial_{D^{k\text{-}{\text{\rm clas}}}}=\partial$ (see example \[classicalSpencertower\]). Now, by lemma \[multi\], right translation by $b\in J^{k}{\mathcal{G}}$ on an element $$\begin{aligned} \Psi\in S^{k-1}T_{t(g)}^*\otimes T^s_{t(g)}{\mathcal{G}}\subset T_{t(g)}J^{k-1}{\mathcal{G}}\simeq TJ^k{\mathcal{G}}/\mathcal{C}_k|_{t(g)}\end{aligned}$$ evaluated at $X_1,\ldots,X_{k-1}\in T_{s(g)}M$, is given by $$\begin{aligned} \begin{split} R_b(\Psi)(X_1,\ldots,X_{k-1})&=R_{pr(b)}(\Psi(\lambda_b^{-1}(X_{1}),\ldots,\lambda_b^{-1}(X_{k-1})))\\ &=R_{g}(\Psi(\lambda_{\sigma_g}^{-1}(X_{1}),\ldots,\lambda_{\sigma_g}^{-1}(X_{k-1}))), \end{split}\end{aligned}$$ where we used again that $b$ projects to $\sigma_g$ and therefore $\lambda_b=\lambda_{\sigma_g}$. Now, for the isomorphism $$\begin{aligned} l_k:{\mathfrak{g}}^{(k)}(A,D)_{t(g)}\simeq {\mathfrak{g}}^{(k)}({\mathcal{H}})_{g}\end{aligned}$$ we use the one given in remark \[expli\], as for $$\begin{aligned} l_{k-1}:T_{t(g)}^*\otimes{\mathfrak{g}}^{(k-1)}(A,D)_{t(g)}\simeq T_{s(g)}^*\otimes{\mathfrak{g}}^{(k-1)}({\mathcal{H}})_{g}\end{aligned}$$ we use the action $\lambda_{\sigma_g}^{-1}$ on the $TM$ component and right translation by $b$ on ${\mathfrak{g}}^{(k-1)}(A,D)_{t(g)}$. It is left to the reader to check that the diagram commutes. 
For a multiplicative one-form $\theta\in\Omega^1({\mathcal{G}},t^*E)$, one has analogous versions of the results \[label\], \[inv-prol\] and \[final\], where in this case $D:\Gamma(A)\to \Omega^1(M,E)$ is the Spencer operator given in theorem \[t1\]. A completely analogous proof to that of theorem \[workable-pfaffian-bundles\] adapts to check that if $P_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$ is surjective, ${\mathfrak{g}}^{(1)}({\mathcal{H}})$ is a vector bundle over ${\mathcal{G}}$ and $H^{(l,2)}({\mathfrak{g}}({\mathcal{H}}))=0$ for any integer $l\geq0$, then $({\mathcal{G}},{\mathcal{H}})$ is formally integrable. Now, proposition \[inv-prol\] says that ${\mathfrak{g}}^{(1)}({\mathcal{H}})$ is a vector bundle if and only if ${\mathfrak{g}}^{(1)}(A,D)$ is a vector bundle over $M$. Finally, corollary \[final\] implies that $H^{(l,2)}({\mathfrak{g}}({\mathcal{H}}))_g=H^{(l,2)}({\mathfrak{g}}(A,D))_{t(g)}$. Results related to the theory developed in this thesis ------------------------------------------------------ In this section we state and prove some results closely related to the language and notions developed in this thesis. Again we come across the phenomenon occurring in Lie groupoids (under some topological conditions): global information can be recovered from infinitesimal data.\ Subsections \[involutive multiplicative distributions\] and \[Contact groupoids\] are largely based on the preprint [@Maria]. Parts of subsection \[Cartan connections\] can be found in the same preprint. ### Integrability theorem for involutive multiplicative distributions; Theorem 5 {#involutive multiplicative distributions} Let now ${\mathcal{H}}$ be a multiplicative distribution on a Lie groupoid ${\mathcal{G}}$ which is source connected, and consider the associated symbol space ${\mathfrak{g}}= {\mathcal{H}}^{s}|_{M}$, representation $E= A/{\mathfrak{g}}$, and the associated Spencer operator $$D: {\ensuremath{\mathfrak{X}}}(M)\times \Gamma(A)\to \Gamma(E).$$ Let $$\partial_{D}: {\mathfrak{g}}\to \textrm{Hom}(TM, E)$$ be the symbol map of $D$. Remark that, if $\partial_D= 0$, then $D$ induces a connection $$\nabla^{E}: {\ensuremath{\mathfrak{X}}}(M)\times \Gamma(E)\to \Gamma(E) ,\ \nabla^{E}_{X}[{\alpha}]= D_X({\alpha}).$$ \[t6\] A multiplicative distribution ${\mathcal{H}}\subset T{\mathcal{G}}$ is involutive if and only if the symbol map $\partial_D$ vanishes and the connection $\nabla^{E}$ on $E$ is flat. \[ex-flat-Cartan\] Let $\rho: \mathfrak{h}\to {\ensuremath{\mathfrak{X}}}(M)$ be an infinitesimal action of a Lie algebra $\mathfrak{h}$ on $M$. Consider the associated Lie algebroid $\mathfrak{h}\ltimes M$ (see example \[inf-act\]). In this case the canonical flat connection $$\nabla^{\textrm{flat}}: {\ensuremath{\mathfrak{X}}}(M)\times C^{\infty}(M, \mathfrak{h}){\longrightarrow}C^{\infty}(M, \mathfrak{h})$$ satisfies the conditions from the previous theorem with $E= \mathfrak{h}\ltimes M$, $l= \textrm{Id}$. Hence one obtains a flat involutive ${\mathcal{H}}$ on the integrating groupoid. This can best be seen when the infinitesimal action comes from the action of a Lie group $H$ on $M$. Then $\mathfrak{h}\ltimes M$ is the Lie algebroid of the action groupoid $H\ltimes M$, which as a manifold is $H\times M$ (see example \[ex-action-groupoids\]). The flat involutive ${\mathcal{H}}$ on $H\times M$ is simply the foliation with the leaves $\{h\}\times M$ (for $h\in H$). See also corollary \[flat-Cartan\].
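For a minimal concrete instance of example \[ex-flat-Cartan\] (the specific choices below are made only for illustration), one may let the circle group $H=S^1$ act on $M=S^1$ by rotations. The action groupoid is then $\Sigma=S^1\ltimes S^1$, which as a manifold is $S^1\times S^1$, with (say) $s(h,x)=x$ and $t(h,x)=h\cdot x$, and the flat involutive distribution is the tangent distribution of the foliation by the leaves $\{h\}\times S^1$: $$\begin{aligned} {\mathcal{H}}_{(h,x)}=\{0\}\times T_xS^1\subset T_{(h,x)}(S^1\times S^1).\end{aligned}$$ One can check directly that the data of theorem \[t6\] is the expected one: since $\ker(ds)_{(h,x)}=T_hS^1\times\{0\}$, one has ${\mathcal{H}}\cap\ker(ds)=0$, so that $$\begin{aligned} {\mathfrak{g}}=0,\qquad E=A\simeq \mathbb{R}_{S^1},\qquad l=\textrm{Id},\qquad D_X(f)={L}_X(f)\quad\text{for } f\in C^{\infty}(S^1)=\Gamma(A),\end{aligned}$$ i.e. $\partial_D$ vanishes and $\nabla^E=\nabla^{\textrm{flat}}$ is flat, in agreement with the (obvious) involutivity of ${\mathcal{H}}$.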
If ${\mathcal{G}}$ is $s$-simply connected then there is a 1-1 correspondence between 1. involutive multiplicative distributions ${\mathcal{H}}$ on ${\mathcal{G}}$, and 2. a flat vector bundle $(E, \nabla^{E})$ over $M$, a $\nabla^{E}$-parallel tensor $T: \Lambda^2E\to E$ and a surjective vector bundle map $l: A\to E$ satisfying $$l([\alpha, \beta])= \nabla^{E}_{\rho(\alpha)}(l(\beta))- \nabla^{E}_{\rho(\beta)}(l(\alpha))+ T(l(\alpha), l(\beta)), \ \ \forall\ \alpha, \beta\in \Gamma(A).$$ The last equation defines $T$ in terms of $\nabla^{E}$ and $l$; the only problem is whether it is well-defined, but this immediately follows from (\[horizontal\]). The rest follows from the fact that $D$ is determined by $\nabla^{E}$ (a condition that itself implies that $\partial_D= 0)$. Hence one just has to rewrite the equation (\[horizontal2\]) in terms of $\nabla^{E}$ and $T$, and one finds the condition that $T$ is $\nabla^E$-parallel. #### Proof of theorem \[t6\] Let ${\mathcal{H}}\subset T{\mathcal{G}}$ be a multiplicative distribution, let $(\theta, E)$ be its canonically associated Pfaffian system given in lemma \[lemma-from-H-to-theta\], and $(D,l)$ its associated Spencer operator given explicitly in theorem \[t2\]. Recall that $$\mathfrak{g} = ({\mathcal{H}}\cap\ker {d}s)|_M,\qquad E = A/\mathfrak{g}$$ and that $\partial_D$ denotes the symbol representation $$\partial_D: \mathfrak{g}{\longrightarrow}{\mathrm{Hom}}(TM, E), \ \ \partial_D(\beta)(X)= D_{X}(\beta).$$ As we have already pointed out, the involutivity of ${\mathcal{H}}$ is controlled by the bracket modulo ${\mathcal{H}}$; using $\theta$ to identify $T{\mathcal{G}}/{\mathcal{H}}$ with $t^*E$, this is $$c_{{\mathcal{H}}}: {\mathcal{H}}\times {\mathcal{H}}{\longrightarrow}t^*E, \ c_{{\mathcal{H}}}(X, Y)= \theta([X, Y]).$$ $c_{{\mathcal{H}}}({\mathcal{H}}, {\mathcal{H}}^s)= 0$ if and only if $\partial_D= 0$. For any $y\in M$, $Y_y\in T_yM$, $\beta_y\in \mathfrak{g}_y$, extending them to sections $Y \in \Gamma({\mathcal{H}})$ and $\beta \in \Gamma({\mathcal{H}}^s)$ and using them in the formula for $D$ in theorem \[t2\], we have $$\partial_D(\beta_y)(Y_y)= D_{Y}(\beta)(y)= \theta_{1_y}([\beta, Y])= c^{{\mathcal{H}}}_{y}(\beta_y, Y_y),$$ where, as before, we identify $y$ with $1_y$. For arbitrary $\sigma_g\in {J}^{1}_{{\mathcal{H}}}{\mathcal{G}}$ with $s(g)= x$, $t(g)=y$ and $X_x\in T_xM$, $\beta_y\in \mathfrak{g}_y$ we write $$\lambda_g(X_x)= (dt)_g(\sigma_g(X_x))= (dm)_{g, g^{-1}}(\sigma_g(X_x), (di)_g\sigma_g(X_x)),$$ $$\beta_y= (dm)_{g, g^{-1}}(R_g(\beta_y), 0_{g^{-1}})$$ and we use lemma \[lemma: delta theta is multiplicative\]: $$c^{{\mathcal{H}}}_{g}(\sigma_g(X_x), R_{g}(\beta_y))= c^{{\mathcal{H}}}_{y}(\lambda_g(X_x), \beta_y)= \partial_D(\beta_y)(\lambda_g(X_x)).$$ Hence $\partial_D= 0$ if and only if $c^{{\mathcal{H}}}_{g}(\sigma_g(T_xM), {\mathcal{H}}^{s}_{g})= 0$ for all $\sigma_g\in {J}^{1}_{{\mathcal{H}}}{\mathcal{G}}$. Note that for any $\sigma_g$ and any $\xi: T_xM\to \mathfrak{g}_y$ linear, $$\sigma_{g}^{\epsilon}(X_x)= \sigma_{g}(X_x)+ \epsilon R_{g}(\xi(X_x))$$ belongs to ${J}^{1}_{{\mathcal{H}}}{\mathcal{G}}$ for $\epsilon$ small enough, so the last equation also implies that $c^{{\mathcal{H}}}_{g}({\mathcal{H}}_g, {\mathcal{H}}^{s}_{g})= 0$, and then the equivalence with $\partial_D= 0$ is clear.
Recall again that the cocycle $$c_1: {J}^{1}_{{\mathcal{H}}}{\mathcal{G}}{\longrightarrow}s^{\ast} {\mathrm{Hom}}(\Lambda^2TM, E)$$ is defined by $$c_1(\sigma_g)(X_x, Y_x)= {\text{\rm Ad}\,}^{{\mathcal{H}}}_{g^{-1}} c^{{\mathcal{H}}}_{g}(\sigma_g(X_x), \sigma_g(Y_{x}))$$ for $\sigma_g\in {J}^{1}_{{\mathcal{H}}}{\mathcal{G}}$ with $s(g)= x$, $X_x, Y_x\in T_xM$. This cocycle, together with $\partial_D$, takes care of the involutivity of ${\mathcal{H}}$. The following is now clear: ${\mathcal{H}}$ is involutive if and only if $\partial_D= 0$ and $c_1= 0$. Note that, under the assumption $\partial_D \equiv 0$, $c_1(\sigma_g)$ only depends on $g$ and not on the entire splitting $\sigma_g$ at $g$, i.e. $c_1= {pr}^{*}\bar c_1$, the pull-back along the projection ${pr}: {J}^{1}_{{\mathcal{H}}}{\mathcal{G}}\to {\mathcal{G}}$ of the $1$-reduced curvature map $$\begin{aligned} \bar c_1: {\mathcal{G}}{\longrightarrow}s^{\ast} {\mathrm{Hom}}(\Lambda^2TM, E),\quad g\mapsto c_1(\sigma_g)\end{aligned}$$ where $\sigma_g$ is any element in $J^1_{\mathcal{H}}{\mathcal{G}}$ which projects to $g$. See definition \[curvature-map1\]. However, even in this case, we will continue to work with $c_1$ because ${\mathcal{G}}$ does not act canonically on ${\mathrm{Hom}}(\Lambda^2TM, E)$, and solving this problem for $\bar c_1$ requires some work. For the following corollary recall that the linearization of $c_1$ is given by the map $$\begin{aligned} \varkappa_D:J^1_DA{\longrightarrow}{\mathrm{Hom}}(\wedge^2TM,E)\end{aligned}$$ which, under the Spencer decomposition, is given at the level of sections $({\alpha},\omega)\in \Gamma(A)\oplus\Omega^1(M,A)$ by $$\begin{aligned} \label{eq:1} D_X\omega(Y)-D_Y\omega(X)-l\omega[X,Y].\end{aligned}$$ See proposition \[curvatura\]. Assume that $\partial_D= 0$ and consider the induced connection $\nabla^{E}$ on $E$ ($\nabla^{E}_{X}(l(\alpha))= D_X(\alpha)$). Then $$\varkappa_D(\alpha, \omega)(X, Y)= \nabla^{E}_{X}\nabla^{E}_{Y}(l(\alpha))- \nabla^{E}_{Y}\nabla^{E}_{X}(l(\alpha))- \nabla^{E}_{[X, Y]}(l(\alpha)),$$ hence $\varkappa_D$ vanishes if and only if $\nabla^{E}$ is flat. From the formula for $\varkappa_D(\alpha, \omega)(X, Y)$ in equation (\[eq:1\]), we obtain that $$\varkappa_D(\alpha, \omega)(X, Y)= \nabla^{E}_{X}(l\omega(Y))- \nabla^{E}_{Y}(l\omega(X))- l\omega([X, Y]).$$ Using that $l\circ \omega(Z)= D_Z(\alpha)= \nabla^{E}_{Z}(l(\alpha))$, we obtain the formula from the statement of the corollary. The following finishes the proof of theorem \[t6\]. \[lemma: passing to algebroid\] If $\partial_D= 0$ and ${\mathcal{G}}$ has connected source fibers, then $c_1= 0$ if and only if $\varkappa_D= 0$. As mentioned before, a cocycle vanishes on the connected component of the $s$-fibers if and only if its linearization vanishes. Now, the situation is simpler here because, as we already remarked, $c_1$ as a section lives already on ${\mathcal{G}}$: $c_1= {pr}^{\ast}(\bar c_1)$. Also, as proven before (see proof of lemma \[4217\]), for a Lie groupoid map which is also a surjective submersion, the connected components of the $s$-fibers are mapped surjectively to the connected components of the $s$-fibers. But $pr:J^1_{\mathcal{H}}{\mathcal{G}}\to {\mathcal{G}}$ is a surjective submersion by proposition \[prop421\], and ${\mathcal{G}}$ is source connected itself. This implies that $\bar c_1$ vanishes and therefore $c_1$ vanishes itself (!).
### Cartan connections on groupoids {#Cartan connections} The notion of Cartan connections on a Lie groupoid ${\mathcal{G}}$ arises when looking at the adjoint representation of ${\mathcal{G}}$ [@Abadrep]. They can be seen as the global counterpart of Blaom’s Cartan algebroids [@Blaom]. It is straightforward to see that the definition from [@Abadrep] is equivalent to: A Cartan connection on a Lie groupoid ${\mathcal{G}}$ over $M$ is a multiplicative distribution ${\mathcal{H}}\subset T{\mathcal{G}}$ which is complementary to $\textrm{Ker}(ds)$. As for any Ehresmann connection, we will denote the inverse of $(ds)|_{{\mathcal{H}}}$ by $$\textrm{hor}: TM{\longrightarrow}{\mathcal{H}}\subset T{\mathcal{G}}.$$ On the infinitesimal side, we deal with the Cartan algebroids. These are classical connections $$\nabla: {\ensuremath{\mathfrak{X}}}(M)\times \Gamma(A) {\longrightarrow}\Gamma(A)$$ on the vector bundle underlying a Lie algebroid $A$, whose basic curvature $R_\nabla$ vanishes. See subsection \[Cartan algebroids\]. Theorems \[t2\] and \[t6\] give: For any Cartan connection ${\mathcal{H}}$ on a Lie groupoid ${\mathcal{G}}$ over $M$, $$\begin{aligned} \label{nabla-basic} \nabla: {\ensuremath{\mathfrak{X}}}(M)\times \Gamma(A) {\longrightarrow}\Gamma(A), \ \nabla_X\alpha(x)= ds([\textrm{hor}(X),\alpha^r]_x) \end{aligned}$$ is a Cartan connection on the Lie algebroid $A$ of ${\mathcal{G}}$. When ${\mathcal{G}}$ is $s$-simply connected, this gives a bijection between Cartan connections ${\mathcal{H}}$ on ${\mathcal{G}}$ and Cartan connections $\nabla$ on the algebroid $A$. Moreover, ${\mathcal{H}}$ is involutive if and only if $\nabla$ is flat. Under topological conditions, the existence of flat Cartan connections implies that the groupoid must come from the action of a Lie group. For instance: \[flat-Cartan\] If ${\mathcal{G}}$ is an $s$-simply connected Lie groupoid over a compact $1$-connected manifold $M$ and if ${\mathcal{G}}$ admits a flat Cartan connection ${\mathcal{H}}$, then ${\mathcal{G}}$ is isomorphic to an action Lie groupoid $H\ltimes M$ associated to a Lie group $H$ acting on $M$ (as defined in example \[ex-flat-Cartan\]). The flatness of the associated $\nabla$ and the $1$-connectedness of $M$ imply that $A$ is a trivial bundle: $A= \mathfrak{h}\times M$ for some vector space $\mathfrak{h}$ and the constant sections correspond to flat sections. The vanishing of the basic curvature implies that the bracket of constant sections is again constant; hence one has an induced Lie algebra structure on $\mathfrak{h}$; the anchor of $A$ becomes an infinitesimal action. Due to the compactness of $M$, one can integrate this to an action of the $1$-connected Lie group $H$ whose Lie algebra is $\mathfrak{h}$. Then $H\ltimes M$ and ${\mathcal{G}}$ are two Lie groupoids with $1$-connected $s$-fibers and with the same Lie algebroid; hence they are isomorphic. In the same spirit as corollary \[flat-Cartan\], we have the following result for Pfaffian groupoids of finite type. Let $({\mathcal{G}},{\mathcal{H}})$ be a Pfaffian groupoid with associated Spencer operator $D:\Gamma(A)\to \Omega^1(M,A)$. We say that the Pfaffian groupoid $({\mathcal{G}},{\mathcal{H}})$ is of [**finite type**]{} if there exists an integer $k$ such that ${\mathfrak{g}}^{(k)}(A,D)=0$. The smallest $k$ with this property is called the [**order of ${\mathcal{H}}$**]{}. Let $({\mathcal{G}},{\mathcal{H}})$ be a Pfaffian groupoid of finite type over a compact $1$-connected manifold $M$.
If the $s$-connected component $(P^k_{\mathcal{H}}({\mathcal{G}}))^0$ is $1$-connected, where $k$ is the order of ${\mathcal{H}}$, then there is an inclusion of groups $$\begin{aligned} H\subset \operatorname{Bis}({\mathcal{G}},{\mathcal{H}}),\quad h\mapsto \sigma_h\end{aligned}$$ for some Lie group $H$. Moreover, if ${\mathcal{G}}$ is source connected then for any $g\in {\mathcal{G}}$, there exists $h\in H$ such that $$\begin{aligned} \sigma_h(s(g))=g.\end{aligned}$$ Let $(P^k_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(k)})$ be the $k$-Lie prolongation of $({\mathcal{G}},{\mathcal{H}})$. From corollary \[corollary:auxiliar2\] we have that $${\mathfrak{g}}(P^k_{\mathcal{H}}({\mathcal{G}}),{\mathcal{H}}^{(k)})|_M={\mathfrak{g}}^{(k)}(A,D)=0.$$ Lemma \[isom\] implies that $({\mathcal{H}}^{(k)})^s=0$ which means that ${\mathcal{H}}^{(k)}$ is a Cartan connection on $P^k_{\mathcal{H}}({\mathcal{G}})$. Restricting ${\mathcal{H}}^{(k)}$ to the open subgroupoid $P^k_{\mathcal{H}}({\mathcal{G}})^0\subset P_{\mathcal{H}}^k({\mathcal{G}})$ we have a Cartan connection on a 1-connected groupoid. Moreover, its associated Spencer operator $D^{(k)}$ on $P_D^k(A)$ is flat (see lemma \[lemma:auxiliary\] and corollary \[corollary:auxiliar\]), and therefore, by theorem \[t6\], ${\mathcal{H}}^{(k)}$ is flat on $P^k_{\mathcal{H}}({\mathcal{G}})^0$. Applying corollary \[flat-Cartan\] we have that $P^k_{\mathcal{H}}({\mathcal{G}})^0$ is isomorphic to an action groupoid $H\ltimes M$ (see example \[ex-action-groupoids\]). Note that the group of solutions of $(H\ltimes M, \mathcal{F})$, where $\mathcal{F}$ is the trivial foliation on $H\ltimes M$, is canonically identified with $H$. Therefore, under the identification $(H\ltimes M, \mathcal{F})\simeq (P^k_{\mathcal{H}}({\mathcal{G}})^0,{\mathcal{H}}^{(k)})$, we get that $H\simeq\operatorname{Bis}(P^k_{\mathcal{H}}({\mathcal{G}})^0,{\mathcal{H}}^{(k)})$ as a group. By proposition \[groupoid-prop2\] $$\begin{aligned} pr^k_0: H{\longrightarrow}\operatorname{Bis}({\mathcal{G}},{\mathcal{H}}),\quad h\mapsto \sigma_h\end{aligned}$$ is an injective group morphism. Moreover, if ${\mathcal{G}}$ is source connected, then the image of the submersion $pr^k_0:P^k_{\mathcal{H}}({\mathcal{G}})\to {\mathcal{G}}$ is ${\mathcal{G}}$. With this we get that, for $g\in{\mathcal{G}}$, taking an element $(h,s(g))\in H\ltimes M\simeq P^k_{\mathcal{H}}({\mathcal{G}})^0$ in the preimage of $g$, we obtain $\sigma_h(s(g))=g.$ ### Contact groupoids; Corollary 2 {#Contact groupoids} In analogy with symplectic groupoids and Poisson geometry, contact groupoids are the global counterpart of Jacobi manifolds. Although this has been known for a while (see e.g. [@Jacobi] and the references therein), the existing approaches have been rather indirect (by using “Poissonization”, applying the corresponding results from Poisson geometry, then passing to quotients). What happens is that contact groupoids require the use of non-trivial coefficients; therefore, our main theorem now allows for a direct approach. Furthermore, using the slightly more general setting of Kirillov’s local Lie algebras, the approach becomes less computational and more conceptual. The difference between Jacobi manifolds and local Lie algebras, or the difference between their global counterparts, is completely analogous to the difference between the two related but non-equivalent notions of contact manifolds that one finds in the literature. Here we follow the terminology of [@contact]. Let $M$ be a manifold.
- A **contact structure** on $M$ is a contact form $\theta$, i.e. a regular $1$-form $\theta\in \Omega^1(M)$ with the property that the restriction of $d\theta$ to the distribution $H_{\theta}= \textrm{Ker}(\theta)$ is pointwise non-degenerate. - A **contact structure in the wide sense** on $M$ is a contact hyperfield, i.e. a codimension one distribution $H\subset TM$ which is maximally non-integrable. Here maximal non-integrability can be understood globally as follows. First, $H$ induces a line bundle $$L= TM/H .$$ The maximal non-integrability of $H$ means that $c$ is non-degenerate, where $$\label{I-H} c: H\times H {\longrightarrow}L, \ \ (X, Y)\mapsto [X, Y]\ \textrm{mod}\ H$$ is the curvature map of $H$ (see definition \[curvature-map\]). The contact case is obtained when $L$ is the trivial line bundle. Passing to groupoids: Let $\Sigma$ be a Lie groupoid over $M$. - A **contact structure** on the groupoid $\Sigma$ is a pair $(\theta, r)$ consisting of a smooth map $r: \Sigma\to \mathbb{R}$ (the Reeb cocycle) and a contact form $\theta\in \Omega^1(\Sigma)$ which is $r$-multiplicative in the sense that $$m^{\ast}\theta= {pr}_{2}^{\ast}(e^{-r}){pr}_{1}^{\ast}\theta+ {pr}_{2}^{\ast}\theta .$$ - A **contact structure in the wide sense** on $\Sigma$ is a contact hyperfield ${\mathcal{H}}$ on $\Sigma$ which is multiplicative. Regarding the first notion, note that the equation above implies that, indeed, $r$ is a $1$-cocycle; hence it induces a representation $\mathbb{R}_r$ of $\Sigma$ (cf. remark \[remark-cocycles\]). Also, one has the following immediate but important remark, which will allow us to reconstruct $\theta$ from associated Spencer operators: The $r$-multiplicativity of $\theta$ is equivalent to the fact that $e^{r}\theta\in \Omega^1(\Sigma)$ is multiplicative as a form with values in the representation $\mathbb{R}_r$. The following indicates the conceptual advantage of the “wide” point of view. \[lemma-contact-versus-wide\] Assume that the $s$-fibers of $\Sigma$ are connected. Then the construction $(\Sigma, \theta, r)\mapsto (\Sigma, \textrm{Ker}(\theta))$ induces a 1-1 correspondence between contact groupoids $(\Sigma, \theta, r)$ and contact groupoids in the wide sense $(\Sigma, {\mathcal{H}})$ with the property that the associated line bundle is trivial. It is clear that $\textrm{Ker}(\theta)$ has the desired properties. Conversely, assume that we start with $(\Sigma, {\mathcal{H}})$ so that $L$ is trivial. First of all, we know that the multiplicativity of ${\mathcal{H}}$ makes $L$ into a representation of $\Sigma$ (cf. subsection \[the dual point of view\]); we also know that a representation of $\Sigma$ on a trivial line bundle is uniquely determined by a $1$-cocycle (cf. e.g. remark \[remark-cocycles\]); this gives rise to the cocycle $r$. Then the canonical projection $T\Sigma \to L$ gives a $1$-form $\overline{\theta}\in \Omega^1(\Sigma)$ which, by proposition \[lemma-from-H-to-theta\] is multiplicative as a form with coefficients in $\mathbb{R}_r$. Hence, by the previous lemma, $\theta:= e^{-r}\overline{\theta}$ is $r$-multiplicative. We now pass to the corresponding infinitesimal structures. Let $M$ be a manifold. 
- A **Jacobi structure** on $M$ is a pair $(\Lambda, R)$ consisting of a bivector $\Lambda$ and a vector field $R$ (the Reeb vector field) satisfying $$[\Lambda, \Lambda]= 2 R\wedge \Lambda,\ \ [\Lambda, R]= 0.$$ - A **Jacobi structure in the wide sense** on $M$ is a pair $(L, \{\cdot, \cdot \})$ consisting of a line bundle $L$ over $M$ and a Lie bracket $\{\cdot, \cdot \}$ on the space of sections $\Gamma(L)$, with the property that it is local, i.e. $$\textrm{supp}(\{u, v\})\subset \textrm{supp}(u)\cap \textrm{supp}(v)\ \ \ \ \forall\ u, v\in \Gamma(L).$$ The second notion appears in the literature under various names. Kirillov introduced them under the name of local Lie algebras [@Kirillov]; Marle uses the term Jacobi bundle [@Marle]. Our term “wide” is ad-hoc, for compatibility with the previous definitions; however, we will also say that $L$ is a Jacobi bundle. For a Jacobi bundle $L$, Kirillov proves that $\{\cdot, \cdot\}$ must be a differential operator of order at most one in each argument. When $L= \mathbb{R}_M$ is the trivial line bundle, this implies that the bracket must be of type $$\{f, g\}_{\Lambda, R}= \Lambda(df, dg)+ {L}_R(f)g- f{L}_R(g)\ \ \ (f, g\in \Gamma(\mathbb{R}_M)= C^{\infty}(M))$$ for some bivector $\Lambda$ and vector field $R$. A straightforward check shows that this satisfies the Jacobi identity if and only if $(\Lambda, R)$ is a Jacobi structure. Hence, one obtains the following well-known result: \[Jacobi-wide\] $(\Lambda, R)\mapsto (\mathbb{R}_{M}, \{\cdot, \cdot\}_{\Lambda, R})$ defines a bijection between Jacobi structures and local Lie algebras with trivial underlying line bundle. Next, we sketch the connection between contact groupoids and Jacobi structures, pointing out the relevance of the Spencer operator and of theorem \[t2\]. #### The Lie functor {#the-lie-functor .unnumbered} In one direction (the Lie functor), starting with a contact groupoid in the wide sense $(\Sigma, {\mathcal{H}})$, there is an induced Jacobi bundle on $M$. The relevance of the Spencer operator $D$ associated to ${\mathcal{H}}$ is the following: since ${\mathcal{H}}$ is contact, it follows that the vector bundle map associated to $D$ (cf. example \[rk-convenient’\]), $$j_{D}: A{\longrightarrow}{J}^1L,$$ is an isomorphism, where $A$ is the Lie algebroid of $\Sigma$ and $L$ is the line bundle associated to ${\mathcal{H}}$. Identifying $A$ with ${J}^1L$, we obtain a Lie bracket $[\cdot, \cdot]$ on ${J}^1L$. On $\Gamma(L)$ we define the bracket $$\{u, v\}:= {pr}([j^1u, j^1v]).$$ $(L, \{\cdot, \cdot\})$ is a Jacobi structure in the wide sense. The bracket is clearly local, hence we are left with proving the Jacobi identity. For this it suffices to show that $$[j^1u, j^1v]= j^1\{u, v\}$$ for all $u, v\in \Gamma(L)$. Note that, after the identification of $A$ with ${J}^1L$, $D$ is identified with the classical Spencer operator (see example \[higher jets on algebroids\]); in particular, $D(\xi)= 0$ if and only if $\xi$ is the first jet of a section. Fixing $u$ and $v$, the equation (\[eq: compatibility-1\]) for the Spencer operator implies that $D$ kills $[j^1u, j^1v]$, hence $[j^1u, j^1v]= j^1s$ for some $s$. Applying ${pr}$, we find $s= \{u, v\}$. Of course, starting with a contact groupoid $(\Sigma, \theta, r)$, lemmas \[lemma-contact-versus-wide\] and \[Jacobi-wide\] ensure the existence of a Jacobi structure $(\Lambda, R)$ on the base. #### Integrability {#integrability .unnumbered} Conversely, start with a Jacobi structure in the wide sense $(L, \{\cdot, \cdot\})$ on $M$.
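A simple instance to keep in mind (given here only as an illustration, and not needed in what follows) is the one coming from a Poisson manifold: if $\Lambda$ is a Poisson bivector on $M$, then $(\Lambda, R=0)$ is a Jacobi structure, since $[\Lambda,\Lambda]=0=2\cdot 0\wedge\Lambda$ and $[\Lambda,0]=0$, and by lemma \[Jacobi-wide\] the corresponding Jacobi structure in the wide sense is $(\mathbb{R}_M,\{\cdot,\cdot\}_{\Lambda,0})$ with $$\{f, g\}_{\Lambda, 0}= \Lambda(df, dg),$$ i.e. the usual Poisson bracket.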
With the Lie functor in mind, the strategy is quite clear: consider the induced Lie algebroid structure on ${J}^1L$ with the property that $$[j^1u, j^1v]= j^1\{u, v\}$$ for all $u, v\in \Gamma(L)$ and show that the classical Spencer operator $D$ is a Spencer operator with respect to this Lie algebroid structure. Then, if ${J}^1L$ comes from a Lie groupoid $\Sigma$, assumed to be $s$-simply connected, integrating $D$ gives the multiplicative hyperfield ${\mathcal{H}}$ on $\Sigma$ and the fact that $j_{D}$ is an isomorphism implies that ${\mathcal{H}}$ is contact. For instance, when $(L, \{\cdot, \cdot\})$ comes from a Jacobi structure $(\Lambda, R)$, ${J}^1L= T^*M\oplus \mathbb{R}$ and, starting from the previous formula, one finds the Lie algebroid $$A_{\Lambda, R}:= T^{\ast}M\oplus \mathbb{R},$$ with anchor $\rho_{\Lambda, R}= \rho$ given by $$\rho(\eta, \lambda)= \Lambda^{\sharp}(\eta)+ \lambda R$$ and bracket $$\begin{aligned} & [(\eta, 0), (\xi, 0)]_{\Lambda, R}= ([\eta, \xi]_{\Lambda}- i_{R}(\eta\wedge \xi), \Lambda(\eta,\xi)), \nonumber\\ & [(0, 1), (\xi, 0)]_{\Lambda, R}= ({L}_{R}(\xi), 0), \nonumber \\ & [(0, 1), (0, 1)]_{\Lambda, R}= (0, 0)\nonumber\end{aligned}$$ (extended to general elements using bilinearity and the Leibniz identity). The classical Spencer operator becomes $$D: \Gamma(A_{\Lambda, R}){\longrightarrow}\Omega^1(M), \ \ D(\eta, f)= \eta+ {d}f,\ l(\eta, f)= f .$$ Of course, checking directly that $D$ is a Spencer operator on $A_{\Lambda, R}$ is rather tedious. The advantage of the “wide” point of view is that it provides a more compact and computation-free approach. So, let’s return to our $(L, \{\cdot, \cdot\})$. It is rather unfortunate (and surprising) that the definition of the associated Lie algebroid ${J}^1L$ is missing from the literature. The remaining part of this section is mostly devoted to this point (after that, the part with the Spencer operator is immediate). The starting point is the result of Kirillov mentioned above: $\{u, v\}$ must be a differential operator of order at most one in each argument. To fix notation, recall that a differential operator $$P: \Gamma(E){\longrightarrow}\Gamma(F)$$ of order at most one, where $E$ and $F$ are vector bundles, has a symbol $$\sigma_{P}\in \Gamma(TM\otimes \textrm{Hom}(E, F)).$$ Its defining property is $$P(fs)= fP(s) + \sigma_{P}(df)(s)$$ for all $s\in \Gamma(E)$, $f\in C^{\infty}(M)$. Of course, when $E= F= L$ is one-dimensional, we get $\sigma_{P}\in {\ensuremath{\mathfrak{X}}}(M)$. Fixing $u\in \Gamma(L)$, applying this to the operator $\{u, \cdot\}$, we denote the associated symbol by $\rho^1(u)$. This defines a map $$\rho^1: \Gamma(L){\longrightarrow}{\ensuremath{\mathfrak{X}}}(M),$$ characterised by the property that $$\{u, fv\}= f\{u, v\}+ {L}_{\rho^1(u)}(f) v$$ for all $u, v\in \Gamma(L)$. A straightforward computation with the Jacobi identity for $u, fv, w$ combined with the last equations implies that $\rho^1$ is a Lie algebra map: $$\rho^1(\{u, v\})= [\rho^1(u), \rho^1(v)].$$ Unlike the case of Lie algebroids, $\rho^1$ need not be $C^{\infty}(M)$-linear.
However, it is a differential operator of order at most one; hence it satisfies the equation $$\rho^1(fu)= f\rho^1(u)+ \rho^2(df\otimes u) ,$$ where $\rho^2$ is the symbol of $\rho^1$, interpreted as a vector bundle map $$\rho^2: \textrm{Hom}(TM, L){\longrightarrow}TM .$$ We define the anchor of ${J}^1L$ by putting $\rho^1$ and $\rho^2$ together (see (\[J-decomposition\])): $$\rho: \Gamma({J}^1L)\cong \Gamma(L)\oplus \Omega^1(M, L)\stackrel{\rho^1-\rho^2}{{\longrightarrow}} {\ensuremath{\mathfrak{X}}}(M).$$ Using the classical Spencer operator, one can write more compactly: $$\rho(\xi)= \rho^1({pr}(\xi))- \rho^2(D^{\textrm{clas}}(\xi)).$$ Finally, the Lie bracket for ${J}^1L$ is, as we wanted, given by $$[j^1u, j^1v]:= j^1(\{u, v\}),$$ extended by the Leibniz identity to arbitrary sections (see also remark \[curiosity\]). $({J}^1L, [\cdot, \cdot],\rho)$ is a Lie algebroid. The Leibniz identity holds by construction. The Jacobi identity $$\textrm{Jac}(\xi_1, \xi_2, \xi_3)= 0$$ is clearly satisfied when the $\xi_i$’s are first jets of sections of $L$. Hence it suffices to remark that the expression $\textrm{Jac}$ is $C^{\infty}(M)$-linear in all arguments. Using the Leibniz identity, we see that this is equivalent to the fact that the anchor is a Lie algebra map: $$\rho([\xi_1, \xi_2])= [\rho(\xi_1), \rho(\xi_2)].$$ This time, the Leibniz identity implies that the difference between the two terms is $C^{\infty}(M)$-bilinear, hence it suffices to check it when $\xi_1= j^1u$, $\xi_2= j^1v$. This is equivalent to $\rho^1$ being a Lie algebra map. The classical Spencer operator $D^{\textrm{clas}}: \Gamma({J}^1L) {\longrightarrow}\Omega^1(M, L)$ is a Spencer operator on the Lie algebroid ${J}^1L$. Of course, the action $\nabla$ of ${J}^1L$ on $L$ is the one induced by formula (\[coneccion\]); hence it is characterised by $$\nabla_{j^1u}(v)= \{u, v\}.$$ First note that equation (\[eq: compatibility-2\]) is satisfied (it is $C^{\infty}(M)$-linear in the arguments and, on holonomic sections, it reduces to the previous formula for $\nabla$). In turn, this implies that the remaining compatibility equation is $C^{\infty}(M)$-linear in the arguments; hence, again, it suffices to check it on holonomic sections, where it becomes $0= 0$. \[curiosity\]As a curiosity, note that $\nabla$ and $[\cdot, \cdot]$ can be written on general elements using the Spencer operator $D= D^{\textrm{clas}}$ as: $$\nabla_{\xi}(v)= \{{pr}(\xi), v\}+ D(\xi)(\rho^1(v)),$$ $$[\xi, \eta]= j^1\{{pr}\xi, {pr}\eta\}+ {L}_{\xi}(D\eta)- {L}_{\eta}(D\xi).$$ See . In particular, we obtain the following integrability result (corollary 4), which should be compared with the one of [@Jacobi] (and please compare the proofs as well!). Given a Jacobi structure in the wide sense $(L, \{\cdot, \cdot\})$ over $M$, if the associated Lie algebroid ${J}^1L$ comes from an $s$-simply connected Lie groupoid $\Sigma$, then $\Sigma$ carries a contact hyperfield ${\mathcal{H}}$ making it into a contact groupoid in the wide sense; ${\mathcal{H}}$ is uniquely characterized by the fact that the associated Spencer operator coincides with the classical one. If $(L, \{\cdot, \cdot\})$ comes from a Jacobi structure $(\Lambda, R)$, then we end up with a contact groupoid $(\Sigma, \theta, r)$. Summary (Samenvatting) {#samenvatting .unnumbered} ============ The motivation for this thesis stems from the study of symmetries of partial differential equations (PDEs).
Partial differential equations serve as models for various phenomena in everyday life, including, for example, the propagation of light and sound in the atmosphere. In mathematics they also play an essential role in describing geometric structures. At the end of the 19th century, the Norwegian mathematician Sophus Lie [@Lie] embarked on a deep study of their symmetries, in an attempt to better understand the geometry of PDEs. To get an idea of the symmetries of a space, try to picture a bicycle wheel. The wheel can be rotated by an angle $x$ without changing its shape, provided we ignore the spokes and other details. Subsequently, the wheel can be rotated once more by an angle $y$; alternatively, it could have been rotated directly by an angle $x+y$ from the start. All three of these rotations are symmetries of the wheel, and we observe that two symmetries can be composed to form a new one. In particular, we can rotate by an angle of $0$ degrees; this is called the identity symmetry. Composing it with another symmetry changes nothing, comparable to multiplying by the number one. Finally, if we first turn the wheel clockwise by an angle $x$ and then counterclockwise by the same angle (in other words, a rotation by the inverse of $x$), we of course obtain the identity symmetry again. Lie’s work was the starting point of the modern notion of a Lie group, which realizes a ‘smooth’ (that is, differentiable) concept of symmetry. In the example above, the circle is a Lie group. The circle *acts* on the wheel: every point of the circle represents an angle of rotation. Moreover, the circle is ‘smooth’ in the sense that it has no corners. In reality, Sophus Lie worked with a more general concept, nowadays called a Lie pseudogroup, in which the symmetries are allowed to act only on part of the space. One of his greatest discoveries was that Lie pseudogroups can be understood better by approximating them linearly. This linearization process allows us to pass from complicated differential equations to simpler, linear equations, and back again. In his description of the infinitesimal data he restricted himself mainly to a special kind of Lie pseudogroups: those of finite type. They correspond to Lie groups, and their infinitesimal data are known as Lie algebras. It was only Élie Cartan [@Cartan1904; @Cartan1905; @Cartan1937] who, at the beginning of the 20th century, made progress on Lie pseudogroups of *infinite* type. He discovered that the infinitesimal data of such a Lie pseudogroup, in analogy with Lie algebras for Lie pseudogroups of finite type, are given by a differential 1-form. This approach led to the discovery of differential forms, which nowadays play a central role in differential geometry, and to the theory of ‘Exterior Differential Systems’ (EDS) [@BC]. At the heart of his interpretation of Lie’s work lies a *jet bundle* $X$ together with a so-called Cartan form (also known as a contact form) $\theta$. The infinitesimal data of the original Lie pseudogroup is then encoded in a Maurer-Cartan-type equation for $\theta$.
This brings us to the title of this thesis: the beautiful interplay between $X$ and $\theta$ arises not so much from the fact that $X$ is a jet bundle and $\theta$ is its Cartan form, but from the compatibility of $\theta$ with the multiplicative structure of $X$. This leads to the notion of a Pfaffian groupoid: a Lie groupoid $X$ together with a *multiplicative* differential form $\theta$. This abstraction turns out to bring to the surface the geometric insight behind Cartan’s elegant theory. In this thesis we study Pfaffian groupoids from two equivalent points of view: the definition above in terms of a differential 1-form, and the dual picture in terms of a subbundle of the tangent bundle (a distribution). The reader will notice that in all chapters (with the exception of Chapter 4) we revisit a good deal of standard theory on the geometry of PDEs (such as prolongation and Spencer cohomology), in an attempt to convey a more conceptual understanding of this theory. In Chapter 1 we cover the necessary preliminaries, such as Ehresmann’s theory of jet bundles, Spencer cohomology, and Lie groupoids and their algebroids. Since the theory in the later chapters builds on the classical theory (which works with a general fiber bundle instead of a groupoid), it is important to first develop a good conceptual picture of that theory. Accordingly, Chapters 2 and 3 are devoted to the classical theory, starting with the linear picture of relative connections in Chapter 2 and then moving on to the global picture in Chapter 3. In a certain sense it is more natural to descend from the global to the linear; we believe, however, that the present order yields a clearer exposition. In Chapter 4 the multiplicative aspect makes its entrance, in the form of Lie groupoids and their infinitesimal counterparts, the Lie algebroids. This chapter is in a certain sense independent of the rest of the thesis; its results are important for the subsequent chapters, but play no role in the preceding ones. The main result of this chapter is the integrability theorem for multiplicative $k$-forms with coefficients. It states that, under the assumptions one would expect, the multiplicative $k$-form can be recovered from its infinitesimal data (in the form of a $k$-Spencer operator). Here the case $k=1$ is the most relevant for the rest of the thesis. Chapters 5 and 6 form the core of the theory. The classical theory is simplified by the multiplicativity condition for Pfaffian groupoids and acquires its “Lie-theoretic” character. In contrast to Chapter 3, integrability results (Theorem 2) can be used to recover the Pfaffian groupoid completely from its infinitesimal data. To this end, Chapter 5 first focuses on the infinitesimal data of a Pfaffian groupoid: Spencer operators. They are the natural incarnations of relative connections in the world of Lie algebroids, in the sense that they are compatible with the anchor and the Lie bracket. Finally, in Chapter 6 we treat some independent related results. Theorem 5 gives the infinitesimal condition that guarantees Frobenius involutivity for Pfaffian distributions. Corollary 2 describes how Jacobi structures integrate to contact groupoids.
Acknowledgements {#acknowledgements .unnumbered} ================ There are many people that I would like to thank at this point. Some of them were of fundamental help for my academic career, while others made me feel at home in Utrecht. I would like to start with my advisor Marius Crainic. You, Marius, are truly my mathematical father. I am really grateful to you for guiding me so closely during my PhD. Thanks to your advice and example I am learning how to be a researcher. Your help on matters beyond my research will leave a permanent mark on my life. Secondly, I am grateful to Prof. Ieke Moerdijk. With your seminars and walks to the forest, you formed a big, friendly group of mathematicians, all willing to share and discuss many different and interesting topics. Thanks to my dear colleagues and friends at the math department: Ionut, Pedro, Roy, David, Matias, Daniele, Ivan, João, Ori, Boris, Florian, Dana, Dave, Dmitry, Sergey, Gil... I am sure I ended up in the best mathematical group thanks to you guys. I learned a lot from you and you became my family in Utrecht. Special thanks go to Ivan: I enjoyed working with you a lot, and I learned a lot of interesting mathematics from you. But above all, you together with Léa were my adopted parents in Utrecht, and with you both I felt among my own people. I would also like to thank Jean, Helga, Ria, Cécile, and Wilke for helping me with many practical issues at the department. Someone else I would like to thank is Cate. Together we have created a beautiful friendship, which will prevail regardless of the paths we take. Daniele, you made the period of writing pass with the scent of a sweet dream; I have been falling in love with you over and over again ever since. I am very grateful to you because you were the one who kept me in good shape, always watching over me and helping me again and again with the corrections of my thesis. My parents, Jorge Iván and Lina, deserve the greatest of thanks for being there whenever I needed it, and for their unconditional support. I have never told you this, but I believe that you are wonderful parents, and that I am the luckiest daughter and sister ever. Curriculum Vitae {#curriculum-vitae .unnumbered} ================ María Amelia Salazar Pinzón was born in Manizales, Colombia, on the 29th of September 1983. She studied at Colegio San Luis Gonzaga, where she completed high school in 2001. In 2002, she moved to Medellín, where she read mathematics as an undergraduate at Universidad Nacional de Colombia sede Medellín. In 2006, she obtained her bachelor’s degree under the supervision of Prof. Carlos Parra. After completing her bachelor’s studies she moved to Bogotá, where she started a master’s degree in mathematics at Universidad de los Andes. She wrote her master’s thesis under the supervision of Prof. Bernardo Uribe and Prof. Erik Backelin, obtaining her master’s degree in 2008. In September 2008 she moved to the Netherlands and took part in the Master Class program “Calabi-Yau manifolds” at Utrecht University. Her Master Class project was supervised by Prof. Marius Crainic. Since 2009, she has been a Ph.D. student at Utrecht University under the supervision of Prof. Marius Crainic.
--- abstract: 'A sunflower is a collection of distinct sets such that the intersection of any two of them is the same as the common intersection $C$ of all of them, and $|C|$ is smaller than each of the sets. A longstanding conjecture due to Erdős and Szemerédi states that the maximum size of a family of subsets of $[n]$ that contains no sunflower of fixed size $k>2$ is exponentially smaller than $2^n$ as $n\rightarrow\infty$. We consider this problem for multiple families. In particular, we obtain sharp or almost sharp bounds on the sum and product of $k$ families of subsets of $[n]$ that together contain no sunflower of size $k$ with one set from each family. For the sum, we prove that the maximum is $$(k-1)2^n+1+\sum_{s=n-k+2}^{n}\binom{n}{s}$$ for all $n \ge k \ge 3$, and for the $k=3$ case of the product, we prove that it is between $$\left(\frac{1}{8}+o(1)\right)2^{3n}\qquad \hbox{ and } \qquad (0.13075+o(1))2^{3n}.$$' author: - '[^1]' - '[^2]' title: Multicolor Sunflowers --- Introduction ============ Throughout the paper, we write $[n]=\{1,\ldots,n\}$, $2^{[n]}=\{S:S\subset[n]\}$ and $\binom{[n]}{s}=\{S:S\subset[n], |S|=s\}$. A *sunflower* (or *strong $\Delta$-system*) with $k$ petals is a collection of $k$ sets ${\cal S}=\{S_1,\ldots,S_k\}$ such that $S_i\cap S_j=C$ for all $i\neq j$, and $S_i\setminus C\neq\emptyset$ for all $i\in[k]$. The common intersection $C$ is called the *core* of the sunflower and the sets $S_i\setminus C$ are called the *petals*. In 1960, Erdős and Rado [@ER60] proved a fundamental result regarding the existence of sunflowers in a large family of sets of uniform size, which is now referred to as the *sunflower lemma*. It states that if $\cal A$ is a family of sets of size $s$ with $|{\cal A}|>s!(k-1)^s$, then $\cal A$ contains a sunflower with $k$ petals. Later in 1978, Erdős and Szemerédi [@ES78] gave the following upper bound when the underlying set has $n$ elements. There exists a constant $c$ such that if ${\cal A}\subset 2^{[n]}$ with $|{\cal A}|>2^{n-c\sqrt{n}}$ then $\cal A$ contains a sunflower with $3$ petals.\[thmES\] In the same paper, they conjectured that for $n$ sufficiently large, the maximum number of sets in a family ${\cal A}\subset 2^{[n]}$ with no sunflowers with three petals is at most $(2-\epsilon)^n$ for some absolute constant $\epsilon>0$. This conjecture remains open, and is closely related to the algorithmic problem of matrix multiplication, see [@ASU13]. Similar problems have been studied for systems of sets where only the size (rather than the actual set) of pairwise intersections is fixed. A *weak $\Delta$-system* of size $k$ is a collection of $k$ sets ${\cal S}=\{S_1,\ldots,S_k\}$ such that $|S_i\cap S_j|=|S_1\cap S_2|$ for all $i\neq j$. Thus, a sunflower is a weak $\Delta$-system but not vice versa. In 1973, Deza [@D73] gave the criterion for a weak $\Delta$-system to be a sunflower: If ${\cal F}$ is an $s$-uniform weak $\Delta$-system with $|{\cal F}|>s^2-s+1$, then $\cal F$ is a sunflower. The lower bound can be achieved only if the projective plane $PG(2,s)$ exists. This was shown by van Lint [@L73] later in the same year. Erdős posed the problem of determining the largest size of a family ${\cal A} \subset 2^{[n]}$ that contains no weak $\Delta$-system of a fixed size. The problem was solved by Frankl and Rödl [@FR87] in 1987. 
They proved that given $k\ge 3$, there exists a constant $\epsilon=\epsilon(k)$ so that for every ${\cal A}\subset 2^{[n]}$ with $|{\cal A}|>(2-\epsilon)^n$, $\cal A$ contains a weak $\Delta$-system of size $k$. A natural way to generalize problems in extremal set theory is to consider versions for multiple families or so-called multicolor or cross-intersecting problems. Beginning with the famous Erdős-Ko-Rado theorem [@EKR61], which states that an intersecting family of $k$-element subsets of $[n]$ has size at most $\binom{n-1}{k-1}$, provided $n\geq 2k$, several generalizations were proved for multiple families that are cross-intersecting. In particular, Hilton [@H77] showed in 1977 that if $t$ families ${\cal A}_1,\ldots,{\cal A}_t\subset\binom{[n]}{k}$ are cross intersecting (meaning that $A_i \cap A_j \neq\emptyset$ for all $(A_i, A_j) \in {\cal A}_i\times {\cal A}_j$) and if $n/k\leq t$, then $\sum_{i=1}^t|{\cal A}_i|\leq t\binom{n-1}{k-1}$. On the other hand, results of Pyber [@P86] in 1986, that were later slightly refined by Matsumoto and Tokushige [@MT89] and Bey [@B05], showed that if two families ${\cal A}\subset\binom{[n]}{k}$, ${\cal B}\subset\binom{[n]}{l}$ are cross-intersecting and $n\geq\max\{2k,2l\}$, then $|{\cal A}||{\cal B}|\le\binom{n-1}{k-1}\binom{n-1}{l-1}$. These are the first results about bounds on sums and products of the size of cross-intersecting families. More general problems were considered recently, for example for cross $t$-intersecting families (i.e. pair of sets from distinct families have intersection of size at least $t$) and $r$-cross intersecting families (any $r$ sets have a nonempty intersection where each set is picked from a distinct family) and labeled crossing intersecting families, see [@B08; @FT11; @FLST14]. A more systematic study of multicolored extremal problems (with respect to the sum of the sizes of the families) was begun by Keevash, Saks, Sudakov, and Verstraëte [@KSSV], and continued in [@BKS; @KS]. Cross-intersecting versions of Erdős’ problem on weak $\Delta$-systems mentioned above (for the product of the size of two families) were proved by Frankl and Rödl [@FR87] and by the first author and Rödl [@MR]. In this note, we consider multicolor versions of sunflower theorems. Quite surprisingly, these basic questions appear not to have been studied in the literature. Given families of sets ${\cal A}_i\subset 2^{[n]}$ for $i=1,\ldots,k$, a *multicolor sunflower* with $k$ petals is a collection of sets $A_i\in {\cal A}_i$, $i=1,\ldots,k$, such that $A_i\cap A_j=C$ for all $i\neq j$, and $A_i\setminus C\neq\emptyset$, for all $i\in[k]$. Say that ${\cal A}_1, \ldots, {\cal A}_k$ is sunflower-free if it contains no multicolor sunflower with $k$ petals. For any $k$ families that are sunflower-free, the problem of upper bounding the size of any single family is uninteresting, since there is no restriction on a particular family. So we are interested in the sum and product of the sizes of these families. Given integers $n$ and $k$, let $${\cal F}(n,k)=\{\{{\cal A}_i\}_{i=1}^k:{\cal A}_i\subset 2^{[n]} \text{ for $i\in [k]$ and ${\cal A}_1,{\cal A}_2,\ldots,{\cal A}_k$ is sunflower-free}\}.$$ We define $$S(n,k):=\max_{\{{\cal A}_i\}_{i=1}^k\in{\cal F}(n,k)}\sum_{i=1}^k|\mathcal{A}_i|,$$ and $$P(n,k):=\max_{\{{\cal A}_i\}_{i=1}^k\in{\cal F}(n,k)}\prod_{i=1}^k|\mathcal{A}_i|.$$ Main Results ============ Our two main results are sharp or nearly sharp estimates on $S(n,k)$ and $P(n,3)$. 
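Before turning to the proofs, here is a small computational sanity check (ours, not part of the original paper) of the quantities just introduced. It verifies, for one small case, that the construction used for the lower bound of Theorem \[thm2\], namely ${\cal A}_1=\ldots={\cal A}_{k-1}=2^{[n]}$ together with ${\cal A}_k=\{\emptyset\}\cup\{S\subset[n]:|S|\ge n-k+2\}$ (the extremal example given in the proof below), is sunflower-free and has total size $(k-1)2^n+1+\sum_{s=n-k+2}^{n}\binom{n}{s}$; all function names in the snippet are ours.

```python
# Sanity check (not from the paper): the lower-bound construction for S(n, k)
# is sunflower-free and has total size (k-1)*2^n + 1 + sum_{s=n-k+2}^n C(n, s).
from itertools import combinations, product
from math import comb

def is_sunflower(sets):
    # Multicolor sunflower test: all pairwise intersections equal the common
    # core, and every petal S \ core is nonempty.
    core = frozenset.intersection(*sets)
    return (all((a & b) == core for a, b in combinations(sets, 2))
            and all(s - core for s in sets))

def check_construction(n, k):
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(1, n + 1), r)]
    a_last = [s for s in subsets if len(s) == 0 or len(s) >= n - k + 2]
    families = [subsets] * (k - 1) + [a_last]
    total = sum(len(f) for f in families)
    claimed = (k - 1) * 2**n + 1 + sum(comb(n, s) for s in range(n - k + 2, n + 1))
    sunflower_free = not any(is_sunflower(t) for t in product(*families))
    return total, claimed, sunflower_free

print(check_construction(5, 3))  # expected: (71, 71, True)
```

Running it for other small values of $n\ge k\ge 3$ behaves the same way; it is only meant as an illustration of the definitions, not as evidence for the matching upper bound.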
By Theorem \[thmES\] we obtain that $$S(n,3)\le 2\cdot2^n+2^{n-c\sqrt{n}}.$$ Indeed, if $|{\cal A}|+|{\cal B}|+|{\cal C }|$ is larger than the RHS above then $|{\cal A}\cap{\cal B}\cap{\cal C}|>2^{n-c\sqrt{n}}$ by the pigeonhole principle and we find a sunflower in the intersection which contains a multicolor sunflower. Our first result reduces the term $2^{n-c\sqrt{n}}$ to obtain an exact result. For $n\ge k\ge 3$ $$S(n,k)= (k-1)2^n+1+\sum_{s=n-k+2}^n\binom{n}{s}.$$\[thm2\] The problem of determining $P(n,k)$ seems more difficult than that of determining $S(n,k)$. Our bounds for general $k$ are quite far apart, but in the case $k=3$ we can refine our argument to obtain a better bound. $$\left(\frac{1}{8}+o(1)\right)2^{3n}\le P(n,3)\le \left(0.13075+o(1)\right)2^{3n}.$$\[thm3\] We conjecture that the lower bound is tight. For each fixed $k \ge 3$, $$P(n,k)=\left(\frac{1}{8}+o(1)\right)2^{kn}.$$ In the next two subsections we give the proofs of Theorems \[thm2\] and \[thm3\]. Sums ---- In order to prove Theorem \[thm2\], we first deal with $s$-uniform families and prove a stronger result. Given a multicolor sunflower ${\cal S}$, define its *core size* to be $c({\cal S})=|C|$. Given integers $s\ge 1$ and $c$ with $0\le c\le s-1$, let $n$ be an integer such that $n\ge c+k(s-c)$. For $ i=1,\ldots,k$, let $\mathcal{A}_i\subset \binom{[n]}{s}$ such that $\{\mathcal{A}_i\}_{i=1}^k$ contains no multicolor sunflower with $k$ petals and core size $c$. Then $$\sum_{i=1}^k|\mathcal{A}_i|\le (k-1)\binom{n}{s}.$$ Furthermore, this bound is tight. \[thm1\] Randomly take an ordered partition of $[n]$ into $k+2$ parts $X_1, X_2,\ldots, X_{k+2}$ such that $|X_1|=n-(c+k(s-c)), |X_2|=c$, and $|X_i|=s-c$ for $i=3,\ldots, k+2$, with uniform probability for each partition. For each partition, construct the bipartite graph $$G=(\{\mathcal{A}_i:i=1,\ldots,k\}\cup\{X_2\cup X_j:j\in [3,k+2]\},E)$$ where a pair $\{\mathcal{A}_i,X_2\cup X_j\}\in E$ if and only if $X_2\cup X_j\in\mathcal{A}_i$. If there exists a perfect matching in $G$, then we will get a multicolor sunflower with $k$ petals and core size $c$, since $X_2$ will be the core. This shows that $G$ has matching number at most $k-1$. Then König’s theorem implies that the random variable $|E(G)|$ satisfies $$\label{k} |E(G)|\le (k-1)k.$$ Another way to count the edges of $G$ is through the following expression: $$\begin{split} |E(G)|=\sum_{i=1}^k\sum_{j=3}^{k+2}\chi_{\{X_2\cup X_j\in \mathcal{A}_i\}}, \end{split}\label{ineq1}$$ where $\chi_A$ is the characteristic function of the event $A$. Taking expectations and using (\[k\]) we obtain $$\mathbb{E}\left(\sum_{i=1}^k\sum_{j=3}^{k+2}\chi_{\{X_2\cup X_j\in \mathcal{A}_i\}}\right) \le (k-1)k.$$ By linearity of expectation, $$\begin{split} &\mathbb{E}\left(\sum_{i=1}^k\sum_{j=3}^{k+2}\chi_{\{X_2\cup X_j\in \mathcal{A}_i\}}\right)=\sum_{i=1}^k\sum_{j=3}^{k+2}\mathbb{P}\left(X_2\cup X_j\in \mathcal{A}_i\right)=\sum_{i=1}^k\sum_{j=3}^{k+2}\sum_{A\in\mathcal{A}_i}\mathbb{P}\left(A=X_2\cup X_j\right). \end{split}$$ The probability that a set $A$ is partitioned as $X_2\cup X_j$ is the same as the probability that $A$ is partitioned into two ordered parts of sizes $c$ and $s-c$, and $[n]\setminus A$ has an ordered partitioned into $k$ parts with one of the parts of size $n-(c+k(s-c))$ and $k-1$ of them of size $s-c$. 
Hence for any $A\in {\cal A}_i$, $$\begin{split} \mathbb{P}{\left(A=X_2\cup X_j\right)} &=\frac{\binom{|A|}{c}\binom{n-|A|}{n-(c+k(s-c))}\prod_{i=1}^{k-1}\binom{(k-i)(s-c)}{s-c}}{\binom{n}{c+k(s-c)}\binom{c+k(s-c)}{c}\prod_{i=0}^{k-1}\binom{(k-i)(s-c)}{s-c}}\\ &=\frac{\binom{s}{c}\binom{n-s}{n-(c+k(s-c))}\prod_{i=1}^{k-1}\binom{(k-i)(s-c)}{s-c}}{\binom{n}{c+k(s-c)}\binom{c+k(s-c)}{c}\binom{k(s-c)}{s-c}\prod_{i=1}^{k-1}\binom{(k-i)(s-c)}{s-c}}\\ &=\frac{1}{\binom{n}{s}}. \end{split}$$ So we have $$\mathbb{E}\left(\sum_{i=1}^k\sum_{j=3}^{k+2}\chi_{\{X_2\cup X_j\in \mathcal{A}_i\}}\right)=\sum_{i=1}^k\sum_{j=3}^{k+2}\sum_{A\in\mathcal{A}_i}\frac{1}{\binom{n}{s}}=\sum_{i=1}^k|\mathcal{A}_i|\frac{k}{\binom{n}{s}}.$$ Hence by (\[ineq1\]), $$\sum_{i=1}^k|\mathcal{A}_i|\le (k-1)\binom{n}{s}.$$ The bound shown above is tight, since we can take ${\cal A}_1={\cal A}_2=\ldots={\cal A}_{k-1}=\binom{[n]}{s}$, and ${\cal A}_k=\emptyset$. Now we use this lemma to prove Theorem \[thm2\]. [**Proof of Theorem \[thm2\].**]{} Recall that $n \ge k \ge 3$ and we are to show that $$S(n,k)= (k-1)2^n+1+\sum_{s=n-k+2}^n\binom{n}{s}.$$ To see the upper bound, given families $\{{\cal A}_i\}_{i=1}^k\in {\cal F}(n,k)$, we define ${\cal A}_{i,s}={\cal A}_i\cap \binom{[n]}{s}$ for each $i\in [k]$ and integer $s\in [0,n]$. This gives a partition of each family ${\cal A}_i$ into $n+1$ subfamilies. Since families $\{{\cal A}_i\}_{i=1}^k$ contain no multicolor sunflowers with $k$ petals, neither do $\{{\cal A}_{i,s}\}_{i=1}^k$ for all $s\in[0,n]$. Now, for each $s=1, 2,\ldots, n-k+1$ let $$c=\max\left\{0,\, \frac{n-ks}{1-k}\right\}.$$ Then $0 \le c \le s-1$, and $n\ge c+k(s-c)$. Therefore, by Lemma \[thm1\], for $1 \le s\le n-k+1$, $$\sum_{i=1}^k|{\cal A}_{i,s}|\le(k-1)\binom{n}{s}.$$ For $s>n-k+1$, notice that a trivial bound for this sum is $k\binom{n}{s}$. So we get, $$\begin{split} \sum_{i=1}^k|{\cal A}_{i}|&=\sum_{i=1}^{k}\sum_{s=0}^{n}|{\cal A}_{i,s}|\\ &=\sum_{s=0}^{n}\sum_{i=1}^k|{\cal A}_{i,s}|\\ &=\sum_{i=1}^k|{\cal A}_{i,0}|+\sum_{s=1}^{n-k+1}\sum_{i=1}^k|{\cal A}_{i,s}|+\sum_{s=n-k+2}^{n}\sum_{i=1}^k|{\cal A}_{i,s}|\\ &\leq k\binom{n}{0}+ \sum_{s=1}^{n-k+1}(k-1)\binom{n}{s}+ \sum_{s=n-k+2}^nk\binom{n}{s}\\ &\leq \sum_{s=0}^n(k-1)\binom{n}{s}+\binom{n}{0}+\sum_{s=n-k+2}^n\binom{n}{s}\\ &=(k-1)2^n+1+\sum_{s=n-k+2}^n\binom{n}{s}. \end{split}$$ The lower bound is obtained by the following example: ${\cal A}_i=2^{[n]}$ for $i=1\ldots,k-1$ and ${\cal A}_k=\{\emptyset\}\cup\{S\subset[n]: |S|\ge n-k+2\}$. To see that $\{{\cal A}_i\}_{i=1}^{k}$ contains no multicolor sunflower, notice that any multicolor sunflower uses a set from ${\cal A}_k$. The empty set does not lie in any sunflowers. So if a set of size at least $n-k+2$ appeared in a sunflower with $k$ petals, it requires at least $k-1$ other points to form such a sunflower, but then the total number of points in this sunflower is at least $n+1$, a contradiction. Products -------- From the bound on the sum of the families that do not contain a multicolor sunflower, we deduce the following bound on the product by using AM-GM inequality. \[cor1\]Fix $k\ge 3$. 
As $n\rightarrow\infty$, $$\left(\frac{1}{8}+o(1)\right)2^{kn}\le P(n,k)\le\left(\left(\frac{k-1}{k}\right)^k+o(1)\right)2^{kn}.$$ The upper bound follows from Theorem \[thm2\] and the AM-GM inequality, $$\prod_{i=1}^k|\mathcal{A}_i|\le\left(\frac{\sum_{i=1}^k|\mathcal{A}_i|}{k}\right)^k\le\left((1+o(1))\frac{(k-1)2^n}{k}\right)^k=(1+o(1))\left(\frac{k-1}{k}\right)^k2^{kn}.$$ For the lower bound, we take $${\cal A}_{1}={\cal A}_2=\{S\subset[n]:1\in S\mbox{ or }|S|\ge n-1\},$$ $${\cal A}_3=\{S\subset[n]:1\notin S\mbox{ or }|S|\ge n-1\},$$ and ${\cal A}_4={\cal A}_5=\ldots={\cal A}_{k}=2^{[n]}$. A multicolor sunflower with $k$ petals must use three sets from ${\cal A}_{1},{\cal A}_2$, and ${\cal A}_{3}$, call them $A_1, A_2, A_3$ respectively. These three sets form a multicolor sunflower with three petals. If any of these sets is of size at least $n-1$, then it will be impossible to form a 3-petal sunflower with the other two sets. So by their definitions, we have $1\in A_1\cap A_2$, but $1\notin A_3$, which implies $A_1\cap A_2\neq A_1\cap A_3$, a contradiction. So the families $\{{\cal A}_i\}_{i=1}^k$ contain no multicolor sunflowers with $k$ petals. The sizes of these families are $|{\cal A}_{1}|=|{\cal A}_{2}|=|{\cal A}_{3}|=(\frac{1}{2}+o(1))2^n$, and $|{\cal A}_{i}|=2^n$ for $i\ge 4$. Thus, $$\prod_{i=1}^k|\mathcal{A}_i|=\left(\frac{1}{8}+o(1)\right)2^{kn}.$$ For any positive integer $k$ we have $(\frac{k-1}{k})^k<1/e$, so Corollary \[cor1\] implies the upper bound $(1/e+o(1))2^{kn}$ for all $k\ge 3$. For $k=3$, we will improve the factor in the upper bound from $(2/3)^3=0.29629\cdots$ to approximately $0.131$, which is quite close to our conjectured value of $0.125$. We need the following lemma. Let $G=(V_1\cup V_2,E)$ be a bipartite graph with $|V_1|=|V_2|=3$ and $d(v)\le 2$ for all $v\in V_2$. If the maximum size of a matching in $G$ is at most two, then $G$ is a subgraph of one of the following three graphs. $\bullet$ $G_1$: a copy of $K_{2,3}$ with the part of size two in $V_1$ and the part of size three in $V_2$ $\bullet$ $G_2$: two vertex disjoint copies of the path with two edges $\bullet$ $G_3$: a path with four edges whose endpoints are in $V_1$ (Figure: the graphs $G_1$, $G_2$, and $G_3$, each drawn as a bipartite graph between the parts $V_1$ and $V_2$.) \[strcture\] By König’s theorem, the minimum vertex cover of $G$ has size at most two. Suppose $G$ has a vertex that covers all the edges. Then $G$ is a subgraph of either a $K_{1,3}$ whose degree three vertex is in $V_1$ or a $K_{1,2}$ whose degree two vertex is in $V_2$. Both of these are subgraphs of $G_1$. Now we may assume that a minimum vertex cover $S$ of $G$ has size two. If $S \subset V_1$, then $G$ is a subgraph of $G_1$.
Next we assume that $S=\{u,v\}$ with $u\in V_1, v\in V_2$. If $d(u)=3$, then $d(v)\le 2$ implies that $G\subset G_1$. If $d(u)=2$ and $uv\in E(G)$, then $G\subset G_1$. If $d(u)=2$ and $uv\notin E(G)$, then $G\subset G_2$. If $d(u)=1$, then clearly $G\subset G_3$. The remaining case is that $S \subset V_2$, and it is obvious that $G\subset G_1$ or $G\subset G_3$. We now have the necessary ingredients to prove Theorem \[thm3\]. [**Proof of Theorem \[thm3\].**]{} Recall that $n \ge k \ge 3$ and we are to show that $$\left(\frac{1}{8}+o(1)\right)2^{3n}\le P(n,3)\le \left(0.13075+o(1)\right)2^{3n}.$$ The lower bound follows from Corollary \[cor1\]. The upper bound is proved using a similar idea as the proof of Lemma \[thm1\], although the graph statistic considered is more complicated. First notice that given families $\mathcal{A}_i\subset 2^{[n]}, i=1, 2, 3$ that are in ${\cal F}(n,3)$, such that $\prod_{i=1}^3|{\cal A}_i|$ is maximized, we may assume that the common part of the three families $\bigcap_{i=1}^3{\cal A}_i=\emptyset$. To see this, let ${\cal A}_c=\bigcap_{i=1}^3{\cal A}_i$. By Theorem \[thmES\], $|{\cal A}_c|\le 2^{n-c\sqrt{n}}=o(2^n)$, otherwise a $3$-petal multicolor sunflower exists. Notice also that from the lower bound, $|{\cal A}_i|=\Theta(2^n)$ for all $i\in[3]$, we have $$\prod_{i=1}^3|{\cal A}_i|=\prod_{i=1}^3(|{\cal A}_i\setminus{\cal A}_c|+|{\cal A}_c|)=\prod_{i=1}^3(|{\cal A}_i\setminus{\cal A}_c|+o(2^n))=\prod_{i=1}^3|{\cal A}_i\setminus{\cal A}_c|+o(2^{3n}).$$ Hence, it suffices to show a bound on $\prod_{i=1}^3|{\cal A}_i\setminus{\cal A}_c|$. We uniformly take an ordered partition of $[n]=X_1\cup X_2\cup X_3\cup X_4$ at random such that the parts $X_2, X_3$ and $X_4$ are nonempty. So there are $$p(n)=4^n-3\cdot3^n+3\cdot2^n-1=4^n+O(3^n)$$ such partitions, each is chosen with probability $1/p(n)$. Again, we construct a bipartite graph $G=(V_1\cup V_2,E)$, where $V_1=\{\mathcal{A}_i:i=1,2,3\}$ and $V_2=\{X_1\cup X_j: j=2,3,4\}$, and a pair $\{\mathcal{A}_i,X_1\cup X_j\}\in E$ if and only if $X_1\cup X_j\in \mathcal{A}_i$. A perfect matching in $G$ gives rise to a multicolor sunflower with three petals. Hence the maximum size of a matching in $G$ is at most two. Moreover, since ${\cal A}_c=\emptyset$, the degrees of vertices in $V_2$ are at most two. We may now apply Lemma \[strcture\] to deduce that $G$ is a subgraph of $G_i$ for some $i=1,2,3$. Let $m_2(G)$ be the number of matchings in $G$ of size two and $t(G)$ be the number of five vertex subgraphs of $G$ comprising a degree two vertex $v \in V_2$, the two edges incident to it, and an additional isolated edge. Observe that $m_2(G_1)+t(G_1)=6+0=6,m_2(G_2)+t(G_2)=4+2=6,$ and $m_2(G_3)+t(G_3)=3+2=5$. Since $G\subset G_i$ for some $i=1,2,3$, we have $$\label{mf} m_2(G)+t(G) \le \max_{i\in[3]}\left(m_2(G_i)+t(G_i)\right)= 6.$$ Let $$P=\sum_{\substack{(\mathcal{B}_1,\mathcal{B}_2)\in V_1^2\\{\cal B}_1\neq{\cal B}_2}}\sum_{\substack{(Y_1,Y_2)\in V_2^2\\Y_1\neq Y_2}}\frac{1}{2}\chi_{\{Y_i\in \mathcal{B}_{i}: i=1,2\}}$$ and $$Q=\sum_{\substack{(\mathcal{B}_1,\mathcal{B}_2,\mathcal{B}_3)\in V_1^3\\{\cal B}_i\neq{\cal B}_j, i\neq j}}\sum_{\substack{(Y_1,Y_2)\in V_2^2\\Y_1\neq Y_2}}\frac{1}{2}\chi_{\{Y_1\in \mathcal{B}_{1},Y_2\in \mathcal{B}_{2},Y_2\in \mathcal{B}_{3}\}}.$$ Then (\[mf\]) implies that $$\begin{split} &\mathbb{E}(P+Q)\le 6. 
\end{split}\label{ineq2}$$ By linearity of expectation, to calculate $\mathbb{E}(P+Q)$, we just need to calculate the expectations of $\chi_{\{Y_i\in \mathcal{B}_{i}: i=1,2\}}$ and $\chi_{\{Y_1\in \mathcal{B}_{1},Y_2\in \mathcal{B}_{2},Y_2\in \mathcal{B}_{3}\}}$. Call a pair of sets $(B_i\in \mathcal{B}_{i}: i=1,2)$ *good* if $B_1\setminus B_2\neq\emptyset$, $B_2\setminus B_1\neq\emptyset$ and $B_1\cup B_2\neq[n].$ Conversely, a pair is called *bad*, if either $B_1\subset B_2$, or $B_2\subset B_1$, or $B_1\cup B_2=[n]$. We see that bad pairs induce partitions on $[n]$ into at most three parts, which shows that the number of bad pairs is $O(3^n)$. Now, for each good pair $(B_1, B_2)$, there exists a unique partition $$[n]= (B_1\cap B_2) \cup (B_1\setminus B_2)\cup (B_2\setminus B_1) \cup ([n]\setminus(B_1 \cup B_2))$$ such that $Y_1=B_1, Y_2=B_2$. Therefore, $$\begin{split} \mathbb{E}\left(\chi_{\{Y_i\in \mathcal{B}_{i}: i=1,2\}}\right)&=\mathbb{P}\left(Y_i\in \mathcal{B}_{i}: i=1,2\right) \\ &=\sum_{B_1\in\mathcal{B}_{1}}\sum_{B_2\in\mathcal{B}_{2}}\mathbb{P}\left(Y_1=B_1,Y_2=B_2\right)\\ &=\sum_{(B_i\in {\cal B}_i: i=1,2)\text{ is good}}\frac{1}{p(n)}\\ &=\frac{\#\{\text{good pairs } (B_i\in {\cal B}_i: i=1,2)\}}{4^n+O(3^n)}\\ &=\frac{|{\cal B}_1||{\cal B}_2|+O(3^n)}{4^n+O(3^n)}\\ &=(1+o(1))\frac{|{\cal B}_1||{\cal B}_2|}{4^n}. \end{split}$$ Similarly, $$\begin{split} \mathbb{E}\left(\chi_{\{Y_1\in \mathcal{B}_{1},Y_2\in \mathcal{B}_{2},Y_2\in \mathcal{B}_{3}\}}\right)&=\mathbb{P}\left(Y_1\in \mathcal{B}_{1},Y_2\in \mathcal{B}_{2},Y_2\in \mathcal{B}_{3}\right)\\ &=\sum_{B_1\in\mathcal{B}_{1}}\sum_{B_2\in\mathcal{B}_{2}\cap\mathcal{B}_3}\mathbb{P}\left(Y_1=B_1,Y_2=B_2\right)\\ &=(1+o(1))\frac{|\mathcal{B}_{1}||\mathcal{B}_{2}\cap\mathcal{B}_3|}{4^n}. \end{split}$$ Consequently, $\mathbb{E}(P+Q)$ is equal to $$\begin{split} &\sum_{\substack{(\mathcal{B}_1,\mathcal{B}_2)\in V_1^2\\{\cal B}_1\neq{\cal B}_2}}\sum_{\substack{(Y_1,Y_2)\in V_2^2\\Y_1\neq Y_2}}\frac{(1+o(1))}{2}\frac{|\mathcal{B}_{1}||\mathcal{B}_{2}|}{4^n}+\sum_{\substack{(\mathcal{B}_1,\mathcal{B}_2,\mathcal{B}_3)\in V_1^3\\{\cal B}_i\neq{\cal B}_j, i\neq j}}\sum_{\substack{(Y_1,Y_2)\in V_2^2\\Y_1\neq Y_2}}\frac{(1+o(1))}{2}\frac{|\mathcal{B}_{1}||\mathcal{B}_{2}\cap\mathcal{B}_3|}{4^n}\\ &=\sum_{\substack{(\mathcal{B}_1,\mathcal{B}_2)\in V_1^2\\{\cal B}_1\neq{\cal B}_2}}\frac{(6+o(1))}{2}\frac{|\mathcal{B}_{1}||\mathcal{B}_{2}|}{4^n}+\sum_{\substack{(\mathcal{B}_1,\mathcal{B}_2,\mathcal{B}_3)\in V_1^3\\{\cal B}_i\neq{\cal B}_j, i\neq j}}\frac{(6+o(1))}{2}\frac{|\mathcal{B}_{1}||\mathcal{B}_{2}\cap\mathcal{B}_3|}{4^n}\\ &=\frac{6+o(1)}{4^n}(|\mathcal{A}_{1}||\mathcal{A}_{2}|+|\mathcal{A}_{2}||\mathcal{A}_{3}|+|\mathcal{A}_{3}||\mathcal{A}_{1}|+|\mathcal{A}_{1}||\mathcal{A}_{2}\cap\mathcal{A}_3|+|\mathcal{A}_{2}||\mathcal{A}_{3}\cap\mathcal{A}_1|+|\mathcal{A}_{3}||\mathcal{A}_{1}\cap\mathcal{A}_2|). \end{split}$$ Let $$a=|\mathcal{A}_{1}|, \,b=|\mathcal{A}_{2}|, \,c=|\mathcal{A}_{3}|, \,d=|\mathcal{A}_{2}\cap\mathcal{A}_3|, \,e=|\mathcal{A}_{3}\cap\mathcal{A}_1|, \,f=|\mathcal{A}_{1}\cap\mathcal{A}_2|.$$ If follows from (\[ineq2\]) that $$ab+bc+ca+ad+be+cf\le (1+o(1))4^n.$$ We also have by inclusion/exclusion $$2^n \ge |\mathcal{A}_{1} \cup \mathcal{A}_{2} \cup \mathcal{A}_{3}| =\sum_i |\mathcal{A}_{i}| - \sum_{i, j} |\mathcal{A}_{i} \cap \mathcal{A}_{j}|$$ which gives $a+b+c \le d+e+f+2^n$. Thus, we are left to solve the following optimization problem. 
$$\begin{aligned} & \max & & abc \\ & \textrm{ s.t.}& & ab+bc+ca+ad+be+cf\leq (1+o(1))4^n\\ & & & a+b+c-d-e-f\leq 2^n\\ & & & d+e\leq c, e+f\leq a, f+d\leq b\\ & & & a,b,c,d,e,f\ge0. \end{aligned}$$ Now, if we rescale the variables in this optimization problem by a factor of $2^n$, that is, write $x'=x/2^n$ for $x\in\{a,b,c,d,e,f\}$, we get $$\begin{aligned} & \max & & a'b'c' \\ & \textrm{ s.t.}& & a'b'+b'c'+c'a'+a'd'+b'e'+c'f'\le 1+\epsilon\\ & & & a'+b'+c'-d'-e'-f'\leq 1\\ & & & d'+e'\le c', e'+f'\le a', f'+d'\le b'\\ & & & a',b',c',d',e',f'\ge0. \end{aligned}$$ Since $n$ can be made arbitrarily large, $\epsilon$ is arbitrarily close to zero. So we are left to show the optimization problem with $\epsilon=0$. By solving the KKT conditions (see Appendix for the details), we get $$\max abc\le (0.13075+o(1))2^{3n}.$$ This concludes the proof. Concluding remarks ================== $\bullet$ Our basic approach is simply to average over a suitable family of partitions. It can be applied to a variety of other extremal problems, for example, it yields some results about cross intersecting families proved by Borg [@B14]. It also applies to the situation when the number of colors is more than the size of the forbidden configuration. In particular, the proof of Lemma \[thm1\] yields the following more general statement. \[tk\] Given integers $s\ge 1$, $1 \le t \le k$ and $0\le c\le s-1$, let $n$ be an integer such that $n\ge c+t(s-c)$. For $ i=1,\ldots,k$, let $\mathcal{A}_i\subset \binom{[n]}{s}$ such that $\{\mathcal{A}_i\}_{i=1}^k$ contains no multicolor sunflower with $t$ petals and core size $c$. Then, $$\sum_{i=1}^k|\mathcal{A}_i|\le\begin{cases} \frac{(t-1)k}{m}\binom{n}{s}, & \mbox{if } c+t(s-c)\le n\le c+k(s-c) \\(t-1)\binom{n}{s}, & \mbox{if } n\ge c+k(s-c), \end{cases}$$ where $m= \lfloor (n-c)/(s-c)\rfloor$. Note that both upper bounds can be sharp. For the first bound, when $c=0$, $m=t<k$ and $n=ms$, let each $\mathcal{A}_i$ consist of all $s$-sets omitting the element 1. A sunflower with $t=m$ petals and core size $c=0$ is a perfect matching of $[n]$. Since every perfect matching has a set containing 1, there is no multicolor sunflower. Clearly $\sum_i|\mathcal{A}_i|=k{n-1 \choose s}=((t-1)k/m){n \choose s}$. For the second bound, we can just take $t-1$ copies of ${[n] \choose s}$ to achieve equality. $\bullet$ Another general approach that applies to the sum of the sizes of families was initiated by Keevash-Saks-Sudakov-Verstraëte [@KSSV]. Both methods can be used to solve certain problems. For example, as pointed out to us by Benny Sudakov, the approach in [@KSSV] can be used to prove the $k=3$ case of Theorem \[thm2\]. On the other hand, we can use our approach to prove the following that is a very special case of a result of [@KSSV]: if we have graphs $G_1, G_2, G_3$ on vertex set $[n]$ with no multicolored triangle, then $|G_1|+|G_2|+|G_3| \le 2{n \choose 2}$ provided $n\equiv 1,3$ (mod 6). Indeed, we just take a Steiner triple system $S$ on $[n]$ and observe (by König’s theorem) that on each triple $e$ of $S$, the sum over $i$ of the number of edges of $G_i$ within $e$ is at most six (the result of [@KSSV] is quite a bit stronger, as it does not require the divisibility requirement and also applies when the number of colors is much larger). The same argument works for larger cliques and even for $r$-uniform hypergraphs, using the recent result of Keevash [@K] on the existence of designs. 
More precisely, given integers $2\le r<q$ and $n>n_0$ satisfying certain divisibility conditions, if we have $r$-uniform hypergraphs $H_1, \ldots, H_{{q \choose r}}$ on vertex set $[n]$ forming no multicolored $K_q^r$, then $\sum |H_i| \le ({q \choose r}-1){n \choose r}$ by the same proof as for triangles above except we replace a Steiner triple system with an appropriate design which is known to exist by Keevash’s result. $\bullet$ The main idea in the proof of Theorem \[thm3\] was to consider the graph parameter $h(G)=m_2(G)+t(G)$. The choice of $h(G)$ was obtained by a search of several parameters for which one could prove good upper bounds like in (\[mf\]), while also being able to compute the expectation in (\[ineq2\]). Perhaps some new ideas will be needed to improve the upper bound further to the conjectured value of 1/8. Carrying out this approach for larger $k$ appears to be difficult. [99]{} N. Alon, A. Shpilka, C. Umans, *On Sunflowers and Matrix Multiplication*, Computational Complexity, 22 (2013), 219–243. C. Bey, *On Cross-Intersecting Families of Sets*, Graphs and Combinatorics, 21 (2005), 161–168. B. Bollobás, P. Keevash, B. Sudakov, *Multicolored extremal problems*, Journal of Combinatorial Theory, Series A 107 (2004), 295–312. P. Borg, *Intersecting and Cross-Intersecting Families of Labeled Sets*, The Electronic Journal of Combinatorics, 15, \#N9, 2008. P. Borg, *The maximum sum and the maximum product of sizes of cross-intersecting families*, European Journal of Combinatorics 35 (2014), 117–130. S. Boyd, L. Vandenberghe, *Convex Optimization*, Cambridge University Press (2004), 243. M. Deza, *Une Propriete Extremale des Plans Projectifs Finis Dans une Classe de Codes Equidestants*, Discrete Mathematics, 6 (1973), 343–352. P. Erdős, C. Ko, R. Rado,*Intersection Theorems for Systems of Finite Sets*, Quarterly Journal of Mathematics: Oxford Journals, 12 (1) (1961), 313–320. P. Erdős, R. Rado, *Intersection Theorems for Systems of Finite Sets*, Journal of London Mathematical Society, Second Series 35 (1) (1960), 85–90. P. Erdős, E. Szemerédi, *Combinatorial Properties of Systems of Sets*, Journal of Combinatorial Theory, Series A 24 (3) (1978), 308–313. P. Frankl, V. Rödl, *Forbidden Intersections*, Transactions of the American Mathematical Society, 300 (1) (1987), 259–286. P. Frankl, N. Tokushige, *On $r$-Cross Intersecting Families of Sets*, Combinatorics, Probability and Computing 20 (2011), 749–752. P. Frankl, S.J. Lee, M. Siggers, N. Tokushige, *An Erdős-Ko-Rado Theorem for Cross $t$-Intersecting Families*, Journal of Combinatorial Theory, Series A 128 (2014), 207-249. A.J.W. Hilton, *An Intersection Theorem for a Collection of Families of Subsets of a Finite Set*, Journal of London Mathematical Society, 2 (1977), 369–384. P. Keevash, *The existence of designs*, http://arxiv.org/abs/1401.3665. P. Keevash, M. Saks, B. Sudakov, J. Verstraëte, *Multicolor Turán problems*, Advances in Applied Mathematics 33 (2004), 238–262. P. Keevash, B. Sudakov, *Set systems with restricted cross-intersections and the minimum rank of inclusion matrices*, Siam. J. Discrete Math. Vol. 18, No. 4 (2005), 713–727. J.H. van Lint, *A Theorem on Equidistant Codes*, Discrete Mathematics, 6 (1973), 353–358. M. Matsumoto, N. Tokushige, *The Exact Bound in the Erdős-Rado-Ko Theorem for Cross-Intersecting Families*, Journal of Combinatorial Theory, Series A 52 (1989), 90–97. D. Mubayi, V. Rödl, *Specified Intersections*, Transactions of the American Mathematical Society 366 (2014), No. 1, 491–504. L. 
Pyber, *A New Generalization of the Erdős-Ko-Rado Theorem*, Journal of Combinatorial Theory, Series A 43 (1986), 85–90. The Optimization Problem in the Proof of Theorem \[thm3\] ========================================================= We are actually considering the following problem $$\begin{aligned} & \max & & abc \\ & \textrm{ s.t.}& & ab+bc+ca+ad+be+cf-1\le 0\\ & & & a+b+c-d-e-f-1\leq 0\\ & & & a,b,c,d,e,f\geq0. \end{aligned}$$ We will see that the optimal solution to this problem satisfies all the constraints in the original problem, hence solves the original problem. We consider the KKT conditions (see, for example [@BV04]) for this problem. Let $(a,b,c,d,e,f)$ be an optimal solution to the problem, and $\mu_i\ge 0, i=1,\ldots,8$ be the dual variable associated with each constraint above respectively. We have the following stationary (gradient) condition. $$\begin{split} \begin{pmatrix} bc \\ ca\\ ab\\0\\0\\0 \end{pmatrix} =&\mu_1\begin{pmatrix} b+c+d\\a+c+e\\b+a+f\\a\\b\\c \end{pmatrix} +\mu_2\begin{pmatrix} 1\\1\\1\\-1\\-1\\-1 \end{pmatrix} +\mu_3\begin{pmatrix} -1\\0\\0\\0\\0\\0 \end{pmatrix} +\mu_4\begin{pmatrix} 0\\-1\\0\\0\\0\\0 \end{pmatrix}\\ &+\mu_5\begin{pmatrix} 0\\0\\-1\\0\\0\\0 \end{pmatrix} +\mu_6\begin{pmatrix} 0\\0\\0\\-1\\0\\0 \end{pmatrix} +\mu_7\begin{pmatrix} 0\\0\\0\\0\\-1\\0 \end{pmatrix}+\mu_8\begin{pmatrix} 0\\0\\0\\0\\0\\-1 \end{pmatrix}. \end{split}$$ The complimentary slackness conditions are the following. $$\begin{split} \mu_1(ab+bc+ca+ad+be+cf-1)&=0\\ \mu_2(a+b+c-d-e-f-1)&=0\\ \mu_3a=\mu_4b=\mu_5c=\mu_6d=\mu_7e=\mu_8f&=0. \end{split}$$ First notice that clearly the maximum of $abc$ should be positive, so in any optimal solution we have $a,b,c>0$. By complimentary slackness conditions $\mu_3 a=\mu_4 b=\mu_5 c=0$, we get $\mu_3=\mu_4=\mu_5=0$. Now, suppose that $d=e=f=0$, then the problem is reduced to maximizing $abc$, subject to $ab+bc+ca\le 1, a+b+c\le 1,$ and $a,b,c>0$. It is easy to see that in this case $a=b=c=1/3$ solves the problem, with the maximum $1/27$. Therefore, we may assume without loss of generality that $d>0$, so by complimentary slackness, $\mu_6=0.$ Thus, we have reduced the stationary condition into the following form. $$\begin{pmatrix} bc \\ ca\\ ab\\0\\0\\0 \end{pmatrix} =\mu_1\begin{pmatrix} b+c+d\\a+c+e\\b+a+f\\a\\b\\c \end{pmatrix} +\mu_2\begin{pmatrix} 1\\1\\1\\-1\\-1\\-1 \end{pmatrix} +\begin{pmatrix} 0\\0\\0\\0\\-\mu_7\\-\mu_8 \end{pmatrix}.$$ If $\mu_1=0$, then we get that $\mu_2=0$ also, then $a=b=c=0$, a contradiction. This shows that $\mu_1>0$. Now suppose that $\mu_2=0$. Then we get $a=\mu_2/\mu_1=0$ which makes the maximum also 0, so we must have $\mu_2>0$. By complimentary slackness, the first two constraints both hold with equality. Next, we use the stationary condition to express $a,b,c$ in terms of $\mu_1,\mu_2,\mu_7$ and $\mu_8$. $$a=\frac{\mu_2}{\mu_1},\qquad b=\frac{\mu_2+\mu_7}{\mu_1},\qquad c=\frac{\mu_2+\mu_8}{\mu_1}. \label{abcd}$$ #### Case 1. $\mu_7=\mu_8=0$. We get $a=b=c=\mu_2/\mu_1$. In this case, we are solving the maximization of $a^3$, subject to $3a^2+ax=1, 3a-x=1,$ and $a,x>0$, where $x=d+e+f$. The optimality is obtained at $a=x=1/2$, with maximum $1/8$. #### Case 2. Exactly one of $\mu_7$ and $\mu_8$ is positive. Without loss of generality, we may assume $\mu_7>0$, which implies that $e=0$. 
Since in this case $\mu_8=0$, we get $a=c$ from (\[abcd\]), so the problem is reduced to $$\begin{aligned} & \max & & a^2b \\ & \textrm{ s.t.}& & 2ab+a^2+ax-1= 0\\ & & & 2a+b-x-1= 0\\ & & & a,b,x>0 \end{aligned}$$ where $x=d+f$. This problem has the maximum $\frac{1}{729} \left(29+20 \sqrt{10}\right)\approx0.126537$ at $$a=\frac{1}{9} \left(1+\sqrt{10}\right)\approx0.462475,\qquad b= -\frac{-29-20 \sqrt{10}}{9 \left(1+\sqrt{10}\right)^2}\approx0.591617,$$ and $$x= -\frac{81 \left(-\frac{1}{729} 2 \left(1+\sqrt{10}\right)^3+\frac{1}{81} \left(1+\sqrt{10}\right)^2+\frac{1}{729} \left(-29-20 \sqrt{10}\right)\right)}{\left(1+\sqrt{10}\right)^2}\approx0.516568.$$ #### Case 3. Both $\mu_7,\mu_8>0$. This implies that $e=f=0$. So from the stationary condition, we get $$\begin{aligned} &ac=\mu_1(a+c)+\mu_2\\ &ab=\mu_1(a+b)+\mu_2.\\ \end{aligned}$$ By (\[abcd\]), we can eliminate the variables $a,b,c$ to get $$\begin{aligned} &\mu_2(\mu_2+\mu_8)=\mu_1^2(\mu_2+(\mu_2+\mu_8))+\mu_1^2\mu_2\\ &\mu_2(\mu_2+\mu_7)=\mu_1^2(\mu_2+(\mu_2+\mu_7))+\mu_1^2\mu_2.\\ \end{aligned}$$ A bit of algebra shows that $\mu_7=\mu_8$, which implies $b=c$. Thus the problem is reduced to $$\begin{aligned} & \max & & ab^2 \\ & \textrm{ s.t.}& & 2ab+b^2+ad-1= 0\\ & & & a+2b-d-1= 0\\ & & & a,b,d>0 \end{aligned}$$ Similar to the previous cases, one can solve system using the method of Lagrange multipliers as follows. We first eliminate variable $d$ to get $2ab+b^2+a(a+2b-1)-1=0$. Then define the Lagrangian as $$L(a,b,\lambda)=ab^2+\lambda(a^2+4ab+b^2-a-1).$$ To find the maximum, we need to solve the following system of equations. $$\begin{aligned} \frac{\partial L}{\partial a}&=b^2+2\lambda a+4\lambda b-\lambda=0\\ \frac{\partial L}{\partial b}&=2ab+4\lambda a+2\lambda b=0\\ \frac{\partial L}{\partial \lambda}&=a^2+4ab+b^2-a-1=0. \end{aligned}$$ One can obtain exact closed form solutions to this system involving radicals, however, the formulas are too long to display, so we give the approximation as follows. The maximum is approximately equal to $0.130748$, with $a\approx0.37478, b\approx0.590649, \lambda\approx-0.165171$. This corresponds to the following solution to the original problem: $$a \approx0.37478,\qquad b=c \approx0.590649,\qquad d\approx0.556078,\qquad e=f=0.$$ Comparing with all the other cases, this is the actual maximum that $abc$ can achieve. Moreover, this solution satisfies the constraints $d+e\le c, e+f\le a$ and $f+d\le b$, hence is the optimal solution to the original problem in the proof of Theorem \[thm3\] as well. [^1]: . Research partially supported by NSF grants DMS-0969092 and DMS-1300138 [^2]: .
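As a closing aside (a numerical sketch of ours, not part of the paper), the Case 3 subproblem above can also be handed to an off-the-shelf constrained optimizer. Assuming SciPy is available, the snippet below should reproduce the reported values $a\approx 0.37478$, $b\approx 0.590649$, $d\approx 0.556078$ and maximum $\approx 0.130748$ up to numerical tolerance.

```python
# Numerical cross-check (ours, not from the paper) of Case 3 in the appendix:
# maximize a*b^2 subject to 2ab + b^2 + ad = 1 and a + 2b - d = 1, with a, b, d > 0.
from scipy.optimize import minimize

res = minimize(
    lambda x: -(x[0] * x[1] ** 2),               # minimize -(a b^2)
    x0=[0.4, 0.6, 0.5],                          # rough starting guess
    method="SLSQP",
    bounds=[(1e-9, None)] * 3,                   # keep a, b, d positive
    constraints=[
        {"type": "eq", "fun": lambda x: 2 * x[0] * x[1] + x[1] ** 2 + x[0] * x[2] - 1},
        {"type": "eq", "fun": lambda x: x[0] + 2 * x[1] - x[2] - 1},
    ],
)
a, b, d = res.x
print(f"a = {a:.6f}, b = {b:.6f}, d = {d:.6f}, max ab^2 = {-res.fun:.6f}")
# Expected (approximately): a = 0.374780, b = 0.590649, d = 0.556078, max ab^2 = 0.130748
```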
--- abstract: 'The yielding behavior of a sheared Laponite suspension is investigated within a 1 mm gap under two different boundary conditions. No-slip conditions, ensured by using rough walls, lead to shear localization as already reported in various soft glassy materials. When apparent wall slip is allowed using a smooth geometry, the sample breaks up into macroscopic solid pieces that get slowly eroded by the surrounding fluidized material up to the point where the whole sample is fluid. Such a drastic effect of boundary conditions on yielding suggests the existence of some macroscopic characteristic length that could be connected to cooperativity effects in jammed materials under shear.' author: - Thomas Gibaud - Catherine Barentin - Sébastien Manneville title: Influence of boundary conditions on yielding in a soft glassy material --- “Soft glassy materials” constitute an extremely wide category ranging from food products or cosmetics, such as emulsions or granular pastes, to foams, physical gels, and colloidal glasses. One of the most striking characteristics of these materials is the existence of a transition from solidlike behavior at rest to fluidlike behavior when a strong enough external shear stress is applied [@Liu:1998; @Trappe:2001]. Although defining and measuring the [*yield stress*]{}, i.e., the stress above which the material becomes fluid, still raise many difficulties, most of them due to thixotropic features resulting from the competition between aging and shear rejuvenation [@Barnes:1999], it is now well established that the yielding transition leads in most cases to [*shear localization*]{}, i.e., to flow profiles characterized by the coexistence of a solidlike region that does not flow and a fluidlike region that bears some finite shear rate [@Pignon:1996; @Coussot:2002; @Rogers:2008] even in the absence of any geometry-induced stress heterogeneity [@Moller:2008]. Such a picture, revealed both by optical imaging and magnetic resonance velocimetry in various geometries, is also supported by numerical results with no-slip boundary conditions [@Varnik:2003; @Xu:2005]. However recent studies have suggested more complicated scenarii where the nature of the yielding transition may depend on the interactions between the individual components [@Becu:2006] or on the confinement of the system [@Isa:2007; @Goyon:2008] pointing to the absence of any universal local relationship between the shear stress and the shear rate in the vicinity of yielding. [*Boundary conditions*]{} may be another crucial control parameter for the yielding transition. Indeed, it has long been known that rheological measurements can strongly depend on wall surface roughness due to the presence of apparent wall slip [@Barnes:1999]. Whether a soft glassy material may or may not slip at the walls is thus expected to affect the solid–fluid transition under shear. So far local measurements investigating the influence of wall roughness in the context of yielding have only shown a weak impact of slip on the nature of the transition both in microfluidic devices [@Isa:2007; @Goyon:2008] and in macroscopic gaps [@Meeker:2004]. In this Letter the yielding transition of a Laponite suspension is probed within a 1 mm gap for two different boundary conditions (rough and smooth walls) using standard rheology, time-resolved local velocity measurements, and direct optical imaging simultaneously. As previously observed, rough walls lead to shear localization without significant temporal fluctuations. 
However, smooth walls provide an entirely different scenario where the initially solidlike material breaks up into a very heterogeneous pattern of macroscopic solid domains separated by fluidized zones. These solid pieces progressively erode, leading to complete fluidization of the sample. A tentative interpretation of such a striking impact of boundary conditions on yielding is suggested in light of recent works on cooperativity in soft glassy materials. Laponite powder (Rockwood, grade RD), a synthetic clay made of platelets of diameter 25–30 nm and thickness 1–2 nm, is dispersed at 3 wt. $\%$ in ultrapure water [@Bonn:2002; @DiLeonardo:2005a]. Hollow glass spheres of mean diameter 6 $\mu$m (Sphericel, Potters) are added at 0.3 wt. $\%$ to the Laponite suspension and act as contrast agents for ultrasonic velocimetry (see below). Within 30 min of magnetic stirring, the dispersion becomes homogeneous and very viscous but remains fluid. Due to the presence of the large glass particles, the solution is slightly turbid. The elastic modulus $G'$ slowly builds up and, after approximately two hours, overcomes the loss modulus $G''$, corresponding to “solidification.” In our samples, $G'$ and $G''$ obtained from oscillatory measurements in the linear regime show no significant frequency dependence in the range 0.01–10 Hz. Over a two-week period, $G'$ and $G''$ vary from 200 to 1000 Pa and from 15 to 40 Pa respectively due to aging [@Bonn:2002; @DiLeonardo:2005a]. Meanwhile, the yield stress measured from stress sweep experiments at 1 Hz shows a variation from 30 to 100 Pa. In the following, our purpose is to focus on the influence of boundary conditions on yielding. We compare the flow behavior of two Laponite samples in a sand-blasted and in a “smooth” Plexiglas Couette cell (rotating inner cylinder radius $R=24$ mm, gap width 1 mm, and height 30 mm). The respective roughnesses of these cells measured from atomic force microscopy are 0.6 $\mu$m and 15 nm, which will be referred to as “rough” and “smooth” [@Roughness]. A rheometer (TA Instruments AR1000N) imposes a constant shear rate (${\dot{\gamma}}\simeq 20$ s$^{-1}$) and measures the corresponding shear stress $\sigma$ as a function of time $t$. Velocity profiles across the gap are recorded every 0.5 s at about 15 mm from the cell bottom using ultrasonic speckle velocimetry, a technique based on acoustic tracking of scatterers (here, the hollow glass spheres) [@Manneville:2004]. Direct visualization of the sheared sample is performed using a CCD camera (Cohu 4192). The temperature is kept constant at $25\pm 0.1$[$^\circ$C]{}. Before any measurement, the Laponite sample is pre-sheared for 1 min at +1500 s$^{-1}$ and for 1 min at -1500 s$^{-1}$ to erase most of the sample history. We then proceed with a standard oscillatory test at 1 Hz in the linear regime for 2 min. We checked that this procedure leads to reproducible results over a few hours. Figure \[fig1\] shows the results obtained in the [*rough*]{} Couette cell after a step-like shear rate ${\dot{\gamma}}=25$ s$^{-1}$ is imposed at $t=0$. The velocity profiles present shear localization: an unsheared solidlike region close to the stator coexists with a sheared fluid region on the rotor side. The sample remains optically homogeneous during the whole experiment. The size of the unsheared band slowly increases with time for $t\gtrsim 1500$ s, consistent with the slow decrease of $\sigma(t)$, so that the system has not completely reached a stationary state after more than 5000 s.
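For reference, a back-of-the-envelope estimate of ours (not given in the paper): in a Couette cell the shear stress decays radially as $\sigma(r)\propto 1/r^{2}$, so with $R=24$ mm and a 1 mm gap the stress at the stator is smaller than at the rotor only by a factor $$\left(\frac{R}{R+e}\right)^{2}=\left(\frac{24}{25}\right)^{2}\simeq 0.92,$$ i.e. the geometry-induced stress heterogeneity across the gap is of the order of 8%.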
For a fixed shearing duration, we checked that the size of the unsheared band decreases with the imposed shear rate (data not shown [@Gibaud]). All these observations are consistent with previous results obtained in the absence of wall slip [@Coussot:2002; @Moller:2008; @Rogers:2008] and will serve here as a hallmark of the “standard” yielding transition. Measurements performed in the [*smooth*]{} Couette cell for a similar start-up experiment are gathered in Fig. \[fig2\]. The stress response of Fig. \[fig2\](a) is qualitatively very similar to that of Fig. \[fig1\](a). One can only note that noisy features are detectable in $\sigma(t)$ until at least $t\simeq 4000$ s whereas, in the rough geometry, $\sigma(t)$ slowly and smoothly relaxes after a much shorter noisy transient. Surprisingly the velocity profiles in the smooth geometry (Fig. \[fig2\](b)–(d)) reveal a totally different yielding behavior with much more complex spatiotemporal dynamics where three regimes may be distinguished. For the first few seconds ($\circ$ in Fig. \[fig2\](b)), the velocity profiles present an unsheared region close to the stator, similar to that observed in the rough geometry, and the sample appears homogeneous. However, in the smooth cell, the solidlike region soon detaches from the stator ($\square$ and $\vartriangle$ in Fig. \[fig2\](b)): the velocity of the unsheared region progressively increases as the width of the sheared layer close to the rotor decreases. As shown in Fig. \[fig3\], this corresponds to a strong increase of apparent wall slip at the rotor. This slow process, denoted regime I in Fig. \[fig2\](a), ends at $t\simeq 1000$ s when the unsheared region has invaded almost the whole gap and undergoes a plug-like flow at a velocity $v_{\rm solid}$ which roughly corresponds to half the rotor velocity. Direct observations of the sample during regime I indicate the slow appearance of heterogeneities in the sample, most probably due to local variations of the hollow glass spheres concentration. At the end of regime I, these heterogeneities are arranged in a fracture-like pattern [@SupMat]. Regime II, which extends from $t\simeq 1000$ s to $t\simeq 4200$ s, is characterized by a flow pattern that is strongly heterogeneous in both space and time. Indeed, as illustrated in Fig. \[fig2\](c), velocity measurements display alternately plug-like flow in most of the gap and viscous flow profiles ($\vartriangleleft$ and $\vartriangleright$ respectively). Consequently slip velocities show huge fluctuations both at the rotor and at the stator (see Fig. \[fig3\]). Note that one velocity profile corresponds to an average over $\Delta t=0.22$ s whereas the rotation period of the rotor is 8.8 s. Therefore we conclude that the sample consists of macroscopic solid parts that are separated by fluid regions and rotate at a velocity $v_{\rm solid}\simeq 7$[ mms$^{-1}$]{}. This picture is confirmed by direct optical imaging of the sample [@SupMat], which reveals large domains of typical size 1 mm–1 cm carried away by the flow and whose edges appear brighter than the background in the inset of Fig. \[fig2\](c). Figure \[fig4\] provides a closer look into regime II by comparing simultaneous velocimetry and imaging. The spatiotemporal diagram of Fig. \[fig4\](a) shows a clear alternation of plug-like and viscous flow profiles with a period $T\simeq 23$ s. The same period is observed in Fig. 
\[fig4\](b) from optical imaging and corresponds to the rotation period $T=2\pi R/v_{\rm solid}$ of solid pieces carried away by the flow. $v_{\rm solid}$ can also be independently recovered from the slope of the traces left by the solid domains in Fig. \[fig4\](b) [@Note]. Finally, it is directly checked that viscous flow coincides with homogeneous parts of the images whereas plug-like profiles are associated with heterogeneities in the detected intensity. Therefore the complex temporal behavior of the velocity measurements in regime II is merely a consequence of the highly heterogeneous distribution of the solid pieces in the azimuthal direction. On longer time scales, it can be seen from the images that the fraction of solid pieces in the sample slowly decreases throughout regime II [@SupMat]. More quantitatively, Fig. \[fig5\] shows the temporal evolution of the fraction $\Phi$ of plug-like profiles during the whole experiment: $\Phi=0$ ($\Phi=1$) corresponds to a homogeneous fluid (solid) state. The slow relaxation of $\Phi(t)$ in regime II suggests that the solid domains get eroded by the surrounding sheared fluid. At the end of regime II, no solid part is left and the sample becomes optically homogeneous again (see inset of Fig. \[fig2\](d)). Velocity profiles in regime III are stationary and linear except for a small arrested band of width 120 $\mu$m close to the stator (see Fig. \[fig2\](d)). It is important to notice that this asymptotic flow behavior is [*not*]{} compatible with that obtained in the rough geometry (where half the gap is in a fluid state) even if one accounts for the difference in apparent shear rates. Finally, in the smooth geometry, we found that the scenario of fragmentation and fluidization described above occurs for imposed shear rates ranging from 10 to 65 s$^{-1}$. At lower shear rates, this scenario becomes hardly accessible as the time scale for fragmentation diverges and at larger shear rates, the sample becomes fluid within seconds and remains so. More quantitative results on the characteristic relaxation time for $\Phi(t)$ and its dependence on the imposed shear rate will be provided elsewhere [@Gibaud]. In summary we have shown that the yielding properties of a soft glassy material may depend drastically on boundary conditions. In the absence of wall slip, shear localization is observed, whose very slow dynamics can most probably be attributed to aging [@Rogers:2008]. When wall slip is allowed, our system, namely a Laponite suspension seeded with hollow glass spheres, shows a much richer behavior characterized by the breaking up of the initially solid material followed by a progressive fluidization through slow erosion of the solid fragments. This scenario leads us to revisit some recent observations where the lower temporal and/or spatial resolutions of the velocimetry techniques did not allow for definite conclusions. More precisely, puzzling oscillations of velocity measurements were reported in Laponite samples at low shear rates [@Moller:2008; @Ianni:2008]. Another example is provided by some glassy colloidal star polymers where intermittent jammed states were recorded together with strong wall slip [@Holmes:2004]. We believe that such dynamics are the signature of the fragmentation process unveiled in the present study which may thus be very general whenever apparent slip is involved. A deeper investigation of the interplay between the material microstructure and the wall surface properties, e.g. 
by varying the Laponite and/or the glass spheres concentration, will be performed to provide more insight on the generality of this behavior. Our main conclusion is that boundary conditions may have a drastic influence on the nature of the yielding transition. In the case of slippery walls, a possible interpretation is that breaking up in regime I starts due to some temporary sticking of the sample to the cell walls. This is supported by the fact that the fluctuations of slip velocities increase throughout regime I. During such sticking events, the solid may accumulate stresses that eventually overcome locally the yield stress and give birth to a fluidized zone which may subsequently erode the surrounding solid due to viscous stresses (regime II). This interpretation raises a few important issues. \(i) What parameters control the competition between fragmentation and erosion? For instance, we observed that the age of the sample has a strong impact on the relative durations of regimes I and II: samples older than the ones investigated here present a larger density of solid pieces at the end of regime I and a faster fluidization [@SupMat]. \(ii) In the case where fragmentation dominates, the typical size of solid pieces constitutes a [*macroscopic*]{} characteristic length $\lambda$ which is difficult to assess quantitatively with the present setup. Is $\lambda$ an intrinsic parameter of the material? Very recently, Goyon [*et al.*]{} [@Goyon:2008] evidenced the existence of a “cooperativity length” $\xi$ in confined flows of concentrated emulsions that characterizes the influence region of rearrangements. $\xi$ is non-zero only in the jammed material and falls in the 10 $\mu$m range ($\sim$ 5 droplet diameters). How may $\lambda$ be related to $\xi$? \(iii) What is the relevant combination between material properties and surface roughness that triggers stick rather than slip? Does the roughness characteristic length provide a cutoff for $\lambda$ [@Steinberger:2007; @Bocquet:2007]? All these questions not only urge for more experimental effort but also for theoretical and numerical approaches that could account for the interaction between a solid boundary and a jammed structure close to yielding. The authors wish to thank P. Jop, L. Petit, and N. Taberlet for their help on the system and on the experiment, as well as A. Piednoir and M. Monchanin for the surface roughness measurements. [40]{} A. Liu and S. R. Nagel, Nature [**396**]{}, 21 (1998). V. Trappe [*et al.*]{}, Nature [**411**]{}, 772 (2001). H. A. Barnes, J. Non-Newtonian Fluid Mech. [**56**]{}, 221 (1995); [**70**]{}, 1 (1997); [**81**]{}, 133 (1999). F. Pignon [*et al.*]{}, J. Rheol. [**40**]{}, 573 (1996). P. Coussot [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 218301 (2002). S. A. Rogers [*et al.*]{}, Phys. Rev. Lett. [**100**]{}, 128304 (2008). P. C. F. M[ø]{}ller [*et al.*]{}, Phys. Rev. E [**77**]{}, 041507 (2008). F. Varnik [*et al.*]{}, Phys. Rev. Lett. [**90**]{}, 095702 (2003). N. Xu [*et al.*]{}, Phys. Rev. Lett. [**94**]{}, 016001 (2005). L. B[é]{}cu [*et al.*]{}, Phys. Rev. Lett. [**96**]{}, 138302 (2006). L. Isa [*et al.*]{}, Phys. Rev. Lett. [**98**]{}, 198305 (2007). J. Goyon [*et al.*]{}, Nature [**454**]{}, 84 (2008). S. P. Meeker [*et al.*]{}, Phys. Rev. Lett. [**92**]{}, 198302 (2004). D. Bonn [*et al.*]{}, Phys. Rev. Lett. [**89**]{}, 015701 (2002). R. Di Leonardo [*et al.*]{}, Phys. Rev. E [**71**]{}, 011505 (2005). 
These estimates correspond to the average standard deviation of height profiles taken from various AFM images. S. Manneville [*et al.*]{}, Eur. Phys. J. AP [**28**]{}, 361 (2004). T. Gibaud [*et al.*]{}, in preparation. See EPAPS Document No. \[to be inserted\] for movies of sheared samples of different ages. Since the sample is semi-transparent and the rotor is made of Plexiglas, solid pieces are observed on both sides of the Couette cell when they pass in front of and behind the rotor. Therefore, stripes with respectively upward and downward slopes show up in the spatiotemporal diagram of Fig. \[fig4\](b). F. Ianni [*et al.*]{}, Phys. Rev. E [**77**]{}, 031406 (2008). W. H. Holmes [*et al.*]{}, J. Rheol. [**48**]{}, 1085 (2004). A. Steinberger [*et al.*]{}, Nature Materials [**6**]{}, 665 (2007). L. Bocquet and J.-L. Barrat, Soft Matter [**3**]{}, 685 (2007).
--- abstract: | An efficient algorithm is presented to compute the characteristic polynomial of a threshold graph. Threshold graphs were introduced by Chvátal and Hammer, as well as by Henderson and Zalcstein in 1977. A threshold graph is obtained from a one vertex graph by repeatedly adding either an isolated vertex or a dominating vertex, which is a vertex adjacent to all the other vertices. Threshold graphs are special kinds of cographs, which themselves are special kinds of graphs of clique-width 2. We obtain a running time of $O(n \log^2 n)$ for computing the characteristic polynomial, while the previously fastest algorithm ran in quadratic time. Algorithms, Threshold Graphs, Characteristic Polynomial author: - 'Martin Fürer[^1]' title: Efficient Computation of the Characteristic Polynomial of a Threshold Graph --- Introduction ============ The characteristic polynomial of a graph $G=(V,E)$ is defined as the characteristic polynomial of its adjacency matrix $A$, i.e. $\chi(G, \lambda) = \det(\lambda I - A)$. The characteristic polynomial is a graph invariant, i.e., it does not depend on the enumeration of the vertices of $G$. The complexity of computing the characteristic polynomial of a matrix is the same as that of matrix multiplication [@Keller-Gehrig85; @Pernet2007] (see [@BurgisserCS97 Chap.16]), currently $O(n^{2.376})$ [@CoppersmithW90]. For special classes of graphs, we expect to find faster algorithms for the characteristic polynomial. Indeed, for trees, a chain of improvements [@TinhoferS85; @Mohar89] resulted in an $O(n \log^2 n)$ time algorithm [@Furer2014]. The determinant and rank of the adjacency matrix of a tree can even be computed in linear time [@FrickeHJT96]. For threshold graphs (defined below), Jacobs et al. [@JacobsTT2014] have designed an $O(n^2)$ time algorithm to compute the characteristic polynomial. Here, we improve the running time to $O(n \log^2 n)$. As usual, we use the algebraic complexity measure, where every arithmetic operation counts as one step. Throughout this paper, $n=|V|$ is the number of vertices of $G$. Threshold graphs [@ChvatalH77; @HendersonZ77] are defined as follows. Given $n$ and a sequence $b=(b_1,\dots, b_{n-1}) \in \{0,1\}^{n-1}$, the threshold graph $G_b = (V, E)$ is defined by $V=[n]=\{1,\dots,n\}$, and for all $i<j$, $\{i,j\} \in E$ iff $b_i = 1$. Thus $G_b$ is constructed by an iterative process starting with the initially isolated vertex $n$. In step $j>1$, vertex $i=n-j+1$ is added. At this time, vertex $i$ is isolated if $b_i$ is 0, and vertex $i$ is adjacent to all other (already constructed) vertices $\{i+1,\dots,n\}$ if $b_i=1$. It follows immediately that $G_b$ is isomorphic to $G_{b'}$ iff $b=b'$. $G_b$ is connected if $b_1 = 1$, otherwise vertex $1$ is isolated. Usually, the order of the vertices being added is $1,2, \dots ,n$ instead of $n,n-1, \dots, 1$. We choose this unconventional order to simplify our main algorithm. Threshold graphs have been widely studied and have several applications from combinatorics to computer science and psychology [@MahadevP95]. In the next section, we study determinants of weighted threshold graph matrices, a class of matrices containing adjacency matrices of threshold graphs. In Section 3, we design the efficient algorithm to compute the characteristic polynomial of threshold graphs. We also look at its bit complexity in Section 4, and finish with open problems. 
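To make the construction concrete, the following short Python sketch (an illustration only, not part of the paper; the function name and the example sequence are ours) builds the adjacency matrix of $G_b$ directly from the definition.

```python
# A small illustrative sketch: build the adjacency matrix of the threshold
# graph G_b from b in {0,1}^{n-1}, following the definition that for i < j
# the edge {i,j} is present iff b_i = 1.
import numpy as np

def threshold_adjacency(b):
    n = len(b) + 1
    A = np.zeros((n, n), dtype=int)
    for i in range(1, n):          # vertices 1, ..., n-1 (1-based)
        if b[i - 1] == 1:          # vertex i is adjacent to all j > i
            A[i - 1, i:] = 1
            A[i:, i - 1] = 1
    return A

# Example (sequence chosen by us): b = (1, 0, 1) gives the edges
# {1,2}, {1,3}, {1,4}, {3,4}; in the creation order n, n-1, ..., 1,
# vertex 2 was added as an isolated vertex and vertex 1 as a dominating one.
print(threshold_adjacency([1, 0, 1]))
```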
The determinant of a weighted threshold graph matrix ==================================================== We are concerned with adjacency matrices of threshold graphs, but we consider a slightly more general class of matrices. We call them weighted threshold graph matrices. Let $M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n}$ be the matrix with the following entries. $${\left(M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n}\right)}_{ij} = \begin{cases} b_i & \mbox{if $i<j$} \\ b_j & \mbox{if $j<i$} \\ d_i & \mbox{if $i=j$} \end{cases}$$ Thus, the weighted threshold matrix for $(b_1 b_2 \dots b_{n-1};d_1 d_2 \dots d_n)$ looks like this. $$M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} = \begin{pmatrix} d_1 & b_1 & b_1 & \dots & b_1 & b_1 \\ b_1 & d_2 & b_2 & \dots & b_2 & b_2 \\ b_1 & b_2 & d_3 & \dots & b_3 & b_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ b_1 & b_2 & b_3 & \dots & d_{n-1} & b_{n-1} \\ b_1 & b_2 & b_3 & \dots & b_{n-1} & d_n \\ \end{pmatrix}$$ In order to compute the determinant of $M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n}$, we subtract the penultimate row from the last row and the penultimate column from the last column. In other words, we do a similarity transform with the following regular matrix $$P= \begin{pmatrix} 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & -1 \\ 0 & 0 & 0 & \dots & 0 & 1 \\ \end{pmatrix},$$ i.e., $$P_{ij} = \begin{cases} 1 & \mbox{if $i=j$} \\ -1 & \mbox{if $i=n$ and $j=n-1$} \\ 0 & \mbox{otherwise}. \end{cases}$$ The row and column operations applied to $M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n}$ produce the similar matrix $$P^T \, M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} \, P = \begin{pmatrix} d_1 & b_1 & b_1 & \dots & b_1 & 0 \\ b_1 & d_2 & b_2 & \dots & b_2 & 0 \\ b_1 & b_2 & d_3 & \dots & b_3 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ b_1 & b_2 & b_3 & \dots & d_{n-1} & b_{n-1}-d_{n-1} \\ 0 & 0 & 0 & \dots & b_{n-1}-d_{n-1} \, & \, d_n + d_{n-1} - 2b_{n-1} \\ \end{pmatrix}$$ Naturally, the determinant of $P$ is 1, implying $$\det \left( P^T \, M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} \, P \right) = \det \left( M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} \right).$$ Furthermore, we observe that $P^T \, M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} \, P$ has a very nice pattern. $$P^T \, M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} \, P = \left( \, \begin{array}{cccccc} \cline{1-5} \multicolumn{5}{|c|}{M_{b_1 b_2 \dots b_{n-2}}^{d_1 d_2 \dots d_{n-1}}} & \begin{array}{c} b_1 \\ b_2 \\ b_3 \\ \vdots \\ b_{n-1}-d_{n-1} \end{array} \\ \cline{1-5} \rule{0mm}{4mm} b_1 & b_2 & b_3 & \dots & b_{n-1}-d_{n-1} \, & \, d_n + d_{n-1} - 2b_{n-1} \\ \end{array} \right)$$ To further compute the determinant of $P^T \, M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} \, P$, we use Laplacian expansion by minors applied to the last row. 
$$\begin{aligned} \lefteqn{\det \left( M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} \right) = \det \left(P^T M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n} P \right)} \\ & = & (d_n + d_{n-1} - 2 b_{n-1}) \det \left( M_{b_1 b_2 \dots b_{n-2}}^{d_1 d_2 \dots d_{n-1}} \right) - (b_{n-1} - d_{n-1})^2 \det \left( M_{b_1 b_2 \dots b_{n-3}}^{d_1 d_2 \dots d_{n-2}} \right) \\\end{aligned}$$ By defining the determinant of the $0 \times 0$ matrix $M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n}$ with $n=0$ to be 1, and checking the determinants for $n=1$ and $n=2$ directly, we obtain the following result. \[thm:rec\] $D_n = \det \left( M_{b_1 b_2 \dots b_{n-1}}^{d_1 d_2 \dots d_n}\right) $ is determined by the recurrence equation $$D_n = \begin{cases} 1 & \mbox{if $n=0$} \\ d_1 & \mbox{if $n=1$} \\ (d_n + d_{n-1} - 2 b_{n-1}) D_{n-1} - (b_{n-1} - d_{n-1})^2 D_{n-2} & \mbox{if $n \geq 2$} \end{cases}$$ This has an immediate implication, as we assume every arithmetic operation takes only 1 step. The determinant of an $n \times n$ weighted threshold graph matrix can be computed in time $O(n)$. Every step of the recurrence takes a constant number of arithmetic operations. For arbitrary matrices, the tasks of computing matrix products, matrix inverses, and determinants are all equivalent [@BurgisserCS97 Chap.16], currently $O(n^{2.376})$ [@CoppersmithW90]. For weighted threshold graph matrices, they all seem to be different. We have just seen that the determinant can be computed in linear time, which is optimal, as this time is already needed to read the input. The same lower bound holds for computing the characteristic polynomial, and we will show an $O(n \log^2 n)$ algorithm. It is not hard to see that the multiplication of weighted threshold graph matrices can be done in quadratic time. This is again optimal, because the product is no longer a threshold graph matrix, and its output requires quadratic time. Computation of the Characteristic Polynomial of a Threshold Graph ================================================================= The adjacency matrix $A$ of the $n$-vertex threshold graph $G$ defined by the sequence $(b_1, \dots , b_{n-1})$ is the matrix $M_{b_1 b_2 \dots b_{n-1}}^{0 \, 0 \dots 0}$, and the characteristic polynomial of this threshold graph is $$\chi(G,\lambda) = \det(\lambda I - A) = \det \left( M_{-b_1 -b_2 \dots -b_{n-1}}^{\lambda \, \lambda \dots \lambda}\right).$$ This immediately implies that any value of the characteristic polynomial can be computed in linear time. The characteristic polynomial itself can be computed by the recurrence equation of Theorem \[thm:rec\]. Here all $d_i = \lambda$, and $D_n$, as the characteristic polynomial of an $n$-vertex graph, obviously is a polynomial of degree $n$ in $\lambda$. Now, the computation of $D_n$ from $D_{n-1}$ and $D_{n-2}$ according to the recurrence equation is a multiplication of polynomials. It takes time $O(n)$, as one factor is always of constant degree. The resulting total time is quadratic. The same quadratic time is achieved, when we compute the characteristic polynomial $\chi(G,\lambda)$ for $n$ different values of $\lambda$ and interpolate to obtain the polynomial $\chi(G,\lambda)$. We want to do better. Therefore, we write the recurrence equation of Theorem \[thm:rec\] in matrix form. 
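Before doing so, we note as an aside that the scalar recurrence already gives a simple linear-time evaluator of $\chi(G,\lambda)$ at any fixed numerical value of $\lambda$. The following Python sketch (an illustration only, not part of the algorithm developed below) applies Theorem \[thm:rec\] to $\lambda I - A$, i.e. with all $d_i=\lambda$ and off-diagonal generators $-b_i$.

```python
# A minimal sketch: evaluate chi(G, lam) for the threshold graph G_b at a
# numerical value lam in O(n) time, via the recurrence of Theorem [thm:rec]
# applied to lam*I - A (diagonal entries lam, off-diagonal generators -b_i).
def char_poly_value(b, lam):
    d_prev2, d_prev = 1.0, float(lam)        # D_0 = 1, D_1 = lam
    for k in range(2, len(b) + 2):
        off = -b[k - 2]                      # entry -b_{k-1} of lam*I - A
        d_new = (2 * lam - 2 * off) * d_prev - (off - lam) ** 2 * d_prev2
        d_prev2, d_prev = d_prev, d_new
    return d_prev

# Example (ours): b = (1, 1) is the triangle K_3 with eigenvalues 2, -1, -1,
# so chi(K_3, 2) evaluates to 0.
print(char_poly_value([1, 1], 2.0))          # -> 0.0
```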
$$\begin{pmatrix} D_n \\ D_{n-1} \\ \end{pmatrix} = \begin{pmatrix} d_n + d_{n-1} - 2 b_{n-1} \,\, & \,\, -(b_{n-1} - d_{n-1})^2 \\ 1 & 0 \\ \end{pmatrix} \begin{pmatrix} D_{n-1} \\ D_{n-2} \\ \end{pmatrix}$$ Noticing that $D_0 =1$ and $D_1 = \lambda$, and all $d_i = \lambda$, we obtain the following matrix recurrence immediately. \[thm:Dn\] For $$B_n = \begin{pmatrix} 2(\lambda - b_{n}) \,\, & \,\, -(b_{n} - \lambda)^2 \\ 1 & 0 \\ \end{pmatrix},$$ we have $$\begin{pmatrix} D_n \\ D_{n-1} \\ \end{pmatrix} = B_{n-1} B_{n-2} \cdots B_1 \begin{pmatrix} \lambda \\ 1 \\ \end{pmatrix}$$ This results in a much faster way to compute the characteristic polynomial $\chi(G,\lambda)$. The characteristic polynomial $\chi(G,\lambda)$ of a threshold graph $G$ with $n$ vertices can be computed in time $O(n \log^2 n)$. For every $i$, all the entries in the $2\times 2$ matrix $B_i$ are polynomials in $\lambda$ of degree at most 2. Therefore, products of any $k$ such factors have entries which are polynomials of degree at most $2k$. To be more precise, actually the degree bound is $k$, because by induction on $k$, one can easily see that the degrees of the $i,j$-entry of such a matrix is at most $$\begin{aligned} k & \mbox{for $i=1$ and $j=1$,} \\ k+1 & \mbox{for $i=1$ and $j=2$,} \\ k-1 & \mbox{for $i=2$ and $j=1$,} \\ k & \mbox{for $i=2$ and $j=2$,} \\\end{aligned}$$ But the bound of $2k$ is sufficient for our purposes. W.l.o.g., we may assume that $n-1$ (the number of factors) is a power of 2. Otherwise, we could fill up with unit matrices. Now the product $B_{n-1} B_{n-2} \cdots B_1$ is computed in $\log (n-1)$ rounds of pairwise multiplication to reduce the number of factors by two each time. In the $r$th round, we have $n 2^{-r}$ pairs of matrices with entries of degree at most $2^r$, requiring $O(n2^{-r})$ multiplications of polynomials of degree at most $2^r$. With FFT (Fast Fourier Transform) this can be done in time $O(n r)$. Summing over all rounds $r$ results in a running time of $O(n \log^2 n)$. Omitting the simplification of $d_i = \lambda$ in Theorem \[thm:Dn\], we see immediately, that also the characteristic polynomial of a weighted threshold graph matrix can be computed in the same asymptotic time of $O(n \log^2 n)$. Complexity in the Bit Model =========================== By definition, the characteristic polynomial of an $n$-vertex graph can be viewed as a sum of $n!$ monomials with coefficients form $\{-1,0,1\}$. Thus all coefficients of the characteristic polynomial have absolute value at most $n!$, and can therefore be represented by binary numbers of length $O(n \log n)$. The coefficients can indeed be so big. An example is the constant term in the characteristic polynomial of the clique $K_n$. Its absolute value is the number of derangements (permutations without fixed points), which asymptotically converges to $n!/e$. With such long coefficients, the usual assumption of arithmetic operations in linear time is actually unrealistic for large $n$. Therefore, the bit model might be more useful. We can use the Turing machine time, because our algorithm is sufficiently uniform. No Boolean circuit is known to compute such things with asymptotically fewer operations than the number of steps of a Turing machine. We can use the fast $m \log m 2^{O(\log^* m)}$ integer multiplication algorithm [@Furer09] (where $m$ is the length of the factors) to compute the FFT for the polynomials. 
A direct implementation, just using fast integer multiplication everywhere, results in time $$(n r) (2^r r^2 2^{O(\log^* r)}) = n 2^r r^3 2^{O(\log^* r)}$$ for the $r$th round where $O(n 2^{-r})$ pairs of polynomials of degree $O(2^r)$ are multiplied. The coefficients of these polynomials have length $O(2^r r)$. As the coefficients and the degrees of the polynomials increase at least geometrically, only the last round with $r = \log n$ counts asymptotically. The resulting time bound is $O(n^2 \log^3 n 2^{O(\log^* n)})$. Using Schönhage’s [@Schonhage1982] idea of encoding numerical polynomials into integers in order to do polynomial multiplication, a speed-up is possible. Again only the last round matters. Here a constant number of polynomials of degree $O(n)$ with coefficients of length $O(n \log n)$ are multiplied. For this purpose, each polynomial is encoded into a number of length $O(n^2 \log n)$, resulting in a computation time of $$n^2 \log^2 n \, 2^{O(\log^* n)}.$$ Actually, because the lengths of coefficients are not smaller than the degree of the polynomials, no encoding of polynomials into numbers is required for this speed-up. In this case, one can do the polynomial multiplication in a polynomial ring over Fermat numbers as in Schönhage and Strassen [@SchonhageS1971]. Then, during the Fourier transforms all multiplications are just shifts. Fast integer multiplication is only used for the multiplication of values. This results in the same asymptotic $n^2 \log^2 n \, 2^{O(\log^* n)}$ computation time with a better constant factor. Open Problems ============= We have improved the time to compute the characteristic polynomial of a threshold graph from quadratic to almost linear (in the algebraic model). The question remains whether another factor of $\log n$ can be removed. More interesting is the question whether similarly efficient algorithms are possible for richer classes of graphs. Of particular interest are larger classes of graphs containing the threshold graphs, like cographs, graphs of clique-width 2, graphs of bounded clique-width, or even perfect graphs. [10]{} \[1\][`#1`]{} B[ü]{}rgisser, P., Clausen, M., Shokrollahi, M.A.: Algebraic complexity theory, Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\], vol. 315. Springer-Verlag, Berlin (1997) Chv[á]{}tal, V., Hammer, P.L.: Aggregation of inequalities in integer programming. In: Studies in Integer Programming (Proc. Worksh. Bonn 1975). Annals of Discrete Mathematics, vol. 1, pp. 145–162. North-Holland, Amsterdam (1977) Coppersmith, D., Winograd, S.: Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation 9(3), 251–280 (1990) Fricke, G.H., Hedetniemi, S., Jacobs, D.P., Trevisan, V.: Reducing the adjacency matrix of a tree. Electron. J. Linear Algebra 1, 34–43 (1996) F[ü]{}rer, M.: Faster integer multiplication. SIAM Journal on Computing 39(3), 979–1005 (2009) F[ü]{}rer, M.: Efficient computation of the characteristic polynomial of a tree and related tasks. Algorithmica 68(3), 626–642 (2014), <http://dx.doi.org/10.1007/s00453-012-9688-5> Henderson, P.B., Zalcstein, Y.: A graph-theoretic characterization of the pv[\_]{}chunk class of synchronizing primitives. [SIAM]{} J. Comput. 6(1), 88–108 (1977), <http://dx.doi.org/10.1137/0206008> , D.P., [Trevisan]{}, V., [Tura]{}, F.: Computing the characteristic polynomial of threshold graphs. 
Journal of Graph Algorithms and Applications 18(5), 709–719 (2014) Keller-Gehrig, W.: Fast algorithms for the characteristic polynomial. Theor. Comput. Sci. 36(2,3), 309–317 (1985) Mahadev, N.V.R., Peled, U.N.: Threshold graphs and related topics, Annals of Discrete Mathematics, vol. 56. Elsevier Science Publishers B.V. (North Holland), Amsterdam-Lausanne-New York-Oxford-Shannon-Tokyo (1995) Mohar, B.: Computing the characteristic polynomial of a tree. J. Math. Chem. 3(4), 403–406 (1989) Pernet, C., Storjohann, A.: Faster algorithms for the characteristic polynomial. In: Brown, C.W. (ed.) Proceedings of the 2007 International Symposium on Symbolic and Algebraic Computation, July 29–August 1, 2007, University of Waterloo, Waterloo, Ontario, Canada. pp. 307–314. ACM Press (2007) Sch[ö]{}nhage, A.: Asymptotically fast algorithms for the numerical multiplication and division of polynomials with complex coefficients. In: Computer Algebra, [EUROCAM]{} ’82, European Computer Algebra Conference, Marseille, France, 5-7 April, 1982, Proceedings. Lecture Notes in Computer Science, vol. 144, pp. 3–15. Springer (1982) Sch[ö]{}nhage, A., Strassen, V.: Schnelle [M]{}ultiplikation grosser [Z]{}ahlen. Computing 7, 281–292 (1971) Tinhofer, G., Schreck, H.: Computing the characteristic polynomial of a tree. Computing 35(2), 113–125 (1985) [^1]: Research supported in part by NSF Grant CCF-1320814.
--- abstract: 'We investigate the properties of an $N$-site tight-binding lattice with periodic boundary condition (PBC) in the presence of a pair of gain and loss impurities $\pm i\gamma$, and two tunneling amplitudes $t_0,t_b$ that are constant along the two paths that connect them. We show that the parity and time-reversal (${{\mathcal P}}{{\mathcal T}}$)-symmetric phase of the lattice with PBC is robust, insensitive to the distance between the impurities, and that the critical impurity strength for ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking is given by $\gamma_{PT}=|t_0-t_b|$. We study the time-evolution of a typical wave packet, initially localized on a single site, across the ${{\mathcal P}}{{\mathcal T}}$-symmetric phase boundary. We find that it acquires chirality with increasing $\gamma$, and the chirality reaches a universal maximum value at the threshold, $\gamma=\gamma_{PT}$, irrespective of the initial location of the wave packet or the lattice parameters. Our results imply that ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking on a lattice with PBC has consequences that have no counterpart in open chains.' author: - 'Derek D. Scott' - 'Yogesh N. Joglekar' title: '${{\mathcal P}}{{\mathcal T}}$-symmetry breaking and universal chirality in a ${{\mathcal P}}{{\mathcal T}}$-symmetric ring' --- [*Introduction.*]{} Recently, lattice models with a non-Hermitian Hamiltonian that is parity and time-reversal (${{\mathcal P}}{{\mathcal T}}$)-symmetric, and their experimental realizations in optical waveguides [@expt1; @expt2] and coupled resistor-inductor-capacitor (RLC) circuits with balanced loss and gain [@expt3], have become a focal point of research. Properties of continuum, non-Hermitian, ${{\mathcal P}}{{\mathcal T}}$-symmetric Hamiltonians were first investigated by Bender and co-workers more than a decade ago [@bender1; @bender2]; they showed that although such a Hamiltonian is not Hermitian, it has purely real eigenvalues over a range of parameters. Since then, significant theoretical progress on non-Hermitian, ${{\mathcal P}}{{\mathcal T}}$-symmetric Hamiltonians in continuum theories and lattice models has occurred [@bender3; @mostafa; @znojil; @bendix; @spin]. Traditionally, the region of parameters where the spectrum of the Hamiltonian is real and its eigenfunctions are simultaneous eigenfunctions of the combined ${{\mathcal P}}{{\mathcal T}}$-operation is defined as the ${{\mathcal P}}{{\mathcal T}}$-symmetric region, and the emergence of first pair of complex (conjugate) eigenvalues is defined as ${{\mathcal P}}{{\mathcal T}}$ symmetry breaking. During the past two years, one-dimensional, tight-binding lattice models with ${{\mathcal P}}{{\mathcal T}}$-symmetric, position-dependent tunneling amplitudes and on-site potentials have been extensively investigated [@song0; @song1; @longhi; @mark]. These investigations have shown that although the ${{\mathcal P}}{{\mathcal T}}$-symmetric phase is exponentially fragile in a constant-tunneling-model [@bendix], it can be strengthened by choosing appropriate tunneling profiles [@localpt] and that the degree of ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking in such systems can be tuned by the tunneling profile [@derek]. The equivalence between a ${{\mathcal P}}{{\mathcal T}}$-symmetric, non-Hermitian system and a Hermitian system, too, has been demonstrated in limited cases [@song1; @song2; @ya]. 
In the past decade, evanescently coupled optical waveguides [@christo] have provided a unique realization of a one dimensional lattice with tunable tunneling amplitudes [@gf], disorder [@berg1], and non-Hermitian, on-site, impurity potentials [@makris]. Today, these systems are a promising candidate for experimental investigations of ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking phenomena in lattice models [@expt1; @expt2]. In these systems, the tunneling is controlled by the width of the barrier between adjacent waveguides and the complex, on-site potential is determined by the local refractive index. The number of waveguides in a typical array is $N\sim 10-100$, and the experimental realizations correspond to a lattice with open boundary condition. Therefore, most theoretical investigations have focused on truncated open lattices, but properties of chains with periodic boundary conditions (PBC) have not been explored; for a notable exception, see [@znojilring]. We remind the reader that although the differences between properties of open chains and lattices with PBC vanish as $N\rightarrow\infty$, for small $N\lesssim 100$, relevant here, they can be significant. In this paper, we investigate an $N$-site lattice with PBC in the presence of a pair of non-Hermitian impurities $\pm i\gamma$. Note that on a lattice with PBC, such a pair represents ${{\mathcal P}}{{\mathcal T}}$-symmetric impurities irrespective of the individual impurity locations; in other words, for given impurity locations, a “parity" operator can be defined such that the impurity Hamiltonian becomes ${{\mathcal P}}{{\mathcal T}}$-symmetric. The lattice is characterized by two tunneling amplitudes that are uniform along the two paths that connect the impurities, but may be different from each other. Our salient results are as follows: i) The ${{\mathcal P}}{{\mathcal T}}$-symmetric phase is robust [@caveat] and insensitive to the distance between the non-Hermitian impurities. ii) The critical impurity strength $\gamma_{PT}$ at which the ${{\mathcal P}}{{\mathcal T}}$-symmetry breaks is equal to the difference between the tunneling amplitudes along the two paths, and is thus widely tunable. iii) As $\gamma$ increases from zero, a time-evolved wave packet acquires chirality that is independent of the initial wave packet location or the distance between the impurities. iv) This chirality, quantified by a steady-state, dimensionless momentum $p(\gamma)$ reaches a universal maximum at the ${{\mathcal P}}{{\mathcal T}}$-breaking point, $p(\gamma_{PT})=1$. For $\gamma\geq\gamma_{PT}$, although the norm of the wave function, or equivalently, the net intensity increases exponentially with time [@kottospower], the chirality is reduced. Our results predict that ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking in a lattice system with PBC is accompanied by novel signatures that have no counterpart in open chains. [*Tight-binding model.*]{} We start with a one-dimensional chain with $N$ lattice sites and periodic boundary conditions. Without loss of generality, we take the gain and loss impurities $(i\gamma,-i\gamma)$ at positions $(1,d)$ where $2\leq d\leq N$. 
The Hamiltonian for this chain is given by $\hat{H}_{PT}=\hat{H}_0 +\hat{V}$ where the Hermitian tunneling Hamiltonian $\hat{H}_0$ is given by $$\begin{aligned} \label{eq:h0} \hat{H}_0 & = & -\sum_{i=1}^{N} t(i)\left(a^{\dagger}_{i+1} a_i + a^{\dagger}_i a_{i+1}\right),\\ t(i) & = & \left\{ \begin{array}{cc} t_b >0 & 1\leq i < d,\\ t_0 >0 & d\leq i\leq N.\\ \end{array}\right.\end{aligned}$$ Here $t(j)$ is the tunneling amplitude between adjacent sites $j$ and $j+1$, $a^\dagger_j$ ($a_j$) is the creation (annihilation) operator for a single-particle state $|j\rangle$ localized at site $j$, and periodic boundary conditions imply that $a^\dagger_{N+1}=a^\dagger_1$. The ${{\mathcal P}}{{\mathcal T}}$-symmetric, non-Hermitian potential is given by $$\label{eq:v} \hat{V}=i\gamma\left(a^\dagger_1a_1-a^{\dagger}_d a_d\right)\neq \hat{V}^\dagger.$$ When $\gamma=0$, the energy spectrum of the Hamiltonian $\hat{H}_{PT}$ is given by $E(k,k')=-2t_0\cos(k)=-2t_b\cos(k')$, and is bounded by $2\max(t_0,t_b)$. When $t_0\geq t_b$, the $N$ eigenmomenta correspond to purely real $k$ values and $k'$ values that are either real or purely imaginary; when $t_0\leq t_b$, the situation is reversed [@jake]. When $\gamma\neq 0$, since the tunneling amplitudes along the two paths between the source and sink are constant, an arbitrary eigenfunction $|\psi\rangle=\sum_j f_j|j\rangle$ with energy $E(k,k')$ can be expressed using the Bethe ansatz as [@song0; @jake] $$f_n=\left\{ \begin{array}{cc} A\sin(k'n)+B\cos(k'n) & 1\leq n\leq d,\\ P\sin(kn)+Q\cos(kn) & d+1\leq n\leq N.\\ \end{array} \right.$$ By considering the eigenvalue equation $\hat{H}_{PT}|\psi\rangle=E|\psi\rangle$ near impurity locations, we find that the dimensionless quasimomentum $k$ (or equivalently $k'$) obeys the following equation [@jake] $$\begin{aligned} M(k,k') & \equiv & t_0^2\sin[k'(d-1)]\sin[k(N-d-1)]\nonumber\\ & +& t_b^2\sin[k'(d+1)]\sin[k(N-d+1)]\nonumber\\ & - & 2t_bt_0\left(\sin(k'd)\sin[k(N-d)]+\sin(k')\sin(k)\right)\nonumber\\ \label{eq:m} & + &\gamma^2\sin[k'(d-1)]\sin[k(N-d+1)]=0.\end{aligned}$$ When $t_0\geq t_b$, the real quasimomenta $k$ are constrained by $-\pi<k\leq\pi$, and states with quasimomenta $k$ and $-k$ are degenerate except when $k=0,\pi$, provided the corresponding $k'$ is purely real. If $k_0$ is a real quasimomentum, then $\pi-k_0$ is also a quasimomentum if and only if $N$ is even; when $t_0\leq t_b$, the situation is reversed. Thus, in contrast with an open chain, a chain with PBC has a particle-hole symmetric spectrum if and only if $N$ is even. ![(color online) Typical ${{\mathcal P}}{{\mathcal T}}$ phase diagram of a lattice with PBC. These results are for a chain with $N=40, t_0=1$ and impurities $\pm i\gamma$ at sites $(1,d)$ for different values of the sink-position $d$ and different $t_b\leq t_0$; we get identical results for odd $N$ or $t_b> t_0$. Remarkably, the critical impurity strength $\gamma_{PT}(\mu)=|t_0-t_b|$ is independent of the fractional distance $\mu$ between the impurities. Thus, the ${{\mathcal P}}{{\mathcal T}}$-symmetric region in a chain with PBC is more tunable than its open chain counterpart.[]{data-label="fig:ptphase"}](ptphasedirect.pdf){width="\columnwidth"} Figure \[fig:ptphase\] shows the numerically obtained typical phase diagram for the critical impurity strength $\gamma_{PT}$ as a function of fractional distance between impurities $\mu=(d-1)/N$. 
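The boundary shown in Fig. \[fig:ptphase\] can be reproduced by direct diagonalization of $\hat{H}_{PT}$; the following Python sketch is our own illustration, not the authors' code, and the scanned $\gamma$ grid, the tolerance, and the example parameters are assumed choices.

```python
# A minimal numerical sketch: build the N x N non-Hermitian Hamiltonian H_PT
# of Eqs. (eq:h0)-(eq:v) on a ring with PBC (gain +i*gamma at site 1, loss
# -i*gamma at site d, tunneling t_b on bonds 1..d-1 and t_0 on bonds d..N),
# and locate gamma_PT as the smallest gamma with complex eigenvalues.
import numpy as np

def build_H(N, d, t0, tb, gamma):
    H = np.zeros((N, N), dtype=complex)
    for i in range(N):                     # 0-based index i <-> site i+1
        t = tb if (i + 1) < d else t0      # t(i) of Eq. (eq:h0)
        j = (i + 1) % N                    # periodic boundary condition
        H[i, j] = H[j, i] = -t
    H[0, 0] = 1j * gamma                   # gain impurity at site 1
    H[d - 1, d - 1] = -1j * gamma          # loss impurity at site d
    return H

def gamma_PT(N, d, t0, tb, gammas, tol=1e-6):
    for g in gammas:
        ev = np.linalg.eigvals(build_H(N, d, t0, tb, g))
        if np.max(np.abs(ev.imag)) > tol:  # first complex-conjugate pair
            return g
    return None

# Example (parameters in the spirit of Fig. [fig:ptphase]; chosen by us):
# N=40, t0=1, tb=0.5 should give gamma_PT close to |t0 - tb| = 0.5.
print(gamma_PT(40, 20, 1.0, 0.5, np.linspace(0.01, 1.0, 100)))
```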
Note that since there are two paths from the source $i\gamma$ to the sink $-i\gamma$, we restrict the fractional distance to $0<\mu\leq 1/2$. These results are obtained for a chain with $N=40, t_0=1$ and $t_b\leq t_0$. We obtain identical results for odd $N$ and different values of tunneling amplitudes including $t_b>t_0$, when the impurity strength $\gamma$ is measured in the units of $\max(t_0,t_b)$. We also find that the weak $\mu$-dependence of the critical impurity strength $\gamma_{PT}/\max(t_0,t_b)$ vanishes as $N$ increases. This remarkable phase diagram predicts that in a chain with PBC, [*the critical impurity strength $\gamma_{PT}(\mu)$ is independent of the inter-impurity distance $\mu$, and is given by*]{} $\gamma_{PT}=|t_b-t_0|$. It implies that the fragile ${{\mathcal P}}{{\mathcal T}}$-symmetric phase in an open chain with constant tunneling [@bendix; @mark] is stabilized and strengthened by periodic boundary condition, and that the critical impurity strength $\gamma_{PT}$ can be easily tuned by an appropriate choice of tunneling amplitudes. To gain insight into the insensitivity of $\gamma_{PT}(\mu)$ to the inter-impurity distance $\mu$ for large $N\gg 1$, let us consider Eq.(\[eq:m\]) in the limit $1\ll d\ll N$, $$\label{eq:newm} \frac{\sin(k'\mu N)\sin\left[k(1-\mu)N\right]}{\sin(k')\sin(k)}=\frac{2t_0t_b}{\left[(t_0-t_b)^2+\gamma^2\right]}.$$ Graphical solutions of Eq.(\[eq:newm\]) show that as $\gamma$ increases, irrespective of $\mu$, two adjacent quasimomenta near $k\sim \pi/2$ become degenerate and then complex, leading to the ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking. ![image](chiralreal.pdf){width="9cm"} ![image](chiralmomentum.pdf){width="9cm"} [*Wave function dynamics.*]{} Now we consider the real- and reciprocal-space time evolution of a wave function that is initially localized on a single site. In an optical-waveguide realization of a ${{\mathcal P}}{{\mathcal T}}$-symmetric system, this initial state is most easily achievable. For an arbitrary, normalized initial state $|\psi(0)\rangle$, the wave function at time $t$ is given by $|\psi(t)\rangle=\exp(-i \hat{H}_{PT}t/\hbar)|\psi(0)\rangle$ where $\hbar$ is the scaled Planck’s constant and the time evolution operator $\exp(-i\hat{H}_{PT}t/\hbar)$ is not unitary since the Hamiltonian $\hat{H}_{PT}$ is not Hermitian. We denote the site- and time-dependent real-space intensity by $I_R(j,t)=|\langle j|\psi(t)\rangle|^2$ where $j=1,\ldots, N$ denotes the site index, and use $I_M(u,t)=|\langle u |\psi(t)\rangle|^2$ to denote the reciprocal-space intensity where the discrete index $u=1,\ldots,N$ corresponds to reciprocal-space index $p_u=\pi(2u/N-1)$ with $-\pi<p_u\leq\pi$. The left-hand column in Fig. \[fig:psi\] shows the typical evolution of real-space intensity with increasing impurity strength. These results are for a lattice with $N=32, t_0=0.5, t_b=1.0$, the source and sink at sites 1 and $d=16$ respectively, and the initial wave packet localized on site $m_0=8$. The vertical axis in each panel indicates the site index, and the horizontal axis denotes time measured in units of $2\pi\hbar/\max(t_0,t_b)$. When $\gamma=0$ (top panel), the wave packet diffuses, suffering a change in speed at the impurity locations consistent with the change in the tunneling amplitude. As $\gamma$ increases towards $\gamma_{PT}$ (center panel) and beyond (bottom panel), the wave packet evolution acquires chirality and the overall intensity also increases from its $\gamma=0$ value. 
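The real-space maps of Fig. \[fig:psi\] follow directly from the non-unitary propagator; a short sketch (ours, with $\hbar=1$ and an arbitrarily chosen $\gamma$ below the threshold) is given below.

```python
# A minimal sketch of the non-unitary evolution behind Fig. [fig:psi]: a wave
# packet localized on site m0 is propagated with exp(-i H_PT t) and the
# site-resolved intensity I_R(j, t) = |<j|psi(t)>|^2 is recorded.
import numpy as np
from scipy.linalg import expm

# N, d, t0, tb, m0 follow the text; gamma is our choice (below gamma_PT = 0.5).
N, d, t0, tb, gamma, m0 = 32, 16, 0.5, 1.0, 0.4, 8
H = np.zeros((N, N), dtype=complex)
for i in range(N):
    t = tb if (i + 1) < d else t0
    j = (i + 1) % N
    H[i, j] = H[j, i] = -t
H[0, 0], H[d - 1, d - 1] = 1j * gamma, -1j * gamma

psi0 = np.zeros(N, dtype=complex)
psi0[m0 - 1] = 1.0                                  # localized initial state
times = np.linspace(0.0, 50.0, 200)                 # units with hbar = 1 (ours)
I_R = np.array([np.abs(expm(-1j * H * t) @ psi0) ** 2 for t in times])
print(I_R.shape)                                    # (time, site) intensity map
```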
In an open chain, there is only one path from the source to the sink; in contrast, a chain with PBC has two such paths. [*Physically, the chirality implies that, on average, the path with the higher tunneling amplitude is preferred over the other*]{}. Note that chirality does not represent a preferential flow from source to the sink - the wave packet motion continues past the sink, to the source again - but rather the handedness of the motion. We also emphasize that when $\gamma=0$ (top panel), on average, both paths are equally preferred. These results are robust, independent of the initial location $m_0$ of the wave packet or the distance $(d-1)$ between the impurities. The right-hand column in Fig. \[fig:psi\] provides a complementary view of chirality development, with corresponding evolution of the reciprocal-space intensity $I_M(u,t)$. The vertical axis in each panel indicates discrete index $1\leq u \leq N$ that translates into $p_u\in(-\pi,\pi]$. The top panel shows that when $\gamma=0$, as the wave packet diffuses, its average momentum is zero. As $\gamma$ approaches $\gamma_{PT}$ (center panel) and beyond (bottom panel), we see that the reciprocal-space intensity develops a pronounced peak at a finite, positive value, consistent with the chirality seen in the left-hand column of the figure. In addition, results at longer times $T\sim 10N\gg N$ show that the reciprocal-space intensity distribution for $\gamma>0$ reaches a steady-state. To quantify this chirality and to dissociate it from the exponentially increasing net intensity $I_R(t)=\sum_j I_R(j,t)$ that occurs for $\gamma>\gamma_{PT}$ [@kottospower], we consider the dimensionless, discrete momentum operator on the lattice with PBC, $$\langle\phi(t)|\hat{p}|\psi(t)\rangle=-\frac{i}{2}\sum_{j=1}^N\frac{(g^*_{j+1}+g^*_j)(f_{j+1}-f_j)}{\sqrt{\langle\phi|\phi\rangle\langle\psi|\psi\rangle}},$$ where $|\phi(t)\rangle=\sum_j g_j(t)|j\rangle$, $|\psi(t)\rangle=\sum_j f_j(t)|j\rangle$, and the normalization factor in the denominator is necessary due to the non-unitary time evolution. As is hinted by the right-hand column in Fig. \[fig:psi\], the momentum expectation value $p_\psi(t)=\langle \psi(t)|\hat{p}|\psi(t)\rangle$ in a given state oscillates about zero when $\gamma=0$ and when $\gamma>0$, reaches a steady-state value $p(\gamma)\equiv\int_0^T p(t')dt'/T$. Note that since $|p_\psi(t)|\leq 1$ for any initial state and time, the magnitude of $p(\gamma)$ is bounded by unity. ![(color online) Dependence of chirality, quantified by the steady-state momentum $p(\gamma)$, as a function of impurity strength $\gamma$ for different locations $d$ of the loss impurity $-i\gamma$; the gain impurity $i\gamma$ is located on site 1. The initial wave function is localized on site $m_0=10$. The momentum $p(\gamma)$ varies linearly with $\gamma$ at small $\gamma/\gamma_{PT}\ll 1$. It reaches a universal, maximum value, $p=1$, at the ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking threshold. For $\gamma/\gamma_{PT}\geq 1$, the steady-state momentum decreases, although the net intensity increases with time. These results are independent of the lattice parameters and the initial wave function.[]{data-label="fig:pgamma"}](pvsgamma.pdf){width="\columnwidth"} Figure  \[fig:pgamma\] shows the evolution of the dimensionless, steady-state momentum $p(\gamma)$ across the ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking threshold for different locations of the loss impurity. 
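The quantity plotted there is obtained by time-averaging the momentum expectation value defined above; a short sketch (ours) of its evaluation for a single state is:

```python
# A short sketch of the dimensionless lattice momentum used to quantify
# chirality: p_psi(t) = <psi|p|psi>/<psi|psi> with periodic wrap-around,
# which reduces to sum_j Im(f_j^* f_{j+1}) divided by the norm.
import numpy as np

def momentum_expectation(f):
    """f: complex amplitudes f_j of |psi(t)>, length N, PBC assumed."""
    f_next = np.roll(f, -1)                    # f_{j+1}, with site N+1 -> 1
    val = -0.5j * np.sum((np.conj(f_next) + np.conj(f)) * (f_next - f))
    return val.real / np.vdot(f, f).real       # normalize (non-unitary evolution)

# p(gamma) is then the long-time average of momentum_expectation(psi(t)) over
# states psi(t) propagated as in the previous sketch.
```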
These results are for a lattice with $N=32,t_0=0.5,t_b=1$, and thus, $\gamma_{PT}/\max(t_0,t_b)=0.5$. The initial location of the wave packet is $m_0=10$, and we have used normalized time $T=500$ to numerically obtain the average. These results are generic and qualitatively independent of the initial location $m_0$ or the lattice parameters. When $\gamma=0$ there is no chirality to the wave packet evolution and $p(\gamma)=0$. At small $\gamma$, we see that the $p(\gamma)$ increases linearly with $\gamma$, as is expected from a first-order perturbation theory calculation; the slope of this line decreases monotonically with $d$. [*We find that the steady-state momentum reaches a universal value, $p=1$, at the ${{\mathcal P}}{{\mathcal T}}$-breaking point and decreases linearly for*]{} $\gamma\geq \gamma_{PT}$. We emphasize that although the total intensity increases exponentially with time, the chirality, which captures the handedness of motion of the wave packet, decreases for $\gamma>\gamma_{PT}$. [*Discussion.*]{} In this paper, we have investigated the ${{\mathcal P}}{{\mathcal T}}$ phase diagram and signatures of ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking in a lattice with periodic boundary conditions. We have presented a model of a lattice with PBC and two uniform tunneling amplitudes, and shown that the ${{\mathcal P}}{{\mathcal T}}$-symmetric region for such a model is robust, insensitive to the distance between the loss and gain impurities, and widely tunable in size. We have shown that in such a lattice with PBC, where there are two different paths from the source to the sink, the motion of a wave packet acquires a chirality when the impurity strength is nonzero. We have predicted that the ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking in such a lattice is signaled by a universal, maximal value for the steady-state momentum that quantifies this chirality. Traditionally, the investigation of signatures of ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking has focused on dependence of the intensity profile $I_R(j,t)$ or the net intensity, on time and the impurity strength. These quantities vary smoothly across the phase boundary [@derek; @kottospower]. In this paper, we have shown that the chirality, on the other hand, shows a peak with a universal value at the ${{\mathcal P}}{{\mathcal T}}$-symmetry breaking threshold. We note that such a measurement will require the knowledge of relative phases of the wave function values at adjacent sites, $p(\gamma)\propto\sum_j {{\rm Im}}(f_j^*f_{j+1})$, and therefore is more complex than the site-dependent intensity measurements [@expt1; @expt2; @christo; @berg1]. Our results show that ${{\mathcal P}}{{\mathcal T}}$-symmetric rings [@znojilring] display properties that have no counterparts in open chains. Their in-depth investigation, including the effects of disorder and non-linearity, will deepen our understanding of ${{\mathcal P}}{{\mathcal T}}$-symmetric systems with periodic boundary conditions. [99]{} A. Guo, G.J. Salamo, D. Duchesne, R. Morandotti, M. Volatier-Ravat, V. Aimez, G.A. Siviloglou, and D.N. Christodoulides, Phys. Rev. Lett. [**103**]{}, 093902 (2009). C.E. Rüter, K.G. Makris, R. El-Ganainy, D.N. Christodoulides, M. Segev, and D. Kip, Nat. Phys. [**6**]{}, 192 (2010); T. Kottos, [*ibid*]{}, 166 (2010). J. Schindler, A. Li, M.C. Zheng, F.M. Ellis, and T. Kottos, Phys. Rev. A [**84**]{}, 040101(R) (2011). C.M. Bender and S. Boettcher, Phys. Rev. Lett. [**80**]{}, 5243 (1998). C.M. Bender, D.C. Brody, and H.F. Jones, Phys. 
Rev. Lett. [**89**]{}, 270401 (2002). C.M. Bender, Rep. Prog. Phys. [**70**]{}, 947 (2007). A. Mostafazadeh, J. Math. Phys. [**43**]{}, 205 (2002); A. Mostafazadeh, J. Phys. A [**36**]{}, 7081 (2003); A. Mostafazadeh, Phys. Rev. Lett. [**99**]{}, 130502 (2007). M. Znojil, Phys. Lett. A [**40**]{}, 13131 (2007); M. Znojil, Phys. Rev. A [**82**]{}, 052113 (2010). O. Bendix, R. Fleischmann, T. Kottos, and B. Shapiro, Phys. Rev. Lett. [**103**]{}, 030402 (2009). C. Korff and R. Weston, J. Phys. A [**40**]{}, 8845 (2007); O.A. Castro-Alvared and A. Fring, J. Phys. A [**42**]{}, 465211 (2009). L. Jin and Z. Song, Phys. Rev. A [**80**]{}, 052107 (2009). L. Jin and Z. Song, Phys. Rev. A [**81**]{}, 032109 (2010). S. Longhi, Phys. Rev. B [**82**]{}, 041106(R) (2010). Y.N. Joglekar, D. Scott, M. Babbey, and A. Saxena, Phys. Rev. A [**82**]{}, 030103(R) (2010). O. Bendix, R. Fleischmann, T. Kottos, and B. Shapiro, J. Phys. A [**43**]{}, 265305 (2010). D.D. Scott and Y.N. Joglekar, Phys. Rev. A [**83**]{}, 050102(R) (2011). L. Jin and Z. Song, J. Phys. A [**44**]{}, 375304 (2011). Y.N. Joglekar and A. Saxena, Phys. Rev. A [**83**]{}, 050101(R) (2011). D.N. Christodoulides, F. Lederer, and Y. Silberberg, Nature (London) [**424**]{}, 817 (2003). A. Perez-Leija, H. Moya-Cessa, A. Szameit, and D.N. Christodoulides, Opt. Lett. [**35**]{}, 2409 (2010). Y. Lahini, A. Avidan, F. Pozzi, M. Sorel, R. Morandotti, and Y. Silberberg, Phys. Rev. Lett. [**100**]{}, 013906 (2008). K.G. Makris, R. El-Ganainy, D.N. Christodoulides, and Z.H. Musslimani, Phys. Rev. Lett. [**100**]{}, 103904 (2008). M. Znojil, Phys. Lett. A [**375**]{}, 3435 (2011). We call a ${{\mathcal P}}{{\mathcal T}}$-symmetric phase as robust if $\gamma_{PT}(N)$, measured in units of the bandwidth of the $N$-site lattice, is nonzero as $N\rightarrow\infty$. M.C. Zheng, D.N. Christodoulides, R. Fleischmann, and T. Kottos, Phys. Rev. A [**82**]{}, 010103(R) (2010). Y.N. Joglekar and J.L. Barnett, Phys. Rev. A [**84**]{}, 024103 (2011).
--- abstract: 'Radio-frequency (RF) signals enabled wireless information and power transfer (WIPT) is a cost-effective technique to achieve two-way communications and at the same time provide energy supplies for low-power wireless devices. However, the information transmission in WIPT is vulnerable to eavesdropping by the energy receivers (ERs). To achieve secrecy communications with information nodes (INs) while satisfying the energy transfer requirement of ERs, an efficient solution is to exploit a dual use of the energy signals also as useful interference or artificial noise (AN) to interfere with the ERs, thus preventing their potential information eavesdropping. Towards this end, this article provides an overview of the joint design of energy and information signals to achieve energy-efficient and secure WIPT under various practical setups, including simultaneous wireless information and power transfer (SWIPT), wireless powered cooperative relaying and jamming, and wireless powered communication networks (WPCN). We also present some research directions that are worth pursuing in the future.' author: - 'Yuan Liu, Jie Xu, and Rui Zhang [^1] [^2] [^3]' bibliography: - 'IEEEabrv.bib' - 'Secrecy.bib' title: Exploiting Interference for Secrecy Wireless Information and Power Transfer --- Introduction ============ Future wireless networks are expected to comprise billions of low-power wireless devices (such as sensor nodes, radio frequency (RF) identification (RFID) tags and Internet-of-things (IoT) devices) for diversified applications, and it is crucial to provide them with satisfactory communication quality of service (QoS), guaranteed data security, and sustainable energy supply. RF signals enabled wireless information and power transfer (WIPT) has been recently recognized as a promising technique to achieve two-way communications and provide cost-effective energy supplies for low-power wireless devices at the same time [@KrikidisCM; @BiCM]. In general, there are mainly two types of applications for WIPT, namely simultaneous wireless information and power transfer (SWIPT) and wireless powered communication networks (WPCN), as shown in Figs. 1 and 2, respectively. In SWIPT, a hybrid access point (H-AP) simultaneously broadcasts information and energy signals to communicate with information nodes (INs) and power energy receivers (ERs) in the downlink; while in WPCN, the H-AP broadcasts energy signals to power both ERs and INs in the downlink, and INs use the harvested energy to transmit information back to the H-AP in the uplink [@BiCM]. Despite the technology advancements, WIPT systems face new data security challenges, since the information transmission of the INs (in both downlink and uplink) is vulnerable to interception by the ERs that are untrusted and can be potential eavesdroppers [@ChenNg]. In SWIPT systems, ERs are normally located much closer to the H-AP than INs due to their different requirements on receiver power sensitivity (e.g., $-$10dBm for energy harvesting versus $-$60dBm for information reception) [@LiuTSP]. Due to this “near-far” effect, untrusted ERs can easily overhear and eavesdrop on the downlink information intended for INs. In WPCN, both ERs and INs are located close to the H-AP to harvest the RF energy in the downlink. Therefore, the information transmitted by INs in the uplink can easily be eavesdropped on by nearby untrusted ERs. 
![An example of SWIPT, where the information sent to INs in the downlink is vulnerable to eavesdropping by ERs.[]{data-label="fig:SISO"}](SecSWIPT.eps) ![An example of WPCN, where the information transmitted by INs in the uplink can easily be overheard by nearby ERs.[]{data-label="fig:wpc"}](SecWPCN.eps) Physical-layer security has been recognized as a promising technique to secure wireless communications against malicious eavesdropping attacks. The objective of physical-layer security is to maximize the secrecy rate of a communication channel, which corresponds to the achievable data rate of this channel provided that the eavesdropper cannot intercept or decode any information. Various approaches have been proposed in the literature to improve the secrecy rate. Among them, the artificial noise (AN) (see, e.g., [@GoelNegi2008]) and cooperative jamming (see, e.g., [@DongHan2010]) based designs are appealing, where properly designed AN or jamming signals are transmitted by the transmitter itself (together with the confidential message) and by the external helping nodes (HNs), respectively, for the purpose of interfering with the malicious eavesdroppers to improve the secrecy rate. In this article, we integrate physical-layer security into WIPT to overcome the challenging information leakage problem due to ER eavesdroppers. Such secure WIPT systems aim to achieve two-way secrecy communications with INs while satisfying the energy harvesting requirements at ERs. We present an efficient solution by exploiting [a dual use of the energy signals in WIPT, not only for energy transfer, but also as useful interference or AN to jam ERs against their potential information eavesdropping.]{} With such consideration, we provide an overview of the joint design of information and energy signals under various practical system setups, by investigating two types of IN receivers (namely, Type-I and Type-II IN receivers) that can or cannot cancel the energy signals (or AN) before decoding the information, respectively. [It is assumed that the H-AP accurately knows the channel state information (CSI) to the receivers.]{} The outline of this article is given as follows. - First, we consider AN-aided secrecy SWIPT systems. We show that when there is only a single antenna at the H-AP, due to the “near-far” effect, secrecy communications are only feasible for Type-I IN receivers with AN cancelation employed. On the other hand, when there are multiple antennas at the H-AP, a joint beamforming design of information and energy signals can help achieve secrecy communications while ensuring the energy harvesting requirements, for both Type-I and Type-II IN receivers. - Next, we consider the secrecy SWIPT with wireless powered cooperative relaying and jamming to improve secrecy communication performance. In this approach, trusted idle ERs in the system are enabled as external HNs, which can use the harvested energy from the H-AP to help relay information to the intended IN receivers and jam the untrusted ER eavesdroppers in the [*downlink*]{}, with a joint design of both information and energy signals. - Furthermore, we also address the secrecy WPCN with downlink energy transfer and uplink secrecy communications. In this case, wireless powered cooperative jamming is used to exploit trusted ERs as HNs to help interfere with the [*uplink*]{} eavesdropping of suspicious ERs. 
[It is crucial to efficiently schedule these trusted ERs with joint downlink energy transfer and uplink jamming design, in order to optimize the uplink secrecy communication performance. However, this problem has not been addressed before, to the authors’ best knowledge.]{} Along with the above discussions, we also point out some promising directions for future work in both secrecy SWIPT and secrecy WPCN. Finally, we conclude this article. Secrecy Communication in SWIPT: An Artificial Noise Approach ============================================================ In this section, we consider the downlink secrecy SWIPT as shown in Fig. \[fig:SISO\], where a single H-AP serves multiple INs and ERs at the same time, [ by sending them either information or energy]{}. In the following, we focus on a special setup with one single-antenna IN and one single-antenna ER to draw essential insight, in which two cases with single-antenna and multi-antenna H-AP are considered. The Case with Single-Antenna H-AP --------------------------------- To start with, we consider the simplest case with one single-antenna H-AP serving one single-antenna IN and one single-antenna ER. For the purpose of illustration, let $h_I$ and $h_E$ denote the channel power gains from the H-AP to the IN and the ER, respectively, where $h_I < h_E$ holds to be consistent with their “near-far" locations. Also, let $s_0$ (with unit power) denote the confidential information signal to be sent to the IN, and $P$ denote the transmit power of the H-AP. Conventionally, the H-AP transmits with only information signal (which also conveys RF energy), by setting the transmit signal as $\sqrt{P}s_0$. In this case, the amount of power harvested at the ER is $\eta h_E P$, where $0<\eta\leq1$ denotes the energy conversion efficiency at the ER. As for the secrecy communication to IN, the received signal power at the (far) IN (i.e., $h_I P$) is always weaker than that at the (near) ER (i.e., $h_E P$). In this case, any information decodable at the IN can always be decoded or eavesdropped by the ER. Therefore, transmitting solely information signals is infeasible to achieve secrecy communication in the single-antenna SWIPT setup. To overcome this security problem caused by the “near-far” issue, it is crucial to employ an AN approach by additionally sending a dedicated energy signal or AN to jam the ER eavesdropper; furthermore, we need to enable the IN receiver to pre-cancel the AN before decoding the information (as will be shown in detail later), in order to achieve non-zero secrecy rate. In the AN approach, the H-AP splits the transmit power $P$ into two components, with a fraction of $(1-\alpha)$ to send the information signal $s_0$ for the IN and the remaining fraction of $\alpha$ to send the dedicated energy signal or AN (denoted by $s_1$ with unit power) for the ER, where $0 \le \alpha \le 1$ denotes the transmit power splitting ratio. In this case, the transmitted baseband signal at the H-AP is expressed as $x=\sqrt{P(1-\alpha)}s_0+\sqrt{P\alpha}s_1$. Accordingly, the amount of power harvested at the ER is $\eta P(1-\alpha)h_E+\eta P\alpha h_E= \eta Ph_E$, which is regardless of the transmit power splitting ratio $\alpha$. As for the secrecy communication, we consider two types of IN receivers (namely Type-I and Type-II IN receivers), which can and cannot cancel the AN before decoding the information signal, respectively. 
In order to enable the Type-I IN receiver to perform AN cancellation, it needs to know the AN used at the H-AP [*a priori*]{}, based on a practical physical-layer key distribution method described as follows [@HongTVT16]. First, both the H-AP and the IN should pre-store a sufficiently large ensemble of sequences that are used as the seeds to generate Gaussian distributed AN, and the index of each sequence in the ensemble is regarded as a “key”. Then, the H-AP randomly picks one sequence and transmits its index confidentially to the IN before data transmission. Accordingly, the H-AP is able to generate a random AN signal that is only known to the IN but unknown at the ER. This is because, without knowledge of the employed key and given a very large key set, the ER cannot decode the AN even with long-term observation, since the required complexity is practically infeasible. With the AN known at both the H-AP and the IN, AN cancellation is thus implementable at the Type-I IN receiver. Under the Type-I and Type-II IN receivers, the received signal-to-interference-plus-noise ratios (SINRs) are expressed as $\gamma_{I}^{(\rm I)} = \frac{P(1-\alpha)h_I}{\sigma^2}$ and $\gamma_{I}^{(\rm II)} = \frac{P(1-\alpha)h_I}{P\alpha h_I + \sigma^2}$, respectively, and the SINR at the ER eavesdropper is given as $\gamma _{E} = \frac{P(1-\alpha)h_E}{P\alpha h_E+\sigma^2}$, where $\sigma^2$ denotes the noise power at the receivers of both the IN and the ER. By assuming that $s_0$ and $s_1$ are independent Gaussian random variables, the achievable secrecy rate at the Type-$i$ IN receiver is expressed as [@GoelNegi2008] $$\label{eqn:1} R_s^{{(i)}}=\left[\log_2\left(1+\gamma_{I}^{(i)}\right) - \log_2\left(1+\gamma_{ E}\right)\right]^+,$$ where $i\in\{{\rm I},{\rm II}\}$ and $[x]^+=\max\{x,0\}$. [The achievable secrecy rate $R_s^{{(i)}}$ is a non-concave function of the transmit power splitting ratio $\alpha$. To maximize $R_s^{{(i)}}$, we can adopt a one-dimensional search over $0\le \alpha \le 1$.]{} To illustrate the necessity of the AN approach and the AN cancellation at the (Type-I) IN, [we conduct simulations to compare the performance of the two AN approaches with Type-I and Type-II IN receivers against the conventional approach without AN. The results are shown in Fig. \[fig:siso-p\]. In the conventional approach, the achievable secrecy rate corresponds to $R_s^{{(\rm I)}}$ or $R_s^{{(\rm II)}}$ in (\[eqn:1\]) with $\alpha = 0$.]{} In this simulation, the distances from the H-AP to the IN and the ER are $d_I=20$ meters (m) and $d_E=2$m, respectively, the path loss exponent is assumed to be $3$, and the noise power is $-80$dBm. [Under this setup with “near-far” deployment, we have $h_I < h_E$.]{} Note that in all schemes, the harvested powers at the ER are identical, [and thus are not shown. The left subfigure of Fig. \[fig:siso-p\] shows the maximum secrecy rate versus the transmit power $P$. It is observed that the secrecy rates of the conventional approach without AN and the AN approach with Type-II IN receiver are always zero. This is because, as $h_I < h_E$, we have $\gamma_{I}^{({\rm II})} < \gamma_{E}$ irrespective of the transmit power splitting ratio $\alpha$. As a result, for both schemes, the received SINR at the IN receiver is always smaller than that at the ER eavesdropper, and hence, the secrecy rate is always zero.
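The comparison just described can be sketched numerically. The snippet below only illustrates (\[eqn:1\]) and the one-dimensional search over $\alpha$ mentioned above; the unit reference path loss at 1 m, the transmit power, and the grid resolution are assumptions, and this is not the code used to produce Fig. \[fig:siso-p\].

```python
import numpy as np

# Illustrative sketch of eqn. (1) for the single-antenna case: secrecy rates of the
# Type-I (AN cancelled) and Type-II (AN treated as interference) IN receivers,
# maximized over the power-splitting ratio alpha by a one-dimensional grid search.
# The reference path loss, transmit power and grid size are assumed values.

def secrecy_rate(alpha, P, h_I, h_E, sigma2, receiver="I"):
    if receiver == "I":
        gamma_I = P * (1 - alpha) * h_I / sigma2
    else:
        gamma_I = P * (1 - alpha) * h_I / (P * alpha * h_I + sigma2)
    gamma_E = P * (1 - alpha) * h_E / (P * alpha * h_E + sigma2)
    return max(np.log2(1 + gamma_I) - np.log2(1 + gamma_E), 0.0)

def max_secrecy_rate(P, h_I, h_E, sigma2, receiver="I", n_grid=10001):
    alphas = np.linspace(0.0, 1.0, n_grid)
    rates = [secrecy_rate(a, P, h_I, h_E, sigma2, receiver) for a in alphas]
    k = int(np.argmax(rates))
    return rates[k], alphas[k]

# d_I = 20 m, d_E = 2 m, path-loss exponent 3, noise power -80 dBm (as in the text);
# channel gains are modelled simply as d^(-3), i.e. unit path loss at 1 m (assumption).
h_I, h_E = 20.0 ** -3, 2.0 ** -3
sigma2 = 10 ** (-80 / 10) * 1e-3          # -80 dBm in watts
P = 1.0                                    # 30 dBm transmit power (assumption)

for rx in ("I", "II"):
    R, a_opt = max_secrecy_rate(P, h_I, h_E, sigma2, receiver=rx)
    print(f"Type-{rx}: max R_s = {R:.3f} bit/s/Hz at alpha = {a_opt:.2f}")
```

With $h_I < h_E$, the search returns a zero rate for the Type-II receiver at every $\alpha$, consistent with the discussion above.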
By contrast, it is observed that the secrecy rate achieved by the AN approach with Type-I IN receiver is strictly positive and monotonically increasing with the transmit power. This result is expected, and shows that in the single-antenna H-AP case, the AN approach is only beneficial in improving the secrecy rate when AN cancellation (i.e., Type-I IN receiver) is employed, due to the “near-far” deployment. The right subfigure of Fig. \[fig:siso-p\] shows the optimal transmit power splitting ratio $\alpha$ in the AN approach with Type-I IN receiver versus the transmit power $P$.]{} It is observed that the optimal $\alpha$ converges to about 0.5 as the transmit power increases, i.e., the transmit power is equally split between the information signal and the energy/AN signal. [This is because when the transmit power $P$ goes to infinity, we can approximate the secrecy rate as $R_s^{({\rm I})}\rightarrow\log_2\left(\frac{P(1-\alpha)h_I}{\sigma^2}\right)-\log_2\left(1+\frac{1-\alpha}{\alpha}\right)=\log_2\left(\frac{P(1-\alpha)\alpha h_I}{\sigma^2}\right)$, for which the maximum is attained when $\alpha = 0.5$.]{} ![Secrecy rates versus the transmit power at the H-AP in a SWIPT system with a single-antenna H-AP. []{data-label="fig:siso-p"}](siso-p.eps) In the literature, the benefit of using AN at the H-AP and employing AN cancellation at the IN has been further exploited under other system setups. For example, the authors in [@HongTVT16] considered fading channels, where the H-AP adaptively adjusts the power assigned to the information and AN signals over different fading states to optimally exploit the channel dynamics for improving the average secrecy rate. Furthermore, the authors in [@YuanTWC16] investigated a more general orthogonal frequency-division multiple access (OFDMA) scenario with multiple INs and ERs, where all other receivers (i.e., ERs and other INs) are potential eavesdroppers for each IN. In [@YuanTWC16], the H-AP adds independent AN over each sub-carrier and only the intended IN at that sub-carrier knows [*a priori*]{} the corresponding key to cancel the AN before decoding its information. The Case with Multi-Antenna H-AP -------------------------------- Multi-antenna beamforming has been recognized as an efficient technique to improve both communication rates and energy transfer efficiency in SWIPT [@ZhangHo2013], by steering RF signals towards targeted INs/ERs with focused information and/or energy beams. Similarly, we can exploit the benefit of a multi-antenna H-AP in secrecy SWIPT systems. [Suppose that there are $N>1$ antennas at the H-AP, and denote the $N \times 1$ channel vectors from the H-AP to the IN and the ER as ${\mbox{\boldmath{$ h $}}}_I$ and ${\mbox{\boldmath{$ h $}}}_E$, respectively. Let ${\mbox{\boldmath{$ w $}}}_{I}$ and ${\mbox{\boldmath{$ w $}}}_{E}$ denote the $N\times 1$ unit-norm transmit beamforming vectors at the H-AP for the information signal and the energy signal (or AN), respectively.
Under the Type-I and Type-II IN receivers, the received SINRs are re-expressed as $\gamma_{I}^{(\rm I)} = \frac{P(1-\alpha)|{\mbox{\boldmath{$ h $}}}_I^H {\mbox{\boldmath{$ w $}}}_{I}|^2}{\sigma^2}$ and $\gamma_{I}^{(\rm II)} = \frac{P(1-\alpha)|{\mbox{\boldmath{$ h $}}}_I^H {\mbox{\boldmath{$ w $}}}_{I}|^2}{P\alpha |{\mbox{\boldmath{$ h $}}}_I^H {\mbox{\boldmath{$ w $}}}_{E}|^2 + \sigma^2}$, respectively, and the SINR at the ER eavesdropper is given as $\gamma_{E} = \frac{P(1-\alpha)|{\mbox{\boldmath{$ h $}}}_E^H {\mbox{\boldmath{$ w $}}}_{I}|^2}{P\alpha |{\mbox{\boldmath{$ h $}}}_E^H {\mbox{\boldmath{$ w $}}}_{E}|^2+\sigma^2}$, where the superscript $H$ denotes the conjugate transpose of a vector. Then, the achievable secrecy rate at the Type-$i$ IN receiver, $i\in\{\rm{I},\rm{II}\}$, is given by $R_s^{{(i)}}$ in (\[eqn:1\]) by substituting the newly defined SINRs.]{} Consider first the conventional design without AN, [for which the achievable secrecy rate corresponds to $R_s^{{(\rm I)}}$ or $R_s^{{(\rm II)}}$ with $\alpha = 0$]{}. In this case, a positive secrecy rate is achievable if the H-AP transmits the information beam [${\mbox{\boldmath{$ w $}}}_I$]{} lying in the null space of the channel vector [${\mbox{\boldmath{$ h $}}}_E$]{} to the ER. This is in sharp contrast to the case with a single-antenna H-AP, where the secrecy rate is always zero without AN employed. As a result, we expect that by additionally exploiting the gain provided by the AN approach, both the secrecy rate and the energy transfer efficiency can be further improved via a proper information and energy/AN beamforming design. In general, the design of the information and energy/AN beams critically depends on the type of IN receiver, and obtaining the optimal beamformers corresponds to solving complicated optimization problems. Instead of considering the optimal solution, we adopt the following intuitive designs to draw insights. The H-AP first sets the information beam [${\mbox{\boldmath{$ w $}}}_I$]{} following the maximum-ratio transmission (MRT) principle based on the channel vector [${\mbox{\boldmath{$ h $}}}_I$]{} to the IN, and then designs the energy/AN beamforming vector [${\mbox{\boldmath{$ w $}}}_{E}$]{} depending on the type of IN receiver considered. When the Type-I IN receiver is used with AN cancellation, the H-AP designs the energy/AN beam [${\mbox{\boldmath{$ w $}}}_{E}$]{} following the MRT principle based on the channel vector [${\mbox{\boldmath{$ h $}}}_{E}$]{} to the ER, so as to maximally transfer energy and also maximally degrade the ER’s received SINR. When the Type-II IN receiver is considered without AN cancellation, the energy beam [${\mbox{\boldmath{$ w $}}}_{E}$]{} should be designed based on a zero-forcing (ZF) principle to lie in the null space of the IN’s channel vector [${\mbox{\boldmath{$ h $}}}_{I}$]{}, so as to avoid interference at the IN receiver. By exploiting the spatial degrees of freedom brought by the multiple antennas at the H-AP, positive secrecy rates are achievable for both types of IN receiver, which is different from the case with a single-antenna H-AP, where a positive secrecy rate is achievable only for the Type-I IN receiver.
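As a rough illustration of these two intuitive beam designs, the following sketch forms the MRT information beam and either an MRT or a ZF energy/AN beam and evaluates the resulting secrecy rate. The channel vectors are randomly drawn and all numerical values, including the power split, are assumptions rather than the setup used for Fig. \[fig:mimo-region\].

```python
import numpy as np

# Illustrative sketch of the intuitive multi-antenna designs described above:
# information beam w_I by MRT toward h_I; energy/AN beam w_E either by MRT toward
# h_E (Type-I IN receiver, AN cancelled) or by zero-forcing into the null space of
# h_I (Type-II IN receiver, AN not cancelled). Channels and all numbers are assumed.

rng = np.random.default_rng(0)
N = 4
h_I = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_E = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
P, alpha, sigma2 = 1.0, 0.5, 1e-3          # assumed transmit power, split and noise

def mrt(h):
    return h / np.linalg.norm(h)

def zf(h_target, h_null):
    """Beam toward h_target projected orthogonal to h_null (unit norm)."""
    proj = h_target - (np.vdot(h_null, h_target) / np.linalg.norm(h_null) ** 2) * h_null
    return proj / np.linalg.norm(proj)

w_I = mrt(h_I)                              # MRT information beam

def secrecy_rate(w_E, an_cancelled):
    s_I = P * (1 - alpha) * abs(np.vdot(h_I, w_I)) ** 2
    i_I = 0.0 if an_cancelled else P * alpha * abs(np.vdot(h_I, w_E)) ** 2
    g_I = s_I / (i_I + sigma2)
    g_E = P * (1 - alpha) * abs(np.vdot(h_E, w_I)) ** 2 / (
        P * alpha * abs(np.vdot(h_E, w_E)) ** 2 + sigma2)
    return max(np.log2(1 + g_I) - np.log2(1 + g_E), 0.0)

print("Type-I  (MRT energy/AN beam):", round(secrecy_rate(mrt(h_E), True), 3))
print("Type-II (ZF energy/AN beam): ", round(secrecy_rate(zf(h_E, h_I), False), 3))
```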
To compare the performance of the AN approach under Type-I and Type-II IN receivers, Fig. \[fig:mimo-region\] provides a numerical example showing the secrecy rate at the IN versus the harvested power at the ER, where the system parameters are set the same as those in the single-antenna H-AP case, except that the number of transmit antennas at the H-AP is $N = 4$ and the directions from the H-AP to the IN and the ER are set to 0 and 60 degrees, respectively. For comparison, we also consider the conventional design without AN, in which the H-AP can either transmit over the ER channel’s null space to achieve secrecy communication to the IN but without any energy delivered to the ER, or use MRT beamforming to the ER to maximize its harvested energy but without any confidential information conveyed. We use time-sharing between the two strategies to achieve, in general, both a positive secrecy rate and positive harvested energy. From Fig. \[fig:mimo-region\], it is observed that both the conventional design without AN and the AN approach with Type-II receiver achieve positive secrecy rates, which is different from the case with a single-antenna H-AP in Fig. \[fig:siso-p\], where the secrecy rates of these schemes are always zero. This is obtained by properly designing the information and energy/AN beamforming to exploit the additional spatial degrees of freedom. Furthermore, it is observed that the AN approach with Type-I IN receiver achieves the best performance in terms of both secrecy rate and harvested power, thanks to the exploitation of both AN cancellation and the beamforming gain. ![The secrecy rate at the IN versus the harvested power at the ER for the secrecy SWIPT system in the case with a multi-antenna H-AP.[]{data-label="fig:mimo-region"}](mimo-region.eps) In the literature, the authors in [@LiuTSP] and [@NgTWC] considered multi-antenna secrecy SWIPT systems with one “far” IN as well as multiple “near” ER eavesdroppers, by considering Type-II and Type-I IN receivers, respectively, where the information and energy/AN beamforming vectors are jointly determined following a design principle similar to the above. [For future work, how to extend the design for secrecy SWIPT to the case with multiple INs, each with one or more antennas, is still an open problem, which, however, is challenging to solve. On the one hand, with multiple INs, the H-AP in general needs to design multiple transmit information beams to different INs, in order to properly control the inter-user interference among them, in addition to ensuring the secrecy performance. On the other hand, the use of multiple antennas at both the H-AP and the INs leads to matrix optimization problems that are more difficult to solve than the above example problem with vector variables only.]{} Secrecy SWIPT with Wireless Powered Cooperative Relaying and Jamming ==================================================================== ![Secrecy SWIPT system with wireless-powered cooperative relaying and jamming: (a) System model; (b) Operation at each wireless-powered helping node (HN).[]{data-label="fig:relay"}](relay.eps) This section considers another appealing solution, named wireless powered cooperative relaying and jamming, to improve the secrecy rate of the IN in secrecy SWIPT systems. In practice, future wireless networks will consist of numerous low-power wireless devices, and as a result it is likely that, during the IN’s communication, some trusted and idle ERs are located between the H-AP and the INs.
This motivates the idea of wireless powered cooperative relaying and jamming to improve the performance of secrecy SWIPT, where these trusted ERs are enabled as friendly HNs in both relaying the information from the H-AP to the IN, and sending AN to cooperatively jam the untrusted ER eavesdroppers. As these HNs are of low power, to avoid their energy waste, their individual energy consumption for relaying and jamming should be no larger than that harvested from the H-AP. In particular, an example secrecy SWIPT system with wireless powered cooperative relaying and jamming is shown in Fig. \[fig:relay\](a), which can be implemented based on the “harvest-then-relay-and-jam” protocol consisting of two time slots: in the first slot, the H-AP sends confidential information signals to both INs and HNs, and transmits energy/AN signals to charge HNs and ERs as well as interfere with ER eavesdroppers at the same time; in the second slot, HNs use the harvested energy to relay information to INs, where AN signals are also sent to defend against the ERs’ eavesdropping. In order to harvest energy as well as relay information and jam with AN, the HNs should adopt new transceiver architectures, an example of which is shown in Fig. \[fig:relay\](b), operating as follows. In the first time slot, the HN uses a receive power splitter to split its received RF signal into two parts, one with a fraction of power $\gamma$ for harvesting the energy to be used in the second time slot and the other with the remaining portion of power $1-\gamma$ for receiving the information to be relayed. In the second slot, the harvested energy at the HN is further split into two parts by a ratio $0\leq\beta\leq1$, with a portion of $\beta$ used for generating AN to jam ER eavesdroppers and the other portion $1-\beta$ for relaying the received information. Specifically, in a particular time instant, each HN can adjust its operation among the following modes by adjusting the receive and transmit power splitting ratios $\gamma$ and $\beta$. - *Harvest-then-jam:* The HN uses all its harvested energy to jam ER eavesdroppers by setting transmit and receive power splitting ratios as $\beta=1$ and $\gamma = 1$, respectively. - *Harvest-then-relay:* The HN uses all its harvested energy to relay confidential information by setting $\beta=0$ and $0<\gamma<1$. - *Harvest-then-relay-and-jam:* The HN uses its harvested energy for dual purposes of relaying and jamming by setting $0<\gamma<1$ and $0<\beta<1$. For example, intuitively, when the HN is near ER eavesdroppers but far from the INs (e.g., HN1 in Fig. \[fig:relay\](a)), it may choose to operate in the harvest-then-jam mode; when the HN is near INs but far from ER eavesdroppers (e.g., HN2 in Fig. \[fig:relay\](a)), it may work in the harvest-then-relay mode; when the HN is near both (e.g., HN3 in Fig. \[fig:relay\](a)), it can work in the general harvest-then-relay-and-jam mode. Furthermore, when the HN needs to relay confidential information (i.e., with $0<\beta<1$ and $0\le\gamma<1$), it can choose its relaying protocol between amplify-and-forward (AF) and decode-and-forward (DF), depending on whether the key (for AN design) is known a-priori at the HN or not (i.e., Type-I or Type-II receiver of HN). 
Intuitively, when the key is known [*a-priori*]{}, at the receiver side the HNs can cancel the AN from the H-AP for more efficient DF confidential message relaying; furthermore, at the transmitter side, different HNs can coordinate their respective AN signal designs to achieve beamforming (so-called coordinated beamforming), such that the AN is coherently combined at the ER eavesdroppers for more efficient jamming. In this case, significantly improved secrecy communication performance can be achieved, at the cost of higher implementation complexity, due to the requirement of key sharing (for AN cancellation) from the H-AP to the HNs, as well as the decoding processing for extracting the AN at each individual HN. Instead, when the HNs do not know the key (for AN design) [*a-priori*]{}, it is generally desirable to use AF relaying to avoid the decoding operation at the HNs. This also prevents the HNs from overhearing the confidential information. To optimize the performance of secrecy SWIPT with wireless powered cooperative relaying and jamming, it is crucial to perform a network-wide optimization to determine the operation mode and relaying protocol of the different HNs, jointly with their transmit and receive power splitting ratios, as well as the transmit power allocation/beamforming for the information and energy/AN signals at both the H-AP and the HNs. Such an optimization also depends on the types of receivers at the HNs/INs and is subject to the energy harvesting constraints at the HNs (i.e., the relaying and jamming energy consumption cannot exceed the harvested energy). [Due to the above issues, the performance optimization for secrecy SWIPT with wireless powered cooperative relaying and jamming is a difficult problem to solve, and even developing a general problem formulation is a challenging task.]{} Instead of addressing the general scenario, several works in the literature [@Xing2015; @Xing2016; @LiuZhou2016; @BiChen2016] have considered various specific setups. For example, the authors in [@Xing2015; @Xing2016] considered the case with wireless powered AF relaying, where the ER eavesdroppers can only overhear the confidential information relayed from the HNs in the second time slot. The optimal AF relaying processing and AN beamforming at the HNs are jointly designed to maximize the sum secrecy rate. In [@LiuZhou2016; @BiChen2016], the authors considered the case with only wireless powered jamming, where the H-AP first powers the HN and then the HN jams the eavesdropper by using the harvested power. [Despite the above progress, the extension to more general setups still requires substantial future work. For example, in practice ER eavesdroppers that are not far away from both the H-AP and the HN may be able to combine the signals overheard from both hops to degrade the system’s secrecy performance, thus making the secrecy design more challenging. Moreover, the multiple wireless-powered HNs may operate in different modes as mentioned above, i.e., harvest, relay, and/or jam. How to select their optimal modes under general setups is also a challenging unsolved problem.]{} Secrecy Communication in WPCN ============================= Besides secrecy SWIPT, another important application of WIPT is secrecy WPCN, as shown in Fig. \[fig:wpc\], where INs use the harvested energy from the H-AP to send confidential information back to the H-AP in the presence of untrusted ER eavesdroppers.
To the best of our knowledge, how to improve the secrecy communication performance of WPCN has not yet been addressed in the literature. Similar to the conventional WPCN without security considerations in [@JuTWC], the secrecy WPCN can generally be implemented with a “harvest-then-transmit” protocol, where the transmission is divided into two time slots: one for wireless energy transfer from the H-AP to a set of INs as well as ERs, and the other for confidential message transmission from the INs back to the H-AP. In this case, the uplink information can be easily eavesdropped because untrusted ERs (potential eavesdroppers) may be located near the H-AP for energy harvesting. To improve the secrecy rate from the INs to the H-AP by defending against the untrusted ERs’ eavesdropping, an efficient solution is to employ wireless powered cooperative jamming as in the previous section, in which trusted idle ERs in the network are employed as friendly HNs to jam the untrusted ERs’ uplink eavesdropping (instead of the downlink eavesdropping in the previous section). Specifically, as the HNs are not aware of the INs’ transmitted confidential information messages, they will operate in the harvest-then-jam mode to interfere with the untrusted ER eavesdroppers, with $\beta=1$ and $\gamma = 1$ in Fig. \[fig:relay\](b). For more efficient jamming, different HNs should share the same key for the AN design, such that they can use coordinated beamforming to maximize the jamming power at the ER eavesdroppers. This also simplifies the AN cancellation at the H-AP, as only one key is used. Furthermore, the maximization of the uplink secrecy rates requires joint scheduling of the downlink wireless energy transfer of the H-AP, the uplink information transmission of the INs, and the uplink jamming of the HNs, subject to the energy harvesting constraints at both the INs and the HNs. For instance, allocating a longer time slot for wireless energy transfer can lead to higher transmit power at the INs and higher jamming power at the HNs, but this in turn reduces the time for confidential message transmissions from the INs to the H-AP. An efficient time allocation between the two slots is thus crucial. The other issue faced in the secrecy WPCN system is the so-called “doubly near-far” problem, where “far” INs (i.e., INs far away from the H-AP) harvest less wireless energy in the downlink but need more transmit power in the uplink to achieve the same communication rate as “near” INs. Furthermore, due to the limited available power at far INs, far INs are more vulnerable to eavesdropping by ERs than near INs. To tackle the “doubly near-far” problem in secrecy WPCN, it is efficient to enable nearby HNs to help relay the confidential information to the H-AP. In order to implement such wireless powered cooperative relaying and jamming in secrecy WPCN, one additional time slot for information relaying is required. In this case, the HN can be implemented based on the structure in Fig. \[fig:relay\](b) as follows: in the first time slot, the HN sets the receiver power splitting ratio to $\gamma = 1$ for harvesting the wireless energy; in the second time slot, the HN sets $\gamma = 0$ for receiving the confidential information from far INs; and in the third time slot, the HN splits its transmit power into two parts for information relaying and sending AN, respectively.
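As a rough numerical illustration of the simpler two-slot case discussed above (one IN, one untrusted ER, and a single harvest-then-jam HN), the energy-transfer fraction can be grid-searched as follows. All channel gains, the conversion efficiency, and the assumption that the shared-key AN is cancelled at the H-AP are illustrative assumptions rather than a design taken from the literature.

```python
import numpy as np

# Hypothetical two-slot secrecy WPCN sketch: a fraction t of the block is used for
# downlink wireless energy transfer, and the remaining 1-t for uplink confidential
# transmission from one IN while a single harvest-then-jam HN jams one untrusted ER.
# The shared-key AN is assumed cancellable at the H-AP. All gains/values are assumptions.

P_ap, eta, sigma2 = 1.0, 0.5, 1e-9         # H-AP power, conversion efficiency, noise
g_d_IN, g_d_HN = 1e-4, 1e-3                # downlink gains: H-AP -> IN, H-AP -> HN
g_IN_AP, g_IN_ER = 1e-4, 1e-3              # uplink gains: IN -> H-AP, IN -> "near" ER
g_HN_ER = 1e-3                             # jamming gain: HN -> ER

def uplink_secrecy_rate(t):
    """Secrecy rate (bit/s/Hz) for energy-transfer fraction t of the block."""
    if t <= 0.0 or t >= 1.0:
        return 0.0
    p_IN = eta * g_d_IN * P_ap * t / (1 - t)   # IN spends its harvested energy in slot 2
    p_HN = eta * g_d_HN * P_ap * t / (1 - t)   # HN spends its harvested energy on jamming
    gamma_ap = p_IN * g_IN_AP / sigma2                      # AN cancelled at the H-AP
    gamma_er = p_IN * g_IN_ER / (p_HN * g_HN_ER + sigma2)   # ER is jammed by the HN
    return (1 - t) * max(np.log2(1 + gamma_ap) - np.log2(1 + gamma_er), 0.0)

ts = np.linspace(0.01, 0.99, 99)
rates = [uplink_secrecy_rate(t) for t in ts]
k = int(np.argmax(rates))
print(f"best energy-transfer fraction ~ {ts[k]:.2f}, "
      f"uplink secrecy rate ~ {rates[k]:.3f} bit/s/Hz")
```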
A more sophisticated time allocation among the three slots together with the joint downlink and uplink scheduling is necessary to fully reap the gain of wireless powered cooperative relaying and jamming in this case. [ In addition, backscatter WPCN has recently emerged as a new type of WPCN by leveraging the technique of backscatter communications, where a wireless device without active RF components can reflect back (rather than actively radiate) the RF signals received from the H-AP for the purpose of communications. Here, the reflected signals are modulated via the device by properly controlling the mismatch between the antenna and load impedance. While most existing works focused on using the reflected signals for delivering information, how to use such signals as AN for jamming to improve the secrecy performance, or even use them for dual goals of AN and information signal at the same time is an interesting topic that has not been addressed yet. ]{} Concluding Remarks ================== This article provided a new perspective in improving the secrecy communication performance in emerging WIPT systems while ensuring the energy transfer requirements, which exploits a dual use of energy signals as useful interference or AN to combat against potential eavesdropping by untrusted ERs. In particular, we discussed three secrecy WIPT setups, namely, SWIPT, wireless powered cooperative relaying and jamming, and WPCN, respectively. For each setup, we presented the joint information and energy/AN signals design by considering two types of IN receivers that can or cannot cancel the energy/AN signals. We also discussed the design challenges and some future research directions. Furthermore, there are other interesting issues that are unaddressed in this article due to space limitation, which are briefly discussed in the following. The implementation of the secrecy WIPT requires the H-AP to accurately know the channel state information (CSI) to both INs and ERs, which is practically a difficult task, especially for the CSI to untrusted ER eavesdroppers. For the purpose of initial investigation, this article assumes that the ER eavesdroppers are existing energy users in the network, and thus they are willing to cooperate in helping the H-AP obtain their CSI, for the purpose of facilitating the energy transfer. However, if the ER eavesdroppers intend to eavesdrop information rather than receiving energy, then it is difficult for the H-AP to acquire their actual CSI. How to design the secrecy WIPT in this case is thus more challenging. Furthermore, the proposed AN approach with Type-I IN receiver relies on the assumption that the keys for generating the AN can be shared secretly between the secrecy transmitters and receivers. In practice, the secrecy keys may be overheard by ER eavesdroppers. In this case, how to achieve secrecy WIPT with only partially secure key exchange is another challenging open problem. [^1]: Y. Liu is with the School of Electronic and Information Engineering, South China University of Technology, Guangzhou 510641, China (e-mail: [email protected]). [^2]: J. Xu is with the School of Information Engineering, Guangdong University of Technology, Guangzhou 510006, China (e-mail: [email protected]). [^3]: R. Zhang is with the Department of Electrical and Computer Engineering, National University of Singapore, Singapore 117583 (e-mail: [email protected]).
--- abstract: | The scattering of a straight, infinitely long string moving with velocity $v$ by a black hole is considered. We analyze the weak-field case, where the impact parameter ($b_{imp}$) is large, and obtain exact solutions to the equations of motion. As a result of scattering, the string is displaced in the direction perpendicular to the velocity by an amount $\Delta b\sim -2\pi GMv\gamma/c^3 -\pi (GM)^2/ (4c^3 v b_{imp})$, where $\gamma=(1-(v/c)^2)^{-1/2}$. The second term dominates at low velocities $v/c<(GM/b_{imp})^{1/2}$ . The late-time solution is represented by a kink and anti-kink, propagating in opposite directions at the speed of light, and leaving behind them the string in a new “phase”. The solutions are applied to the problem of string capture, and are compared to numerical results. author: - 'Jean-Pierre De Villiers[^1] ${}^{1}$, and Valeri Frolov[^2] ${}^{1,2}$\' title: | Scattering of Straight Cosmic Strings by\ Black Holes: Weak Field Approximation [^3] --- ${}^{1}$[*Theoretical Physics Institute, Department of Physics, University of Alberta,\ Edmonton, Canada T6G 2J1*]{}\ ${}^{2}$[*CIAR Cosmology Program*]{} [*PACS number(s): 04.60.+n, 12.25.+e, 97.60.Lf, 11.10.Gh*]{} =.8cm Introduction ============ A cosmic string is a relativistic non-local object with an infinite number of internal degrees of freedom. The problem of scattering and capture of a cosmic string by a black hole is interesting for many reasons. In some regimes, it has features in common with the scattering of test particles. In other regimes, its non-local properties give rise to similarities with the problem of black-hole–black-hole scattering. In the process of scattering or capture, one can expect strong gravitational radiation from the string-black hole system; this radiation might be of astrophysical interest in connection with LIGO and other projects searching for gravitational radiation. In our study of string motion we neglect the gravitational effects produced by the string (which for GUT strings are of order $\sim 10^{-6}$) and assume that the width of the string is negligible (for GUT strings the width is of order $\sim 10^{-29}$ cm). In this approximation, a test cosmic string is represented by a two-dimensional world-sheet, and its motion is described by the Nambu-Goto action [@ShVi:94]. From the mathematical point of view, the scattering problem reduces to finding a minimal surface which gives an extremum to the Nambu-Goto action. We are interested in a cosmic string whose length is much greater than the radius of the black hole. For this reason, we will consider a string of infinite length. The interaction of the string with a black hole has two possible outcomes: either the string is captured by the black hole, or it is scattered. In the latter case the string absorbs some energy, so this process is inelastic. A complete description of the final stationary configurations of trapped cosmic strings has already been given [@FrHeLa:96; @FrHeDV:97]. Stationary trapped strings are a special case of stationary string configurations; in the Kerr-Newman spacetime, stationary string configurations admit exact solutions by separation of variables [@FrSkZeHe:89; @CaFr:89]. The general scattering problem, and the determination of the conditions of capture, requires solving the dynamical equations and is a much more complicated problem. A numerical determination of the critical impact parameter for capture has been discussed in Ref.[@LoMo:88; @DVFr:97]. 
This paper is devoted to the analytical study of the motion of a straight cosmic string in the gravitational field of a black hole in the weak-field approximation. At early times (before scattering), and at late times (after scattering), the string is moving in a nearly flat spacetime where the weak-field approximation allows one to formulate the scattering problem in terms of “in” and “out” states of the string. For large impact parameters, the string moves at all times in a region where the weak-field approximation remains valid. Moreover, even if the impact parameter is small and the string reaches the strong-field region near the black hole, the analytic weak-field solutions of the equations of motion are important in formulating the initial and boundary conditions for the numerical computations [@DVFr:98]. In this paper we derive and solve the equations of motion of an infinite straight cosmic string passing near a black hole in the weak-field approximation. We demonstrate that, as a result of scattering, the string is displaced in the direction perpendicular to its motion by an amount $\Delta b\sim -2\pi GMv\gamma/c^3 -\pi (GM)^2/ (4c^3 v b_{imp})$, where $\gamma=(1-(v/c)^2)^{-1/2}$. The second term dominates at low velocities $v/c<(GM/b_{imp})^{1/2}$. This result for low velocity motion is in agreement with the result recently obtained by Page [@Page:98]. The late-time solution is represented by a kink and anti-kink, propagating in opposite directions at the speed of light, and leaving behind them the string in a new “phase”. In the Conclusions, the solutions are applied to the problem of string capture, and are compared to numerical results. Motion of Straight Strings ========================== The aim of this paper is to study the scattering of an infinitely long cosmic string by a black hole. We assume that the string is initially far from the black hole, straight, and moving with constant velocity $v$. We assume that the gravitational field is weak and solve the equations of string motion using perturbation theory. Our starting point is the Polyakov action for the relativistic string [@Poly:81], $$\label{n1.1} I=-{\mu \over 2}\,\int d\tau d\sigma \sqrt{-h}h^{AB}G_{AB}\, .$$ We use units in which $G=c=1$, and the sign conventions of [@MTW]. In (\[n1.1\]) $h_{AB}$ is the internal metric with determinant $h$, and $G_{AB}$ is the induced metric on the world-sheet, $$\label{n1.2} G_{AB}=g_{\mu\nu}{\partial {\cal X}^{\mu}\over \partial\zeta^A}{\partial {\cal X}^{\nu}\over \partial\zeta^B}=g_{\mu\nu} {\cal X}^{\mu}_{,A}{\cal X}^{\nu}_{,B} \, .$$ ${\cal X}^{\mu}$ ($\mu=0,1,2,3$) are the spacetime coordinates and $\zeta^A$ ($A=0,3$) are the world-sheet coordinates $\zeta^0=\tau$, $\zeta^3=\sigma$. Finally, $g_{\mu\nu}$ is the spacetime metric. The variation of the action (\[n1.1\]) with respect to ${\cal X}^{\mu}$ and $h_{AB}$ gives the following equations of motion: $$\label{n1.3} \Box {\cal X}^{\mu}+h^{AB}\Gamma^{\mu}_{\alpha\beta}{\cal X}^{\alpha}_{,A}{\cal X}^{\beta}_{,B}=0\, ,$$ $$\label{n1.4} G_{AB}-{1\over 2}h_{AB}h^{CD}G_{CD}=0 \, ,$$ where $$\label{n1.5} \Box ={1\over \sqrt{-h}}\partial_A(\sqrt{-h}h^{AB}\partial_B)\, .$$ The first of these equations is the dynamical equation for string motion, while the second one plays the role of the constraints. In the absence of the external gravitational field $g_{\mu\nu}=\eta_{\mu\nu}$, where $\eta_{\mu\nu}$ is the flat spacetime metric.
In Cartesian coordinates ($T,X,Y,Z$), $\eta_{\mu\nu}=\mbox{diag}(-1,1,1,1)$ and $\Gamma^{\mu}_{\alpha\beta}=0$, and it is easy to verify that $$\label{n1.6} {\cal X}^{\mu}=X^{\mu}(\tau,\sigma)\equiv (\cosh(\beta)\, \tau, (\sinh\beta)\,\tau+X_0, Y_0,\sigma) \, ,$$ $$\label{n1.7} h_{AB}=\eta_{AB}\equiv \mbox{diag}(-1,1)\, ,$$ is a solution of equations (\[n1.3\]) and (\[n1.4\]). This solution describes a straight string oriented along the $Z$-axis which moves in the $X$-direction with constant velocity $v=\tanh \beta$. Initially, at ${\tau}_{0} = 0$, the string is found at ${\cal X}^{\mu}(0,\sigma) = (0,X_0, Y_0,\sigma)$, with $Y_0$ playing the role of impact parameter, $Y_0 \equiv {b}_{imp}$. For definiteness we choose $Y_0>0$ and $X_0<0$, so that $\beta>0$. It is convenient to introduce an orthogonal tetrad $e^{\mu}_{(m)}$ ($m=0,1,2,3$) connected with the world-plane of the string $$\label{n1.8} e_{(0)}^{\mu}=X_{,\tau}=(\cosh\beta, \sinh\beta, 0, 0) \, , \hspace{0.5cm}e_{(3)}^{\mu}=X_{,\sigma}=(0, 0, 0, 1) \, ,$$ $$\label{n1.9} e_{(1)}^{\mu}=n_1^{\mu}=(\sinh\beta, \cosh\beta, 0, 0) \, , \hspace{0.5cm}e_{(2)}^{\mu}=n_2^{\mu}=(0, 0, 1, 0) \, .$$ The first two unit vectors $X_{,A}^{\mu}$ are tangent to the world-sheet of the string, while the other two $n_R^{\mu}$ ($R=1,2$) are orthogonal to it. It is easy to very that the induced metric $G_{AB}$ on the world-sheet of the string is of the form $$\label{n1.10} G_{AB}=\stackrel{0}{G}_{AB}=\eta_{AB}\, .$$ Weak-Field Approximation ======================== The unperturbed solution is expressed in Cartesian coordinates. To treat the Schwarzschild black hole as a source of perturbations on a flat background, we use isotropic coordinates $(T,X,Y,Z)$ in which the line element of Schwarzschild spacetime is $$\label{n2.1} d{s}^{2}= -{{\left(1 - M/2R \right)}^{2}\over {\left(1 + M/2R \right)}^{2}}\,d{T}^{2} + {\left(1 + {M \over 2\,R} \right)}^{4}\,\left(d{X}^{2}+d{Y}^{2}+d{Z}^{2}\right)\, ,$$ where $R^2=X^2+Y^2+Z^2$. This metric is of the form $$\label{n2.2} ds^2=-(1-2\Phi)dT^2+(1+2\Psi)(dX^2+dY^2+dZ^2)\,$$ with $$\label{n2.3} \Phi={\varphi\over (1+{1\over 2}\varphi)^2}\, ,\hspace{0.5cm} \Psi=\varphi+{3\over 4}\varphi^2+{1\over 4}\varphi^3 + {1\over 32}\varphi^4\,,$$ and $\varphi$ is the Newtonian potential $\varphi= M/R$. In what follows we assume that this potential is small and write[^4] $$\label{n2.4} \Phi=\stackrel{1}{\phi}+\stackrel{2}{\phi}+\ldots= \varphi+a\varphi^2+\ldots \, ,\hspace{0.5cm} \Psi=\stackrel{1}{\psi}+\stackrel{2}{\psi}+\ldots= \varphi+b\varphi^2+\ldots \, .$$ The dots denote terms of order $\varphi^3$ and higher and $$\label{n2.5} a=-1\, ,\hspace{0.5cm}b={3\over 4}\, .$$ A string moving far from the black hole is moving in the perturbed metric $$\label{n2.6} g_{\mu\nu}=\eta_{\mu\nu}+\gamma_{\mu\nu}\, , \hspace{0.5cm}\gamma_{\mu\nu}=\stackrel{1}{\gamma}_{\mu\nu}+ \stackrel{2}{\gamma}_{\mu\nu}+\ldots\,\, ,$$ $$\label{n2.7} \stackrel{1}{\gamma}_{\mu\nu}=2\varphi\,\,\delta_{\mu\nu}\, , \hspace{0.5cm} \stackrel{2}{\gamma}_{\mu\nu}= 2 \varphi^2\, \pi_{\mu\nu}\, ,\hspace{0.5cm} \pi_{\mu\nu}=a\delta^0_{\mu}\delta^0_{\nu}+ b \delta^i_{\mu}\delta^j_{\nu}\delta_{ij}\, .$$ Here $i,j=1,2,3$ and $\delta_{ij}$ is the Kronecker $\delta$-symbol. The perturbation, $\gamma_{\mu\nu}$, of the metric results in the perturbations $\delta X^{\mu}$ and $\delta h_{AB}$ of the flat-spacetime solution (\[n1.6\]) and (\[n1.7\]). The equations describing these perturbations can be obtained by perturbing string equations (\[n1.3\]) and (\[n1.4\]). 
For this purpose we decompose the perturbation of the string as $$\label{n2.8} \delta X^{\mu}=\chi^m e^{\mu}_{(m)}= \chi^{R}n^{\mu}_{R}+\chi^{A}X^{\mu}_{A} \, ,$$ where the four scalar functions of two variables, $\chi^m(\tau,\sigma)$, describe the deflection of the string world-sheet from the plane (\[n1.6\]). As done earlier, we expand $\chi^m={\stackrel{1}{\chi}}{}^m+{\stackrel{2}{\chi}}{}^m+\ldots$ in powers of $\varphi$. We will also use the expansion of the internal metric $h_{AB}$ $$\label{n2.9} h_{AB}=\eta_{AB}+{\stackrel{\tiny{1}}{h}}_{AB}+{\stackrel{\tiny{2}}{h}}_{AB}+\ldots\, .$$ The first-order corrections will be treated next and then applied to the general scattering problem. Second-order corrections will be discussed last to obtain the low-velocity behavior of strings. First-Order Corrections ======================= We start by considering effects which are of the first order in $\varphi$. In this approximation, the induced metric is $$\label{n3.1} {G}_{AB}=\eta_{AB}+{\stackrel{\tiny{1}}{\gamma}}_{AB}+2{\stackrel{\tiny{1}}{\chi}}_{(A,B)}\, ,$$ where, $$\label{n3.1a} {\stackrel{\tiny{1}}{\gamma}}_{AB}={\stackrel{\tiny{1}}{\gamma}}_{\mu\nu}X^{\mu}_{A}X^{\nu}_{B}= 2\varphi\,\, \mbox{diag}(1-2\sinh^2\beta, 1)\, .$$ The perturbation of the constraint equation (\[n1.4\]) has the form $$\label{n3.2} {\stackrel{\tiny{1}}{\gamma}}_{AB}+2{\stackrel{\tiny{1}}{\chi}}_{(A,B)}-{\stackrel{\tiny{1}}{h}}_{AB} -{Q}\eta_{AB}=0\, ,$$ where $$\label{n3.3} {Q}={1\over 2}\eta^{CD}\left[{\stackrel{\tiny{1}}{h}}_{CD}+{\stackrel{\tiny{1}}{\gamma}}_{CD}+2{\stackrel{\tiny{1}}{\chi}}_{(C,D)} \right]\,.$$ The tensor ${\stackrel{\tiny{1}}{\gamma}}_{AB}$ on the two-dimensional world-sheet can be decomposed as[^5] $$\label{n3.4} {\stackrel{\tiny{1}}{\gamma}}_{AB}={1\over 2}{\stackrel{\tiny{1}}{\gamma}}\eta_{AB} +2\xi_{(A,B)}-{1\over 2} \eta_{AB} \eta^{CD}\xi_{C,D} \,.$$ By comparing (\[n3.2\]) and (\[n3.4\]) one can conclude that one can always choose ${\stackrel{\tiny{1}}{\chi}}_{A}$ so that $$\label{n3.5} {\stackrel{\tiny{1}}{h}}_{AB}={\stackrel{\tiny{1}}{h}} \eta_{AB}\, .$$ To reach this it is sufficient to put ${\stackrel{\tiny{1}}{\chi}}_A=-\xi_A$. For this choice we have $$\label{n3.6} {\stackrel{\tiny{1}}{\gamma}}_{AB}={1\over 2}{\stackrel{\tiny{1}}{\gamma}} \eta_{AB} -2{\stackrel{\tiny{1}}{\chi}}_{(A,B)}-{1\over 2} \eta_{AB} \eta^{CD}{\stackrel{\tiny{1}}{\chi}}_{C,D} \, .$$ Using (\[n3.1a\]) we get $$\label{n3.6a} {\stackrel{\tiny{1}}{\chi}}_{0,0}+{\stackrel{\tiny{1}}{\chi}}_{3,3}=-2\varphi\, \cosh^2\beta\, ,\hspace{0.5cm} {\stackrel{\tiny{1}}{\chi}}_{0,3}+{\stackrel{\tiny{1}}{\chi}}_{3,0}=0\, .$$ In what follows we choose these gauge fixing conditions[^6]. Let us consider now the perturbation of the dynamical equation (\[n1.3\]). First, we note that equation (\[n3.5\]) implies that $\sqrt{-h}h^{AB}$ is equal to $\eta^{AB}$ up to the terms which are quadratic in $\varphi$. 
As a result we have $$\label{n3.7} \Box {\stackrel{\tiny{1}}{\chi}}_m+\eta^{AB}{\stackrel{\tiny{1}}{\Gamma}}_{\mu,\alpha\beta} X^{\alpha}_{,A} X^{\beta}_{,B}\, \, e^{\mu}_{(m)}=0\, .$$ In these equations, $$\label{n3.8} \Box = -{\partial^2\over \partial\tau^2}+ {\partial^2\over \partial\sigma^2}\, ,$$ and $$\label{n3.9} {\stackrel{\tiny{1}}{\Gamma}}_{\mu,\alpha\beta}={1\over 2}({\stackrel{\tiny{1}}{\gamma}}_{\mu\alpha,\beta}+ {\stackrel{\tiny{1}}{\gamma}}_{\mu\beta,\alpha}-{\stackrel{\tiny{1}}{\gamma}}_{\alpha\beta,\mu}) =\varphi_{,\alpha}\delta_{\mu\beta} +\varphi_{,\beta}\delta_{\mu\alpha}-\varphi_{,\mu}\delta_{\alpha\beta} \, .$$ Since “longitudinal” fields ${\stackrel{\tiny{1}}{\chi}}_A$ are already fixed by our gauge fixing condition we need to verify that equations (\[n3.7\]) for $m=A$ are identically satisfied for this choice and do not give additional restrictions. For this purpose we remark that $$\label{n3.10} {\stackrel{\tiny{1}}{\Gamma}}_{\mu,\alpha\beta} X^{\alpha}_{,A} X^{\beta}_{,B}X^{\mu}_{,C}={1\over 2}({\stackrel{\tiny{1}}{\gamma}}_{AC,B}+ {\stackrel{\tiny{1}}{\gamma}}_{BC,A}-{\stackrel{\tiny{1}}{\gamma}}_{AB,C})\, .$$ By using this relation and (\[n3.6\]) it is easy to verify that (\[n3.7\]) is satisfied identically for $m=A$. Let us now discuss dynamical equation (\[n3.7\]). It has the form $$\label{n3.11} \Box {\stackrel{\tiny{1}}{\chi}}_R=\left(-{\partial^2\over \partial\tau^2}+ {\partial^2\over \partial\sigma^2}\right){\stackrel{\tiny{1}}{\chi}}_R={\stackrel{\tiny{1}}{f}}_R\, ,$$ and describes “transverse” perturbations of the straight string under the action of the external gravitational force ${\stackrel{\tiny{1}}{f}}_R$. To calculate ${\stackrel{\tiny{1}}{f}}_R$, note that $$\label{n3.12} {\stackrel{\tiny{1}}{f}}_m={\stackrel{\tiny{1}}{K}}_{\mu}e^{\mu}_{(m)}\, ,$$ $${\stackrel{\tiny{1}}{K}}_{\mu}=-\eta^{AB}{\stackrel{\tiny{1}}{\Gamma}}_{\mu,\alpha\beta} X^{\alpha}_{,A} X^{\beta}_{,B}$$ $$\label{n3.13} =\cosh^2\beta {\stackrel{\tiny{1}}{\Gamma}}_{\mu,00}+ 2\sinh\beta\,\cosh\beta {\stackrel{\tiny{1}}{\Gamma}}_{\mu,01}+ \sinh^2\beta {\stackrel{\tiny{1}}{\Gamma}}_{\mu,11}- {\stackrel{\tiny{1}}{\Gamma}}_{\mu,33} \, .$$ Simple calculations give $$\label{n3.14} {\stackrel{\tiny{1}}{K}}_{\mu}=-2\sinh^2\beta \,\,\varphi_{,\mu}+ 2\sinh\beta\,\cosh\beta \,\,\varphi_{,1}\delta^0_{\mu}+ 2\sinh^2\beta \,\,\varphi_{,1}\delta^1_{\mu} -2\varphi_{,3}\delta^3_{\mu} \, .$$ Using these results one easily obtains $$\label{n3.15} {\stackrel{\tiny{1}}{f}}_1=2\sinh^2\beta\, \cosh\beta \, \, \varphi_{,X}\, , \hspace{0.5cm} {\stackrel{\tiny{1}}{f}}_2=-2\sinh^2\beta\,\, \varphi_{,Y} \, ,$$ $$\label{n3.16} {\stackrel{\tiny{1}}{f}}_0=2\sinh\beta\, \cosh^2\beta \, \, \varphi_{,X}\, , \hspace{0.5cm} {\stackrel{\tiny{1}}{f}}_3=-2\cosh^2\beta\,\, \varphi_{,Z} \, ,$$ The components $f_R$ normal to the string world-sheet are components of the physical force acting on the string. The components $f_A$ acting along the string provided a motion along the world-sheet which has no physical meaning and can be removed by coordinate transformations. String Scattering ================= Equation (\[n3.11\]) for string propagation in a weak gravitational field can be easily solved. 
The retarded Green’s function for the 2D $\Box$-operator is $$\label{n4.1} {G}_{0}\left(\sigma,\tau \mid \sigma',\tau'\right) = {1 \over 2}\, \Theta(\tau - \tau' - \mid \sigma - \sigma' \mid)\, .$$ Using this Green’s function we can write a solution of (\[n3.11\]) in the form[^7] $$\label{n4.2} {\chi}_m(\tau,\sigma) ={\chi}_m^0(\tau,\sigma)+{\chi}_m^+(\tau+\sigma)+ {\chi}_m^-(\tau-\sigma)\, ,$$ where $${\chi}_m^0(\tau,\sigma)=-\int_{{\tau}_{0}}^{{\tau}}{d\tau'} \int_{-\infty}^{\infty}{d\sigma'} {G}_{0}\left(\sigma,\tau \mid \sigma',\tau'\right) {f}_m(\tau',\sigma')$$ $$\label{n4.3} =-{1\over 2}\int_{\tau_0}^{\tau} d\tau' \int_{\sigma-(\tau-\tau')}^{\sigma+\tau-\tau'} d\sigma' f_m(\tau',\sigma') \,$$ is a solution of the inhomogeneous equation and ${\chi}_m^{\pm}$ are solutions of the homogeneous equation which are fixed by the initial data $$\label{n4.4} {\chi}_m(\tau_0,\sigma)={\chi}_m^+(\tau_0+\sigma)+ {\chi}_m^-(\tau_0-\sigma)\, ,\hspace{0.5cm} \dot{{\chi}}_m(\tau_0,\sigma)=\dot{\chi}_m^+(\tau_0+\sigma)+ \dot{\chi}_m^-(\tau_0-\sigma)\, .$$ Let us first consider perturbations perpendicular to the direction of motion (the $Y$-direction), described by $\chi_2$. We assume that initially (at the infinite past) $\chi_2=0$. For this initial condition at $\tau_0=-\infty$, ${\chi}_2^{\pm}=0$. The asymptotic final solution (at the infinite future) takes the form $$\label{n4.5} {\chi}_2(\tau=\infty)= \lim_{\tau\rightarrow \infty}{\chi}_2^0(\tau,\sigma) =-{1\over 2}\int_{-\infty}^{\infty}{d\tau'} \int_{-\infty}^{\infty}{d\sigma'}\, \, {f}_2(\tau',\sigma')\, .$$ Substituting expression (\[n3.15\]) for $f_2$ and making a change of variables of integration from $(\tau,\sigma)$ to $(X,Z)$ we get $$\label{n4.6} {\chi}_2(\tau=\infty)=-\sinh\beta\, \int_{-\infty}^{\infty}{dX} \int_{-\infty}^{\infty}{dZ}\, \, {\partial\varphi\over \partial Y}(X,Y_0,Z)\, .$$ The integral represents the flux of the Newtonian gravitational field through the plane $Y=Y_0$, which is equal to $2\pi M$ (that is, half of the total flux $4\pi M$). Using this simple observation we find that, as a result of scattering, the string as a whole is displaced in the $Y$-direction by a constant value $$\label{n4.7} {\chi}_2(\tau=\infty)=-2\pi M\sinh\beta\, .$$ At late but finite time only part of the string is displaced. The size of the displaced region grows at the speed of light. The transition between the “old” and “new” phases occurs at two kinks moving in opposite directions. The late-time solution can be found explicitly, and is schematically shown in Figure 1. The background world-sheet sweeps out a flat plane in space; denote this the [*in-string plane*]{}, the plane in which the motion of the string lies at early times. At late times, the scattered string approaches another plane, offset from the in-string plane down by $|{\chi}_{2}^{\infty}|$; denote this the [*out-string plane*]{}. As the energy acquired by the string is propagated to infinity through the two kinks, more and more of the string falls to the out-string plane. The asymptotic deflection, ${\chi}_{2}^{\infty}$, is determined by the properties of the encounter, and is given by (\[n4.7\]).
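The flux argument behind (\[n4.6\]) and (\[n4.7\]) is easy to check numerically. The sketch below (geometrized units $G=c=1$; the values of $M$, $Y_0$, and $\beta$ are arbitrary illustrative choices) reduces the plane integral to a radial integral and compares it with half of the total flux $4\pi M$.

```python
import numpy as np
from scipy.integrate import quad

# Sketch of the flux argument behind eq. (n4.7), in geometrized units G = c = 1.
# On the plane Y = Y0 one has |dphi/dY| = M*Y0/(X^2 + Y0^2 + Z^2)^(3/2), so in polar
# coordinates the plane integral reduces to a radial integral. M, Y0, beta are
# arbitrary illustrative values.
M, Y0, beta = 1.0, 40.0, 1.0

flux, _ = quad(lambda r: M * Y0 * r / (r ** 2 + Y0 ** 2) ** 1.5, 0.0, np.inf)
flux *= 2 * np.pi                            # angular part of the polar integral

print("flux / (pi*M):", round(flux / (np.pi * M), 6))        # -> 2, half of 4*pi*M
print("|chi_2(tau=inf)| = 2*pi*M*sinh(beta):", round(2 * np.pi * M * np.sinh(beta), 3))
```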
=N Y If a straight string starts its motion ($\tau=0$) at $X_0<0$, $$\label{n4.8} {\chi}_2(0,\sigma)=\dot{\chi}_2(0,\sigma)=0\, ,$$ and the solution has the form $$\label{n4.9} {\chi}_2(\tau,\sigma)=-M\sinh\beta\,\left[H_{+}(\tau,\sigma)+ H_{-}(\tau,\sigma)\right] \, ,$$ where $$H_{\pm}(\tau,\sigma)= \arctan{\left[{{Y}_{0}^{2} + ({X}_{0}+\tau\,\sinh{\beta})\, ({X}_{0}+{s}_{\pm}\,\sinh{\beta}) \over {Y}_{0} \,\sinh \beta \, \, R(\tau,\sigma)}\right]}$$ $$\label{n4.10} -\arctan{\left[{{X}_{0}\left({X}_{0} + {s}_{\pm} \, \sinh{\beta}\right) + {Y}_{0}^{2} \over {Y}_{0} \,\sinh{ \beta }\sqrt{{\rho}^{2}+{s}_{\pm}^{2}}}\right]}\, .$$ We use the notation $$\label{n4.11} R^{2}(\tau,\sigma) = {\left({X}_{0}+\tau \sinh{\beta}\right)}^{2} + {Y}^{2}_{0} + {\sigma}^{2}\, ,\hspace{0.5cm} {\rho}^{2} = {X}_{0}^{2} + {Y}_{0}^{2}\, ,\hspace{0.5cm} s_{\pm}=\tau\pm\sigma\, .$$ At the moment when $\tau\sinh\beta=-X_0$, the string passes at the closest distance from the source of the gravitational field. In order to study the late time behavior of the string, let us consider the limit when $X_0=-L$, $\tau\sinh\beta=2L$, and $L\rightarrow\infty$. In this limit, the expression for $H_{\pm}$ simplifies to $$\label{n4.12} H_{\pm}\approx\arctan{\left[{L\pm\sigma\sinh\beta\over{Y_0\sinh\beta \sqrt{1+(\sigma/L)^2}}}\right]}+ \arctan{\left[{L\pm\sigma\sinh\beta\over{Y_0\sinh\beta \sqrt{1+((2/\sinh\beta)\pm\sigma/L)^2}}}\right]}\, .$$ For $H_{\pm}$ the kink is located near $\sigma=\mp L/\sinh\beta$. Using this fact we can further simplify the asymptotic expression for $H_{\pm}$ and to write it in the form $$\label{n4.13} H_{\pm}\approx 2\arctan{\left[{L\pm\sigma\sinh\beta \over{Y_0\cosh\beta}}\right]}\, .$$ At late time in the asymptotic region where $g_{\mu\nu}\approx\eta_{\mu\nu}$ the action (\[n1.1\]) can be written as the sum of the action for the straight string and a term which is quadratic in perturbations. This term is of the form (for details see Ref.[@FrLa:94]) $$\label{n4.12a} I_2=-{\mu \over 2}\,\int d\tau d\sigma \sqrt{-h}h^{AB}\chi^R_{,A}\chi^R_{,B}\, .$$ Hence the contributions of $\chi_2$ to the energy is $$\label{n4.12b} E = {\mu \over 2} \int_{-\infty}^{\infty} { d\sigma\,\left\{ {\left({\partial \chi_2 \over \partial \tau}\right)}^{2}+ {\left({\partial \chi_2 \over \partial \sigma}\right)}^{2}\right\}}\, .$$ Using solution (\[n4.13\]) and $$\label{pert.25a} {\partial \chi_2 \over \partial \tau} = {\partial \chi_2 \over \partial L}\,{\partial L \over \partial \tau} = {\sinh{\beta} \over 2}{\partial \chi_2 \over \partial L}\, ,$$ the integrals can be evaluated in a straightforward manner. In the limit $L~\rightarrow~\infty$, the energy carried away by each of the kinks has a very simple form $$\label{pert.25b} E = {5 \mu \over 32 \pi} {{A}_{\infty}^{2} \over w} \, ,$$ where $w = Y_0\,\coth{\beta}$ is the width of the kinks, and ${A}_{\infty} = \mid \chi_2(\tau = \infty)\mid$ their late-time amplitude. One can also obtain solutions for the other components of $\chi_m$. Substituting (\[n3.15\]) and (\[n3.16\]) into (\[n4.3\]) and performing the integrations one gets $$\begin{aligned} \label{pert.21} \chi_0(\tau,\sigma) & = & M\,{\cosh}(\beta) \left[ \ln{\left({F}_{+}(\tau,\sigma)\right)} + \ln{\left({F}_{-}(\tau,\sigma)\right)} \right. \\ \nonumber & + & {1 \over 2}\,{\cosh}(\beta)\, \left. 
\left[ \it{sgn}{({s}_{+})} \ln{\left({G}_{+}(\tau,\sigma)\right)} + \it{sgn}{({s}_{-})} \ln{\left({G}_{-}(\tau,\sigma)\right)} \right]\right]\\ {\chi}_{1}(\tau,\sigma) & = & -M \,{\sinh}(\beta) \left[ \ln{\left({F}_{+}(\tau,\sigma)\right)} + \ln{\left({F}_{-}(\tau,\sigma)\right)} \right. \\ \nonumber & + & {1 \over 2}\,{\cosh}(\beta)\, \left.\left[ \it{sgn}{({s}_{+})} \ln{\left({G}_{+}(\tau,\sigma)\right)} + \it{sgn}{({s}_{-})} \ln{\left({G}_{-}(\tau,\sigma)\right)} \right]\right]\\ {\chi}_{3}(\tau,\sigma) & = & M\,{\cosh}(\beta)\, \left[ \ln{\left({F}_{+}(\tau,\sigma)\right)} - \ln{\left({F}_{-}(\tau,\sigma)\right)} \right]\end{aligned}$$ where $$\begin{aligned} \label{pert.23} {F}_{\pm}(\tau,\sigma) & = & {R\,\cosh{\beta} + \tau\,{\cosh}^{2}{\beta} + {X}_{0}\,\sinh{\beta} - {s}_{\pm} \over \cosh{\beta}\,\sqrt{{\rho}^{2} + {s}_{\pm}^{2}} + {X}_{0}\,\sinh{\beta} - {s}_{\pm}}\\ {G}_{\pm}(\tau,\sigma) & = & {\sqrt{{\rho}^{2} + {s}_{\pm}^{2}} - \mid {s}_{\pm} \mid \over \sqrt{{\rho}^{2} + {s}_{\pm}^{2}} + \mid {s}_{\pm} \mid }\end{aligned}$$ As was done for $\chi_2$, expressions (\[pert.23\]) can be rewritten in terms of the parameter $L$ (with $L \gg Y_0$), $$\begin{aligned} \label{pert.23a} {F}_{\pm} & = & {\cosh{\beta} \sqrt{1+(\sigma/L)^2} + \left({\cosh}^{2}{\beta} + 1\right)/\sinh{\beta} - (2/\sinh\beta\pm\sigma/L) \over \cosh{\beta}\,\sqrt{1+(2/\sinh\beta\pm\sigma/L)^2} - \sinh{\beta}- (2/\sinh\beta\pm\sigma/L)}\\ {G}_{\pm} & = & {\sqrt{1+(2/\sinh\beta\pm\sigma/L)^2} - \mid 2/\sinh\beta\pm\sigma/L \mid \over \sqrt{1+(2/\sinh\beta\pm\sigma/L)^2} + \mid 2/\sinh\beta\pm\sigma/L \mid}\end{aligned}$$ In rewriting the expressions in terms of the location of the kink, $\sigma=\mp L/\sinh\beta$, one sees that, ${F}_{\pm} \rightarrow \infty$ and ${G}_{\pm} \rightarrow \left(\cosh{\beta}-1\right)/ \left(\cosh{\beta}+1\right)$. Whereas the contributions $\ln G_\pm$ are well behaved, those from $\ln F_\pm$ generate a logarithmic divergence in $\chi_0$ and $\chi_1$. This divergence is the result of the long-range nature of gravitational forces and it is similar to the logarithmic divergence of the phase for the Coulomb scattering in quantum mechanics. It vanishes for potentials vanishing at infinity rapidly enough. The perturbations $\chi_m$ are illustrated by Figures 2–4, where the solutions are applied to the case of a straight string with initial velocity $v = \tanh{\beta} = 0.76c$ and impact parameter $b = {Y}_{0} = 40 {r}_{g}$ (for $-2000 {r}_{g} < \sigma < 2000 {r}_{g}$). These figures show each perturbation at late proper time, when the string is well past the black hole. The ${\chi}^{0}$ and ${\chi}^{1}$ perturbations (Figure 2) are exceedingly small. The ${\chi}^{2}$ perturbation (Figure 3) describes the deformation of the string normal to the background world-sheet; the two kink-like pulses propagating away from the $Z = 0$ plane at the speed of light are clearly visible, and their amplitude is considerably larger that any of the other perturbations. These pulses carry energy away to infinity and, in the process, shift the string’s late-time position roughly $3.5 {r}_{g}$ below the original position. The ${\chi}^{3}$ perturbation (Figure 4) represents lateral displacements of points on the string towards the $Z = 0$ plane; the amplitude of these displacements is small in the weak field limit, but will become significant in the limit of ultra-relativistic velocity and shallow impact parameter, where lateral displacements of the string are involved in transient loop formation [@DVFr:98]. 
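The late-time kink and anti-kink structure of $\chi_2$ can be sampled directly from the asymptotic form (\[n4.13\]). The following sketch uses geometrized units $G=c=1$ with $M=1$; the velocity and impact parameter loosely mirror the example just described ($v=0.76c$, $Y_0 = 40\,r_g$), and the sampling grid is an arbitrary choice.

```python
import numpy as np

# Sketch of the late-time kink/anti-kink profile of chi_2, using the asymptotic form
# (n4.13): chi_2 ~ -M sinh(beta) [H_+ + H_-], with
# H_pm ~ 2 arctan[(L +/- sigma sinh(beta)) / (Y0 cosh(beta))] and L = tau*sinh(beta)/2.
# Geometrized units G = c = 1, M = 1; Y0 = 40 r_g = 80 M; values are illustrative.
M, v = 1.0, 0.76
Y0 = 80.0
beta = np.arctanh(v)

def chi2_late(sigma, L):
    H_plus = 2 * np.arctan((L + sigma * np.sinh(beta)) / (Y0 * np.cosh(beta)))
    H_minus = 2 * np.arctan((L - sigma * np.sinh(beta)) / (Y0 * np.cosh(beta)))
    return -M * np.sinh(beta) * (H_plus + H_minus)

sigma = np.linspace(-4000.0, 4000.0, 9)
for L in (500.0, 2000.0):
    print(f"L = {L:6.0f}:", np.round(chi2_late(sigma, L), 2))

# Between the two kinks (|sigma| < L/sinh(beta)) the string sits near the constant
# offset -2*pi*M*sinh(beta); far outside the kinks it is still close to zero.
print("asymptotic offset:", round(-2 * np.pi * M * np.sinh(beta), 3))
```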
The perturbation solutions can be used to reconstruct the full world-sheet of the string in Cartesian coordinates, using $$\label{pert.24} {\cal X}^{\mu}= {X}^{\mu} + \chi^m e^{\mu}_{(m)} \, .$$ Such a reconstruction is shown in Figure 5, and was also used to generate the schematic representation in Figure 1. Figure 5 shows a sequence of string configurations separated by constant intervals of proper time in two separate views. The view on the left looks down on the XZ plane and shows the outward propagation of the two pulses (the black hole lies at the origin). Note that the view is a 3D projection; the kinks appear to extend in the X-direction, but they actually lie in the Y-direction (the effect is an artifact of the viewpoint chosen for this view). The view on the right looks toward the origin along the direction of motion, and shows the growth of the perturbations along the Y axis. Comparing with Figures 2 through 4, it is easily seen that the shape of the perturbed world-sheet is almost completely determined by the $\chi_2$ perturbation. The contribution from the other perturbations is undetectable on the scale used in Figure 5. At late times, the string is deflected by an amount $|{\chi}_{2}^{\infty}|$, as given by (\[n4.7\]). To summarize, the scattering of the string in a weak gravitational field, calculated to first order in $\varphi$, results in the displacement of the string in the direction perpendicular to the motion by the value ${\chi}_2(\tau=\infty)=-2\pi GM\sinh\beta/c^2$. At any finite but large value of $\tau$ the solution represents a kink and anti-kink of width $Y_0\coth\beta=Y_0 c/v$ propagating in opposite directions at the speed of light and leaving behind them the string in the new “phase” with $Y=Y_0+{\chi}_2(\tau=\infty)$. Low-Velocity Limit ================== As was already mentioned, the components $f_R$ normal to the string world-sheet are components of the physical force acting on the string. As can be seen from (\[n3.15\]), the force $f_R$ acting on the string vanishes in the limit $v\rightarrow 0$. This fact has a simple physical explanation. As was shown in [@FrSkZeHe:89], a static string configuration in a static spacetime is a geodesic in a spacetime with the metric $|g_{00}|g_{ij}$, which in our case takes the form $$\label{n5.1} dS^2=|g_{00}|g_{ij}dx^i dx^j=(1-2\Phi)(1+2\Psi)(dX^2+dY^2+dZ^2)\, .$$ In the leading order, $\Phi=\Psi=\varphi$, and the string is a straight line [@ShVi:94]. In other words, in the first-order approximation the force acting on a static string in the static spacetime of a black hole vanishes. For this reason, the leading terms in the expansion of the force are of the second order in $\varphi$, and they remain so until $v/c\sim (GM/b_{imp})^{1/2}$. In this section we discuss the effect of these second order terms on the motion of the string in the limit of very small velocities. Substituting (\[n2.8\]) into the dynamical equations (\[n1.3\]) we get $$\label{n5.2} n^{\mu}_R\,\, \Box {\stackrel{\tiny{2}}{\chi}}{}^{\! R}= {\stackrel{\tiny{2}}{f}}{}^{\! \mu}\equiv A^{\mu}+B^{\mu}\, ,$$ where the $\Box$-operator is given by (\[n3.10\]) and, $$\label{n5.3} A^{\mu}=-\eta^{AB}{\stackrel{\tiny{2}}{\Gamma}}{}^{\! \mu}_{\alpha\beta}X^{\alpha}_{,A} X^{\beta}_{,B}\, , \hspace{0.5cm} B^{\mu}=-\eta^{AB}{\stackrel{\tiny{1}}{\Gamma}}{}^{\! \mu}_{\alpha\beta}({\stackrel{\tiny{1}}{\chi}}{}^{\! m}_{,B} X^{\alpha}_{,A} e^{\beta}_{(m)}+{\stackrel{\tiny{1}}{\chi}}{}^{\!
m}_{,A} X^{\beta}_{,B} e^{\alpha}_{(m)}) \, .$$ Note that $$\label{n5.4} {\stackrel{\tiny{2}}{\Gamma}}{}^{\!\mu}_{\alpha\beta}=\eta^{\mu\nu} {\stackrel{\tiny{2}}{\Gamma}}_{\nu,\alpha\beta}+ {\stackrel{\tiny{1}}{\gamma}}{}^{\!\mu\nu}{\stackrel{\tiny{1}}{\Gamma}}_{\nu,\alpha\beta}\,$$ where ${\stackrel{\tiny{1}}{\Gamma}}_{\nu,\alpha\beta}$ is given by (\[n3.9\]) and, $$\label{n5.5} {\stackrel{\tiny{2}}{\Gamma}}_{\nu,\alpha\beta}=2\varphi\, (\varphi_{,\alpha}\, \pi_{\nu\beta} +\varphi_{,\beta}\, \pi_{\nu\alpha}-\varphi_{,\nu}\, \pi_{\alpha\beta}) \, .$$ We focus our attention on the corrections to the motion of the string in the $Y$-direction. It is easy to verify that $$\label{n5.6} A^{2}=\cosh^2\beta {\stackrel{\tiny{2}}{\Gamma}}{}^{\! 2}_{00}- {\stackrel{\tiny{2}}{\Gamma}}{}^{\! 2}_{33}= 2\varphi\varphi_{,2}[(1-a)\cosh^2\beta -(1-b)]\, .$$ Calculations also give $$\label{n5.7} B^2=2\varphi_{,2}\left[(\cosh^2\beta+\sinh^2\beta){\stackrel{\tiny{1}}{\chi}}_{0,0}- \sinh(2\beta){\stackrel{\tiny{1}}{\chi}}_{1,0}+{\stackrel{\tiny{1}}{\chi}}_{3,3}\right]- 2\varphi_{,3}{\stackrel{\tiny{1}}{\chi}}_{2,3}\, .$$ At low velocities one has $$\label{n5.8} A^{2}\sim 2\varphi\varphi_{,2} (b-a)\, ,\hspace{0.5cm} B^2\sim 2\varphi_{,2} ({\stackrel{\tiny{1}}{\chi}}_{0,0}+{\stackrel{\tiny{1}}{\chi}}_{3,3})-2\varphi_{,3}{\stackrel{\tiny{1}}{\chi}}_{2,3}\, .$$ Equation (\[n4.9\]) shows that ${\stackrel{\tiny{1}}{\chi}}_{2}$ vanishes at $\beta\rightarrow 0$. Using relation (\[n3.6a\]), we finally get $$\label{n5.9} {\stackrel{\tiny{2}}{f}}_{2}\sim -2(2+a-b) \varphi\varphi_{,2} \, .$$ Using a relation similar to (\[n4.5\]) we get $$\label{n5.10} {\stackrel{\tiny{2}}{\chi}}_2(\tau=\infty)={2+a-b\over 2\sinh\beta}\, \int_{-\infty}^{\infty}{dX} \int_{-\infty}^{\infty}{dZ}\, \, {\partial\varphi^2\over \partial Y}(X,Y_0,Z)\, .$$ Calculating the integral one gets $$\label{n5.11} {\stackrel{\tiny{2}}{\chi}}_2(\tau=\infty)=-{\pi M^2(2+a-b)\over Y_0\sinh\beta }\,.$$ For the scattering of the string on the Schwarzschild black hole, $a=-1$ and $b=3/4$, so that one has $$\label{n5.12} {\stackrel{\tiny{2}}{\chi}}_2(\tau=\infty)=-{\pi (GM)^2\over 4c^4 Y_0\sinh\beta }\,.$$ Using the expressions for the coefficients $a$ and $b$ for the scattering on a Reissner-Nordstrom black hole (see footnote 1) one gets $$\label{n5.13} {\stackrel{\tiny{2}}{\chi}}_2(\tau=\infty)=-{\pi ((GM)^2-GQ^2)\over 4c^4 Y_0 \sinh\beta }\,.$$ These results are in complete agreement with the results obtained by Page [@Page:98]. Conclusions =========== We analyzed the motion of a cosmic string in the gravitational field of a black hole in the approximation where the field is weak. In particular, we demonstrated that after passing the black hole, the string continues its motion with the same velocity $\bf v$ as before scattering, but it is displaced toward the black hole, perpendicular to $\bf v$, by the distance $\Delta b\sim -2\pi GM\sinh\beta/c^2 -\pi (GM)^2/ (4c^3 v b_{imp})$, where $b_{imp}$ is the impact parameter. If $|b_{imp}-\Delta b|\gg r_g=2GM/c^2$ the string always moves in a weak field. If $|b_{imp}-\Delta b|\sim r_g=2GM/c^2$ the string enters the strong-field region near the black hole and it can be captured. This allows us to give the following estimate of the critical capture impact parameter. For low velocity, $(v/c)\ll 1$, $$\label{n6.1} b_{capture} \sim {GM \over c^2} \left(\pi\over 4\right)^{1/2}\,\left(v\over c\right)^{-1/2}\, .$$ For $v/c>(GM/b_{imp})^{1/2}$ the first term in $\Delta b$ dominates.
This means that the capture impact parameter grows for both small and large values of $v/c$, and hence there exists a velocity $v$ for which the capture impact parameter has a minimum value. This conclusion is confirmed by the results of the numerical computations of the capture impact parameter [@DVFr:98]. The numerical results also demonstrate that for ultrarelativistic velocities the critical impact parameter $b_{capture}$ reaches the value $3\sqrt{3}GM/c^2$, the same value as the capture parameter for ultrarelativistic particles. Besides giving us a qualitative understanding of the scattering and capture of cosmic strings by black holes, the analysis of the weak-field approximation is important for the numerical study of these processes in the strong field. Before and after the scattering the string moves far from the black hole, where the gravitational field is weak. Thus one can use the above analysis to provide a well-defined description of “in”- and “out”-states of the string and to formulate the scattering problem. We discuss this in Ref.[@DVFr:98]. Acknowledgements: This work was partly supported by the Natural Sciences and Engineering Research Council of Canada. One of the authors (V.F.) is grateful to the Killam Trust for its financial support. The authors are grateful to Arne Larsen for early insights into the perturbative equations of motion. The authors are also grateful to Don Page, who made his paper [@Page:98] available prior to publication. We also thank him for various discussions. E. P. S. Shellard and A. Vilenkin, [*Cosmic Strings.*]{} (Cambridge Univ. Press, Cambridge) (1994). V.P. Frolov, S. Hendy, and A.L. Larsen, Phys. Rev. [**D54**]{}, 5093 (1996). V.P. Frolov, S. Hendy, and J.P. De Villiers, Class.Quant.Grav. [**14**]{}, 1099 (1997). A.M. Polyakov, Phys. Lett. [**B103**]{}, 207-210 (1981). V.P. Frolov, V.D. Skarzhinsky, A.I. Zelnikov, and O. Heinrich, Phys. Lett. [**B224**]{}, 255 (1989). B. Carter and V.P. Frolov, Class.Quant.Grav. [**6**]{}, 569 (1989). S. Lonsdale and I. Moss, Nucl. Phys. [**B298**]{}, 693 (1988). J.P. De Villiers and V.P. Frolov, [*Gravitational Capture of Cosmic Strings by a Black Hole.*]{} Preprint, [**Alberta Thy 14-97; gr-qc/9711045**]{}. J.P. De Villiers and V.P. Frolov, [*Gravitational Scattering of Cosmic Strings by a Black Hole.*]{} (paper under preparation). D.N. Page, [*Gravitational Capture and Scattering of Straight Test Strings with Large Impact Parameters.*]{} Preprint, [**Alberta Thy 05-98; gr-qc/98mmddnn**]{}. C.W. Misner, K.S. Thorne, and J.A. Wheeler, [*Gravitation*]{} (W.H. Freeman, San Francisco, 1973). G.W. Gibbons and M.J. Perry, Nucl. Phys. [**B146**]{}, 90 (1978). V.P. Frolov and A.L. Larsen, Nucl. Phys. [**B414**]{}, 129 (1988). [^1]: e-mail: [email protected] [^2]: e-mail: [email protected] [^3]: Preprint Alberta Thy 04-98 [^4]: The same form (\[n2.2\]) of the metric is valid for the charged black hole (with charge $Q$). For the Reissner-Nordstrom metric describing such a black hole, one has $1-2\Phi=(1+\varphi+q \varphi^2)^{-2}(1-q\varphi^2)^2$, $1+2\Psi=(1+\varphi+q \varphi^2)^{2}$, $q=1-(Q/M)^2$. For this metric, the expansion (\[n2.4\]) is also valid with $a={1\over 2}(q-3)$, $b={1\over 4}(q+2)$. [^5]: In the general case besides the trace and trace-free “longitudinal” part there is also a “transverse trace-free” part $\gamma^{\tiny TT}_{AB}$ which obeys the equation $\gamma^{\tiny TT}_{AB,C}\eta^{BC}=0$ (see e.g., [@GiPe:78]).
It is easy to verify that a regular solution of this equation on the two-dimensional world-sheet vanishes. [^6]: It should be emphasized that the metric $h_{AB}$ on the two-dimensional world-sheet can always be transformed, by a suitable choice of coordinates, to the form $h_{AB}=\Omega\eta_{AB}$. Our choice of the gauge fixing condition is the infinitesimal form of this relation. [^7]: In this section we consider only first-order corrections to string motion. The superscript 1 in ${\stackrel{\tiny{1}}{\chi}}_m$ and similar quantities in this section is omitted for brevity.
--- author: - | Denis Michel\ Universite de Rennes1-IRSET. Campus de Beaulieu Bat. 13. 35042 Rennes cedex\  [email protected] title: Kinetic approaches to lactose operon induction and bimodality --- **Abstract**\ The quasi-equilibrium approximation is acceptable when molecular interactions are fast enough compared to circuit dynamics, but is no longer valid when cellular activities are governed by rare events. A typical example is the lactose operon (*lac*), one of the most famous paradigms of transcription regulation, for which several theories still coexist to describe its behaviors. The *lac* system is generally analyzed by using equilibrium constants, contradicting single-event hypotheses long suggested by Novick and Weiner (Novick and Weiner 1957 Proc. Natl. Acad. Sci. USA. 43, 553-566) and recently refined in the study of Choi et al. (2008, Science 322, 442-446). In the present report, a *lac* repressor (LacI)-mediated DNA immunoprecipitation experiment reveals that the natural LacI-*lac* DNA complex built in vivo is extremely tight and long-lived compared to the time scale of *lac* expression dynamics, which could functionally disconnect the abortive expression bursts and forbid using the standard quasi-equilibrium modes of bistability. As alternatives, purely kinetic mechanisms are examined for their capacity to restrict induction through: (i) widely scattered derepression related to the arrival time variance of a predominantly backward asymmetric random walk and (ii) an induction threshold arising in a single window of derepression without recourse to nonlinear multimeric binding and Hill functions. Considering the complete disengagement of the *lac* repressor from the *lac* promoter as the probabilistic consequence of a transient stepwise mechanism is sufficient to explain the sigmoidal *lac* responses as functions of time and of inducer concentration. This sigmoidal shape can be misleadingly interpreted as a sign of equilibrium cooperativity, which is classically used to explain bistability but has been reported to be weak in this system.\ **Keywords:** Lactose operon; Induction threshold; Single rebinding interval.\ **Highlights** - New models are presented to explain the all-or-nothing *lac* phenotypes. - *Lac* repressor dissociation can be modeled as the final event of a Markovian chain. - Kinetic origin of an induction threshold in a single repressor rebinding interval. - *Lac* behaviors do not require multimeric cooperativity.\ ![image](graphabstract.png){width="16cm"}\ Introduction ============ Although the living cell is an open system out of equilibrium, the quasi-equilibrium approximation of fast nondriven interactions is accepted for modeling biochemical circuits using only time-independent equilibrium binding constants. Moreover, through time scale separation, stochastic interactions can generate graded outputs even in single cells (Michel, 2009). But this widespread approach no longer holds for slow and bursty molecular phenomena (Choi et al., 2010) which facilitate the crossing of switching thresholds. This could be the case for the switch of the lactose operon (*lac*) of *Escherichia coli* which has been proposed to result from bursting expression caused by slow binding cycles of the *lac* repressor (LacI) to the *lac* promoter (Choi et al., 2010). The particular situation examined here is that the switch can, or cannot, change state in a single interval between the dismantlement and the possible reformation of the *lac* repression complex.
This possibility is based on the experimental observation that this molecular complex, naturally built in vivo, can be very long-lived at the time scale considered, thus suggesting a new way to model the *lac* system. *Lac* has become a celebrated model of gene regulation and a masterpiece of biochemistry textbooks, for which different interpretations exist depending on the relative kinetics of *lac* expression and of the interaction between *lac* DNA and LacI. Since its discovery fifty years ago (Jacob, 2011), *lac* continues to attract attention as an archetype of a nongenetic phenotypic switch, characterized by seemingly inconsistent behaviors depending on whether they are examined at the population or single-cell levels (Vilar et al., 2003). Upon addition of a gratuitous inducer, individual bacteria suddenly and asynchronously jump to the induced phenotype, contrasting with the progressive induction of the populational response which depends sigmoidally on both time and inducer concentration. Before the molecular actors underlying these phenomena were identified, *lac* induction in the population was interpreted as the average of many abrupt single cell switches (Benzer, 1953; Novick and Weiner, 1957). *Lac* expression is regulated by inhibition of an inhibition, which is a typical positive circuit whose capacity to generate multiple steady states has long been established (Jacob and Monod, 1963; Thomas, 1998). LacI can shut off *lac* expression by preventing RNA polymerase (RNAP) from either accessing the *lac* promoter (P*lac*) or leaving it (Lee and Goldfarb, 1991; Sanchez et al., 2011), with the same consequence of inhibiting reiterative transcription. The accepted mode of action of the inducer is to sequester LacI in a form no longer capable of binding the *lac* operator (LacO DNA), thereby allowing RNAP to initiate transcription (Wilson et al., 2007). The induced phenotype can survive periods of absence of the inducer. This phenomenon has been attributed to an indirect feedback in which synthesis of permease, a product of the *lac* operon, leads to the import of more inducer, thus locking the system at maximal expression (Wilson et al., 2007). After LacI release from P*lac*, *lac* expression may produce enough permease for maintaining the bacterium in the induced state. Even in the case of transient inducer withdrawal, newly supplied inducer would massively enter the cell through presynthesized permeases. Following this pioneering example of *lac*, positive circuits then strikingly accumulated in the literature, suggesting that they are pivotal elements structuring gene regulatory networks (GRNs), responsible for bistability, hysteresis and memory (Thomas, 1998; Nicol-benoît et al., 2012). However, *lac* retains some distinctive features in systems biology. In their visionary study, Novick and Weiner postulated that the *lac* switch could be triggered by a single random event (Novick and Weiner, 1957). This hypothesis has been extended by Choi et al. (Choi et al., 2008), who proposed that given the low concentration of LacI in the cell, the time necessary for rebinding to P*lac* can be long enough to allow *lac* induction. This view, in which the dissociation of LacI from LacO can be considered a single event, contrasts with the classical treatment based on dynamic interactions, illustrated below by that of Ozbudak et al. (2004).
In this latter treatment, the equilibrium constant of LacI-LacO interaction is used in the *lac* production function, implicitly assuming that both association and dissociation events between LacI and LacO are frequent compared to the synthesis of *lac* products. The poor representation of single event models in the literature could be explained by the predominant belief that interactions are highly dynamic in the cell. But in fact single event models restore the initial, static view of gene repression in bacteria. In addition, it will be shown that single event mechanisms are capable of giving rise to persistent binary phenotypes, like the classical cooperative model. This possibility can solve the apparent paradox that strong equilibrium cooperativity is postulated (Ozbudak et al., 2004), whereas allosteric and cooperative phenomena were shown to be absent or very weak in the *lac* system (Barkley et al., 1975; Brenowitz et al., 1991; Chen et al., 1994). These allosteric interpretations could have been misleadingly suggested by the apparent sigmoidal behaviours of *lac* expression at the population level, but the abrupt phenotype switch of each bacterium has been shown to be unrelated to population curves (Vilar et al., 2003). The fraction of induced bacteria in the population gradually increases in a sigmoidal manner, both over time for a given concentration of inducer and, symmetrically, as a function of inducer concentration for a given time. Sigmoidicity in time, illustrated for example by the elegant figure obtained at low inducer concentration by Novick and Weiner (Novick and Weiner, 1957), is classically observed for positive feedbacks with reiterative amplification cycles, but this cannot be the correct explanation for *lac* switches which abruptly shift from zero to maximum. At the population level, the *lac* induction curve follows a sigmoidal function of inducer concentration, evoking equilibrium cooperative phenomena, such as: (**i**) the cooperative binding of the inducer to the different protomers of LacI multimers (Ozbudak et al., 2004; Zhan et al., 2010), or (**ii**) the cooperative binding of the LacI tetramer to two operators in P*lac* (Oehler et al., 1990; Oehler et al., 2006; Daber et al., 2009). This cooperativity is in turn believed to be necessary for *lac* bistability. Accordingly, Hill functions have proved very helpful and are extensively used in most mathematical analyses of bistability (Cherry and Adler, 2000; Sobie, 2011). But since these treatments are poorly compatible with: (**i**) the reported weakness of allosteric cooperativity in this system (Barkley et al., 1975; Brenowitz et al., 1991; Chen et al., 1994) and (**ii**) the single event hypothesis (Novick and Weiner, 1957; Choi et al., 2008; Choi et al., 2010), alternative interpretations will be provided in the present article, in an attempt to reconcile the data obtained at the population and the single cell levels. This study is restricted to the minimal core *lac* regulation network, which will be shown to be sufficient to capture certain characteristics of *lac*. Graded *vs* binary repression by LacI ===================================== The basic differences between quasi-equilibrium and transient mechanisms are schematized in the case of the competition between RNAP and LacI (Fig.1). In the first mechanism (Fig.1a), both complexes are capable of interacting alternately with P*lac*.
The phenotypic outcome of this situation is dictated only by the time of presence of LacI, which is determined by the equilibrium constant of the LacI-LacO interaction. This mode of lac repression has been described through different approaches (Ozbudak et al., 2004; Garcia and Phillips, 2011) similarly assuming graded fractional occupation times, i.e. occupation times that can take any intermediate value between 0 and 100%. These graded mechanisms do not forbid a certain degree of heterogeneity in the case of slow LacO-LacI interactions leading to transcriptional bursts which could be the rate-limiting molecular events for reaching a switching threshold (Choi et al., 2008). The limiting case considered here is derived from a DNA immunoprecipitation experiment presented below, showing that the association between LacI and P*lac* can be very long-lived. In this extreme hypothesis, *lac* induction is proposed to result from an event of complete LacI dissociation and to depend on the number of transcription initiation events taking place before reformation of a repression complex. Of course using the LacI-LacO equilibrium constant, as in (Ozbudak et al., 2004), would be inappropriate if a single dissociation event is sufficient to trigger the *lac* switch, giving rise to a purely kinetic mechanism. When P*lac* is cleared after dismantlement of a repressor complex, molecular races begin between LacI and RNAP to bind to DNA, possibly generating large transcriptional bursts (Choi et al., 2008). Alternatively, successive slow repression-derepression cycles can give bursts of proteins by translational amplification, even when a single mRNA is produced per derepression window (Choi et al., 2010). A simple possibility considered here is that a race occurs between one event of LacI rebinding and a certain number of RNAP-mediated transcription rounds. The outcome of this race determines whether *lac* is induced or not. In this scheme, two types of noninduced bacteria coexist at low inducer concentration (Fig.1b): those in which LacI did not dissociate from P*lac* and those in which LacI dissociated from, but rebound to, P*lac* before a threshold number of transcription events was reached, thereby reinitializing the chain. Conversely, the induced bacteria are those which satisfied two successive conditions: (**i**) the complete dissociation of LacI and (**ii**) the absence of premature re-repression. The single random event ======================= Following the proposal of (Choi et al., 2008), the limiting step for reaching the *lac* expression threshold necessary for switching to the high expression state could be the complete dissociation of LacI. Although the release of LacI from P*lac* is the keystone of *lac* regulation, in fact little is known about the precise molecular mechanisms underlying this dissociation. The role initially attributed to the inducer was to force LacI to dissociate from DNA. Such an active dislodgement mechanism would however require some complex intramolecular energy transfer. A simpler scenario could be that LacI occasionally and spontaneously dissociates and is then prevented from rebinding by the inducer, when present (Jacob and Monod, 1961; Choi et al., 2008). A third possibility exists. If the conformations of the different subunits of LacI are not mutually constrained, the repressor could bind the inducer while still bound to DNA thanks to its multimeric structure. Indeed, LacI is a homo-tetramer made of two linked homodimers, each one binding to a LacO palindromic site.
The LacO palindrome is itself made of two half-sites, each one interacting with a LacI protomer (Wilson et al., 2007). The LacI tetramer can bind simultaneously to two operators in P*lac*, with formation of a DNA loop possibly strengthened by DNA-bending proteins. The different operators of P*lac* have been shown to cooperate for repression (Oehler et al., 1990) and to be involved in the genesis of the sigmoidal shape of the response to inducer (Oehler et al., 2006). These studies were however interpreted by using a continuous treatment that is not compatible with the discontinuous single event hypothesis. It is postulated here that binding to LacO and binding to the inducer are mutually exclusive for a single LacI protomer, but not for tetrameric LacI whose different protomers can have different binding states. Based on this hypothesis, the association between a LacI tetramer and P*lac* can be very long-lived, as the tetramer is embedded into a macromolecular repression complex comprising a stabilized DNA loop, two *lac* operators and accessory molecules. In vitro studies have made it possible to measure affinity constants between LacI and LacO, to determine that the DNA-binding unit of LacI is the dimer (Chen and Matthews, 1994) and that there are extensive monomer-monomer interactions (Lewis, 2005). However, a general problem in biochemical modeling is that kinetic and thermodynamic parameters determined in vitro are rarely transposable to in vivo conditions (Beard, 2011). This is particularly true for *lac* because additional factors could stabilize the LacI-LacO loop in vivo, including DNA supercoiling and DNA-bending proteins, such as cAMP receptor (or catabolic regulatory) protein CRP (Lewis, 2005; Kuhlman et al., 2007), HU (Becker et al., 2007) or IHF. A loss of IHF has been observed upon *lac* induction (Grainger et al., 2006). Negative supercoiling has been predicted to render the loop complex essentially nondissociable (Brenowitz et al., 1991). Finally, an overlooked aspect of CRP, besides its known role in stimulating *lac* transcription after LacI release, is to repress *lac* expression by facilitating LacI-mediated DNA looping (Fried and Hudson, 1996; Balaeff et al., 2004; Kuhlman, Zhang et al. 2007). This ambiguous action of CRP could explain the complexity of the combined effects of cAMP and *lac* inducer (Setty et al., 2003; Kuhlman et al., 2007).\ ![image](figure1.png){width="7.7cm"}\ **Figure 1**. Models of inhibition of *lac* transcription through P*lac* occupation by LacI. (**a**) RNAP can access DNA during the periods of absence of LacI, determined through a steady-state fractional occupation approach. LacI interaction with a single operator (O1) overlapping P*lac* is sufficient for this graded mode of repression. (**b**) In this alternative picture, the transcriptional competence of P*lac* is determined by rare events of disruption and reformation of repression complexes. In this binary mode of repression, only stable repression complexes are taken into account, with two operators and a DNA hairpin stabilized by DNA-bending proteins. \ These collective molecular influences could make the apparent (re)-association rate of each LacI protomer much higher than the dissociation rate, thus forbidding molecular escape. The probability of simultaneous dissociation of all the protomers roughly corresponds to the product of the probabilities of dissociation of the individual protomers, which can be extremely low.
Going one step further, one cannot exclude that these stable interactions are preserved during replication, in the same manner that, in eukaryotes, certain nucleosomes split and duplicate during replication (Xu et al., 2010) and heterochromatin (repressive for gene expression) can be written out during mitosis (Moazed, 2011). The presence of 3 operators in the *lac* operon, together with the multimeric nature of LacI, appears well suited to perpetuate the *lac* repression complex during cell division. In this context, the *lac* repressor complex would disassemble only upon inducer addition. To get an idea of the stability of the LacI-DNA interaction in vivo, the resistance of purified LacI-P*lac* complexes formed in vivo was tested in vitro under stringent washing conditions. To this end, an indirect DNA immunoprecipitation method analogous to a chromatin immunoprecipitation assay (ChIP) was used. As shown in Fig.2, the molecular complex including the *lac* operator and LacI can be isolated and preserved in the absence of chemical crosslinking, a phenomenon which has never been reported so far for any transcription factor. This P*lac*-LacI interaction, pre-built in vivo, is shown to survive in vitro many cycles of washing and tube transfers, over 72 hours and in the presence of 0.5 M NaCl (Fig.2). Such stability is exceptional compared to classical DNA-protein interactions measured in vitro. This ChIP-like experiment strongly supports the possibility that in vitro and in vivo dissociation rates can be very different.\ ![image](figure2.png){width="11cm"}\ **Figure 2**. Stability of the LacI-LacO interaction. LacI-DNA complexes were prepared from *E. coli* K12 cells subjected or not to chemical crosslinking using formaldehyde, and the persistence of complexes was tested after two-day-long washing. (**a**) P*lac* DNA fragment was immunoprecipitated by incubation of LacI antibody with an excess of cellular extracts (P, pellet; S, supernatant). P*lac* immunoprecipitation was no longer obtained in the absence of protein A-coated beads (PA-beads), ruling out nonspecific adsorption to tube surfaces, and in the absence of LacI antibody, ruling out nonspecific binding to the beads. (**b**) The persistence of LacI-Plac interaction was tested by high stringency washing of the DNA protein complexes attached to PA beads. A total of 4 tube transfers and 10 cycles of supernatant removal and replacement with fresh washing medium were used for the final condition. \ Sigmoidal behaviors of LacI dissociation can result from a chain of stochastic events ===================================================================================== In the absence of precise knowledge about the organization of the LacI repressor complex, a very simplified hypothetical chain of events following inducer addition is proposed in Fig.3b, beginning with a spontaneous dissociation from one of the four LacO half-sites, likely to correspond to the least perfect one. This event would immediately reverse unless the dissociated protomer is sequestered by a molecule of inducer. Hence, the concentration of the inducer in the cell is critical for maintaining this essential step. The same process should repeat for the second and third inducer binding events, until the 7th step and the complete clearance of P*lac*. If the 4 operator half-sites were strictly identical, this chain would be disordered, but since this is not the case, a predominant ordered scheme will be considered, corresponding to the inverse ranking of half-site affinity for LacI.
This stepwise mechanism is strongly supported by the suboptimal organization of the natural LacO sequence, compared to the artificial one used in (Sadler et al., 1983; Simons et al., 1984) and whose perfect symmetry was shown to enhance LacI binding. The defective spacing between half-sites is likely to create a tension between contiguous LacI protomers, forcing one of them to frequently dissociate. In this respect, it seems reasonable to suppose that the suboptimal organization of LacO would not have been evolutionarily selected if the binding of inducer to DNA-bound LacI were sufficient to remove it through an allosteric transition. In the scheme of Fig.3b, the single event postulated by (Novick and Weiner, 1957) is proposed to precisely correspond to the 7th step of the chain, instead of the synthesis of a single permease imagined by these authors. This step is a prerequisite for priming the switch but may not always trigger the switch, owing to an induction threshold (section 5.2.5). Let us first consider how this chain of events can explain the observed responses of bacterial cultures to inducer addition. Sigmoidicity in time -------------------- *Lac* induction is clearly sigmoidal in time, particularly at low inducer concentration, as illustrated by Fig.2 of (Novick and Weiner, 1957) obtained at steady state bacterial concentration. Sigmoidicity in time can have several origins. The establishment of positive feedbacks is typically sigmoidal in time, irrespective of the existence of a threshold. In this case, sigmoidicity emerges from reiterative cycles of amplification whatever the number of relays involved in the circuit and even with hyperbolic production functions. But this explanation cannot be retained here since individual *lac* switches are quasi-instantaneous (Vilar et al., 2003; Choi et al., 2008). Alternatively, as proposed here, sigmoidicity in time can also emerge from the averaged occurrence of a single event at the population level, in relation to the delay imposed by a chain of previous events. In the model presented below, the probability of complete release of LacI from LacO is conditioned by the sequence of events detailed in Fig.3b, where certain transitions depend on the concentration of the inducer ($ A $). If the triggering event were not causally subordinated to preliminary events, or if a single transition in the chain were much more kinetically limiting than the others, then the process would not be sigmoidal. The probability of occurrence of this single molecular event is visible only at the population level. The positive feedback is not involved in the observed sigmoidicity of the populational *lac* response, but only explains how a single event can trigger the switch. Before reaching the 7th state ($ x_7 $ in Fig.3b), the LacI/DNA complex should first pass through all the previous intermediate states. The total number of jumps can dramatically increase if backward transitions are more probable than forward ones, which is the case in the absence of inducer or in the presence of only low amounts of inducer in the cell, so that LacI dissociates very rarely.
By contrast, in the presence of increasing concentrations of inducer, the probabilities of the 2nd, 4th and 6th transitions can approach, equal or exceed those of the reverse ones.\ Biochemical delay between inducer addition and the complete dissociation of LacI from P*lac* -------------------------------------------------------------------------------------------- In addition to the 15 transitions represented in Fig.3b, many elementary events are expected to take place between inducer addition and LacI disengagement (Appendix A1), which can be represented in the theoretical form of Fig.3a. The rates $ k $ are labeled with + or - depending on whether they correspond to forward or backward transitions, and the suffix numbers refer to the starting states. In this system, the mean time of arrival, holding for any chain length, is\ $$\left \langle T \right \rangle=\sum_{h=0}^{n-1}\sum_{i=0}^{n-h-1}\frac{1}{k^{-}_{i}}\prod_{j=i}^{h+i}\frac{k^{-}_{j}}{k^{+}_{j}}$$\ which simplifies when all the backward rates are the same $ (k^{-})$ and the forward rates are the same $ (k^{+})$, $$\left \langle T \right \rangle=\sum_{i=0}^{n-1}(n-i)(k^{-})^{i}/(k^{+})^{i+1}$$ This value further simplifies if all the reaction rates are identical $ (k^{-} = k^{+} = k) $, $$\left \langle T \right \rangle= \frac{n(n+1)}{2k}$$ The mean time defined in Eq.(1) is relatively complicated but the corresponding probability distribution is even more awkward, as illustrated by the simple example of the two-step reaction shown in Appendix A2d. ![image](figure3.png){width="8cm"}\ **Figure 3**. Stochastic chain leading to LacI dissociation. (**a**) *n*-step random walk. The initial state $ x_0 $ is a reflecting boundary while $ x_n $ is an absorbing state ($ k_n^{-} = 0 $). (**b**) Simplified scenario of dissociation of a LacI tetramer from two palindromic operators (opposite red arrows). In this stepwise mechanism, the random event triggering lac expression is the 7th step, which is itself subordinated to the achievement of the preceding chain of events. The 7th event can be considered as micro-irreversible in practice, considering the low concentration of LacI and the long waiting time before rebinding. The DNA binding steps $ k_1^- $ and $ k_3^- $ are approximated as first-order conformational transitions independent of the concentration of LacI in the cell. “*A*” is the inducer (activator). The chain of successive LacO half-site clearances postulated in this scheme is only the predominant one, following the same order as the LacI affinity ranking, so that no statistical balancing of the microscopic rate constants is used for simplicity. (**c**) Accumulation of induced bacteria (circles) at low concentrations of inducer (Data from Novick and Weiner, 1957). Solid line: probability of completion of a long random walk (Eq.(4)). \ To simplify the question, let us assume that in the course of the switch and for a critical concentration of inducer, there could be a particular situation in which the resistance to and the tendency toward dissociation roughly balance and the process becomes a random walk. The highest entropy random walk, called isotropic or symmetric, is obtained when all the transitions are equally probable, so that the degree of uncertainty about the position of a walker is maximal. Using this approximation and according to the central limit theorem, the probability of occupation of the different states follows a Gaussian distribution.
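As a numerical cross-check of the mean arrival times above, the following minimal Python sketch (illustrative only; the rate values are arbitrary and not taken from the *lac* system) solves the standard first-step equations for the chain of Fig.3a and compares the result with the closed forms of Eqs.(2) and (3).

```python
import numpy as np

def mean_arrival_time(kp, km):
    """Mean arrival time at the absorbing state x_n for the chain of Fig. 3a.

    kp[i] and km[i] are the forward and backward rates out of state i
    (i = 0 .. n-1); state x_0 is reflecting (km[0] is ignored) and x_n is
    absorbing.  Solves the first-step equations
    (kp_i + km_i) T_i - kp_i T_{i+1} - km_i T_{i-1} = 1,  with T_n = 0.
    """
    n = len(kp)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = kp[i] + (km[i] if i > 0 else 0.0)
        if i + 1 < n:
            A[i, i + 1] = -kp[i]
        if i > 0:
            A[i, i - 1] = -km[i]
    return np.linalg.solve(A, np.ones(n))[0]   # expected time starting from x_0

n = 7                                   # number of steps, as in Fig. 3b
k = 2.0                                 # arbitrary uniform rate
print(mean_arrival_time(np.full(n, k), np.full(n, k)))
print(n * (n + 1) / (2 * k))            # Eq. (3): n(n+1)/(2k) = 14

kp, km = 3.0, 1.0                       # arbitrary distinct uniform rates
eq2 = sum((n - i) * km**i / kp**(i + 1) for i in range(n))
print(mean_arrival_time(np.full(n, kp), np.full(n, km)), eq2)   # matches Eq. (2)
```

Simulating many independent realizations of the same chain gives, beyond the mean, the full distribution of arrival times; its long-chain limit is the sigmoidal expression discussed next.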
For long chains, all mathematical methods converge towards a similar expression of the probability to reach the state $x_n$, as a function of the dimensionless intrinsic time of the walk $\theta$. \[E:gp\] $$P_{n}(\theta )=1-\frac{4}{\pi}\sum_{j(odd)=1}^{\infty }\frac{(-1)^{\frac{j-1}{2}}}{j}\operatorname{e}^{-\frac{\pi ^2j^2}{8}\theta }$$ \[E:gp1\] where $\theta$ can be defined in multiple ways depending on the approach. In the discrete random walk approach, $$\theta = N/n^2$$ \[E:gp2\] In the continuous-time approach, $$\theta = 2kt/n^2$$ \[E:gp3\] or $$\theta = t/\left \langle T \right \rangle$$ \[E:gp4\] (Appendix A1). Whatever the method used, Eq.(4) is a sigmoidal function of either *N* or *t*. The elimination of the number of elementary steps could make Eqs.4a/4d a satisfying universal substitute for approximating the evolution of long isotropic chains whose explicit calculation is impossible. If the rates are not too different from each other, and assuming that LacI-LacO interactions are preserved during DNA replication, fitting this theoretical probability distribution to the experimental points of Novick and Weiner leads to a mean time of about 27 hours for 7 $ \mu $M of the inducer methyl-1-thio-$ \beta $-D-galactopyranoside (TMG). However, this fitting is approximate since the isotropic random walk is an oversimplified hypothesis and the concentration of inducer may not correspond precisely to the point of equivalence between forward and reverse rates. In addition, all the bacteria in which LacI dissociates from P*lac* do not necessarily become induced (section 5.2.5), so that Fig.2 of Novick and Weiner could superpose bacteria whose successful windows of derepression are spread out in time. Nevertheless, the matching between these curves and these points (Fig.3c) is satisfying enough to make plausible this origin of sigmoidicity. Increasing the response tightness at high inducer concentration --------------------------------------------------------------- Increasing the inducer concentration strongly reduces the delay of response and increases the tightness of its time distribution. This latter effect could be related to the fact that pseudo-first order rates largely exceed the rates of the corresponding backward transitions (Fig.3b), and could reflect the well-established focusing effect of stepwise oriented mechanisms. Subdividing an event of given duration into many hierarchical sub-events whose sum has the same duration increases the complexity of the global probability density, but also the reproducibility of the total completion time. Indeed, the variance of a multistep chain made of consecutive events, exponentially spaced with variance $ 1/k^2 $, is lower than that of a single stochastic transition of identical mean waiting time since $$\sum_{j=1}^{n}\left (\frac{1}{k_{j}} \right )^2< \left (\sum_{j=1}^{n}\frac{1}{k_{j}} \right )^2$$ The focusing effect of multiple stochastic events has been invoked to explain the tightness of reaction time of a stepwise mechanism such as the reproducible photoreceptor responses to single photons arising from a series of consecutive Poisson events (Doan et al., 2006). Inducer concentration-dependent sigmoidicity --------------------------------------------- *Lac* induction is a sigmoidal function of inducer concentration, as experimentally observed both in vivo for bacterial cultures (Fig.4a) and in vitro when LacI is bound simultaneously to two operators (Oehler et al., 2006).
![image](figure4.png){width="7.5cm"}\ **Figure 4**. (**a**) $ \beta $-galactosidase activity in bulk assay is a sigmoidal function of inducer. (**b**) Heterogeneous $ \beta $-galactosidase staining obtained at the single cell level with 0.5 $ \mu $M IPTG. The intermediate staining could be due to the basal expression rate $ k_0 $ postulated in the presence of LacI and, after LacI release, to the progressive accumulation of $ \beta $-galactosidase before reaching the steady state value $ (k_0 + k_1)/r $. \ Concentration-dependent sigmoidicity is the classical nonlinearity condition retained for generating bistability from a positive circuit (Chung and Stephanopoulos, 1996; Cherry and Adler, 2000); but no clear consensus has emerged about the existence of such cooperativity for *lac*. ### Quasi-equilibrium sigmoidal function of inducer concentration Ligand-dependent sigmoidicity is generally thought to result from nonindependent multiple binding (O’Gorman et al., 1980; Sharp, 2011). Collective binding processes are numerous in the *lac* system. *Lac* repressor is a tetramer, which can bind to several inducer molecules (Ozbudak et al., 2004; Kuhlman et al., 2007), and to different operators in P*lac* (Oehler et al., 1990; Oehler et al., 2006; Daber et al., 2009). Hill and cooperative Adair saturation functions are generally selected to define production functions (Chung and Stephanopoulos, 1996; Ozbudak et al., 2004; Yildirim and Kazanci, 2011). Of course, the sets of differential equations (ODEs) with such production functions predict *lac* bistability, but this modeling is questionable under the single event hypothesis and considering the poor binding cooperativity reported for the *lac* system components. In this respect, analyzing any sigmoidal curve with “observed Hill coefficients” is not really relevant when sigmoidicity does not relate to a genuine Hill process. An alternative mechanism is proposed below. ### Concentration-dependent transient sigmoidal responses The stepwise process described in Fig.3b, starting from the hairpin repression complex, allows transient sigmoidicity to arise from the combination of pseudo-first-order functions of the inducer concentration. For a given time after inducer addition, the probability that LacI dissociates from a LacO hairpin complex increases as a sigmoidal function of inducer concentration. The simple example of a two-step reaction shown in Appendix A2d is sufficient to display sigmoidicity. Deriving phenomenological Hill coefficients from such curves would obviously not be relevant. In the absence of a DNA loop, the induction of *lac* in vivo, as well as the clearance of P*lac* in vitro, were shown to take a nonsigmoidal shape (Oehler et al., 2006). This finding has been interpreted through the existence of allosteric cooperativity, but it could equally result from a change in the number of steps in the dissociation chain. This kinetic mode of ultrasensitivity to inducer concentration is not comparable to the cooperative one invoked to generate bistability. LacI dissociation envisioned as a turning point in the *lac* system =================================================================== The event of disruption of the LacI-mediated repression complex is embedded in the general *lac* circuit detailed below.
The *lac* circuit ----------------- Once LacI is released from DNA, RNAP can repeatedly access P*lac* and initiate successive rounds of *lac* mRNA transcription through a hit-and-run mechanism, at approximately constant rate $ k_1 $, considering the relative invariance of RNAP concentration in the cell. The value of $ k_1 $ corresponds to that of the pseudo-first-order rate of productive collision between RNAP holoenzyme (including the initiation factor $ \sigma 70 $) and P*lac*, followed by a rapid and driven elongation process. The evolution of *lac* mRNA results from a chain of connected events detailed below, in which the parameters subject to transient evolution immediately after LacI release will be labeled with (*t*). *Lac* mRNAs (*M*) evolve following Eq.(6), where $ k_0 $ is the basal transcription rate in the presence of LacI and CRP, and $ r_M $ is the rate of removal of *lac* mRNAs. $$\frac{dM(t)}{dt}=k_{0}+k_{1}P(t)-r_{M}M(t)$$ In GRN modeling, the maximal transcription rate is weighted by a probability, generally that an activator is present, or that a repressor is absent (Michel, 2010). In the case of *lac*, this probability, written *P*(*t*) in Eq.(6), can have different meanings depending on the model used and will be defined through several functions $(f_1)$. For comparison, *P*(*t*) will be defined as either (**i**) the steady state fractional time of absence of the repressor (involving many events) under the classical graded repression hypothesis, or (**ii**) the probability that LacI does not rebind prematurely under the single event hypothesis (5.2.5). In all cases, this probability is a function of *lac* expression, leading to permease synthesis, allowing the entry of more inducer into the cell, which ultimately decreases the probability of LacI rebinding, in a circular relationship. *Lac* proteins ($ L $) follow $$\frac{dL(t)}{dt}=s\ M(t)-r_{L}L(t)$$ where *s* and $ r_L $ are the rates of synthesis and removal of the products respectively. For simplicity, all *lac* proteins, including permease and $ \beta $-galactosidase, are supposed to be degraded at the same rate. The intracellular amount of inducer, written $ A $ (activator), depends on the passive, reversible and saturable transport of external inducer through permease, classically described by $$\frac{dA(t)}{dt}=vL(t)\left (\frac{A_{out}}{K_{o}+A_{out}}-\frac{A(t)}{K_{i}+A(t)} \right )$$ The concentration of intracellular inducer then directly regulates the fraction of total repressor that is competent for binding DNA ($ R_{c} $), through a function $ f_2 $ $$R_{c}(t)=f_{2}\left (R_{T},A(t) \right )$$ for which, once again, several definitions will be envisioned depending on the model used, and where $ R_T $ is the total concentration of LacI, supposed to be roughly constant. Simplifying hypotheses can be used to define this function. In particular, the inducer concentration can be considered higher than the low concentration of LacI in the cell, so that it can be incorporated into pseudo-first-order binding rates and the amounts of free and bound inducer can be approximated as equivalent. It is important to note that the probability that LacI does not rebind also depends on time and on *lac* products, including (**i**) permease which regulates the intracellular concentration of inducer and (**ii**) $ \beta $-galactosidase for metabolizable inducers.
Since it is supposed to fluctuate slowly relative to gene expression, $ R_{c} $ will be incorporated into the pseudo-first-order rate of LacI binding to LacO, such that $ k_{2}(t)=k^{*}_{2}R_{c}(t) $, where $ k_2^* $ is a second-order binding rate. The low number of repressors in the cell led certain authors to distinguish between their free and total concentrations (Yagil and Yagil, 1971; Berg and Blomberg, 1977), but considering that there is a unique combination of strong operators in the cell, the total number of LacI ($ VR_{c} $) and the number of LacI not bound to LacO ($ VR_{c}-1 $) will be considered as equivalent, irrespective of whether LacI jumps in solution or slides along DNA. A refinement in this system is that contrary to $ k_1 $, $ k_2 $ is not constant but decreases with time since $ R_{c} $ depends on the concentration of inducer, which is itself a function of the number of permeases in the cell. To evaluate the general behavior of the system, further simplifications can be made. The *lac* products (including mRNA and proteins) are fused together and collectively written $ x(t) $, by assuming that transcription elongation and translation of *lac* mRNAs (occurring simultaneously in bacteria) are quasi-instantaneous. In this respect, note that population heterogeneity has also been visualized at the level of *lac* mRNA (Tolker-Nielsen et al., 1998), suggesting that it is not primarily due to translational bursts. The import of inducer by permease is supposed to be far from saturation and its export is considered as negligible soon after LacI release. To complete the system, it is now necessary to define the functions $ f_1 $ and $ f_2 $ postulated above to describe $ P(t) $ and $ R_{c}(t) $. The question of *lac* bistability --------------------------------- Multistability is a general mechanism to explain how nonunique cell types can stably coexist within the same environment, but the origin, and even the existence, of *lac* bistability have raised active debates and remain open questions (Díaz-Hernández and Santillán, 2010; Veliz-Cuba and Stigler, 2011; Savageau, 2011). The notion of bistability is delicate in this system since the bimodal distribution of bacteria, induced and uninduced, is observable only for a particular, low range of inducer concentration and it is difficult to certify experimentally that this bimodality is really stationary rather than an extended transient. While the induced bacteria do seem to remain in their state for generations, it is less clear that uninduced bacteria cannot switch to the induced state. In addition, using a low inducer concentration, as for Fig.2 of Novick and Weiner reproduced here in Fig.3c, bacteria should be examined a very long time after inducer addition and renewal before considering that a steady state has been reached. ### The logic of *lac* bistability If one postulates that LacI frequently dissociates from LacO according to a dynamic model, a bistability threshold is required, since otherwise the feedback would be primed by low stochastic *lac* expression as soon as the inducer is present. A “priming threshold” of *lac* expression is necessary to ensure the relative stability of the uninduced population fraction. Such a threshold before ignition is typically provided by multistability (Chung and Stephanopoulos, 1996) and is all the more necessary for *lac* given that basal expression is a prerequisite for *lac* induction. In fact, *lac* expression is an all-or-almost-nothing phenomenon.
In the population assay shown in Fig.4a, a low but reproducible basal *lac* activity, of about 4% of the maximal activity, is observed. The homogeneous pattern of LacZ staining suggests that this expression cannot be attributed to a fraction of induced bacteria, but to a fraction of maximal *lac* expression in every bacterium, whose rate is written $ k_0 $ in Eq.(6). This basal expression seems at first glance to contradict the all-or-none hypothesis but is in fact necessary to allow the bacteria to sense the availability of lactose in their environment. This low level of transcription in the presence of LacI could for instance correspond to the small bursts described in (Choi et al., 2008), or to cAMP-liganded CRP-promoted transcription (Kuhlman et al., 2007). This basal activity is both compatible with and necessary for *lac* induction. Indeed, (**i**) basal *lac* expression is necessary to generate small amounts of permease/LacY to allow the entry of the first molecules of inducer. (**ii**) In natural conditions, basal *lac* expression is also necessary to generate small amounts of LacZ, to convert these first molecules of lactose into allolactose, which is the genuine *lac* inducer (Wilson et al., 2007). The long-standing apparent paradox that *lac* should be active to be activatable is typical of bacterial regulatory networks and is also encountered for example in the case of the SOS regulon. But to be compatible with the model, basal *lac* expression should be low enough to not prime the positive feedback circuit in all cells once inducer is present. This latter condition is classically fulfilled if the system is robustly bistable. ### The classical quasi-equilibrium model of bistability The ultrasensitivity of *lac* expression to inducer concentration, with a sigmoidal or S-shaped input function, has been shown to be crucial for the all-or-none nature of the switch. *Lac* expression is generally supposed to be governed by graded interaction cycles between LacI and LacO, which can be up- and down-regulated in a continuous manner, thus making it possible to fine-tune the fraction of time during which LacO is unoccupied (Díaz-Hernández and Santillán, 2010; Yildirim and Kazanci, 2011). To adapt the model of graded repression to *lac*, one should distinguish two modes of interaction between LacI and P*lac*: (**i**) the one previously described, holding before the initial dissociation, very stable and requiring two operators (section 5.2.5) and (**ii**) another one, examined below (sections 5.2.3 and 5.2.4), reversible and more dynamic, between LacI and a single operator (O1), as schematized in Fig.1a. To illustrate the principle of graded repression, let us consider the typical bistability model described in (Ozbudak et al., 2004), one of the most frequently cited articles on the *lac* system. The *lac* products evolve according to $$\frac{dx}{dt}=k_{s}P-r_{x}x$$ where $ k_s $ is the maximal rate of synthesis, and $ r_x $ the rate of removal. $ P $ is the probability that LacI is not bound to LacO, depending on $ R_{c} $. The occupation of LacI by the inducer (*A*) is rapid enough to be approximated as a quasi-equilibrium. $$P=\frac{1}{1+K_R R_{c}}$$ Eq.(11), where $ K_{R} $ is an equilibrium binding constant, describes a situation of graded repression in which $ P $ can take any real intermediate value between 0 and 1, even if slower interactions and transcriptional bursting could introduce some heterogeneity (Golding et al., 2005). $ R_{c} $ should now be determined.
Its definition can be envisioned in two manners, with or without Hill cooperativity. ### Quasi-equilibrium hypothesis with strong (Hill) equilibrium cooperativity Assuming that the concentration of *A* largely exceeds that of LacI, a highly cooperative binding of the inducer to LacI, postulated in (Ozbudak et al., 2004), is described by $$\frac{R_{c}}{R_{T}}=\frac{1}{1+(K_A A)^n}$$ Eq.(12) is a sigmoidal Hill repression function of the inducer concentration, reflecting strong cooperativity since it presupposes that LacI exists essentially in two forms, either devoid of, or filled with inducer. The decrease or elimination of intermediate microstates characterises all cooperative phenomena (Michel, 2011). The approach to bistability using Hill functions has long proved very successful (Cherry and Adler, 2000; Sobie, 2011) and it works well for *lac*. Note that in addition to the Hill function, MWC equations were also used to describe allosteric binding (Appendix 2) and are also adequate for generating bistability. ### Quasi-equilibrium model without Hill cooperativity Given that the interaction between LacI and the inducer proved to be noncooperative or only weakly cooperative, the different partially liganded microstates of LacI (Fig.5a) cannot be ignored and the ratio $ R_{c}/R_{T} $ should be determined without recourse to a Hill function. The general function describing nonallosteric interactions can be obtained by enumerating the microstates capable of binding to the bipartite operator O1. This palindromic sequence can be occupied in a graded manner by all LacI tetramers with two adjacent protomers devoid of inducer. They correspond to LacI microstates occupied by either 0, 1 or 2, but not 3 or 4 molecules of inducer. Moreover, only certain forms of di-liganded LacI can bind the operator, since at least one dimer in the tetramer should be devoid of inducer (group $ R_{c1} $ in Fig.5a). In this respect, fluorescent protein-tagged LacI, known to form dimers but not tetramers, has been shown to be sufficient to bind LacO with short residence times (Elf et al. 2007). ![image](figure5.png){width="7cm"}\ **Figure 5**. (**a**) LacI tetramer microstates with respect to inducer binding. (**b**) 7 microstates ($ R_{c1} $) out of 16 are capable of interacting dynamically with a single palindromic operator, by sharing time with RNAP as supposed in classical *lac* models. (**c**) Only one microstate ($ R_{c2} $) can form a stable hairpin through bridging two operators, thereby relocking P*lac* and preventing RNAP from reinitiating transcription. \ When using the same microscopic binding constant ($ K_A $) for inducer binding to the different LacI protomers, the fraction of LacI defined above corresponds to 7 microstates out of 16 (Fig.5a). An additional subtlety in this enumeration is that the binding rate of the microstate $ R_{c2} $ (Fig.5a, top left) should be multiplied by 2 because it can bind O1 in 2 different ways. If $ R_{T} $ is the total concentration of diffusing *R* in the cell, supposed to be roughly constant, then $ R_{c1} $ can be written using an Adair formula as follows $$\frac{R_{c1}}{R_{T}}= \frac{2+4K_{A}A+2(K_{A}A)^2 }{(1+K_{A}A)^4}=\frac{2}{(1+K_{A}A)^2}$$ which could have been anticipated from the bilateral symmetry of the LacI tetramer. Bistability is no longer recovered using this value, suggesting that strong Hill-like cooperativity is required for the quasi-equilibrium mode of bistability.
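A minimal numerical comparison of the two repression functions makes this point concrete. The sketch below (illustrative only; $K_A = 1$ and the inducer range are arbitrary choices, not values from the paper) verifies the algebraic simplification of Eq.(13) and contrasts the maximal logarithmic sensitivity of the non-cooperative Adair fraction with that of the Hill form of Eq.(12).

```python
import numpy as np

# Repression functions for the DNA-binding-competent fraction of LacI:
#   Hill form of Eq. (12)  (strong cooperativity, exponent n = 4 here)
#   Adair form of Eq. (13) (independent inducer binding, no cooperativity)
# K_A and the inducer range are arbitrary illustrative values.
KA = 1.0
A = np.logspace(-2, 2, 400)          # inducer concentration (arbitrary units)

hill  = 1.0 / (1.0 + (KA * A)**4)                        # Eq. (12), n = 4
adair = (2 + 4*KA*A + 2*(KA*A)**2) / (1 + KA*A)**4       # Eq. (13)

# Check the simplification stated in the text: Eq. (13) = 2/(1 + K_A A)^2
assert np.allclose(adair, 2.0 / (1.0 + KA * A)**2)

def max_log_sensitivity(f):
    """Maximal |d ln f / d ln A|, an effective steepness of the response."""
    return np.max(np.abs(np.gradient(np.log(f), np.log(A))))

print("Hill, n = 4 :", max_log_sensitivity(hill))    # approaches 4
print("Adair       :", max_log_sensitivity(adair))   # bounded by 2
```

The Hill form can reach an effective steepness close to its exponent, whereas the Adair fraction of Eq.(13) is bounded by an effective steepness of 2; this is one way of seeing why the quasi-equilibrium route to bistability leans on Hill-like cooperativity.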
The graded mechanism described in sections 5.2.2 and 5.2.3 is very predominant in biochemical models but is not suited to the single event view and would be poorly compatible with the stability of the LacI-LacO interaction. Intriguingly, among the different mechanisms of equilibrium cooperativity available, the one sometimes retained in the context of *lac* is the popular Monod-Wyman-Changeux (MWC) model (O’Gorman et al., 1980; Sharp, 2011; Dunaway et al., 1980). This choice appears particularly untenable under the hypothesis of delayed LacI rebinding described by Choi et al. (Choi et al., 2008), since it would lead to the complete release of LacI tetramers at every symmetric conformational change of the tetramer. A nearest neighbor sequential allosteric mechanism such as the KNF model (Koshland et al., 1966) would be more compatible with the quasi-irreversible dissociation model since it would explain how the departure of an inducer molecule from a LacI protomer favors the dissociation from DNA of a neighboring partner. An alternative threshold mechanism can be proposed that is relieved of the quasi-equilibrium assumption and the use of equilibrium constants.\ ### Modeling a single molecular race The traditional mode of bistability obtained in the previous sections, using ODEs with sigmoidal production functions containing equilibrium constants, is not expected to hold for P*lac* in the case of rare turnovers of the repression complex. Hence, an alternative model, purely kinetic, is proposed to give similar results for bacteria in which LacI dissociated from LacO. Based on the ChIP experiment, if a window of repression is long enough compared to the stability of *lac* products, it could erase the results of the previous window of derepression which failed to trigger the induction and eliminate the inducer from the cell. Under this hypothesis, the capacity of bacteria to become induced or not should be determined during a single waiting time before reformation of this complex. This alternative mechanism is related to the question raised in (Choi et al., 2008): “Does every complete dissociation event lead to a phenotype transition?”. In this situation, the system is not determined by a steady state, but by an initial rate, as postulated by Bolouri and Davidson for transcriptional cascades during development (Bolouri and Davidson, 2003) and whose approach will be partially adopted here. Contrary to classical GRN modeling in which the probability associated with the maximal transcription rate is defined by a steady state occupation ratio taking into account both the association and dissociation rates of a transcription factor, it is now the probability that LacI does not shut off P*lac* again too soon, allowing a certain threshold number of transcription rounds to occur. The number of *lac* expression events necessary to reach the point of no return of the switch will be called *n*. Let $ k_2 $ be the pseudo-first-order rate (including the total concentration of LacI, supposed to be invariant) of P*lac* relocking upon LacI-mediated rebuilding of a hairpin repression complex (section 3). Under the single event hypothesis, the positive feedback is a race between LacI and RNAP. To make the race fairer and to counterbalance their different kinetics due to the higher concentration of RNAP in the cell, $ n $ *lac* expression rounds (total duration $ n/k_1 $) are supposed to be necessary during the mean time window of repression complex reformation ($1/ k_2 $).
The probability of $ n $ transcription events without any LacI relocking should be determined. With given rate constants $ k_1 $ and $ k_2 $, this probability decreases when $ n $ increases (Fig.6a). At every time point, the probability that the next event is a *lac* transcription round rather than a relocking is simply $ k_1/(k_1+k_2) $. Given the memoryless nature of this stochastic molecular race, the probability that it repeats $ n $ times consecutively is $ (k_1/(k_1+k_2))^n $. This intuitive result can be recovered rigorously from transient state probabilities as follows. The waiting time before LacI relocking is exponentially distributed, with density function $ k_2 \exp(-k_2 t)$; this density should be weighted by the probability that the *n*th expression round has been reached by that time. This latter probability, written below $ P(X_1 \geq n) $, is the complement to one of the sum of the probabilities of 0, 1, 2, ..., ($ n-1 $) hits, $$P(X_{1}\geq n)=1-\sum_{j=0}^{n-1}P(X_{1}=j)$$ where $ P(X_1 =j) $ follows a Poisson distribution of parameter $ k_1t $. Hence, the final probability of success is $$\int_{t=0}^{\infty}k_{2}\operatorname{e}^{-k_{2}t}\left (1-\sum_{j=0}^{n-1}\frac{(k_{1}t)^{j}}{j!} \operatorname{e}^{-k_{1}t} \right )dt=\left (\frac{k_{1}}{k_{1}+k_{2}} \right )^n$$ This probability corresponds to the proportion of cells that reach the induced state, in the cellular subpopulation in which LacI unhooked from P*lac*. The direct calculation of *n* is complicated by the fact, not taken into account above, that $ k_2 $ depends on *lac* expression. Considering that the essential form of LacI capable of relocking the system is the microstate $ R_{c2} $ devoid of inducer (Fig.5a), the set of relationships described above can be compressed into the following deterministic equation $$\frac{dx}{dt}=k_{0}+k_{1}\left (\frac{k_{1}}{k_{1}+k_{2}\frac{1}{(1+\beta x)^4}}\right )^n-rx$$ with the initial condition, subject to stochastic variance, $ x_0 = k_0/r $ at $ t_0 $. For appropriate parameter ranges, this feedback is typically capable of splitting the population into induced and uninduced cells (Fig.6b). Cooperative binding of inducer can still be introduced through a Hill function, simply by replacing $ (1+\beta x)^4 $ by $ 1+(\beta x)^4 $ in Eq.(16); but this postulate is not necessary to obtain the two populations, contrary to the previous quasi-equilibrium approach. This feature could be a sign that the kinetic approach is more realistic than the classical one in the absence of obvious equilibrium cooperativity in the system. The negative integral of Eq.(16) resembles a sort of Waddington’s landscape (Ferrell, 2012), in which the stochastic initial spreading of bacteria converges towards the two point attractors for *lac* expression corresponding to the induced and uninduced states. Several points should be noted: (**i**) the bistable-like profile of Fig.6b is obtained because the system is restricted to a single LacI rebinding interval, and (**ii**) the evolution of *lac* expression described by Eq.(16) is purely probabilistic and meaningless at the level of a single cell and is envisioned as the global behavior of a container of bacteria. In a given bacterium, the race is won by either LacI or RNAP. The *lac* expression rate is binary in single cells and can take only two values (either $ k_0 $ or $ k_0+k_1 $), while the continuous Eq.(16) describes the graded variation of the fraction of inducible bacteria in the container. 
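Both quantitative claims of this paragraph, the closed form of the race probability and the splitting produced by Eq.(16), can be illustrated with a short numerical sketch. The Python code below is an illustration added here, not part of the original study: the rates used for the integral check are arbitrary assumptions, while the integration of Eq.(16) uses the parameter values quoted in the caption of Fig.6.

```python
import numpy as np
from math import factorial

# --- Numerical check of the race probability for a small n ---
# Hypothetical rates, for illustration only.
k1, k2, n = 1.0, 0.2, 3
t = np.linspace(0.0, 200.0, 200001)
poisson_cdf = sum((k1 * t)**j / factorial(j) * np.exp(-k1 * t) for j in range(n))
integral = np.trapz(k2 * np.exp(-k2 * t) * (1.0 - poisson_cdf), t)
print(f"integral = {integral:.6f}   (k1/(k1+k2))^n = {(k1/(k1+k2))**n:.6f}")

# --- Forward Euler integration of Eq.(16), parameters from the Fig.6 caption ---
k0, k1, k2, r, beta, n = 0.04, 1.0, 0.1, 0.2, 0.4, 100

def dxdt(x):
    k2_eff = k2 / (1.0 + beta * x)**4     # only the inducer-free microstate relocks
    return k0 + k1 * (k1 / (k1 + k2_eff))**n - r * x

dt, steps = 0.01, 50000
for x0 in np.linspace(0.0, 4.0, 9):       # spread of initial conditions
    x = x0
    for _ in range(steps):
        x += dt * dxdt(x)
    print(f"x0 = {x0:4.1f} -> x_final = {x:5.2f}")
```

With these values the trajectories settle at one of two distinct levels, corresponding to the uninduced and induced attractors sketched in Fig.6b.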
In addition, the splitting of the population into uninduced and induced states (Fig.6b) does not occur simultaneously for all the bacteria, since this process begins with the single event that is itself distributed in time, as described in section 4.2. In this development, the threshold $ n $ is not a predetermined constant to be reached, but is a commitment point regulated in concert with the combination of kinetic parameters of interaction. $$n=\frac{\ln (\rho x_{s}-\gamma )}{4\ln (1+\beta x_{s})-\ln (\alpha +(1+\beta x_{s})^4)}$$ where $ \gamma = k_0/k_1, \alpha = k_2/k_1, \rho = r/k_1 $ and $ x_s $ is the threshold amount of *lac* products over which *lac* is induced. ![image](figure6.png){width="7.8cm"}\ **Figure 6**. (**a**) Probability that *n* events occur before a single competing event (5 times slower in this example). (**b**) Probabilistic evolution rate of *lac* products depending on their initial amounts, for a given inducer concentration (using Eq.(16) and the relative values, setting $ k_1=1, k_0=0.04, k_2=0.1, r = 0.2, \beta = 0.4 $ and $ n $ = 100). Dots indicate the expected populations of bacteria, either uninduced or induced, after a derepression window. \ Experimental methods ==================== LacI-mediated P*lac* DNA immunoprecipitation without crosslinking ----------------------------------------------------------------- The stability of LacI-P*lac* association was evaluated through a method equivalent to chromatin immunoprecipitation (ChIP) and performed using non-engineered *E. coli* K12 cells (P4X strain), but with and without chemical crosslinking. The longevity of LacI-P*lac* association was compared between (**i**) basal conditions, (**ii**) long-term presence of a high inducer concentration (isopropyl-$ \beta $-D-1-thiogalactopyranoside, IPTG), expected to dismantle the repressor complex, and (**iii**) presence of glucose (20 mM), possibly influencing LacI binding indirectly through CRP (Fried and Hudson, 1996; Balaeff et al., 2004; Kuhlman et al., 2007). Cells were grown for 48 hours, with inoculation of fresh medium with 1/50 of the previous culture every 16 hours. All bacterial cultures were then pelleted and rinsed twice in ice-cold TBS, resuspended in 6 ml binding buffer (10 mM Tris (pH 7.3), 1 mM EDTA, 50 mM KCl, 100 $ \mu $g/mL bovine serum albumin, 5% v/v glycerol). Each bacterial culture was then split into two parts, one of which was incubated 30 min further in the presence of 1% formaldehyde, then treated for 5 min with 0.5 M glycine, repelleted and resuspended in 3 ml binding buffer. Mechanical lysis of the cells and moderate chromosome fragmentation were achieved simultaneously by sonication (3 pulses of 30 s separated by 30 s in thawing ice). After centrifugation and elimination of pellets containing remnants and unbroken cells, LacI antibody (Rabbit anti-Lac I antibody 600-401-B04 Rockland Immunochemicals Inc., P.O. BOX 326, Gilbertsville, Pennsylvania, USA) was incubated with an excess of bacterial lysate supernatant (500 $ \mu $l), overnight on a rotating wheel at 4$^\circ$C. 50 $ \mu $l Protein A coupled magnetic beads (Invitrogen Dynal AS, Oslo, Norway) were then added and incubated for 4 hours at 4$^\circ$C. Beads were collected using the Dynal magnet. The first supernatant was removed and stored for control PCR and the pellet was rinsed 5 times using binding buffer and transferred to new tubes. As indicated in Fig.2, the pellets were then washed for 24 or 72 hours with either 0.35 M or 0.5 M NaCl. 
The presence of DNA fragments in the pellets was then assessed by PCR (28 cycles: denaturation 30 s 95$^\circ$C, annealing 30 s 55$^\circ$C and elongation 30 s 72$^\circ$C) using primers located in the N-acetylglutamate kinase (ArgB) gene unrelated to *lac* (*ArgB*-F: 5’-AGGTTTGTTTCTCGGTGACG-3’; *ArgB*-R: 5’-GTTGCCCTTCGTCTGTTACG-3’ yielding a 167 bp-long amplification fragment) or in the lactose promoter P*lac* (P*lac*-F: 5’-TCCGGCTCGTATGTTGTGTG-3’; P*lac*-R 5’-AGGCGATTAAGTTGGGTAACG-3’, yielding a 142 bp-long amplification fragment). The same results were obtained in 3 independent experiments starting from fresh bacteria. The molecular weight marker is the 100 bp GeneRuler from Fermentas. $ \beta $-galactosidase assays ------------------------------ Bulk expression assays were performed using 4-methylumbelliferyl-$ \beta $-D-galactopyranoside (MUG) as a substrate, as described in (Vidal-Aroca et al., 2006). For in situ $ \beta $-galactosidase staining, cells were incubated with 1 mg/ml 5-bromo-4-chloro-3-indolyl-$ \beta $-D-galactopyranoside (X-gal), 2 mM , 5 mM , 5 mM in phosphate buffered saline, until appearance of blue staining. They were then rinsed and fixed for 1 min in 1% glutaraldehyde, 1mM . Discussion ========== The present interpretations of *lac* features are based on the redefinition of the single event postulated as early as 1957 by Novick and Weiner. This event cannot be the synthesis of a single molecule of permease as supposed by Novick and Weiner, because of the necessity of a threshold of permeases. The single event can no longer be a single transcription round of the *lac* gene, since a steady-state basal expression has been demonstrated. Hence, alternatively, the single event could be the complete release of the *lac* repressor, in line with the simplistic mode of action of LacI described in the initial reports, but which now appears unusual under the current assumption that interactions are highly dynamic in the cell. Static interactions, which are obvious for multimolecular complexes, are being rediscovered in eukaryotic cells (Hemmerich et al., 2011) and are completely conceivable for prokaryotic repressor complexes. A simple way to prolong molecular association in the cell is the hierarchical binding of capping molecules to a preformed molecular assembly, leading to the molecular trapping of nucleating components in the absence of active dismantling (Michel, 2011). While in quasi-equilibrium approaches the interactions between LacI and P*lac* are considered faster than the *lac* regulatory circuit, in the present model they are slower. In the presence of low inducer concentration, the mechanisms described in this study are expected to maintain the presence of uninduced cells in the population in several ways: by strongly reducing the number of bacteria with open P*lac*, by erasing the memory of previous bursts of expression, and by adjusting the probability of switching *lac* on in a single burst. The actual mechanism of *lac* repression by conjugation of several operators remains controversial, but deletion of individual operators, as well as mutations of LacI making it non-tetramerizable, strongly decreased *lac* repression, pointing to the importance of a supramolecular repression complex. In the absence of inducer, a single LacI tetramer would be locked forever in a large molecular complex including a stabilized DNA hairpin with two operators (Brenowitz et al., 1991). 
Of course, such stability cannot be recovered using reporter plasmids and tagged modified non-tetramerizable artificial LacI constructs, for which graded quasi-equilibrium interpretations can hold, contrary to endogenous *lac* components. Owing to its uniqueness, LacI dissociation would work as a turning point of the *lac* system, disconnecting the preceding and the following processes. Briefly, those occurring before complete LacI dissociation explain populational behaviors and generate a first filter for low inducer concentration, while those occurring after dissociation can still contribute to the all-or-nothing phenotypes. To describe this situation, before LacI dissociation, only one LacI tetramer has to be taken into consideration: the one precisely bound to P*lac*; whereas after dissociation, all LacI tetramers present in the cell should be incorporated in the treatment, since at low inducer concentration, some can remain inducer-free. The shape of the population curves could directly reflect the probability of occurrence of a single event, which itself depends on the achievement of a series of previous events. Under this assumption, the sigmoidal *lac* induction curves are purely populational effects skipping the single cell level. In this alternative picture, the inherently unpredictable all-or-nothing behavior is unrelated to the population response because of the uncoupling action of the fast positive feedback taking place in individual bacteria. A single stochastic event can generate smooth population behaviors, in the same manner that unpredictable microscopic association and dissociation jumps are converted into smooth exponential curves in surface plasmon resonance experiments. Figure 2 of the article by Novick and Weiner illustrates particularly well how a purely mathematical notion, the probability of occurrence (continuous and ranging from 0 to 1) of a binary event (discontinuous, with two discrete values 0 or 1), directly translates into the concrete partition of all-or-none phenotypes in the bacterial culture. Hence, *lac* is a typical system for which the population behavior is well defined whereas the state of each cell is indeterminate. In the same way, the result of the molecular race following de-repression is also dictated by chance only, but can be captured probabilistically, that is to say deterministically, at the population level. The example of *lac* shows that populational studies and classical deterministic treatments can be fully appropriate for shedding light on microscopic phenomena by materializing probability distributions. The mechanisms proposed here to give rise to the coexistence of uninduced and induced phenotypes in the population are transient, mathematically speaking, but can in practice lead to durable bimodality. (**i**) A random walk to an absorbing state is inherently transient but can be astonishingly long for heterogeneous random walks with predominant backward transitions as obvious in Eq.(1), and its individual arrivals are widely dispersed in time. For instance, for a succession of $ n $ identical transitions and when $ k^{-}\gg k^{+} $, the mean time and standard deviation reach the same geometric function of the number of steps $ \sigma(T)=\left \langle T \right \rangle \sim (1/k^{+})(k^{-}/k^{+})^{n-1} $. (**ii**) The single race following derepression can, for appropriate parameter ranges, provide a simple general mechanism to limit cellular induction to a small fraction of the population. 
One cannot exclude that highly desynchronized single events explosively amplified by positive feedback in every cell account for a significant fraction of the observed all-or-nothing behavior of *lac*. There is some confusion about *lac* bistability, sometimes put forward to describe the bimodal distribution of bacteria. In fact, bistability is not necessary to explain certain features of *lac* and the binary individual phenotypes can be unrelated to sigmoidal responses to inducer concentration. Examples of mechanisms that can underlie different possible situations in which single cell and population behaviours are disconnected are listed in Appendix A2. In these examples, the principle of ligand-induced de-repression is retained (i.e. inhibition by a DNA-binding repressor, itself suppressed by an inducer). In fact, *lac* is supposed to be bistable not only to explain the prolonged bimodal partition of the population, but most importantly to explain the existence of an induction threshold preventing the generalized induction of bacteria at low inducer concentration and solving the apparent paradox that *lac* expression is necessary for *lac* induction. These characteristics are to some extent also obtained with the kinetic mechanisms described here. The basal expression of permease allows bacteria to sense the presence of lactose in their environment and a basal level of $ \beta $-galactosidase is also required to isomerize lactose into allolactose. But these basal levels should be low enough to avoid priming the positive feedback when lactose concentration is insufficient to justify the metabolic investment for the bacteria to synthesize *lac* machineries. From a physiological perspective and if gratuitous inducers have some relevance, this mechanism would be of great evolutionary interest since it offers different advantages: (**i**) At the level of a single cell, it ensures that the decision whether or not to use lactose is unequivocal, either minimal or maximal. (**ii**) At the level of the population, particularly when the substrate concentration is low and uncertain, it ensures that only a fraction of the cells invests in this expensive specialization, which reduces the division rate.\ Beyond its apparent simplicity, the *lac* system remains largely enigmatic. For complex biochemical systems, inaccurate models can be unintentionally supported by misleading experiments. Even when the system is reduced to its basic components, reliable experimental results can be interpreted in several ways, leading to strikingly different models. In the present case, S-shaped dose-response curves can be interpreted either using quasi-equilibrium cooperativity (Oehler et al., 2006), with similar outputs for single cells and populations, or as the distribution of a transient process, for which single-cell and population behaviours can be disconnected (Appendix A2). The existence of *lac* bistability also remains to be confirmed (Savageau, 2011). Alternative mechanisms possibly contributing to *lac* behaviors and justifying the tetrameric structure of LacI are added here to the debate, but the saga of the lactose operon is likely to remain far from complete. 
Appendices ========== Appendix 1: Random walk modeling of the chain of events leading to micro-irreversible LacI dissociation ------------------------------------------------------------------------------------------------------- The 15 transitions represented in Fig.3b are not elementary events but are themselves composite reactions including many micro-substeps (individual chemical bond breakages and so on), so that their designation as Poisson transitions is only approximate, particularly for pseudo-first-order reactions. In addition to the minimal chain drawn in Fig.3b, P*lac* clearance is also conditioned by other upstream phenomena necessary to convey the inducer close to LacO and many other interfering parameters including the negative effect of *lac* induction on the growth rate, molecule dilution or crowding, diffusion and translocation, bacterial background, metabolism, accessory factors present in the repression complex, etc. It can however be proposed that the most elementary biochemical transitions, the “atoms of dynamics”, refractory to experimental measurements, are naturally and necessarily distributed according to an exponential law, because this distribution is the only one that is fully memoryless. In real biochemical networks, every transition is conditioned by specific activation energies, so that the rates can be different. The probabilities of the forward and backward jumps starting from any state $ x_{i} $ are not 1/2 and 1/2 as for isotropic random walks, but $ k_i^+/(k_i^-+k_i^+) $ and $ k_i^-/(k_i^-+k_i^+) $ respectively. Different approaches can be used to calculate the probability of completion of such chains, including a discrete random walk approach, in which $ x_{0} $ is a reflecting wall and $ x_{n} $ is an absorbing one, and a time continuous Markovian approach using a set of linear differential equations with an absorbing final state $ (k_n^- = 0) $ and solving Eq.(A1). The system can be described with the same rule for every state $ i $ such that $ 1 < i < n-1 $, as follows\ $ \begin{pmatrix} \dot{x}_{0}\\ \hphantom{\vdots} \\ ..\\ \hphantom{\vdots} \\ \dot{x}_{i}\\ \hphantom{\vdots} \\ ..\\ \hphantom{\vdots} \\ \dot{x}_{n-1}\\ \hphantom{\vdots} \\ \dot{x}_{n}\\ \end{pmatrix} = \begin{pmatrix} -k^{+}_{0} & k^{-}_{1} & ... & 0 & 0 & 0 & ... & 0 & 0 & 0 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ 0 & 0 & ... & k^{+}_{i-1} & -(k^{-}_{i}+k^{+}_{i}) & k^{-}_{i+1} & ... & 0 & 0 & 0 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ... \\ 0 & 0 & ... & 0 & 0 & 0 & ... & k^{+}_{n-2} & -(k^{-}_{n-1}+k^{+}_{n-1}) & 0 \\ 0 & 0 & ... & 0 & 0 & 0 & ... & 0 & k^{+}_{n-1} & 0 \\ \end{pmatrix} \begin{pmatrix} x_{0}\\ \hphantom{\vdots} \\ ..\\ \hphantom{\vdots} \\ x_{i}\\ \hphantom{\vdots} \\ ..\\ \hphantom{\vdots} \\ x_{n-1}\\ \hphantom{\vdots} \\ x_{n}\\ \end{pmatrix} $\ (A1)\ **Correspondences between discrete and time continuous random walk approaches**\ \ If using a discrete random walk approach, Eq.(4a) is the complement to one of the probability of remaining confined in the first $ n-1 $ positions. This probability simplifies for large $ N $ and $ n $ and for a moderate ratio $ N/n^2 $ as described in (Ruelle, 2006). 
In this case, $ \theta $ is the ratio of the number of jumps over the square of the number of steps in the chain (Eq.(4b)); this square is itself the mean number of jumps required to complete the chain, according to the well-established random diffusion rate.\ For the continuous time differential approach with a single rate $ k $, the probability of occupying state $ n-1 $, multiplied by $ k $, gives the probability density of arrival in state $ n $. Finally, it is even more useful to make $ \theta $ independent of the number of jumps and of steps, because these microscopic values are generally unknown in real chains of events. The equivalence between the discrete and time continuous approaches can be derived as follows: The mean time of a single jump $ \tau $, such that $ t = N \tau $, is $ \tau = (n+1)/(2nk) $ (considering that the waiting times are $ 1/k $ for the state $ x_0 $ and $ 1/(2k) $ for all the other positions) and given the mean completion time of the whole isotropic chain (Eq.(3)), $ N/n^2 $ is equivalent to $ t/\left \langle T \right \rangle $. Appendix 2: Table A1 --------------------- Single cell binarity and population sigmoidicity are not necessarily related. The ligand-dependent transcriptional probability functions, shown in the absence of self-regulated circuits, illustrate the possible disconnection between sigmoidicity and bimodality.\ **Left column**. Single cell (symbolized by disks) and population (curves) responses to inducer. Greyscale (graded) or bitmap (all-or-nothing) expression intensities can be obtained through fast equilibrium, in panels (a) and (c), or single event-primed switches in panels (b) and (d), respectively. For the single event mechanisms, the system is frozen at a given time after inducer addition. The individual behaviors are not necessarily related to the apparent population induction shape, and bimodality does not always require population sigmoidicity. In panels (a) and (b), the populational responses are non-sigmoidal while in panels (c) and (d), they are sigmoidal, also said to be ultrasensitive.\ **Right column**. Examples of equations underlying the different situations shown in the left column. Non-zero values of $ F $ are possible for \[$ A $\] = 0 in the fast equilibrium mechanisms. The equivalence between population and single cell behaviors in the fast equilibrium mechanisms relies on the ergodic equivalence between space averaging and time averaging.\ (**a**) The repressor ($ R $) interacts with DNA through short association/dissociation cycles and the inducer ($ A $) determines the fraction of time in which the promoter ($ Prom $) is occupied. $ K_R $ is the binding equilibrium constant between $ R $ and $ Prom $ and $ K_A $ the binding constant of the inducer with $ R $.\ (**b**) The stochastic dissociation of the repressor is a single stochastic event triggered by the inducer and corresponding to the release of monomeric repressor $ R $ from DNA. This mechanism is less likely to occur than the chain of events postulated in (d), because of the inevitable risk of unwanted priming in fluctuating biological systems.\ (**c**) As in panel (a), the inducer equilibrates rapidly with the repressor and the repressor rapidly equilibrates with DNA, so that equilibrium cooperativity can apply to the saturation of $ R $ by the inducer, as well as to the saturation of $ Prom $ by $ R $. The formulations shown use the MWC model of cooperativity which requires few parameters and has already been used for LacI. 
Three cases can be envisioned.\ • $ c_1 $ Cooperative fixation of the inducer to $ R $. $ C_R $ corresponds to the repressor conformational equilibrium constant.\ • $ c_2 $ Cooperativity concerns the fixation of $ R $ to the two operators of *Prom*. $ C_L $ is the $ Prom $-looping constant and $ P $ is the fraction of unliganded $ R $ supposed to be bound non-cooperatively by $ A $.\ A mix of $ c_1 $ and $ c_2 $, with involvement of both $ C_R $ and $ C_L $, is also possible (not formulated).\ (**d**) Model retained in the present study. The induced phenotype appears once the final event of a chain conditioned by inducer addition occurs. The example equation shown corresponds to a chain of two consecutive events, in which the first one is reversible and the two forward transitions depend on the inducer (Fig.A1).\ ![image](TableA2.png){width="14.2cm"}\ [2]{} ![image](figureA2.png){width="8cm"}\ **Figure A2**. Time- and concentration-dependent completion of a two-step chain with two pseudo-first-order forward rates. The probability of state $ x_2 $ is a function of the combination of time after inducer addition and of inducer concentration. 3D plot drawn according to Eq.(A2-5) with $ k_1^-/k_0^+ = k_1^-/k_1^+ = 3 $ and $ P(x_0)=1 $ at $ t_0=0 $.\ **Acknowledgements** I thank Carlos Blanco’s lab for providing the *E. coli* K12 cells and synthetic media; Philippe Ruelle and Jean-Christophe Breton for discussions helpful for building Eq.(4); Catherine Martin-Outerovitch and Henri Wróblewski for their help with bacterial staining and microscopy, and the anonymous reviewers for very useful suggestions. References ========== Balaeff, A., Mahadevan, L., Schulten, K. 2004. Structural basis for cooperative DNA binding by CAP and lac repressor. Structure 12, 123-132.\ Barkley, M.D., Riggs, A.D., Jobe, A., Burgeois, S. 1975. Interaction of effecting ligands with lac repressor and repressor-operator complex. Biochemistry 14, 1700-1712.\ Beard, D.A. 2011. Simulation of cellular biochemical system kinetics. Wiley Interdiscip. Rev. Syst. Biol. Med. 3, 136-146.\ Benzer, S. 1953. Induced synthesis of enzymes in bacteria analyzed at the cellular level. Biochim. Biophys. Acta 11, 383-395.\ Berg, O.G., Blomberg, C. 1977. Mass action relations in vivo with application to the lac operon. J. Theor. Biol. 67, 523-533.\ Bolouri, H., Davidson, E.H. 2003. Transcriptional regulatory cascades in development: initial rates, not steady state, determine network kinetics. Proc. Natl. Acad. Sci. U.S.A. 100, 9371-9376.\ Brenowitz, M., Pickar, A., Jamison, E. 1991. Stability of a Lac repressor mediated “looped complex”. Biochemistry 30, 5986-5998.\ Chen, J., Alberti, S., Matthews, K.S. 1994. Wild-type operator binding and altered cooperativity for inducer binding of lac repressor dimer mutant R3. J. Biol. Chem. 269, 12482-12487.\ Chen, J., Matthews, K.S. 1994. Subunit dissociation affects DNA binding in a dimeric lac repressor produced by C-terminal deletion. Biochemistry 33, 8728-8735.\ Cherry, J.L., Adler, F.R. 2000. How to make a biological switch. J. Theor. Biol. 203, 117-133.\ Choi, P.J., Cai, L., Frieda, K., Xie, X.S. 2008. A stochastic single-molecule event triggers phenotype switching of a bacterial cell. Science 322, 442-446.\ Choi, P.J., Xie, X.S., Shakhnovich, E.I. 2010. Stochastic switching in gene networks can occur by a single-molecule event or many molecular steps. J. Mol. Biol. 396, 230-244.\ Chung, J.D., Stephanopoulos, G. 1996. On physiological multiplicity and population heterogeneity of biological systems. Chem. 
Eng. Sci. 51, 1509-1521.\ Daber, R., Sharp, K., Lewis, M. 2009. One is not enough. J. Mol. Biol. 392, 1133-1144.\ Díaz-Hernández, O., Santillán, M. 2010. Bistable behavior of the lac operon in E. coli when induced with a mixture of lactose and TMG. Front Physiol 1, 22.\ Doan, T., Mendez, A., Detwiler, P.B., Chen, J., Rieke, F. 2006. Multiple phosphorylation sites confer reproducibility of the rod’s single-photon responses. Science 313, 530-533.\ Dunaway, M., Manly, S.P., Matthews, K.S. 1980. Model for lactose repressor protein and its interaction with ligands. Proc. Natl. Acad. Sci. U.S.A. 77, 7181-7185.\ Elf, J., Li, G.W., Xie, X.S. 2007. Probing transcription factor dynamics at the single-molecule level in a living cell. Science 316, 1191-1194.\ Ferrell, J.E. Jr. 2012. Bistability, bifurcations, and Waddington’s epigenetic landscape. Curr. Biol. 22, R458-R466.\ Fried, M.G., Hudson, J.M. 1996. DNA looping and lac repressor-CAP interaction. Science 274, 1930-1931; author reply 1931-1932.\ Garcia, H.G., Phillips, R. 2011. Quantitative dissection of the simple repression input-output function. Proc. Natl. Acad. Sci. U.S.A. 108, 12173-12178.\ Golding, I., Paulsson, J., Zawilski, S.M., Cox, E.C. 2005. Real-time kinetics of gene activity in individual bacteria. Cell 123, 1025-1036.\ Becker, N.A., Kahn, J.D., Maher III, L.J. 2007. Effects of nucleoid proteins on DNA repression loop formation in *Escherichia coli*. Nucl. Acids. Res. 35, 3988-4000.\ Grainger, D.C., Hurd, D., Goldberg, M.D., Busby, S.J.W. 2006. Association of nucleoid proteins with coding and non-coding segments of the Escherichia coli genome. Nucl. Acids Res. 34, 4642-4652.\ Hemmerich, P., Schmiedeberg, L., Diekmann, S. 2011. Dynamic as well as stable protein interactions contribute to genome function and maintenance. Chromosome Res. 19, 131-151.\ Jacob, F. 2011. The birth of the operon. Science 332, 767.\ Jacob, F., Monod, J. 1961. Genetic regulatory mechanisms in the synthesis of proteins. J. Mol. Biol. 3, 318-356.\ Jacob, F., Monod, J. 1963. Genetic repression, allosteric inhibition, and cellular differentiation. In Cytodifferential and Macromolecular Synthesis, Ed Locke M. New York Academic Press 30-64.\ Koshland, D.E., Jr, Némethy, G., Filmer, D. 1966. Comparison of experimental binding data and theoretical models in proteins containing subunits. Biochemistry 5, 365-385.\ Kuhlman, T., Zhang, Z., Saier, M.H. Jr, Hwa, T. 2007. Combinatorial transcriptional control of the lactose operon of Escherichia coli. Proc. Natl. Acad. Sci. U.S.A. 104, 6043-6048.\ Lee, J., Goldfarb, A. 1991. lac repressor acts by modifying the initial transcribing complex so that it cannot leave the promoter. Cell 66, 793-798.\ Lewis, M. 2005. The lac repressor. C. R. Biol. 328, 521-548.\ Michel, D. 2009. Fine tuning gene expression through short DNA-protein binding cycles. Biochimie 91, 933-941.\ Michel, D. 2010. How transcription factors can adjust the gene expression floodgates. Prog. Biophys. Mol. Biol. 102, 16-37.\ Michel, D. 2011. Basic statistical recipes for the emergence of biochemical discernment. Prog. Biophys. Mol. Biol. 106, 498-516.\ Moazed, D. 2011. Mechanisms for the inheritance of chromatin states. Cell 146, 510-518.\ Nicol-Benoît, F., Le-Goff, P., Le-Dréan, Y., Demay, F., Pakdel, F., Flouriot, G., Michel, D. 2012. Epigenetic memories: structural marks or active circuits? Cell. Mol. Life Sci. 69, 2189-2203.\ Novick, A., Weiner, M. 1957. Enzyme induction as an all-or-none phenomenon. Proc. Natl. Acad. Sci. U.S.A. 
43, 553-566.\ O’Gorman, R.B., Rosenberg, J.M., Kallai, O.B., Dickerson, R.E., Itakura, K., Riggs, A.D., Matthews, K.S. 1980. Equilibrium binding of inducer to lac repressor-operator DNA complex. J. Biol. Chem. 255, 10107-10114.\ Oehler, S., Alberti, S., Müller-Hill, B. 2006. Induction of the lac promoter in the absence of DNA loops and the stoichiometry of induction. Nucl. Acids Res. 34, 606-612.\ Oehler, S., Eismann, E.R., Krämer, H., Müller-Hill, B. 1990. The three operators of the lac operon cooperate in repression. EMBO J. 9, 973-979.\ Ozbudak, E.M., Thattai, M., Lim, H.N., Shraiman, B.I., Van Oudenaarden, A. 2004. Multistability in the lactose utilization network of Escherichia coli. Nature 427, 737-740.\ Ruelle, P. 2006. Lecture notes on random walks, PHYS 2122\ (http://www.fyma.ucl.ac.be/data/pdf/random-walks-v4.pdf)\ Sadler, J.R., Sasmor, H., Betz, J.L. 1983. A perfectly symmetric lac operator binds the lac repressor very tightly. Proc. Natl. Acad. Sci. U.S.A. 80, 6785-6789.\ Sanchez, A., Osborne, M.L., Friedman, L.J., Kondev, J., Gelles, J. 2011. Mechanism of transcriptional repression at a bacterial promoter by analysis of single molecules. EMBO J. 30, 3940-3946.\ Savageau, M.A. 2011. Design of the lac gene circuit revisited. Math. Biosci. 231, 19-38.\ Setty, Y., Mayo, A.E., Surette, M.G., Alon, U. 2003. Detailed map of a cis-regulatory input function. Proc. Natl. Acad. Sci. U.S.A. 100, 7702-7707.\ Sharp, K.A. 2011. Allostery in the lac operon: population selection or induced dissociation? Biophys. Chem. 159, 66-72.\ Simons, A., Tils, D., von Wilcken-Bergmann, B., Müller-Hill, B. 1984. Possible ideal lac operator: Escherichia coli lac operator-like sequences from eukaryotic genomes lack the central G-X-C pair. Proc. Natl. Acad. Sci. U.S.A. 81, 1624-1628.\ Sobie, E.A. 2011. Bistability in biochemical signaling models. Sci Signal 4, tr10.\ Thomas, R. 1998. Laws for the dynamics of regulatory networks. Int. J. Dev. Biol. 42, 479-485.\ Tolker-Nielsen, T., Holmstrøm, K., Boe, L., Molin, S. 1998. Non-genetic population heterogeneity studied by in situ polymerase chain reaction. Mol. Microbiol. 27, 1099-1105.\ Veliz-Cuba, A., Stigler, B. 2011. Boolean models can explain bistability in the lac operon. J. Comput. Biol. 18, 783-794.\ Vidal-Aroca, F., Giannattasio, M., Brunelli, E., Vezzoli, A., Plevani, P., Muzi-Falconi, M., Bertoni, G. 2006. One-step high-throughput assay for quantitative detection of beta-galactosidase activity in intact gram-negative bacteria, yeast, and mammalian cells. BioTechniques 40, 433-434.\ Vilar, J.M.G., Guet, C.C., Leibler, S. 2003. Modeling network dynamics: the lac operon, a case study. J. Cell Biol. 161, 471-476.\ Wilson, C.J., Zhan, H., Swint-Kruse, L., Matthews, K.S. 2007. The lactose repressor system: paradigms for regulation, allosteric behavior and protein folding. Cell. Mol. Life Sci. 64, 3-16.\ Xu, M., Long, C., Chen, X., Huang, C., Chen, S., Zhu, B. 2010. Partitioning of histone H3-H4 tetramers during DNA replication-dependent chromatin assembly. Science 328, 94-98.\ Yagil, G., Yagil, E. 1971. On the relation between effector concentration and the rate of induced enzyme synthesis. Biophys. J. 11, 11-27.\ Yildirim, N., Kazanci, C. 2011. Deterministic and stochastic simulation and analysis of biochemical reaction networks: The lactose operon example. Methods Enz. 487, 371-395.\ Zhan, H., Camargo, M., Matthews, K.S. 2010. Positions 94-98 of the lactose repressor N-subdomain monomer-monomer interface are critical for allosteric communication. 
Biochemistry 49, 8636-8645.\
--- abstract: | In 1977, Orlik–Randell constructed a nice integral basis of the middle homology group of the Milnor fiber associated to an invertible polynomial of chain type and conjectured that it is represented by a distinguished basis of vanishing cycles. The purpose of this paper is to prove the algebraic counterpart of the Orlik–Randell conjecture. Under the homological mirror symmetry, we may expect that the triangulated category of maximally-graded matrix factorizations for the Berglund–Hübsch transposed polynomial admits a full exceptional collection with a nice numerical property. Indeed, we show that the category admits a Lefschetz decomposition with respect to a polarization in the sense of Kuznetsov–Smirnov, whose Euler matrix is calculated in terms of the “zeta function" of the inverse of the polarization. As a corollary, it turns out that the homological mirror symmetry holds at the level of lattices, namely, the Grothendieck group of the category with the Euler form is isomorphic to the middle homology group with the intersection form (with a suitable sign). address: - 'Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka Osaka, 560-0043, Japan' - 'Department of Mathematics, Graduate School of Science, Osaka University, Toyonaka Osaka, 560-0043, Japan' author: - Daisuke Aramaki - Atsushi Takahashi title: 'Maximally-graded matrix factorizations for an invertible polynomial of chain type' --- Introduction ============ Let $f$ be an invertible polynomial of chain type, namely, a polynomial of the form $$f= f(x_1,\dots, x_n):= x_1^{a_1}x_2 + x_2^{a_2}x_3 + \cdots + x_{n-1}^{a_{n-1}}x_n + x_n^{a_n}, \quad a_i\ge 2.$$ One can associate to it an interesting algebro-geometric invariant, the triangulated category ${\rm HMF}^{L_f}_S(f)$ of $L_f$-graded matrix factorizations of $f$, where $S:=\CC[x_1,\dots, x_n]$ and $L_f$ is the maximal grading of $f$ (see Section 1). This category ${\rm HMF}^{L_f}_S(f)$ is considered as an analogue of the bounded derived category of coherent sheaves on a smooth proper algebraic variety. Let $\widetilde f$ be the Berglund–Hübsch transpose of $f$ defined by $$\widetilde f=\widetilde f (x_1,\dots ,x_n):=x_1^{a_1} +x_1 x_2^{a_2}+\dots+x_{n-1}x_n^{a_n},$$ which (also) has an isolated singularity only at the origin $0\in\CC^n$. A distinguished basis of vanishing $(n-1)$-spheres in the Milnor fiber of $\widetilde f$ can be categorified to an $A_\infty$-category ${\rm Fuk}^{\to}(\widetilde f)$ called the directed Fukaya category. Its derived category $D^b{\rm Fuk}^\to(\widetilde f)$ is an invariant of the polynomial $\widetilde f$ as a triangulated category and $D^b{\rm Fuk}^\to(\widetilde f)$ has a full exceptional collection by construction. The Berglund–Hübsch transposition of invertible polynomials together with the Berglund–Henningson duality of their symmetries [@BHe] yields several mirror symmetry conjectures. For example, the topological mirror symmetry conjecture is now proven in [@EGT]. Here in this paper, we are interested in the homological mirror symmetry conjecture, namely, we expect the following equivalence of triangulated categories to hold: $${\rm HMF}^{L_f}_S(f)\cong D^b{\rm Fuk}^\to(\widetilde f).$$ There is much evidence for the above conjecture, which follows from related results by several authors (cf. [@T1; @KST1; @KST2; @Ue; @FU1; @FU2; @LP]). Just a few days before the submission of this paper to the arXiv, a closely related work by Habermann–Smith [@HS] appeared. 
They seem to prove the conjecture for invertible polynomials in two variables; in order to show the equivalence, they appear to use the same full strongly exceptional collection as the one in an unpublished work by the second named author and Yoko Hirano (cf. [@T3]). Based on this conjecture, it is natural to expect the existence of a full exceptional collection in ${\rm HMF}^{L_f}_S(f)$. Recently, Hirano–Ouchi [@HO] prove this by their semi-orthogonal decomposition theorem and an induction on the number of variables in $f$. However, unfortunately, it was not clear how to choose matrix factorizations which form a full exceptional collection. In 1977, Orlik–Randell [@OR Theorem 2.11] constructed a nice $\ZZ$-basis of the middle homology group of the Milnor fiber associated to $\widetilde f$ and conjectured that the $\ZZ$-basis is represented by a distinguished basis of vanishing cycles [@OR Conjecture 4.1]. The purpose of this paper is to prove the algebraic counterpart of the Orlik–Randell conjecture. More precisely, we explicitly construct a full exceptional collection motivated by their $\ZZ$-basis, which is moreover a Lefschetz decomposition in the sense of Kuznetsov–Smirnov [@KuS Definition 2.3], whose Euler matrix is calculated in terms of the “zeta function" of the inverse of the polarization. The most important part of our main theorem (Theorem \[thm: main\]) is the following \[thm: intro\] There exist an exceptional object $E\in {\rm HMF}^{L_f}_S(f)$ and an autoequivalence $\tau$ on ${\rm HMF}^{L_f}_S(f)$ such that $(E,\tau E,\dots ,\tau^{\widetilde \mu_n -1} E)$ is a full exceptional collection: $${\rm HMF}^{L_f}_S(f)\cong \langle E,\tau E,\dots ,\tau^{\widetilde \mu_n -1} E\rangle,$$ whose Euler matrix is given by $$\prod_{i=0}^{n}\left(1-N^{d_i}\right)^{(-1)^{n-i+1}},$$ where $\widetilde \mu_n:=d_n-d_{n-1}+\dots +(-1)^{n-1}d_1+(-1)^{n}$, $d_i:=a_1\dots a_i$, is the Milnor number of $\widetilde f$, and $N:=(\delta_{i+1,j})$ is the regular nilpotent $\widetilde \mu_n\times \widetilde \mu_n$-matrix. We give here an outline of this paper. In Section 2, after introducing the definition of invertible polynomials and their maximal gradings, we recall some definitions and facts on matrix factorizations necessary in later sections. Section 3 is the main part of this paper. We first show the existence of the polarization $\tau$ of index $\widetilde{\mu}$ in the sense of Kuznetsov–Smirnov [@KuS Definition 2.3], the autoequivalence $\tau$ on ${\rm HMF}^{L_f}_S(f)$ satisfying $$T^{-n-2k}\S = \tau^{-\widetilde{\mu}},$$ for some $k\in\ZZ$ where $\S$ is the Serre functor on ${\rm HMF}^{L_f}_S(f)$. This motivates us to consider a Lefschetz decomposition with respect to $\tau$, whose existence follows from [@HO]. Note that the automorphism induced by $T^{-n}\S$ on the Grothendieck group can be considered as the mirror dual object to the Milnor monodromy of the singularity associated to $\widetilde f$. Therefore, we then show that it satisfies the same linear algebraic property as the Milnor monodromy. Based on this observation, we find a candidate for the object $E$ in Theorem \[thm: intro\] and come to our main theorem in this paper (Theorem \[thm: main\]). Section 4 is devoted to the proof of Theorem \[thm: main\]. It is Lemma \[lem:key\], whose $\ZZ$-graded version is also used in [@KST2], that enables us to show that the given exceptional collection is full. It is also key that one can systematically calculate the spaces of morphisms due to Dyckerhoff (the $L_f$-graded modification of Proposition 4.3 in [@D]). 
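To make the combinatorics of Theorem \[thm: intro\] concrete, the following SymPy sketch (added purely as an illustration; it is not part of the paper, and the chosen exponents are an arbitrary small example) computes the matrix $\prod_{i=0}^{n}\left(1-N^{d_i}\right)^{(-1)^{n-i+1}}$ for the chain polynomial $f=x_1^2x_2+x_2^3$ and observes that it is an upper triangular unimodular matrix of size $\widetilde \mu_n$.

```python
import sympy as sp

# Hypothetical small example: f = x1^2*x2 + x2^3 (chain type, a = (2, 3)).
a = [2, 3]
n = len(a)

# d_i = a_1 * ... * a_i with d_0 = 1, and the Milnor number of the transpose.
d = [1]
for ai in a:
    d.append(d[-1] * ai)
mu = sum((-1)**(n - i) * d[i] for i in range(n + 1))   # \widetilde{\mu}_n

# Regular nilpotent matrix N = (delta_{i+1,j}) of size mu x mu.
N = sp.zeros(mu, mu)
for i in range(mu - 1):
    N[i, i + 1] = 1

# Euler matrix from the theorem: prod_{i} (1 - N^{d_i})^{(-1)^{n-i+1}};
# all factors are polynomials in N, so the order of multiplication is irrelevant.
chi = sp.eye(mu)
for i in range(n + 1):
    factor = sp.eye(mu) - N**d[i]
    chi = chi * (factor.inv() if (n - i + 1) % 2 == 1 else factor)

print("mu =", mu)           # 5 for this example
sp.pprint(chi)              # upper triangular with ones on the diagonal
print("det =", chi.det())   # unimodular
```

For this small example the output is the $5\times 5$ matrix $\mathbf{1}+N$.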
It is known by Kreuzer–Skarke [@KrS] that any invertible polynomial is a Thom–Sebastiani sum of chain type polynomials and loop type polynomials. For an invertible polynomial of loop type, two important problems are still open: to prove the existence of a full exceptional collection and to construct systematically a “good one" among full exceptional collections. We hope to solve these in the near future. [**Acknowledgements.**]{} We would like to thank Yuuki Hirano and Genki Ouchi for valuable discussion. The second named author is grateful to Wolfgang Ebeling who prompted him to look at the Orlik–Randell conjecture. The second named author is supported by JSPS KAKENHI Grant Number JP16H06337. Preliminaries ============= In this section, we set up several definitions which are used in the present paper. Invertible polynomials ---------------------- Let $f(x_1,\ldots, x_n)\in \CC[x_1,\dots, x_n]$ be a non-degenerate weighted homogeneous polynomial, namely, a polynomial with an isolated singularity at the origin with the property that there are positive integers $w_1,\ldots ,w_n$ and $d$ such that $f(\lambda^{w_1} x_1, \ldots, \lambda^{w_n} x_n) = \lambda^d f(x_1,\ldots ,x_n)$ for $\lambda \in \CC^\ast$. A non-degenerate weighted homogeneous polynomial $f(x_1,\ldots ,x_n)$ is called [*invertible*]{} if the following conditions are satisfied: 1. the number of variables ($=n$) coincides with the number of monomials in the polynomial $f(x_1,\ldots, x_n)$, namely, $$f(x_1,\ldots ,x_n)=\sum_{i=1}^n c_i\prod_{j=1}^nx_j^{E_{ij}}$$ for some coefficients $c_i\in\CC^\ast$ and non-negative integers $E_{ij}$ for $i,j=1,\ldots, n$, 2. the matrix $E:=(E_{ij})$ is invertible over $\QQ$. Since we work over the complex number field $\CC$, without loss of generality one may assume that $c_i=1$ for $i=1,\ldots, n$, which can be achieved by rescaling the variables. There is also a classification of invertible polynomials. It is known by [@KrS] that an invertible polynomial $f$ is a Thom–Sebastiani sum of invertible polynomials of the following types: 1. $x_1^{a_1}x_2 + x_2^{a_2}x_3 + \cdots + x_{n-1}^{a_{n-1}}x_n + x_n^{a_n}$ (chain type; $n\ge 1$); 2. $x_1^{a_1}x_2 + x_2^{a_2}x_3 + \cdots + x_{n-1}^{a_{n-1}}x_n + x_n^{a_n}x_1$ (loop type; $n\ge 2$). Let $f(x_1,\dots ,x_n)=\displaystyle\sum_{i=1}^nc_i\prod_{j=1}^nx_j^{E_{ij}}$ be an invertible polynomial. Consider the free abelian group $\displaystyle\bigoplus_{i=1}^n\ZZ\vec{x}_i\oplus \ZZ\vec{f}$ generated by the symbols $\vec{x}_i$ for the variables $x_i$ for $i=1,\dots, n$ and the symbol $\vec{f}$ for the polynomial $f$. The [*maximal grading*]{} $L_f$ of the invertible polynomial $f$ is the abelian group defined by the quotient $$L_f:=\left.\left(\bigoplus_{i=1}^n\ZZ\vec{x}_i\oplus \ZZ\vec{f}\right)\right/\left(\vec{f}-\sum_{j=1}^nE_{ij}\vec{x}_j;\ i=1,\dots ,n\right).$$ Note that $L_f$ is an abelian group of rank $1$ which is not necessarily free. Matrix factorizations --------------------- In this subsection, we recall some properties of $L_f$-graded matrix factorizations attached to an invertible polynomial $f$. All the results are slight modifications of those in [@KST1; @KST2; @Or1; @Or2]. Set $S:=\CC[x_1,\dots, 
x_n]$, which is naturally an $L_f$-graded $\CC$-algebra; $$S=\bigoplus_{\vec{l}\in L_f} S_{\vec{l}}.$$ For any $L_f$-graded $S$-module $M$, $M(\vec{l})$ denotes the grading shift by $\vec{l}\in L_f$ of $M;$ $$M(\vec{l})=\bigoplus_{\vec{l'}\in L_f} M(\vec{l})_{\vec{l'}},\quad M(\vec{l})_{\vec{l'}}:=M_{\vec{l}+\vec{l'}},$$ which induces the autoequivalence functor $(\vec{l})$ for $\vec{l}\in L_f$ on ${\rm gr}^{L_f}\text{-}S$, the category of finitely generated $L_f$-graded $S$-modules. Let $F_0,F_1$ be $L_f$-graded free modules and $f_0:F_0\to F_1$, $f_1:F_1\to F_0(\vec{f})$ be morphisms such that $f_1\circ f_0=f\cdot{\rm id}_{F_0}, f_0(\vec{f})\circ f_1=f\cdot{\rm id}_{F_1}$. The tuple $(F_0,F_1,f_0,f_1)$ is called an $L_f$-graded [*matrix factorization*]{} of $f$ and denoted by $$\overline{F}:=\Big(\mf{F_0}{f_0}{f_1}{F_1}\Big).$$ For an $L_f$-graded matrix factorization $\overline{F}$, the rank of $F_0$ must coincide with the one of $F_1$, which we shall call the [*size*]{} of the matrix factorization $\overline{F}$. Let $\overline{F}=\Big(\mf{F_0}{f_0}{f_1}{F_1}\Big)$ and $\overline{F'}=\Big(\mf{F'_0}{f'_0}{f'_1}{F'_1}\Big)$ be $L_f$-graded matrix factorizations. 1. A [*morphism*]{} $\phi: \overline{F}\to \overline{F'}$ of $L_f$-graded matrix factorizations from $\overline{F}$ to $\overline{F'}$ is a pair $\phi=(\phi_0,\phi_1)$ where $\phi_0:F_0\to F'_0$ and $\phi_1:F_1\to F'_1$ are morphisms in ${\rm gr}^{L_f}\text{-}S$ such that $\phi_1\circ f_0=f'_0\circ \phi_0$ and $\phi_0(\vec{f})\circ f_1=f'_1\circ \phi_1$. 2. The morphism $\phi=(\phi_0,\phi_1):\overline{F}\to \overline{F'}$ is called [*null-homotopic*]{} if there is a pair $\psi=(\psi_0,\psi_1)$ where $\psi_0:F_0\to F'_1(-\vec{f})$ and $\psi_1:F_1\to F'_0$ are morphisms in ${\rm gr}^{L_f}\text{-}S$ such that $\phi_0=f'_1(-\vec{f})\circ \psi_0+\psi_1\circ f_0$ and $\phi_1=\psi_0(\vec{f})\circ f_1+f'_0\circ \psi_1$. The category ${\rm MF}^{L_f}_S(f)$ of $L_f$-graded matrix factorizations of $f$ is a Frobenius category whose morphisms factoring through projectives coincide with null-homotopic morphisms. Therefore, its stable category $${\rm HMF}^{L_f}_S(f):=\underline{{\rm MF}}^{L_f}_S(f)$$ has a natural structure of a triangulated category due to [@Ha]. We shall denote by $T$ the translation functor on ${\rm HMF}^{L_f}_S(f)$. By definition of the triangulated structure on ${\rm HMF}^{L_f}_S(f)$, one easily sees the following. Each exact triangle in ${\rm HMF}^{L_f}_S(f)$ is isomorphic to a triangle of the form $$\overline{F}\stackrel{[\phi]}{\longrightarrow} \overline{F'}\stackrel{}{\longrightarrow} C(\phi)\stackrel{}{\longrightarrow} T\overline{F}$$ for some $\overline{F},\overline{F'}\in {\rm HMF}^{L_f}_S(f)$ and $\phi\in{\rm MF}^{L_f}_S(f)(\overline{F},\overline{F'})$, a lift of $[\phi]\in{\rm HMF}^{L_f}_S(f)(\overline{F},\overline{F'})$. Since $f$ has an isolated singularity at the origin, we have the following The category ${\rm HMF}^{L_f}_S(f)$ has the following properties: 1. It is finite. Namely, for all $\overline{F}, \overline{F'}\in {\rm HMF}^{L_f}_S(f)$, we have $$\sum_{p\in\ZZ}\dim_\CC {\rm HMF}^{L_f}_S(f)(\overline{F},T^p \overline{F'})<\infty.$$ 2. It is idempotent complete. 
Namely, for any $\overline{F}\in {\rm HMF}^{L_f}_S(f)$ and any idempotent $e\in {\rm HMF}^{L_f}_S(f)(\overline{F},\overline{F})$, there exists an object $\overline{F'}\in {\rm HMF}^{L_f}_S(f)$ and a pair of morphisms $\phi\in {\rm HMF}^{L_f}_S(f)(\overline{F},\overline{F'})$ and $\phi'\in {\rm HMF}^{L_f}_S(f)(\overline{F'},\overline{F})$ such that $\phi'\circ \phi=e$ and $\phi\circ \phi'={\rm id}_{\overline{F'}}$. The autoequivalence functor $(\vec{l})$ for $\vec{l}\in L_f$ on ${\rm gr}^{L_f}\text{-}S$ induces an autoequivalence on ${\rm HMF}^{L_f}_S(f)$, which we denote by the same symbol $(\vec{l})$. Explicitly, the action of $(\vec{l})$ takes an object $\overline{F}$ to the object $$\overline{F}(\vec{l}):= \Big(\mf{F_0(\vec{l})}{f_0(\vec{l})}{f_1(\vec{l})}{F_1(\vec{l})}\Big),$$ and takes a morphism $\phi=(\phi_0,\phi_1)$ to the morphism $\phi(\vec{l}):=(\phi_0(\vec{l}),\phi_1(\vec{l}))$. Similarly, the translation functor $T$ on ${\rm HMF}^{L_f}_S(f)$ takes an object $\overline{F}$ to the object $$T\overline{F} :=\Big(\mf{F_1}{-f_1\quad}{-f_0(\vec{f})\quad}{F_0(\vec{f})}\Big),$$ and takes a morphism $\phi=(\phi_0,\phi_1)$ to the morphism $T(\phi):=(\phi_1,\phi_0(\vec{f}))$. From these descriptions of autoequivalences $(\vec{l})$ and $T$, we obtain the following: On the category ${\rm HMF}^{L_f}_S(f)$, we have $T^2=(\vec{f})$. It is also known that Auslander–Reiten duality [@AR] yields the existence of the Serre functor on ${\rm HMF}^{L_f}_S(f)$. The functor $$\S:=T^{n-2}\circ (-\vec{\varepsilon}_f)=T^n\circ(-\vec{x}_1-\dots -\vec{x}_n),\quad \vec{\varepsilon}_f:=\left(\sum_{i=1}^n\vec{x}_i\right) -\vec{f},$$ defines the Serre functor on ${\rm HMF}^{L_f}_S(f)$, namely, there exists a bi-functorial isomorphism $${\rm HMF}^{L_f}_S(f)(\overline{F}, \overline{F'})\cong {\rm HMF}^{L_f}_S(f)(\overline{F'},\S\overline{F})^\ast,$$ where ${}^\ast$ denotes the duality over $\CC$. We also recall some notions necessary for statements below. Let $\T$ be a $\CC$-linear triangulated category with the translation functor $T$. 1. An object $E$ in $\T$ is called an [*exceptional object*]{} (or is called [*exceptional*]{}) if $\T(E,E)=\CC\cdot {\rm id}_E$ and $\T(E,T^p E)=0$ when $p\ne 0$. 2. An [*exceptional collection*]{} $\E=(E_1,\dots, E_\mu)$ in $\T$ is a finite set of exceptional objects satisfying the condition $\T(E_i,T^p E_j)=0$ for all $p$ and $i>j$. 3. An exceptional collection $\E=(E_1,\dots, E_\mu)$ in $\T$ is called a [*strongly exceptional collection*]{} if $\T(E_i,T^p E_j)=0$ for all $p\ne 0$ and $i,j=1,\dots, \mu$. 4. An exceptional collection $\E=(E_1,\dots, E_\mu)$ in $\T$ is called [*full*]{} if the smallest full triangulated subcategory of $\T$ containing all elements in $\E$ is equivalent to $\T$ as a triangulated category. Let $M$ be an $L_f$-graded $S$-module of the form $S/(p_1, \dots, p_s)$ such that $p_i\in S_{\vec{p}_i}$ for some $\vec{p}_i\in L_f$, $p_1,\dots , p_s$ form a regular sequence and $f\in (p_1, \dots, p_s)$. Write $f=p_1h_1+\dots +p_s h_s$ with $h_i\in S_{\vec{f}-\vec{p}_i}$ and put $P:=\oplus_{i=1}^{s}S(-\vec{p}_i)$. 
Then the Koszul resolution of $M$ as an $L_f$-graded $S$-module $$0\longrightarrow \wedge^s P\longrightarrow \wedge^{s-1} P \longrightarrow \dots \longrightarrow \wedge^1 P\longrightarrow \wedge^{0} P=S\longrightarrow M\longrightarrow 0,$$ yields the $L_f$-graded matrix factorization $\overline{F}=(\mfs{F_0}{f_0}{f_1}{F_1})$ of $f$ such that $$F_0:=\bigoplus_{k}\left(\wedge^{2k} P\right)(k\vec{f}),\quad F_1:=\bigoplus_{k}\left(\wedge^{2k-1} P\right)(k\vec{f}),$$ which is unique up to isomorphism in ${\rm HMF}^{L_f}_{S}(f)$ and is called the [*stabilization*]{} of $M$ and will be denoted by $M^{stab}$. For matrix factorizations from stabilizations, there is a convenient way to calculate their spaces of morphisms. See [@D Proposition 4.3]. Note that the category ${\rm HMF}^{L_f}_{S}(f)$ always contains an object $\CC^{stab}$, the stabilization of the simple $L_f$-graded $S$-module $\CC=S/(x_1,\dots, x_n)$, which plays an important role. In particular, we have the following key lemma, the $L_f$-graded version of the one by Kajiura-Saito-Takahashi [@KST2 Theorem 4.5]. \[lem:key\] Let $\E$ be an exceptional collection in ${\rm HMF}^{L_f}_{S}(f)$ and $\langle \E\rangle$ be the smallest full triangulated subcategory of ${\rm HMF}^{L_f}_{S}(f)$ containing all elements in $\E$. Suppose that $\langle \E\rangle$ satisfies the following conditions$:$ 1. $\langle \E\rangle$ is closed under the grading shift $(\vec{l})$ for all $\vec{l}\in L_f$. 2. $\langle \E\rangle$ contains $\CC^{stab}$. Then, the natural fully faithful functor $\langle \E\rangle\longrightarrow {\rm HMF}^{L_f}_{S}(f)$ is a triangulated equivalence. It is natural from the homological mirror symmetry conjecture to expect that the category ${\rm HMF}^{L_f}_S(f)$ admits a full exceptional collection. Indeed, there are many results which support this expectation (cf. [@T1; @KST1; @KST2; @Ue; @FU1; @FU2; @LP]). For $f$ of chain type, recently Hirano–Ouchi [@HO] prove the existence of the full exceptional collection. The main result in this paper is that, furthermore, there is a “good choice” among full exceptional collections from both combinatorial and geometric points of view. Main result =========== From now on, we shall only consider an invertible polynomial $f$ of chain type; $$f=x_1^{a_1}x_2 + x_2^{a_2}x_3 + \cdots + x_{n-1}^{a_{n-1}}x_n + x_n^{a_n}.$$ For simplicity, we assume that $a_i\ge 2$ for all $i=1,\dots, n$. Set $$d_i:=a_1\cdots a_i,\quad d_0:=1,$$ and define $\widetilde \mu_i$ inductively by $$\widetilde \mu_{i}:=d_{i}-\widetilde \mu_{i-1},\quad \widetilde \mu_0:=1.$$ It is important that $L_f/\ZZ\vec{f}$ is generated by $\vec{x}_1$, which follows from the relations in $L_f$; $$\vec{f}=a_1\vec{x}_1+\vec{x_2}=a_2\vec{x}_2+\vec{x_3}=\dots =a_{n-1}\vec{x}_{n-1}+\vec{x_n}=a_n\vec{x}_n.$$ Indeed, $\vec{x}_i=(-1)^{i-1}d_{i-1}\vec{x}_1$ for $i=2,\dots, n$ and $d_n\vec{x}_1=\vec{0}$ in $L_f/\ZZ\vec{f}$. In particular, from the description of the Serre functor $\S$ on ${\rm HMF}^{L_f}_{S}(f)$, this implies the following There exists an integer $k\in\ZZ$ such that $$T^{-n-2k}\S= \begin{cases} (\widetilde \mu_n \vec{x}_1) & \text{if}\quad n=2m+1,\ m\in \ZZ_{\ge 0},\\ (-\widetilde \mu_n \vec{x}_1) & \text{if}\quad n=2m,\ m\in\ZZ_{\ge 1}. \end{cases}$$ It is shown in [@HO Corollary 4.6] that ${\rm HMF}^{L_f}_{S}(f)$ has a full exceptional collection $\E=(E_1,\dots,E_{\widetilde \mu_n})$. The above proposition means that, if $n$ is odd (resp. even), $\tau:=(-\vec{x}_1)$ (resp. 
$\tau:=(\vec{x}_1)$) is a polarization of ${\rm HMF}^{L_f}_{S}(f)$ of index $\widetilde \mu_n$ in the sense of Kuznetsov–Smirnov [@KuS Definition 2.1]. Therefore, it is natural to expect the existence of an exceptional object $E$ such that $${\rm HMF}^{L_f}_{S}(f)\cong \langle E, \tau E, \dots, \tau^{\widetilde\mu_n-1} E\rangle,$$ a Lefschetz decomposition of ${\rm HMF}^{L_f}_{S}(f)$ with respect to the above polarization $\tau$ in the sense of [@KuS Definition 2.3]. In order to specify the numerical properties to be satisfied by this exceptional object $E$, following [@OR] we first introduce a polynomial $\varphi_n(t)$ in $t$ of degree $\widetilde \mu_n$; $$\varphi_n(t):=\prod_{i=0}^{n}\left(1-t^{d_i}\right)^{(-1)^{n-i}}.$$ It will turn out later that $\varphi_n(t)$ is the “zeta function" of the automorphism induced by $\tau^{-1}$ on the Grothendieck group $K_0({\rm HMF}^{L_f}_{S}(f))$. \[prop:M\_1\] Let $c'_i$ be the coefficients of $t^i$ in $\varphi_{n}(t)$. 1. We have $$c'_0=1,\quad c'_{\widetilde \mu_n-i}=(-1)^{n+1}c'_i.$$ 2. We have $$\varphi_n(t)=\det ({\bf 1}-tM_1),$$ where $M_1$ is the following $\widetilde \mu_n\times \widetilde \mu_n$-matrix $$M_1:= \begin{pmatrix} -c'_1 & 1 & 0 & \cdots & 0 \\ -c'_2 & 0 & 1 & \ddots & \vdots\\ \vdots & 0 & \ddots & \ddots & 0\\ \vdots & \vdots & \ddots & \ddots & 1 \\ -c'_{\widetilde \mu_n} & 0 & \cdots &0 & 0 \end{pmatrix}.$$ Define a $\widetilde \mu_n\times \widetilde \mu_n$-matrix $\chi_n$ by $$\label{eq:inverse} \chi_n:=\frac{1}{\varphi_n(N)}=\prod_{i=0}^{n}\left(1-N^{d_i}\right)^{(-1)^{n-i+1}},$$ where $N$ is the following nilpotent $\widetilde \mu_n\times \widetilde \mu_n$-matrix $$N:= \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ \vdots & 0 & 1 & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots & 1 \\ 0 & \cdots & \cdots &\cdots & 0 \end{pmatrix}.$$ More concretely, $\chi_n$ is given by $$\chi_n= \begin{pmatrix} c_0 & c_1 & c_2 & \cdots & \cdots & c_{\widetilde \mu_n -2} & c_{\widetilde \mu_n-1}\\ 0 & c_0 & c_1 & c_2 & \ddots & \ddots & c_{\widetilde \mu_n -2}\\ 0 & 0 & c_0 & c_1 & c_2 & \ddots & \vdots\\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots &\ddots &c_0 & c_1& c_2\\ \vdots & \ddots & \ddots &\ddots &0 & c_0& c_1\\ 0 & \cdots & \cdots & \cdots & 0 & 0 &c_0 \end{pmatrix},$$ where $c_i$ is the coefficient of $t^i$ in $1/\varphi_n(t)$; $$\frac{1}{\varphi_n(t)}=c_0+c_1 t+\dots +c_{\widetilde \mu_n-1} t^{\widetilde \mu_n-1}+c_{\widetilde \mu_n}t^{\widetilde \mu_n}+\cdots.$$ Note that $c_0=1$ and for each positive integer $j$ we have $$\label{eq: inverse matrix} \sum_{i=0}^\infty c_i c'_{j-i}=0,$$ where we put $c'_{k}=0$ if $k<0$. Set $$M:=(-1)^n\chi_n^{-1}\chi_n^T,$$ and consider its “zeta function" $$\Phi_n(t):={\rm det}\left({\bf 1}-tM\right).$$ Note that $M$ will be considered as a matrix representation of the automorphism induced by $T^{-n}\S$ on the Grothendieck group $K_0({\rm HMF}^{L_f}_{S}(f))$. \[prop:M\] Let the notations be as above. 1. We have $$M=M_1^{\widetilde \mu_n}.$$ 2. 
We have $$\Phi_n(t)=\prod_{i=0}^{n}\left(1-t^{\frac{d_i}{e_i}}\right)^{(-1)^{n-i}e_i},\quad e_i:={\rm gcd}(d_i,\widetilde \mu_n).$$ In particular, $t^{\widetilde \mu_n}\cdot \Phi_n(t^{-1})$ is the characteristic polynomial of the Milnor monodromy of the singularity associated to the Berglund–Hübsch transpose $\widetilde f$ of $f$ [@BHu]: $$\widetilde f=\widetilde f (x_1,\dots ,x_n)=x_1^{a_1} +x_1 x_2^{a_2}+\dots+x_{n-1}x_n^{a_n}.$$ Using Proposition \[prop:M\_1\] (i) and the relation \[eq: inverse matrix\], we obtain by a direct calculation that $$\chi_n M_1^{\widetilde \mu_n}=\left(\left(\dots\left(\left(\chi_n M_1\right)M_1\right)\dots \right)M_1\right)=(-1)^n\chi_n^T,$$ which implies (i). Together with (i), Proposition \[prop:M\_1\] (ii) yields the first statement of (ii). The characteristic polynomial of the Milnor monodromy of the singularity associated to $\widetilde f$ can be computed using Varchenko’s method [@Va Theorem 4.1.3], which coincides with $t^{\widetilde \mu_n}\cdot \Phi_n(t^{-1})$. In order to obtain Proposition \[prop:M\] (i), one only needs a polynomial satisfying Proposition \[prop:M\_1\] (i) for some $n$; in particular, $1/\varphi_n(t)$ need not be a polynomial. For example, one can apply the same proof for $(1-t)^{n+1}$, the “zeta function” of the automorphism induced by $\O(-1)$ on the Grothendieck group of the bounded derived category $\D^b(\PP^n)$ of coherent sheaves on the projective space $\PP^n$, which admits a Lefschetz decomposition $\D^b(\PP^n)\cong \langle \O(0), \O(1),\dots, \O(n)\rangle$ with the polarization $\O(1)$. Note that $(1-t)^{-n-1}$ is the Poincaré series of the ($\ZZ$-graded) polynomial ring in $n+1$ variables, which also calculates the dimension of ${\rm Hom}(\O(0),\O(i))$, $i=0,\dots, n$. Equivalent results are obtained by Balnojan–Hertling [@BaHe], who consider them in order to reconstruct the spectral numbers of a singularity starting from an upper triangular matrix, its Seifert form. According to [@BaHe], Horocholyn [@Ho] seems to have been the first to consider such a $\widetilde \mu_n$-th root as $M_1$. As already commented in [@BaHe], Orlik–Randell should have known these facts, including their geometric meaning, although they did not state them explicitly in [@OR]. As far as we know, the use of the “zeta functions” and of the relation \[eq: inverse matrix\], which simplifies the proof considerably, and a natural algebraic explanation behind these facts are new. Now we can state our main theorem in this paper, which is motivated by homological mirror symmetry and the conjecture by Orlik–Randell [@OR Conjecture 4.1] in singularity theory. In 1977, Orlik–Randell found a $\ZZ$-basis of the middle homology group $H_{n-1}(\widetilde f^{-1}(1),\ZZ)$ of the Milnor fiber $\widetilde f^{-1}(1)$ in which the intersection form and the Milnor monodromy are represented by the matrices $(-1)^{(n-1)(n-2)/2}(\chi_n^{-1} +(\chi_n^{-1})^T)$ and $M^{-1}$. Then they conjectured that the $\ZZ$-basis is represented by a distinguished basis of vanishing cycles, whose Seifert matrix is given by $(-1)^{(n-1)(n-2)/2}\chi_n^{-1}$. \[thm: main\] Let the notations be as above. Set $$E_i:=\begin{cases} \left(S/(x_1, x_3, \dots, x_{2m+1})\right)^{stab}(-i\vec{x}_1) & \text{if}\quad n=2m+1,\ m\in \ZZ_{\ge 0},\\ \left(S/(x_2,x_4, \dots, x_{2m})\right)^{stab}(i\vec{x}_1) & \text{if}\quad n=2m,\ m\in\ZZ_{\ge 1}. \end{cases}$$ Then, for each $a\in \ZZ$, $(E_a,\dots, E_{\widetilde \mu_n-1+a})$ forms a full exceptional collection in ${\rm HMF}^{L_f}_S(f)$. 
Moreover, its Euler matrix coincides with $\chi_n$: $$\chi_n=(\chi(E_i,E_j))_{i,j=a}^{\widetilde \mu_n-1+a},\quad \chi(E_i,E_j):=\sum_{p\in\ZZ}(-1)^p\dim_\CC {\rm HMF}^{L_f}_S(f)(E_i,T^pE_j).$$ Once we have a full exceptional collection whose Euler matrix is given by $\chi_n$, then we also have another one with the Euler matrix $\chi_n^{-1}$. In this sense, Theorem \[thm: main\] solves the homological mirror symmetric dual statement of the Orlik–Randell’s conjecture. We immediately have the following For an invertible polynomial of chain type, the homological mirror symmetry at the level of the Grothendieck group holds. Namely, we have the following isomorphism of lattices $$\left(K_0({\rm HMF}^{L_f}_S(f)), \chi+\chi^T\right)\cong \left(H_{n-1}(\widetilde f^{-1}(1),\ZZ),(-1)^{\frac{(n-1)(n-2)}{2}}I\right),$$ where $\chi$ is the Euler form and $I$ is the intersection form. For some special cases, Theorem \[thm: main\] is already known. For example, for $f=x_1^{a_1}$ it follows from [@T1], and for $f=x_1^{2k}x_2+x_2^2$ ($k\ge 1$) and $f=x_1^2x_2+x_2^{a_2}x_3+x_3^2$ one can derive the above statement from the corresponding result in [@KST1]. If $n=2$, then the full exceptional collection in Theorem \[thm: main\] is strong, whose endomorphism algebra is a Nakayama algebra. \[thm:n=2\] For $n=2$, we have an equivalence of triangulated categories $${\rm HMF}^{L_f}_S(f)\cong \D^b{\rm mod}\text{-} A_{\widetilde \mu_2}(a_1),$$ where $A_{\widetilde \mu_2}(a_1)$ is the Nakayama algebra given by the equi-oriented $A_{\widetilde \mu_2}$-quiver $$1 \stackrel{x}{\longrightarrow} 2 \stackrel{x}{\longrightarrow} 3\stackrel{x}{\longrightarrow} \dots \stackrel{x}{\longrightarrow} \widetilde \mu_2-1 \stackrel{x}{\longrightarrow} \widetilde \mu_2,$$ with all relations $x^{a_1}=0$. This is a direct consequence of Theorem \[thm: main\] and Lemma \[lem:4.1\] below for $n=2$. Proof ===== We will apply our key lemma, Lemma \[lem:key\], to prove our main theorem. Morphisms --------- Let the notations be as in the previous sections. Since the object $E_i$ is the stabilization of an $L_f$-graded $S$-module, we obtain the following two lemmas by a direct calculation. \[lem:4.1\] Suppose that $n=2m$, $m\in \ZZ_{\ge 1}$. 1. For each $i\in \ZZ$, we have a natural isomorphism of $L_f$-graded $\CC$-algebras $$\bigoplus_{\vec{l}\in L_f}{\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l})) \cong S/(x_2, x_4,\dots, x_{2m}, x_1^{a_1},x_3^{a_3},\dots, x_{2m-1}^{a_{2m-1}}),$$ where the product of the LHS is the one induced by the composition map $$\begin{aligned} & &{\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l}))\times {\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l}'))\\ &\cong &{\rm HMF}^{L_f}_{S}(f)(E_i(\vec{l}'), E_i(\vec{l}+\vec{l}'))\times {\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l}))\\ &\longrightarrow & {\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l}+\vec{l}')).\end{aligned}$$ 2. For each $i\in \ZZ$, we have $$\bigoplus_{\vec{l}\in L_f}{\rm HMF}^{L_f}_{S}(f)(E_i, TE_i(\vec{l}))=0.$$ We can write $f$ as $$f=x_2(x_1^{a_1}+x_2^{a_2-1}x_3)+x_4(x_3^{a_3}+x_4^{a_4-1}x_5)+\dots +x_{2m}(x_{2m-1}^{a_{2m-1}}+x_{2m}^{a_{2m}-1})$$ to describe the stabilization of $S/(x_2, x_4,\dots, x_{2m})$. By this expression, all the statements are straightforward since $a_i\ge 2$ for all $i=1,\dots, 2m$. \[lem:4.2\] Suppose that $n=2m+1$, $m\in \ZZ_{\ge 0}$. 1. 
For each $i\in \ZZ$, we have a natural isomorphism of $L_f$-graded $\CC$-algebras $$\bigoplus_{\vec{l}\in L_f}{\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l})) \cong S/(x_1, x_3,\dots, x_{2m+1}, x_2^{a_2},x_4^{a_4},\dots, x_{2m}^{a_{2m}}),$$ 2. For each $i\in \ZZ$, $\oplus_{\vec{l}\in L_f}{\rm HMF}^{L_f}_{S}(f)(E_i, TE_i(\vec{l}))$ has a structure of an $L_f$-graded module over $\oplus_{\vec{l}\in L_f}{\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l}))$ by the composition map $$\begin{aligned} & &{\rm HMF}^{L_f}_{S}(f)(E_i, TE_i(\vec{l}))\times {\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l}'))\\ &\cong &{\rm HMF}^{L_f}_{S}(f)(E_i(\vec{l}'), TE_i(\vec{l}+\vec{l}'))\times {\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{l}))\\ &\longrightarrow & {\rm HMF}^{L_f}_{S}(f)(E_i, TE_i(\vec{l}+\vec{l}')), \end{aligned}$$ which is free of rank one generated by an element in ${\rm HMF}^{L_f}_{S}(f)(E_i, TE_i(-\vec{x_1}))$. Moreover, the composition map $$\begin{aligned} & &{\rm HMF}^{L_f}_{S}(f)(E_i, TE_i(\vec{l}))\times {\rm HMF}^{L_f}_{S}(f)(E_i, TE_i(\vec{l}'))\\ &\cong &{\rm HMF}^{L_f}_{S}(f)(TE_i(\vec{l}'), T^2E_i(\vec{l}+\vec{l}'))\times {\rm HMF}^{L_f}_{S}(f)(E_i, TE_i(\vec{l}))\\ &\longrightarrow & {\rm HMF}^{L_f}_{S}(f)(E_i, E_i(\vec{f}+\vec{l}+\vec{l}'))\end{aligned}$$ is zero for all $\vec{l},\vec{l}'\in L_f$. We can write $f$ as $$f=x_1(x_1^{a_1-1}x_2)+x_3(x_2^{a_2}+x_3^{a_3-1}x_4)+x_5(x_4^{a_4}+x_5^{a_5-1}x_6)+\dots +x_{2m+1}(x_{2m}^{a_{2m}}+x_{2m+1}^{a_{2m+1}-1})$$ to describe the stabilization of $S/(x_1, x_3,\dots, x_{2m+1})$. By this expression, all the statements are straightforward since $a_i\ge 2$ for all $i=1,\dots, 2m+1$. The following is a direct consequence of the above lemmas. For each $a\in\ZZ$, $(E_a,\dots, E_{\widetilde \mu_n-1+a})$ forms an exceptional collection. Since $\vec{x}_i= (-1)^{i-1}d_{i-1}\vec{x}_1$ in $L_f/\ZZ\vec{f}$ and $(\vec{f})=T^2$, by calculating the generating function of dimensions of spaces of morphisms, we obtain the following corollaries. Suppose that $n=2m$, $m\in \ZZ_{\ge 1}$. We have $$\begin{aligned} (\chi(E_i,E_j))_{i,j=a}^{\widetilde \mu_n-1+a}&=& \frac{1-N^{a_1}}{1-N}\cdot \frac{1-(N^{d_2})^{a_3}}{1-N^{d_2}}\cdot \dots \cdot \frac{1-(N^{d_{2m-2}})^{a_{2m-1}}}{1-N^{d_{2m-2}}}\\ &=&\prod_{i=0}^{n}\left(1-N^{d_i}\right)^{(-1)^{n-i+1}}=\frac{1}{\varphi_n(N)}=\chi_n.\end{aligned}$$ Suppose that $n=2m+1$, $m\in \ZZ_{\ge 0}$. We have $$\begin{aligned} (\chi(E_i,E_j))_{i,j=a}^{\widetilde \mu_n-1+a}&=&(1-N)\cdot \frac{1-(N^{d_1})^{a_2}}{1-N^{d_1}}\cdot \frac{1-(N^{d_3})^{a_4}}{1-N^{d_3}}\cdot \dots \cdot \frac{1-(N^{d_{2m-1}})^{a_{2m}}}{1-N^{d_{2m-1}}}\\ &=&\prod_{i=0}^{n}\left(1-N^{d_i}\right)^{(-1)^{n-i+1}}=\frac{1}{\varphi_n(N)}=\chi_n.\end{aligned}$$ In these corollaries, we used the fact that $N^{d_n}=0$ since $d_n> \widetilde \mu_n$. Grading shifts -------------- For each $a\in\ZZ$, denote by $\E_a$ the full triangulated subcategory $\langle E_a,\dots, E_{\widetilde \mu_n-1+a}\rangle$ of ${\rm HMF}^{L_f}_{S}(f)$. We shall show that $\E_a$ is closed under the grading shift $(\vec{l})$ for all $\vec{l}\in L_f$, which is done inductively. Since $L_f/\ZZ\vec{f}$ is generated by $\vec{x}_1$ and $(\vec{f})=T^2$, we only need to show that it is closed under $(\vec{x}_1)$. \[lem:4.5\] Suppose that $n=2m$, $m\in \ZZ_{\ge 1}$. 
For each $i\in\ZZ$, set $$E'_i:=\left(S/(x_1, x_2,x_4, \dots, x_{2m})\right)^{stab}(i\vec{x}_1).$$ Then, there is an exact triangle $$E_{i-1}\longrightarrow E_{i}\longrightarrow E'_i\longrightarrow TE_{i-1}.$$ This is straightforward from the definition of $E_i$ and Lemma \[lem:4.1\]. \[lem:4.6\] Suppose that $n=2m$, $m\in \ZZ_{\ge 1}$. Define $f'\in S':=\CC[x_2,\dots, x_n]$ by $$f'=x_2^{a_2}x_3 + \cdots + x_{2m-1}^{a_{2m-1}}x_{2m} + x_{2m}^{a_{2m}},$$ and set $$\widetilde \mu'_{2m-1}:=\frac{\widetilde \mu_{2m}-1}{a_1}=a_2\dots a_{2m}-a_2\dots a_{2m-1}+\dots -a_2a_3+a_2-1.$$ Then $(E'_{a_1+a},E'_{2a_1+a}\dots E'_{\widetilde \mu'_{2m-1}\cdot a_1+a})$ forms an exceptional collection for each $a\in\ZZ$. If we further assume that Theorem \[thm: main\] holds for $n=2m-1$, then there is a triangulated equivalence $$\langle E'_{a_1+a},E'_{2a_1+a},\dots, E'_{\widetilde \mu'_{2m-1}\cdot a_1+a} \rangle \cong {\rm HMF}^{L_{f'}}_{S'}(f').$$ In particular, it is closed under the grading shift $(\vec{x}_2)$. Identify $L_{f'}$ with the subgroup of $L_f$ generated by $\vec{x}_2,\dots, \vec{x}_{2m}, \vec{f}$. The following expression of $f$ for the description of the stabilization $E'_i$ $$f=x_1(x_1^{a_1-1}x_2)+x_2(x_2^{a_2-1}x_3)+x_4(x_3^{a_3}+x_4^{a_4-1}x_5)+\dots +x_{2m}(x_{2m-1}^{a_{2m-1}}+x_{2m}^{a_{2m}-1})$$ enables us to calculate the spaces of morphisms $$\bigoplus_{\vec{l}'\in L_{f'}}{\rm HMF}^{L_f}_{S}(f)(E'_i, E'_i(\vec{l}')),\quad \bigoplus_{\vec{l}'\in L_{f'}}{\rm HMF}^{L_f}_{S}(f)(E'_i, TE'_i(\vec{l}')),$$ which turn out to be the same as those in Lemma \[lem:4.2\] for $f'$. The rest is clear. Suppose that $n=2m$, $m\in \ZZ_{\ge 1}$ and that Theorem \[thm: main\] holds for $n=2m-1$. Then, $\E_a$ is closed under the grading shift $(\vec{x}_1)$. It is straightforward from Lemma \[lem:4.5\] that $E'_{a_1+a},E'_{2a_1+a},\dots, E'_{\widetilde \mu'_{2m-1}\cdot a_1+a}$ is in the subcategory since $a_1\ge 1$ and $\widetilde \mu'_{2m-1}\cdot a_1=\widetilde \mu_{2m}-1$. Then, Lemma \[lem:4.6\] implies that $E'_{a}$ is also in the subcategory since $a_1\vec{x}_1=-\vec{x}_2$ in $L_f/\ZZ\vec{f}$. Thus we see that $E_{a-1}\in \E_a$ by Lemma \[lem:4.5\] with $i=a$. Repeating this argument and using the fact that $d_{2m}\vec{x_1}=\vec{0}$ in $L_f/\ZZ\vec{f}$ and $(\vec{f})=T^2$, we obtain the statement. \[lem:4.8\] Suppose that $n=2m+1$, $m\in \ZZ_{\ge 1}$. For each $i\in\ZZ$ and $j=1,\dots, a_1$, set $$E''_{i,j}:=\left(S/(x_1^j, x_3,x_5, \dots, x_{2m+1})\right)^{stab}(-i\vec{x}_1).$$ Then, there is an exact triangle $$E''_{i+1,j}\longrightarrow E''_{i,j+1}\oplus E''_{i+1,j-1}\longrightarrow E''_{i,j}\longrightarrow TE''_{i+1,j},$$ where $E''_{i,0}:=0$ and $E''_{i,a_1+1}:=0$. It follows from a direct calculation. For each $i\in\ZZ$, set $E''_i:=E''_{i,a_1}$. Note that we have $$E''_i\cong \left(S/(x_1^{a_1}+x_2^{a_2-1}x_3, x_3,x_5, \dots, x_{2m+1})\right)^{stab}(-i\vec{x}_1).$$ \[lem:4.9\] Suppose that $n=2m+1$, $m\in \ZZ_{\ge 1}$. Define $f''\in S'':=\CC[x_3,\dots, x_n]$ by $$f''=x_3^{a_3}x_4 + \cdots + x_{2m}^{a_{2m}}x_{2m+1} + x_{2m+1}^{a_{2m+1}},$$ and set $$\widetilde \mu''_{2m-1}:=\frac{\widetilde \mu_{2m+1}-a_1+1}{d_2}=a_3\dots a_{2m+1}-a_3\dots a_{2m}+\dots -a_3a_4+a_3-1.$$ Then $(E''_{d_2+a_1-2+a},E''_{2d_2+a_1-2+a},\dots, E''_{\widetilde \mu''_{2m-1}\cdot d_2+a_1-2+a})$ forms an exceptional collection for each $a\in\ZZ$. 
If we further assume that Theorem \[thm: main\] holds for $n=2m-1$, then there is a triangulated equivalence $$\langle E''_{d_2+a_1-2+a},E''_{2d_2+a_1-2+a},\dots, E''_{\widetilde \mu''_{2m-1}\cdot d_2+a_1-2+a} \rangle \cong {\rm HMF}^{L_{f''}}_{S''}(f'').$$ In particular, it is closed under the grading shift $(\vec{x}_3)$. Identify $L_{f''}$ with the subgroup of $L_f$ generated by $\vec{x}_3,\dots, \vec{x}_{2m+1}, \vec{f}$. The following expression of $f$ for the description of the stabilization $E''_i$ $$f=(x_1^{a_1}+x_2^{a_2-1}x_3)x_2 +x_3(x_3^{a_3-1}x_4)+x_5(x_4^{a_4}+x_5^{a_5-1}x_6)+\dots +x_{2m+1}(x_{2m}^{a_{2m}}+x_{2m+1}^{a_{2m+1}-1})$$ enables us to calculate the spaces of morphisms $$\bigoplus_{\vec{l}''\in L_{f''}}{\rm HMF}^{L_f}_{S}(f)(E''_i, E''_i(\vec{l}'')),\quad \bigoplus_{\vec{l}''\in L_{f''}}{\rm HMF}^{L_f}_{S}(f)(E''_i, TE''_i(\vec{l}'')),$$ which turn out to be the same as those in Lemma \[lem:4.2\] for $f''$. The rest is clear. Suppose that $n=2m+1$, $m\in \ZZ_{\ge 1}$ and that Theorem \[thm: main\] holds for $n=2m-1$. Then, $\E_a$ is closed under the grading shift $(\vec{x}_1)$. Lemma \[lem:4.8\] implies that the objects $E''_{d_2+a_1-2+a}$, $E''_{2d_2+a_1-2+a}$, $\dots$, $E''_{\widetilde \mu''_{2m-1}\cdot d_2+a_1-2+a}$ are in the subcategory since $E''_{i,1}=E_i$ and $\widetilde \mu''_{2m-1}\cdot d_2+a_1-2+a=\widetilde \mu_{2m+1}-1+a$. Then, it follows from Lemma \[lem:4.9\] that $E''_{a_1-2+a}$ is also in the subcategory since $d_2\vec{x}_1=\vec{x}_3$ in $L_f/\ZZ\vec{f}$. Thus we see that $E_{a-1}\in \E_a$ by the use of Lemma \[lem:4.8\] with $i=a,\dots, a_1-2+a$. Repeating this argument and using the fact that $d_{2m}\vec{x_1}=\vec{0}$ in $L_f/\ZZ\vec{f}$ and $(\vec{f})=T^2$, we obtain the statement. $\E_a$ contains $\CC^{stab}$ ---------------------------- Since $\E_a$ is closed under the grading shift $(\vec{l})$ for all $\vec{l}\in L_f$, it is enough to show that $\CC^{stab}(\vec{l})\in\E_a$ for some $\vec{l}\in L_f$. If $n=1$, then it is obvious that $\E_a$ contains $\CC^{stab}(-a\vec{x}_1)$. Suppose that $n=2m+1$, $m\in \ZZ_{\ge 1}$. For each $k=1,\dots, m$, there is an exact triangle $$\begin{aligned} & &\left(S/(x_1, x_2,\dots, x_{2k-2}, x_{2k-1}, x_{2k+1},x_{2k+3},\dots, x_{2m+1})\right)^{stab}(-\vec{x}_{2k})\\ & &\longrightarrow \left(S/(x_1, x_2,\dots, x_{2k-2}, x_{2k-1}, x_{2k+1},x_{2k+3},\dots, x_{2m+1})\right)^{stab}\\ & &\longrightarrow \left(S/(x_1, x_2,\dots, x_{2k-2}, x_{2k-1}, x_{2k}, x_{2k+1},x_{2k+3},\dots, x_{2m+1})\right)^{stab}\\ & &\longrightarrow T\left(S/(x_1, x_2,\dots, x_{2k-2}, x_{2k-1}, x_{2k+1},x_{2k+3},\dots, x_{2m+1})\right)^{stab}(-\vec{x}_{2k}).\end{aligned}$$ Note that $-\vec{x}_2-\vec{x}_4-\dots -\vec{x}_{2m}=(d_1+d_3+\dots +d_{2m-1})\vec{x}_1$ in $L_f/\ZZ\vec{f}$. Since $a_i\ge 2$ for all $i=1,\dots, 2m+1$, it follows that $$\begin{aligned} \widetilde \mu_{2m+1}&=&d_{2m-1} a_{2m}(a_{2m+1}-1)+\widetilde \mu_{2m-1}\\ &>&d_{2m-1}+\widetilde \mu_{2m-1}\\ &>&d_{2m-1}+d_{2m-3}+\widetilde \mu_{2m-3}\\ & >& \dots > d_{2m-1}+d_{2m-3}+\dots +d_3+d_1.\end{aligned}$$ Hence, the above exact triangle yields that $\CC^{stab}(\vec{l})\in \E_a$ for some $\vec{l}\in L_f$. Similarly, suppose that $n=2m$, $m\in \ZZ_{\ge 1}$. 
For each $k=1,\dots, m$, there is an exact triangle $$\begin{aligned} & &\left(S/(x_1, x_2,\dots, x_{2k-3}, x_{2k-2}, x_{2k}, x_{2k+2},\dots, x_{2m})\right)^{stab}(-\vec{x}_{2k-1})\\ & &\longrightarrow \left(S/(x_1, x_2,\dots, x_{2k-3}, x_{2k-2}, x_{2k}, x_{2k+2},\dots, x_{2m})\right)^{stab}\\ & &\longrightarrow \left(S/(x_1, x_2,\dots, x_{2k-3}, x_{2k-2}, x_{2k-1}, x_{2k}, x_{2k+2},\dots, x_{2m})\right)^{stab}\\ & &\longrightarrow T\left(S/(x_1, x_2,\dots, x_{2k-3}, x_{2k-2}, x_{2k}, x_{2k+2},\dots, x_{2m})\right)^{stab}(-\vec{x}_{2k-1}).\end{aligned}$$ Note that $-\vec{x}_1-\vec{x}_3-\dots -\vec{x}_{2m-1}=-(d_0+d_2+\dots +d_{2m-2})\vec{x}_1$ in $L_f/\ZZ\vec{f}$. Since $a_i\ge 2$ for all $i=1,\dots, 2m$, it follows that $$\begin{aligned} -\widetilde \mu_{2m}&=&-d_{2m-2} a_{2m-1}(a_{2m}-1)-\widetilde \mu_{2m-2}\\ &<&-d_{2m-2}-\widetilde \mu_{2m-2}\\ &<&-d_{2m-2}-d_{2m-4}-\widetilde \mu_{2m-4}\\ & <& \dots < -d_{2m-2}-d_{2m-4}-\dots -d_2-d_0.\end{aligned}$$ Hence, the above exact triangle yields that $\CC^{stab}(\vec{l})\in \E_a$ for some $\vec{l}\in L_f$. Thus we have finished the proof of Theorem \[thm: main\]. [\[AR\]]{} M. Auslander, I. Reiten, *Almost split sequences for Z-graded rings*, in: Singularities, Representation of Algebras, and Vector Bundles, Lambrecht, 1985, in: Lecture Notes in Math., vol. [**1273**]{}, Springer, Berlin, 1987, pp. 232–243. S. Balnojan, C. Hertling, *Conjectures on spectral numbers for upper triangular matrices and for singularities*, arXiv:1712.00388. P. Berglund, M. Henningson, *Landau-Ginzburg orbifolds, mirror symmetry and the elliptic genus*, Nuclear Physics B [**433**]{} (1995), 311–332. P. Berglund, T. Hübsch, *A generalized construction of mirror manifolds*, Nuclear Physics B [**393**]{} (1993), 377–391. T. Dyckerhof, *Compact generators in categories of matrix factorizations*, Duke Math. J., [**159**]{} (2011), 223–274. W. Ebeling, S. M. Gusein-Zade, A. Takahashi, *Orbifold E-functions of dual invertible polynomials*, J. Geom. Phys. [**106**]{} (2016), 184–191. W. Ebeling, A. Takahashi, *Strange duality of weighted homogeneous polynomials*, Compos. Math. [**147**]{} (2011), no. 5, 1413–1433. D. Eisenbud, *Homological algebra on a complete intersection, with an application to group representations*, Trans. AMS., [**260**]{} (1980) 35–64. M. Futaki and K. Ueda, *Homological mirror symmetry for Brieskorn-Pham singularities*. Sel. Math. New Ser., [**17**]{} (2011), 435–452. M. Futaki and K. Ueda, *Homological mirror symmetry for singularities of type $D$*, Math. Z., [**273**]{} (2013), 633–652. D. Happel, *Triangulated categories in the representation theory of finite-dimensional algebras*, London Mathematical Society Lecture Note Series, 119. Cambridge University Press, Cambridge, 1988. x+208 pp. Y. Hirano, G. Ouchi, *Derived factorization categories of non-Thom–Sebastiani-type sum of potentials*, arXiv:1809.09940. S. Horocholyn, *On the Stokes matrices of the $tt^*$ Toda equation*, Tokyo Journal of Mathematics [**40**]{} (2017), 185–202. M. Habermann, J. Smith, *Homological Berglund–Hübsch mirror symmetry for curve singularities*, arXiv:1903.01351. H. Kajiura, K. Saito, A. Takahashi, *Matrix factorization and representations of quivers. II. Type ADE case*, Adv. Math. [**211**]{}, no. 1, 327–362 (2007). H. Kajiura, K. Saito, A. Takahashi, *Triangulated Categories of matrix Factorizations for regular systems of weights with $\varepsilon=-1$*. Adv. in Math. [**220**]{} (2009), 1602–1654. M. Kreuzer, H. 
Skarke, *On the classification of quasihomogeneous functions*, Commun. Math. Phys. [**150**]{} (1992), 137–147. A. Kuznetsov, M. Smirnov, *On residual categories for Grassmannians*, arXiv:1802.08097. H. Lenzing, J. A. de la Peña, *Extended canonical algebras and Fuchsian singularities*, Math. Z. [**268**]{} (2011), 143–167. D. Orlov, *Triangulated categories of singularities and D-branes in Landau–Ginzburg models*, Tr. Mat. Inst. Steklova (Algebr. Geom. Metody, Svyazi i Prilozh.) [**246**]{} (2004) 240–262; translation in: Proc. Steklov Inst. Math. [**246**]{} (3) (2004) 227–248. D. Orlov, *Derived categories of coherent sheaves and triangulated categories of singularities*. Algebra, arithmetic, and geometry: in honor of Yu. I. Manin. Vol. II, pp. 503–531, Progr. Math. [**270**]{}. Birkhäuser Boston, Inc., Boston, MA, 2009. P. Orlik and R. Randell, *The Monodromy of Weighted Homogeneous Singularities*, Inventiones Math., [**39**]{}, 199–211 (1977). A. Takahashi, *Matrix factorizations and representations of quivers I*, arXiv:0506347. A. Takahashi, *Weighted Projective Lines Associated to Regular Systems of Weights Dual Type*, Advanced Studies of Pure Mathematics [**59**]{} (2010), 371–388. A. Takahashi, *HMS for isolated hypersurface singularities*, talk at the “Workshop on Homological Mirror Symmetry and Related Topics”, January 19–24, 2009, University of Miami; the PDF file is available from https://math.berkeley.edu/~auroux/frg/miami09-notes/. K. Ueda, *Homological Mirror Symmetry and Simple Elliptic Singularities*. arXiv:math/0604361. A. N. Varchenko, *Zeta-function of monodromy and Newton’s diagram*, Invent. Math. [**37**]{} (1976), 253–262.
--- abstract: 'For given computational resources, the accuracy of plasma simulations using particles is mainly held back by the noise due to limited statistical sampling in the reconstruction of the particle distribution function. A method based on wavelet analysis is proposed and tested to reduce this noise. The method, known as wavelet based density estimation (WBDE), was previously introduced in the statistical literature to estimate probability densities given a finite number of independent measurements. Its novel application to plasma simulations can be viewed as a natural extension of the finite size particles (FSP) approach, with the advantage of estimating more accurately distribution functions that have localized sharp features. The proposed method preserves the moments of the particle distribution function to a good level of accuracy, has no constraints on the dimensionality of the system, does not require an a priori selection of a global smoothing scale, and it is able to adapt locally to the smoothness of the density based on the given discrete particle data. Most importantly, the computational cost of the denoising stage is of the same order as one time step of a FSP simulation. The method is compared with a recently proposed proper orthogonal decomposition based method, and it is tested with three particle data sets that involve different levels of collisionality and interaction with external and self-consistent fields.' author: - Romain Nguyen van yen - 'Diego del-Castillo-Negrete' - Kai Schneider - Marie Farge - Guangye Chen title: | Wavelet-based density estimation for noise reduction\ in plasma simulations using particles --- Introduction ============ Particle-based numerical methods are routinely used in plasma physics calculations [@Birdsall1985; @Hockney1988]. In many cases these methods are more efficient and simpler to implement than the corresponding continuum Eulerian methods. However, particle methods face the well known statistical sampling limitation of attempting to simulate a physical system containing $N$ particles using $N_p \ll N$ computational particles. Particle methods do not seek to reproduce the exact individual behavior of the particles, but rather to approximate statistical macroscopic quantities like density, current, and temperature. These quantities are determined from the particle distribution function. Therefore, a problem of relevance for the success of particle-based simulations is the reconstruction of the particle distribution function from discrete particle data. The difference between the distribution function reconstructed from a simulation using $N_p$ particles and the exact distribution function gives rise to a discretization error generically known as “particle noise” due to its random-like character. Understanding and reducing this error is a complex problem of importance in the validation and verification of particle codes; see for example Refs. [@Nevins2005; @Krommes2007; @McMillan2008] and references therein for a discussion in the context of gyrokinetic calculations. One obvious way to reduce particle noise is by increasing the number of computational particles. However, the unfavorable scaling of the error with the number of particles, $\sim 1/\sqrt{N_p}$ [@Krommes1993; @Aydemir1994], puts a severe limitation on this straightforward approach. 
This has motivated the development of various noise reduction techniques including finite size particles (FSP) [@Hockney1966; @ABL70], Monte-Carlo methods [@Aydemir1994], Fourier-filtering [@Jolliet2007], coarse-graining [@Chen2007], Krook operators [@McMillan2008], smooth interpolation [@Shadwick2008], low noise collision operators [@Lewandowski2005], and Proper Orthogonal Decomposition (POD) methods [@delCastillo2008] among others. In the present paper we propose a wavelet-based method for noise reduction in the reconstruction of particle distribution functions from particle simulation data. The method, known as Wavelet Based Density Estimation (WBDE), was originally introduced in Ref. [@Donoho1996] in the context of statistics to estimate probability densities given a finite number of independent measurements. However, to our knowledge, this method has not been applied before to particle-based computations. WBDE, as used here, is based on truncations of the wavelet representation of the Dirac delta function associated with each particle. The method yields almost optimal results for functions with unknown local smoothness without compromising computational efficiency, assuming that the particles’ coordinates are statistically independent. As a first step in the application of the WBDE method to plasma particle simulations, we limit attention to “passive denoising”. That is, the WBDE method is treated as a post-processing technique applied to independently generated particle data. The problem of “active denoising”, e.g. the application of WBDE methods in the evaluation of self-consistent fields in particle in cell simulations, will not be addressed. This simplification will allow us to assess the efficiency of the proposed noise reduction method in a simple setting. Another simplification pertains to the dimensionality. Here, for the sake of simplicity, we limit attention to the reconstruction and denoising problem in two dimensions. However, the extension of the WBDE method to higher dimensions is in principle straightforward. Collisions, or the absence of them, play an important role in plasma transport problems. Particle methods handle the collisional and non-collisional parts of the dynamics differently. Fokker-Planck-type collision operators are typically introduced in particle methods using Langevin-type stochastic differential equations. On the other hand, the non-collisional part of the dynamics is described using deterministic ordinary differential equations. Collision-dominated problems tend to wash out small scale structures whereas collisionless problems typically develop fine scale filamentary structures in phase space. Therefore, it is important to test the dependence of the efficiency of denoising reconstruction methods on the level of collisionality. Here we test the WBDE method in strongly collisional, weakly collisional and collisionless regimes. For the strongly collisional regime we consider particle data of force-free collisional relaxation involving energy and pitch-angle scattering. The weakly collisional regime is illustrated using guiding-center particle data of a magnetically confined plasma in toroidal geometry. The collisionless regime is studied using particle in cell (PIC) data corresponding to bump-on-tail and two-stream instabilities in the Vlasov-Poisson system. Beyond the role of collisions, the data sets that we are considering open the possibility of exploring the role of external and self-consistent fields in the reconstruction of the particle density. 
In the collisional relaxation problem no forces act on the particles, in the guiding-center problem particles interact with an external magnetic field, and in the Vlasov-Poisson problem particle interactions are incorporated through a self-consistent electrostatic mean field. One of the goals of this paper is to compare the WBDE method with the Proper Orthogonal Decomposition (POD) density reconstruction method proposed in Ref. [@delCastillo2008]. The rest of the paper is organized as follows. In Sect. II we review the main properties of kernel density estimation (KDE) and show its relationship with finite size particles (FSP). We then review basic notions on orthogonal wavelet and multiresolution analysis and outline a step by step algorithm for WBDE. Also, for completeness, in this section we include a brief description of the POD reconstruction method proposed in Ref. [@delCastillo2008]. Section III discusses applications of the WBDE method and the comparison with the POD method. We start by post-processing a simulation of plasma relaxation by random collisions against a background thermostat. We then turn to a $\delta f$ Monte-Carlo simulation in toroidal geometry, whose phase space has been reduced to two dimensions. Finally, we analyze the results of particle-in-cell (PIC) simulations of a 1D Vlasov-Poisson plasma. The conclusions are presented in Sec. IV. Methods ======= This section presents the wavelet-based density estimation (WBDE) algorithm. We start by reviewing basic ideas on kernel density estimation (KDE) which is closely related to the use of finite size particles (FSP) in PIC simulations. Following this, we give a brief introduction to wavelet analysis and discuss the WBDE algorithm. For completeness, we also include a brief summary of the POD approach. Kernel density estimation ------------------------- Given a sequence of independent and identically distributed measurements, the nonparametric density estimation problem consists in finding the underlying probability density function (PDF), with no a priori assumptions on its functional form. Here we discuss general ideas on this difficult problem for which a variety of statistical methods have been developed. Further details can be found in the statistics literature, e.g. Ref. [@Silverman1986]. Consider a number $N_p$ of statistically independent particles with phase space coordinates $(\mathbf{X}_n)_{1\leq n \leq N_p}$ distributed in $\mathbb{R}^d$ according to a PDF $f$. This data can come from a PIC or a Monte-Carlo, full $f$ or $\delta f$ simulation. Formally, the sample PDF can be written as $$\label{dirac_estimate} f^\delta(\mathbf{x}) = \frac{1}{N_p}\sum_{n=1}^{N_p} \delta(\mathbf{x}-\mathbf{X}_n)$$ where $\delta$ is the Dirac distribution. Because of its lack of smoothness, Eq. (\[dirac\_estimate\]) is far from the actual distribution $f$ according to most reasonable definitions of the error. Moreover, the dependence of $f^\delta$ on the statistical fluctuations in $(\mathbf{X}_n)$ can lead to an artificial increase of the collisionality of the plasma. The simplest method to introduce some smoothness in $f^\delta$ is to use a histogram. Consider a tiling of the phase space by a Cartesian grid with $N_g^d$ cells. Let $\left\{B_\lambda\right\}_{\lambda\in\Lambda}$ denote the set of all cells with characteristic function $\chi_\lambda$ defined as $\chi_\lambda=1$ if $x \in B_\lambda$ and $\chi_\lambda=0$ otherwise. 
Then the histogram corresponding to the tiling is $$\label{histogram_estimate} f^H(\mathbf{x}) = \sum_{\lambda \in \Lambda} \left(\frac{1}{N_p}\sum_{n=1}^{N_p} \chi_\lambda(\mathbf{X}_n) \right) \chi_\lambda(\mathbf{x})$$ which can also be viewed as the orthogonal projection of $f^\delta$ on the space spanned by the $\chi_\lambda$. The main difference between $f^{\delta}$ and $f^H$ is that the latter cannot vary at scales finer than the grid scale, which is of order $N_g^{-1}$. By choosing $N_g$ small enough, it is therefore possible to reduce the variance of $f^H$ to very low levels, but the estimate then becomes more and more biased towards a piecewise constant function, which is not smooth enough to be the true density. Histograms correspond to the nearest grid point (NGP) charge assignment scheme used in the early days of plasma physics computations [@Hockney1966]. One of the most popular methods to achieve a higher level of smoothness is kernel density estimation (KDE) [@Parzen1962]. Given $(\mathbf{X}_n)_{1\leq n \leq N_p}$, the kernel estimate of $f$ is defined as $$\label{kernel_estimate} f^K(\mathbf{x}) = \frac{1}{N_p}\sum_{n=1}^{N_p} K(\mathbf{x}-\mathbf{X}_n)\, ,$$ where the smoothing kernel $K$ is a positive definite, normalized, $\int K =1$, function. Equation (\[kernel\_estimate\]) corresponds to the convolution of $K$ with the Dirac delta measure corresponding to each particle. A typical example is the Gaussian kernel $$\label{gaussian_kernel} K_h(\mathbf{x}) = \frac{1}{(\sqrt{2\pi}h)^d} e^{-\frac{\Vert \mathbf{x} \Vert^2}{2h^2}}$$ where the so-called “bandwidth”, or smoothing scale, $h$, is a free parameter. The optimal smoothing scale depends on how the error is measured. For example, in the one dimensional case, to minimize the mean $L^2$-error between the estimate and the true density, the smoothing volume $h^d$ should scale like ${N_p}^{-\frac{1}{5}}$, and the resulting error scales like $N_p^{-\frac{2}{5}}$ [@Silverman1986]. As in the case of histograms, the choice of $h$ relies on a trade-off between variance and bias. In the context of plasma physics simulations the kernel $K$ corresponds to the charge assignment function [@Hockney1988]. A significant effort has been devoted to the choice of the function $K$ since it has a strong impact on computational efficiency and on the conservation of global quantities. Concerning $h$, it has been shown that it should not be much larger than the Debye length $\lambda_D$ of the plasma to obtain a realistic and stable simulation [@Birdsall1985]. Given a certain amount of computational resources, the general tendency has thus been to reduce $h$ as far as possible in order to fit more Debye lengths inside the simulation domain, which means that the effort has been concentrated on reducing the bias term in the error. Since the force fields depend on $f$ through integral equations, like the Poisson equation, that tend to reduce the high wavenumber noise, we do not expect the disastrous scaling $h \propto {N_p}^{-\frac{1}{5}}$, which would mean $N_p \propto \lambda_D^{5d}$ in $d$ dimensions, to hold. Nevertheless, the problem remains that if we want to preserve high resolution features of $f$ or of the electromagnetic fields, we need to reduce $h$, and therefore greatly increase the number of particles to prevent the simulation from drowning in noise. Bandwidth selection has long been recognized as the central issue in kernel density estimation [@Chiu1991]. 
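To make the preceding formulas concrete, the short Python sketch below evaluates the Gaussian kernel estimate of Eqs. (\[kernel\_estimate\])–(\[gaussian\_kernel\]) on a one-dimensional grid, with the classical $N_p^{-1/5}$ bandwidth scaling quoted above. It is only an illustration of the notation, not one of the production codes analyzed later, and the sample, grid and bandwidth choices are arbitrary.

```python
import numpy as np

def gaussian_kde_1d(samples, grid, h):
    """Evaluate f^K(x) = (1/N_p) * sum_n K_h(x - X_n) with the Gaussian
    kernel K_h and bandwidth h, cf. Eqs. (kernel_estimate)-(gaussian_kernel)."""
    diffs = grid[:, None] - samples[None, :]        # pairwise differences x - X_n
    kernels = np.exp(-0.5 * (diffs / h) ** 2) / (np.sqrt(2.0 * np.pi) * h)
    return kernels.mean(axis=1)                     # average over the N_p particles

# Illustrative usage: particles drawn from a known density
rng = np.random.default_rng(0)
N_p = 10_000
X = rng.normal(size=N_p)
h = N_p ** (-1.0 / 5.0)        # the O(N_p^{-1/5}) bandwidth scaling quoted above
x = np.linspace(-4.0, 4.0, 256)
f_K = gaussian_kde_1d(X, x, h)
```

This brute-force evaluation costs $O(N N_p)$ operations for $N$ grid points; in an actual PIC code the kernel plays the role of the charge assignment function and is only evaluated locally.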
We are not aware of a theoretical or numerical prediction of the optimal value of $h$ taking into account the noise term. To bypass this difficulty, it is possible to use new statistical methods which do not force us to choose a global smoothing parameter. Instead, they adapt locally to the behavior of the density $f$ based on the available data. Wavelet-based density estimation, which we will introduce in the next two sections, is one of these methods. Bases of orthogonal wavelets ---------------------------- Wavelets are a standard mathematical tool to analyze and compute non-stationary signals. Here we recall basic concepts and definitions. Further details can be found in Ref. [@MF92] and references therein. The construction takes place in the Hilbert space $L^2(\mathbb{R})$ of square integrable functions. An orthonormal family $(\psi_{j,i}(x))_{j\in\mathbb{N},i\in\mathbb{Z}}$ is called a wavelet family when its members are dilations and translations of a fixed function $\psi$ called the mother wavelet: $$\label{wavelet_1} \psi_{j,i}(x) = 2^{j/2}\psi(2^j x-i)$$ where $j$ indexes the scale of the wavelets and $i$ their positions, and $\psi$ satisfies $\int \psi = 0$. In the following we shall always assume that $\psi$ has compact support of length $S$. The coefficients $\langle f \mid \psi_{j,i} \rangle = \int f \psi_{j,i} $ of a function $f$ for this family are denoted by $(\tilde{f}_{j,i})$. These coefficients describe the fluctuations of $f$ at scale $2^{-j}$ around position $\frac{i}{2^j}$. Large values of $j$ correspond to fine scales, and small values to coarse scales. Some members of the commonly used Daubechies 6 wavelet family are shown in the left panel of Fig. 1. It can be shown that the orthogonal complement in $L^2(\mathbb{R})$ of the linear space spanned by the wavelets is itself orthogonally spanned by the translates of a function $\varphi$, called the scaling function. Defining $$\label{phi_shrink} \varphi_{L,i} = 2^{\frac{L}{2}} \varphi(2^L x - i)$$ and the scaling coefficients $\bar{f}_{L,i} = \langle f \mid \varphi_{L,i} \rangle$, one thus has the reconstruction formula: $$\label{wavelet_reconstruction} f = \sum_{i=-\infty} ^{\infty} \bar{f}_{L,i} \varphi_{L,i} + \sum_{j=L}^{\infty}\sum_{i=-\infty}^{\infty} \tilde{f}_{j,i} \psi_{j,i}$$ The first sum on the right hand side of Eq. (\[wavelet\_reconstruction\]) is a smooth approximation of $f$ at the coarse scale, $2^{-L}$, and the second sum corresponds to the addition of details at successively finer scales. If the wavelet $\psi$ has $M$ vanishing moments: $$\label{vanishing_moments} \int x^m \psi(x) dx = 0$$ for $0 \leq m < M$, and if $f$ is locally $m$ times continuously differentiable around some point $x_0$, then a key property of the wavelet expansion is that the coefficients located near $x_0$ decay when $j\to\infty$ like $2^{-j(m+\frac{1}{2})}$ [@Jaffard1991]. Hence, localized singularities or sharp features in $f$ affect only a finite number of wavelet coefficients within each scale. Another important consequence of (\[vanishing\_moments\]) of special relevance to particle methods is that for $0 \leq m < M$, the moments $\int x^m f(x) dx$ of the particle distribution function depend only on its scaling coefficients, and not on its wavelet coefficients. If the scaling coefficients $\overline{f}_{J,i}$ at a certain scale $J$ are known, all the wavelet coefficients at coarser scales ($j \leq J$) can be computed using the fast wavelet transform (FWT) algorithm [@SM00]. 
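To illustrate the expansion (\[wavelet\_reconstruction\]) and the FWT, the sketch below uses the PyWavelets package; this library choice is an assumption made purely for illustration and is not the implementation used in this work. It splits a periodically sampled signal into scaling coefficients at a coarse scale and wavelet coefficients at the finer scales, and verifies that the inverse transform recovers the samples.

```python
import numpy as np
import pywt  # PyWavelets, assumed available for this illustration

# Sample a smooth test function on 2^J points (illustrative choice)
J = 10
x = np.arange(2 ** J) / 2 ** J
signal = np.exp(-50.0 * (x - 0.5) ** 2)

# Daubechies wavelet with 6 vanishing moments (filter length S = 12),
# periodized on [0, 1]; decompose down to the coarse scale 2^{-L}
L = 4
coeffs = pywt.wavedec(signal, 'db6', mode='periodization', level=J - L)
# coeffs[0]  : scaling coefficients at the coarse scale 2^{-L}
# coeffs[1:] : wavelet coefficients at scales 2^{-L}, ..., 2^{-(J-1)} (coarse to fine)

# The inverse transform recovers the signal, cf. Eq. (wavelet_reconstruction)
reconstructed = pywt.waverec(coeffs, 'db6', mode='periodization')
assert np.allclose(reconstructed, signal)
```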
We shall address the issue of computing the scaling coefficients themselves in section \[critical\_discussion\]. The generalization to $d$ dimensions involves tensor products of wavelets and scaling functions at the same scale. For example, given a wavelet basis on $\mathbb{R}$, a wavelet basis on $\mathbb{R}^2$ can be constructed in the following way: $$\begin{aligned} \psi^1_{j,i_1,i_2}(x_1,x_2) &=& 2^j\psi(2^j x_1-i_1) \varphi(2^j x_2-i_2) \\ \psi^2_{j,i_1,i_2}(x_1,x_2) &=& 2^j\varphi(2^j x_1-i_1) \psi(2^j x_2-i_2) \\ \psi^3_{j,i_1,i_2}(x_1,x_2) &=& 2^j\psi(2^j x_1-i_1) \psi(2^j x_2-i_2 ) \, ,\end{aligned}$$ where we refer to the exponent $\mu = 1, 2, 3$ as the direction of the wavelets. This name is easily understood by looking at different wavelets shown in Fig. \[daubechies\_wavelets\_1D\] (right). The corresponding scaling functions are simply given by $2^j \varphi(2^j x_1 - i_1)\varphi(2^j x_2 - i_2)$. Wavelets on $\mathbb{R}^d$ are constructed exactly in the same way, but this time using $2^d-1$ directions. To lighten the notation we write the $d$-dimensional analog of Eq. (\[wavelet\_reconstruction\]) as $$\begin{aligned} \label{wavelet_reconstruction_2D} f = \sum_{\lambda\in\Lambda_{\phi,L}} \overline{f}_\lambda \phi_\lambda + \sum_{\lambda\in\Lambda_{\psi,L}} \tilde{f}_\lambda \psi_\lambda\end{aligned}$$ where $\lambda = (j,\mathbf{i},\mu)$ is a multi-index, with the integer $j$ denoting the scale and the integer vector $\mathbf{i} = (i_1,i_2,\ldots)$ denoting the position of the wavelet. The wavelet multiresolution reconstruction formula in Eq. (\[wavelet\_reconstruction\]) involves an infinite sum over the position index $i$. One way of dealing with this sum is to determine a priori the non-zero coefficients in Eq. (\[wavelet\_reconstruction\]), and work only with these coefficients, but still retaining the full wavelet basis on $\mathbb{R}^d$ as presented above. Another alternative, which we have chosen because it is easier to implement, is to periodize the wavelet transform on a bounded domain [@SM00]. Assuming that the coordinates have been rescaled so that all the particles lie in $[0,1]^d$, we replace the wavelets and scaling functions by their periodized counterparts: $$\begin{aligned} \label{periodized_wavelets} \psi_{j,i}(x) & \to & \sum_{l=-\infty}^{\infty} \psi_{j,i}(x+l) \\ \varphi_{j,i}(x) & \to & \sum_{l=-\infty}^{\infty} \varphi_{j,i}(x+l) \, .\end{aligned}$$ Throughout this paper we will consider only periodic wavelets. For the sake of completeness we mention a third alternative which is technically more complicated. It consists in constructing a wavelet basis on a bounded interval [@Cohen1993]. The advantage of this approach is that it does not introduce artificially large wavelet coefficients at the boundaries for functions $f$ that are not periodic. Wavelet based density estimation -------------------------------- The multiscale nature of wavelets allows them to adapt locally to the smoothness of the analyzed function [@SM00]. This fundamental property has triggered their use in a variety of problems. One of their most fruitful applications has been the denoising of intermittent signals [@DJ94]. The practical success of wavelet thresholding to reduce noise relies on the observation that the expansion of signals in a wavelet basis is typically sparse. Sparsity means that the interesting features of the signal are well summarized by a small fraction of large wavelet coefficients. 
On the contrary, the variance of the noise is spread over all the coefficients appearing in Eq. (\[wavelet\_reconstruction\_2D\]). Although the few large coefficients are of course also affected by noise, curing the noise in the small coefficients is already a very good improvement. The original setting of this technique, hereafter referred to as global wavelet shrinkage, requires the noise to be additive, stationary, Gaussian and white. It found a first application in plasma physics in Ref. [@Farge2006], where coherent bursts were extracted out of plasma density signals. Since Ref. [@DJ94], wavelet denoising has been extended to a number of more general situations, like non-Gaussian or correlated additive noise, or to denoise the spectra of locally stationary time series [@Sachs1996]. In particular, the same ideas were developed in Ref. [@Vannucci1995; @Donoho1996] to propose a wavelet-based density estimation (WBDE) method based on independent observations. At this point we would like to stress that WBDE assumes nothing about the Gaussianity of the noise or whether or not it is stationary. In fact, under the independence hypothesis – which is admittedly quite strong – the statistical properties of the noise are entirely determined by standard probability theory. We refer to Ref. [@Vidakovic1999] for a review on the applications of wavelets in statistics. In Ref. [@Gassama2007], global wavelet shrinkage was applied directly to the charge density of a 2D PIC code, in a case where the statistical fluctuations were quasi-Gaussian and stationary. In particular, an iterative algorithm [@AAMF04], which crucially relies on the stationarity hypothesis, was used to determine the level of fluctuations. However, in the next section we will show an example where the noise is clearly non-stationary, and this procedure fails. Let us now describe the WBDE method as we have generalized it to several dimensions. The first step is to expand the sample particle distribution function, $f^\delta$, in Eq. (\[dirac\_estimate\]) in a wavelet basis according to Eq. (\[wavelet\_reconstruction\_2D\]) with the wavelet coefficients $$\begin{aligned} \label{empirical_wavelet_coefficients} \overline{f}_{\lambda} & = & \langle f^\delta \mid \varphi_\lambda \rangle = \frac{1}{N_p} \sum_{n=1}^{N_p} \varphi_\lambda(X_n) \\ \tilde{f}_{\lambda} & = & \langle f^\delta \mid \psi_\lambda \rangle = \frac{1}{N_p} \sum_{n=1}^{N_p} \psi_\lambda(X_n) \, .\end{aligned}$$ Since this reconstruction is exact, keeping all the wavelet coefficients does not improve the smoothness of $f^\delta$. The simple and yet efficient remedy consists in keeping only a subset of the wavelet coefficients in Eq. (\[wavelet\_reconstruction\_2D\]). A straightforward prescription would be to discard all the wavelet coefficients at scales finer than a cut-off scale $L$. This approach corresponds to a generalization of the histogram method in Eq. (\[histogram\_estimate\]) with $N_g = 2^L$. Because the characteristic functions $\chi_\lambda$ of the cells in a dyadic grid are the scaling functions associated with the Haar wavelet family, Eqs. (\[wavelet\_reconstruction\_2D\]) and (\[histogram\_estimate\]) are in fact equivalent for this wavelet family. Accordingly, like in the histogram case, we would have to choose $L$ quite low to obtain a stable estimate, at the risk of losing some sharp features of $f$. Better results can be obtained by keeping some wavelet coefficients down to a much finer scale $J > L$. 
However, to prevent statistical fluctuations from contaminating the estimate, only those coefficients whose modulus is above a certain threshold should be kept. We are thus naturally led to a nonlinear thresholding procedure. In the one dimensional case, values of $J$, $L$, and of the threshold within each scale that yield theoretically optimal results have been given in Ref. [@Donoho1996]. This reference discusses the precise smoothness requirements on $f$, which can accommodate well localized singularities, like shocks and filamentary structures known to arise in collisionless plasma simulations. There remains the question of how to compute the $\tilde{f}_{j,i}$ based on the positions of the particles. Although more accurate methods based on (\[empirical\_wavelet\_coefficients\]) may be developed in the future, our present approximation relies on the computation of a histogram, which creates errors of order $N_g^{-1}$. The complete procedure is described in the following [ **Wavelet-based density estimation**]{} algorithm: 1. \[histogram\_approx\] construct a histogram $f^H$ of the particle data with $N_g = 2^{J_g}$ cells in each direction, 2. \[scaling\_function\_approx\] approximate the scaling coefficients at the finest scale $J_g$ by: $$\label{scaling_function_approximation} \overline{f}_{J_g,\mathbf{i}} \simeq 2^{-{J_g}/{2}} f^H(2^{-J_g} \mathbf{i})$$ 3. compute all the needed wavelet coefficients using the FWT algorithm, 4. keep all the coefficients for scales coarser than $L$, defined by $2^{dL}\sim N_{p}^{\frac{1}{1+2r_{0}}}$ where $r_{0}$ is the order of regularity of the wavelet (1 in our case), 5. discard all the coefficients for scales strictly finer than $J$ defined by $2^{dJ}\sim\frac{N_{p}}{\log_{2}N_{p}}$, 6. \[thresholding\_function\] for scales $j$ in between $L$ and $J$, keep only the wavelet coefficients $\tilde{f}_{\lambda}$ such that $\vert\tilde{f}_{\lambda}\vert\geq T_j = C\sqrt{\frac{j}{N_{p}}}$ where $C$ is a constant that must in principle depend on the smoothness of $f$ and on the wavelet family [@Donoho1996]. In the following, unless otherwise indicated, $C=\frac{1}{2}$. For the wavelet bases we used orthonormal Daubechies wavelets with 6 vanishing moments and thus support of size $S=12$ [@Daubechies1992]. In our case, $r_0 = 1$, which means that the wavelets have a first derivative but no second derivative, and the size of the wavelets at scale $L$ for $d=1$ is roughly $N_p^{-\frac{1}{3}}$. Since $N_p \gg 1$, it follows from the definition at stage 5 of the algorithm that the size of the wavelets at scale $J$ is orders of magnitude smaller than that. Using the adaptive properties of wavelets, we are thus able to detect small scale structures of $f$ without compromising the stability of the estimate. Note that the error at stage \[scaling\_function\_approx\] could be reduced by using Coiflets [@Daubechies1993] instead of Daubechies wavelets, but the gain would be negligible compared to the error made at stage \[histogram\_approx\]. We will denote the WBDE estimate of $f$ as $f^W$. In the one-dimensional case, $$\label{explicit_wavelet_estimate} {f^{W}}=\sum_{i=1}^{2^{L}}\overline{f}_{L,i}\varphi_{L,i}+\sum_{j=L}^{J}\sum_{i=1}^{2^{j}}\tilde{f}_{j,i} \rho_{j}(\tilde{f}_{j,i})\psi_{j,i}$$ where $\rho_{j}$ is the thresholding function as defined by stage \[thresholding\_function\] of the algorithm: $\rho_{j}(y) = 0$ if $\vert y \vert \leq T_j$ and $\rho_{j}(y) = 1$ otherwise. 
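As a complement to the step-by-step description above, the following one-dimensional Python sketch renders stages 1–6 with NumPy and PyWavelets. It is a schematic illustration under stated assumptions rather than the code used to produce the results below; in particular, the rescaling of the threshold $T_j$ by $\sqrt{N_g}$ accounts for the normalization convention of the discrete transform relative to the continuous coefficients, and is an assumption of this sketch.

```python
import numpy as np
import pywt  # assumed available for this illustration

def wbde_1d(particles, J_g=14, C=0.5, r0=1):
    """Schematic 1D wavelet-based density estimation (stages 1-6).
    `particles` are positions rescaled to the unit interval [0, 1]."""
    N_p = particles.size
    N_g = 2 ** J_g

    # Stages 1-2: histogram on 2^{J_g} cells, normalized as a density on [0, 1]
    counts, _ = np.histogram(particles, bins=N_g, range=(0.0, 1.0))
    f_H = counts * N_g / N_p

    # Cut-off scales: 2^L ~ N_p^{1/(1+2 r0)} and 2^J ~ N_p / log2(N_p)
    L = int(np.ceil(np.log2(N_p) / (1.0 + 2.0 * r0)))
    J = int(np.floor(np.log2(N_p / np.log2(N_p))))

    # Stage 3: fast wavelet transform down to scale 2^{-L} (Daubechies 6, periodized)
    coeffs = pywt.wavedec(f_H, 'db6', mode='periodization', level=J_g - L)

    # Stages 4-6: keep scale L, threshold intermediate scales, drop scales finer than J
    kept = [coeffs[0]]                                   # scaling coefficients are kept
    for k, detail in enumerate(coeffs[1:]):
        j = L + k                                        # scale index of this block
        if j > J:
            detail = np.zeros_like(detail)
        else:
            T_j = C * np.sqrt(j / N_p) * np.sqrt(N_g)    # threshold in histogram units (assumption)
            detail = np.where(np.abs(detail) >= T_j, detail, 0.0)
        kept.append(detail)

    return pywt.waverec(kept, 'db6', mode='periodization')[:N_g]
```

In higher dimensions the same construction applies, using a $d$-dimensional histogram together with `pywt.wavedecn`/`pywt.waverecn` and the $2^{dL}$, $2^{dJ}$ scale definitions of stages 4 and 5.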
Finally, let us propose two methods for applying WBDE to postprocess $\delta f$ simulations. Recall that the Lagrangian equations involved in the $\delta f$ schemes are identical to their full $f$ counterparts. The only difficulty introduced by the $\delta f$ method lies in the evaluation of phase space integrals of the form $\delta I = \int A \cdot (f-f_0)$, where $A$ is a function on phase space and $f_0$ is a known reference distribution function. In these integrals, $f-f_0$ should be replaced by $\delta f$, which is in turn written as a product $wf$, where $w$ is a “weighting” function. Numerically, $w$ is known via its values at the particles’ positions, $w(X_n)$, and the usual expression for $\delta I$ is thus $\delta I = \sum_{n=1}^{N_p} A(X_n) w(X_n)$. We cannot apply WBDE directly to $\delta f$, since this function is not a density function. An elegant approach would be to first apply WBDE to the unweighted distribution $f^\delta$ to determine the set of statistically significant wavelet coefficients, and to include the weights only in the final reconstruction (\[explicit\_wavelet\_estimate\]) of $f^W$. A simpler approach, which we will illustrate in section \[delta\_5d\_example\], consists in renormalizing $\delta f$ so that $\int\vert\delta f\vert = 1$, and treating it like a density. Further issues related to practical implementation {#critical_discussion} -------------------------------------------------- In this section we discuss how the WBDE method handles two issues of direct relevance to plasma simulations: conservation of moments and computational efficiency. As mentioned before, due to the vanishing moments of the wavelets in Eq. (\[vanishing\_moments\]), the moments up to order $M$ of the particle distribution function are solely determined by its scaling function coefficients. As a consequence, we expect the thresholding procedure to conserve these moments, in the sense that $$\label{moments_def} \mathcal{M}_{m,k}^W = \int x_k^m f^W(\mathbf{x})\mathrm{d}\mathbf{x} \simeq \int x_k^m f^\delta(\mathbf{x})\mathrm{d}\mathbf{x} = \mathcal{M}_{m,k}^\delta$$ for $0 \leq m \leq M-1$ and for all $k\in \{1,\ldots,d\}$. This conservation holds up to round-off error if the wavelet coefficients can be computed exactly. Due to the type of wavelets that we have used, we were not able to achieve this in the results presented here. There remains a small error related to stages 1 and 2 of the algorithm, namely the construction of $f^H$ and the approximation of the scaling function coefficients by Eq. (\[scaling\_function\_approximation\]). They are both of order $N_g^{-1}$. We will present numerical examples of the moments of $f^W$ in the next section. Conservation of moments is closely related to a peculiarity of the denoised distribution function resulting from the WBDE algorithm: it is not necessarily everywhere positive. Indeed, wavelets are oscillating functions by definition, and removing wavelet coefficients therefore cannot preserve positivity in general. Further studies are needed to assess if this creates numerical instabilities when $f^W$ is used in the computation of self-consistent fields. The same issue was discussed in Ref. [@Denavit1972] where a kernel with two vanishing moments was used to linearly smooth the distribution function. The fact that this kernel is not everywhere positive was not considered harmful in this reference. 
We acknowledge that it may render the resampling of new particles from $f^W$, if it is needed in the future, more difficult. There are ways of forcing $f^W$ to be positive, for example by applying the method to $\sqrt{f}$ and then taking the square of the resulting estimate, but this implies the loss of the moment conservation, and we have not pursued this direction. ![ \[daubechies\_wavelets\_1D\]Daubechies 6 wavelet family. Left, bold red: scaling function $\varphi$ at scale $j = 5$. Left, bold blue: wavelet $\psi$ at scale $j = 5$. Left, thin black, from left to right: wavelets at scales 6, 7, 8 and 9. Right: (a) 2D scaling function $\varphi(x_1)\varphi(x_2)$. (b) first 2D wavelet $\psi(x_1)\varphi(x_2)$. (c) second 2D wavelet $\varphi(x_1)\psi(x_2)$. (d) third 2D wavelet $\psi(x_1)\psi(x_2)$. ](wavelets3 "fig:"){width="0.5\columnwidth" height="0.4\columnwidth"} ![](wavelets4 "fig:"){width="0.44\columnwidth"} The number of arithmetic operations needed to perform a fast wavelet transform (FWT) from scale $2^{-J}$ to scale $2^{-L}$ in $d$ dimensions is $2S 2^{d(J-L)}$, where $S$ is the length of the wavelet filter (12 for the Daubechies filter that we are using). The definitions of $J$ and $L$ imply that $2^{d(J-L)}$ scales like $\frac{N_p^{\frac{2}{3}}}{\log{N_p}}$. The cost of the binning stage is of order $N_{p}$, so that the total cost for computing $f^W$ is $O(N_{p})$, which is not larger than the cost of one time step of the simulation that produced the data. The amount of memory needed to store the wavelet coefficients during the denoising procedure is proportional to $N_g^d$, which should at least scale like $2^{dJ}$, and therefore also like $N_p$. If one wishes to use a finer grid to ensure high accuracy conservation of moments, the storage requirements grow like $N_g^d$. Thanks to optimized in-place algorithms, the amount of additional memory needed during the computation does not exceed $3S$. Another consequence of using the FWT algorithm is that $N_g$ must be an integer multiple of $2^{J-L}$. For comparison purposes, let us recall that most algorithms to compute the POD have a complexity proportional to $N_g^3$ when $d=2$. To conclude this subsection, Fig. \[example\_1D\] presents an example of the reconstruction of a 1D discontinuous density that illustrates the difference between the KDE and WBDE methods. The probability density function is uniform on the interval $\left[\frac{1}{3},\frac{2}{3}\right]$ and the estimates were computed on $\left[0,1\right]$ to include the discontinuities. The sample size was $2^{14}$, and the binning used $N_g = 2^{16}$ cells to compute the scaling function coefficients. For this 1D case the value $C=2$ was used to determine the thresholds (step \[thresholding\_function\] of the algorithm). The KDE estimate is computed using a Gaussian kernel with smoothing scale $h=0.0138$ [@KDE2003]. The relative mean squared errors associated with the KDE and WBDE estimates are respectively $19.6\times 10^{-3}$ and $6.97\times 10^{-3}$. The error in the KDE estimate comes mostly from the smoothing of the discontinuities. 
The better performance of WBDE stems from the much sharper representation of these discontinuities. It is also observed that the WBDE estimate is not everywhere positive. The approximate conservation of moments is demonstrated in Table \[example\_1D\_moments\]. Note that the error on all these moments for $f^W$ could be made arbitrarily low by increasing $N_g$. The overshoots could also be mitigated by using nearly shift invariant wavelets [@NK01]. ![ \[example\_1D\] Estimation of the density of a sample of size $2^{14}$ drawn uniformly in $[1/3,2/3]$, using Gaussian kernels (left) or wavelets (right). The discontinuous analytical density is plotted with a dashed line in the two cases. ](example_1D_kernel "fig:"){width="0.47\columnwidth"} ![](example_1D_wavelets "fig:"){width="0.47\columnwidth"} Proper Orthogonal Decomposition Method -------------------------------------- For completeness, in this subsection we present a brief review of the POD density reconstruction method. For the sake of comparison with the WBDE method, we limit attention to the time independent case. Further details, including the reconstruction of time dependent densities using POD methods, can be found in Ref. [@delCastillo2008]. The first step in the POD method is to construct the histogram $f^H$ from the particle data. This density is represented by an $N_x \times N_y$ matrix $\hat{f}_{ij}$ containing the fraction of particles with coordinates $(x,y)$ such that $X_i \leq x< X_{i+1}$ and $Y_j \leq y< Y_{j+1}$. In two dimensions, the POD method is based on the singular value decomposition of the histogram. According to the SVD theorem [@golub_van_loav_1996], the matrix $\hat{f}$ can always be factorized as $\hat{f}= U W V^t$ where $U$ and $V$ are $N_x \times N_x$ and $N_y \times N_y$ orthogonal matrices, $U U^t =V V^t = I$, and $W$ is a diagonal matrix, $W = {\rm diag} \left( w_1, w_2, \ldots, w_N \right )$, such that $w_1 \geq w_2\geq \ldots \geq w_N \geq 0$, with $N= {\rm min} (N_x,N_y)$. In vector form, the decomposition can be expressed as $$\label{svd_vector} \hat{f}_{ij}= \sum_{k=1}^N w_k\, u^{(k)}_i v^{(k)}_j \, ,$$ where the $N_x$-dimensional vectors, $u_i^{(k)}$, and the $N_y$-dimensional vectors, $v_j^{(k)}$, are the orthonormal POD modes and correspond to the columns of the matrices $U$ and $V$ respectively. Given the decomposition in Eq. (\[svd\_vector\]), we define the rank-$r$ approximation of $\hat{f}$ as $$\label{lr_svd} \hat{f}^{(r)}_{ij}= \sum_{k=1}^r w_k\, u^{(k)}_i v^{(k)}_j \, ,$$ where $1 \leq r < N$, and define the corresponding rank-$r$ reconstruction error as $$\label{nre} e(r) = \Vert \hat{f} -\hat{f}^{(r)} \Vert^2 = \sum_{i=r+1}^N w_i^2 \, ,$$ where $|| A|| = \sqrt{\sum_{i j} A_{ij}^2}$ is the Frobenius norm. Since $\hat{f}^{(r=N)}=\hat{f}$, we define $e(N)=0$. The key property of the POD is that the approximation in Eq. (\[lr\_svd\]) is optimal in the sense that $$e(r) = \min \left\{ \Vert \hat{f}-g \Vert^2 \;\big|\; {\rm rank}\,(g) = r \right\} \, .$$ That is, of all the possible rank-$r$ Cartesian product approximations of $\hat{f}$, $\hat{f}^{(r)}$ is the closest to $\hat{f}$ in the Frobenius norm. The SVD spectrum, $\{ w_k\}$, of noise free coherent signals decays very rapidly after a few modes, but the spectrum of noise dominated signals is relatively flat and decays very slowly. 
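For illustration, the rank-$r$ truncation (\[lr\_svd\]) and the error (\[nre\]) can be computed with a few lines of NumPy. The sketch below uses a synthetic histogram and an arbitrary rank; it is not the diagnostic procedure of Ref. [@delCastillo2008].

```python
import numpy as np

def pod_truncate(f_hist, r):
    """Rank-r POD approximation of a 2D histogram (Eq. lr_svd) and the
    corresponding truncation error e(r) of Eq. (nre)."""
    U, w, Vt = np.linalg.svd(f_hist, full_matrices=False)  # f_hist = U diag(w) Vt
    f_r = (U[:, :r] * w[:r]) @ Vt[:r, :]                    # keep the r leading modes
    return f_r, np.sum(w[r:] ** 2)

# Illustrative usage: a smooth 2D density contaminated by noise
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 128)
f_true = np.exp(-((x[:, None] - 0.5) ** 2 + (x[None, :] - 0.5) ** 2) / 0.02)
f_noisy = f_true + 0.05 * rng.standard_normal(f_true.shape)
f_pod, e_r = pod_truncate(f_noisy, r=5)
```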
When a coherent signal is contaminated with low-level noise, the SVD spectrum exhibits an initial rapid decay followed by a weakly decaying spectrum known as the noisy plateau. In the POD method the denoised density is defined as the truncation $f^P=\hat{f}^{(r_c)}$, where $r_c$ corresponds to the rank where the noisy plateau starts. In general it is difficult to provide a precise a priori estimate of $r_c$, and this is one of the potential limitations of the POD method. One possible quantitative criterion used in Ref. [@delCastillo2008] is to consider the relative decay of the spectrum, $\Delta(k)=(w_{k+1}-w_k)/(w_{2}-w_1)$, for $k>1$, and define $r_c$ by the condition $\Delta(r_c)\leq \Delta_c$, where $\Delta_c$ is a predetermined threshold.

          $m=0$                  $m=1$                 $m=2$                 $m=4$
  ------- ---------------------- --------------------- --------------------- ---------------------
  $f^K$   $1.81\cdot 10^{-5}$    $1.70\cdot 10^{-5}$   $7.52\cdot 10^{-4}$   $3.90\cdot 10^{-3}$
  $f^W$   $1.08\cdot 10^{-11}$   $1.52\cdot 10^{-5}$   $2.93\cdot 10^{-5}$   $5.52\cdot 10^{-5}$
  ------- ---------------------- --------------------- --------------------- ---------------------

Applications
============

In this section, we apply the WBDE method to reconstruct and denoise the particle distribution function starting from discrete particle data. The data corresponds to three different groups of simulations: collisional thermalization with a background plasma, guiding center transport in toroidal geometry, and Vlasov-Poisson electrostatic instabilities. The first two groups of simulations were analyzed using POD methods in Ref. [@delCastillo2008]. One of the goals of this section is to compare the POD method with the WBDE method in these cases and in a new Vlasov-Poisson data set. This data set allows the testing of the reconstruction algorithms in a collisionless system that incorporates the self-consistent evaluation of the forces acting on the particles, as opposed to the collisional, test particle problems analyzed before. When comparing the two methods it is important to keep in mind that POD has one free parameter, namely the number $r$ of singular vectors that are retained to reconstruct the denoised distribution function. In the cases studied here we used a best guess for $r$ based on the properties of the reconstruction. In Ref. [@delCastillo2008] the POD method was developed and applied to time independent and time dependent data sets. However, in the comparison with the WBDE method, we limit attention to $2$-dimensional time independent data sets. The accuracy of the reconstruction of the density at a fixed time $t$ will be monitored using the absolute mean square error $$\label{error_e} e = \sum_{i,j} \vert f^{est}(x_i,y_j;t) - f^{ref}(x_i,y_j;t) \vert^2 \, ,$$ where $(x_i,y_j)$ are the coordinates of the nodes of a prescribed $N_g \times N_g$ grid in the $(x,y)$ space, and $f^{est}$ denotes the estimated density computed from a sample with $N_p$ particles. For the WBDE method $f^{est}=f^W$, and for the POD method $f^{est}=f^P$. In principle, the reference density, $f^{ref}$, in Eq. (\[error\_e\]) should be the density function obtained from the exact solution of the corresponding continuum model, e.g. the Fokker-Planck or the Vlasov-Poisson system. However, when no explicit solution is available, we will set $f^{ref}=f^{H}$ where $f^{H}$ is the histogram corresponding to a simulation with the maximum number of particles available, which in the cases reported here corresponds to $N_p=10^6$. We will also use the normalized error $$\label{error_e0} e_0 = \frac{e}{\sum_{i,j} \vert f^{ref}(x_i,y_j;t) \vert^2} \, .$$
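Before turning to the applications, the POD reconstruction described above can be summarized in a minimal NumPy sketch. The histogram `f_hat` and the plateau threshold `delta_c` are user-supplied inputs; the default value of `delta_c` in the signature is a placeholder and is not taken from Ref. [@delCastillo2008].

```python
# Minimal sketch of the POD (SVD) denoising step described above.
# f_hat is the N_x x N_y histogram; delta_c is the user-chosen plateau threshold.
import numpy as np

def pod_denoise(f_hat, delta_c=0.05):
    """Rank-r_c POD reconstruction of the histogram, Eq. (lr_svd)."""
    U, w, Vt = np.linalg.svd(f_hat, full_matrices=False)    # f_hat = U diag(w) V^t

    # relative decay of the spectrum, Delta(k) = (w_{k+1} - w_k)/(w_2 - w_1), k > 1
    delta = (w[2:] - w[1:-1]) / (w[1] - w[0])
    hits = np.nonzero(delta <= delta_c)[0]
    r_c = int(hits[0]) + 2 if hits.size else len(w)          # first k with Delta(k) <= Delta_c

    f_P = (U[:, :r_c] * w[:r_c]) @ Vt[:r_c, :]               # rank-r_c truncation, Eq. (lr_svd)
    err = float(np.sum(w[r_c:] ** 2))                        # e(r_c) = sum_{k > r_c} w_k^2, Eq. (nre)
    return f_P, r_c, err
```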
Collisional thermalization with a background plasma
---------------------------------------------------

This first example models the relaxation of a non-equilibrium plasma by collisional damping and pitch angle scattering on a thermal background. The plasma is spatially homogeneous and is represented by an ensemble of $N_p$ particles in a three-dimensional velocity space. Assuming a strong magnetic field, the dynamics can be reduced to two degrees of freedom: the magnitude of the particle velocity, $v$, and the particle pitch, $\lambda=\cos \theta$, where $\theta$ is the angle between the particle velocity and the magnetic field. In the continuum limit the particle distribution function is governed by the Fokker-Planck equation, which in the particle description corresponds to the stochastic differential equations $$\label{mc_1} d\lambda = - \nu_D\, \lambda\, dt - \sqrt{(1-\lambda^2)\,\nu_D}\; d\eta_\lambda \, ,$$ $$\label{mc_2} dv = - \nu_s\, v\, dt + \sqrt{\nu_{\parallel}}\; v\; d\eta_{v} \, ,$$ describing the evolution of $v \in (0, \infty)$ and $\lambda \in [-1,1]$ for each particle, where $d \eta_\lambda$ and $d \eta_v$ are independent Wiener stochastic processes and $\nu_D$, $\nu_s$ and $\nu_{\parallel}$ are functions of $v$. For further details on the model see Ref. [@delCastillo2008] and references therein. We considered simulations with $N_p=10^3$, $10^4$, $10^5$ and $10^6$ particles. The initial conditions of the ensemble of particles were obtained by sampling a distribution of the form $$\label{f_ic} f(v,\lambda,t=0)= C\, v^2 \exp\left\{ -\frac{(\lambda-\lambda_0)^2}{2\sigma_\lambda^2} -\frac{(v-v_0)^2}{2\sigma_v^2} \right\} \, ,$$ where a $v^2$ factor has been included in the definition of the initial condition so that the volume element is simply $\mathrm{d}v\, \mathrm{d}\lambda$, $C$ is a normalization constant, $\lambda_0=0.25$, $v_0=5$, $\sigma_\lambda=0.25$ and $\sigma_v=0.75$. This relatively simple problem is particularly well suited for the WBDE method because the simulated particles do not interact and therefore statistical correlations cannot build up between them. Before applying the WBDE method, we analyze the sparsity of the wavelet expansion of $f^\delta$, and compare the number of modes kept and the reconstruction error for different thresholding rules. The plot in the upper left panel of Fig. \[compression\_curve\_time\_dependent\] shows the absolute values of the wavelet coefficients in decreasing order at different fixed times. The wavelet coefficients exhibit a clear rapid decay beyond the few significant modes corresponding to the gross shape of the Maxwellian distribution. A similar trend is observed in the coefficients of the POD expansion shown in the upper right panel of Fig. \[compression\_curve\_time\_dependent\]. However, in the wavelet case the exponential decay starts after more than $100$ modes, whereas in the POD case the exponential decay starts after only one mode. ![ \[compression\_curve\_time\_dependent\] Wavelet and POD analyses of collisional relaxation particle data at different fixed times, with $N_p = 10^5$. Top left: absolute values of the wavelet coefficients sorted by decreasing order (full lines), and thresholds given by the Waveshrink algorithm (dashed lines). Top right: singular values of the histogram used to construct $f^P$. Bottom left: error estimate $\frac{e^{1/2}}{N_g^2} $ with respect to the run for $N_p = 10^6$ as a function of the number of retained wavelet coefficients (full lines), error obtained when using the Waveshrink threshold (dashed lines), and error obtained using the WBDE method (dash-dotted lines). Bottom right: error estimate $\frac{e^{1/2}}{N_g^2}$ for $f^P$ as a function of the number $l$ of retained singular values.
](compression "fig:"){width="0.49\columnwidth"} ![ \[compression\_curve\_time\_dependent\] Wavelet and POD analyses of collisional relaxation particle data at different fixed times, with $N_p = 10^5$. Top left: absolute values of the wavelet coefficients sorted by decreasing order (full lines), and thresholds given by the Waveshrink algorithm (dashed lines). Top right: singular values of the histogram used to construct $f^P$. Bottom left: error estimate $\frac{e^{1/2}}{N_g^2} $ with respect to the run for $N_p = 10^6$ as a function of the number of retained wavelet coefficients (full lines), error obtained when using the Waveshrink threshold (dashed lines), and error obtained using the WBDE method (dash-dotted lines). Bottom right: error estimate $\frac{e^{1/2}}{N_g^2}$ for $f^P$ as a function of the number $l$ of retained singular values. ](compression_svd "fig:"){width="0.49\columnwidth"} ![ \[compression\_curve\_time\_dependent\] Wavelet and POD analyses of collisional relaxation particle data at different fixed times, with $N_p = 10^5$. Top left: absolute values of the wavelet coefficients sorted by decreasing order (full lines), and thresholds given by the Waveshrink algorithm (dashed lines). Top right: singular values of the histogram used to construct $f^P$. Bottom left: error estimate $\frac{e^{1/2}}{N_g^2} $ with respect to the run for $N_p = 10^6$ as a function of the number of retained wavelet coefficients (full lines), error obtained when using the Waveshrink threshold (dashed lines), and error obtained using the WBDE method (dash-dotted lines). Bottom right: error estimate $\frac{e^{1/2}}{N_g^2}$ for $f^P$ as a function of the number $l$ of retained singular values. ](error "fig:"){width="0.49\columnwidth"} ![ \[compression\_curve\_time\_dependent\] Wavelet and POD analyses of collisional relaxation particle data at different fixed times, with $N_p = 10^5$. Top left: absolute values of the wavelet coefficients sorted by decreasing order (full lines), and thresholds given by the Waveshrink algorithm (dashed lines). Top right: singular values of the histogram used to construct $f^P$. Bottom left: error estimate $\frac{e^{1/2}}{N_g^2} $ with respect to the run for $N_p = 10^6$ as a function of the number of retained wavelet coefficients (full lines), error obtained when using the Waveshrink threshold (dashed lines), and error obtained using the WBDE method (dash-dotted lines). Bottom right: error estimate $\frac{e^{1/2}}{N_g^2}$ for $f^P$ as a function of the number $l$ of retained singular values. ](error_svd "fig:"){width="0.49\columnwidth"} The two panels at the bottom of Fig. \[compression\_curve\_time\_dependent\] show the square root of the reconstruction error normalized by $N_g$, $\sqrt{e}/N_g^2$, in the WBDE and POD methods. Because in this case we do not have access to the exact solution of the corresponding Fokker-Planck equation at the prescribed time, we used $f^H$ computed using $N_p=10^6$ particles as the reference density $f^{ref}$ in Eq. (\[error\_e\]). The error observed when applying a global threshold to the wavelet coefficients (bottom left panel in Fig. \[compression\_curve\_time\_dependent\]) is minimal when around $100$ modes are kept whereas in the POD case (bottom right panel in Fig. \[compression\_curve\_time\_dependent\]) the minimal error is reached with about two or three modes. Fig. 
\[compression\_curve\_time\_dependent\] also shows the wavelet threshold obtained by applying the iterative algorithm based on the stationary Gaussian white noise hypothesis [@AAMF04; @Farge2006]. The error corresponding to this threshold is larger than the optimal error because the noise in this problem is very non-stationary due to the lack of statistical fluctuations in the regions where particles are absent. In contrast, the error corresponding to the WBDE procedure (dash-dotted line) is typically smaller than the optimal error obtained by global thresholding. This is not a contradiction, because the WBDE procedure is not a global threshold, but a level-dependent threshold. ![\[time\_dependent\_contour\_plots\] Contour-plots of estimates of $f$ for the collisional relaxation particle data. First row: Histogram method estimated using $N_p=10^{5}$ particles. Second row: Histogram method estimated using $N_p=10^{6}$ particles. Third row: POD method estimated using $N_p=10^5$ particles. Fourth row: WBDE method estimated using $N_p=10^5$ particles. The three columns correspond to $t=28$, $t=44$ and $t=72$ respectively. The plots show twenty isolines, equally spaced in the interval $[0,0.4]$. ](paper){width="1.0\columnwidth"} Figure \[time\_dependent\_contour\_plots\] compares at different times the densities estimated with the WBDE and the POD (retaining only three modes) methods using $N_p=10^5$ particles with the histograms computed using $N_p=10^5$ and $10^6$ particles. The key feature to observe is that the level of smoothness of $f^W$ and $f^P$ corresponding to $N_p=10^5$ is similar to, if not greater than, the level of smoothness of $f^H$ computed using ten times more particles, i.e. $N_p=10^6$ particles. Table \[time\_dependent\_error\] summarizes the normalized reconstruction errors for $N_p=10^5$ according to Eq. (\[error\_e\]), using $f^H$ with $N_p=10^6$ as $f^{ref}$. The WBDE and POD denoising methods offer a significant improvement, approximately by a factor $2$, over the raw histogram method.

          $t = 28$   $t = 44$   $t = 72$
  ------- ---------- ---------- ----------
  $f^H$   $0.14$     $0.17$     $0.12$
  $f^P$   $0.068$    $0.090$    $0.094$
  $f^W$   $0.064$    $0.094$    $0.088$
  ------- ---------- ---------- ----------

A more detailed comparison of the estimates can be achieved by focusing on the Maxwellian final equilibrium state $$\label{f_max} f_M(v)= C_M\, v^2\, e^{-v^2} \, ,$$ where $C_M$ is a normalization constant and, as in Eq. (\[f\_ic\]), the $v^2$ metric factor has been included in the definition of the distribution. For these calculations we considered sets of particles sampled from Eq. (\[f\_max\]) in the compact domain $[-1,1]\times[0,4]$. Since $f_M$ is an exact equilibrium solution of the Fokker-Planck equation, the ensemble of particles will be in statistical equilibrium but it will exhibit fluctuations due to the finite number of particles. Figure \[wavelets\_e\] shows the dependence of the square root of the reconstruction error $e$ (normalized by $N_g^2$) on the number of particles $N_p$ and the grid resolution $N_g$ for the WBDE and POD methods. The main advantage of this example is that the exact density $f_M$ can be used as the reference density $f^{ref}$ in the evaluation of the error. ![\[wavelets\_e\] Reconstruction error, $\frac{e^{1/2}}{N_g^2} $, as a function of $N_{p}$ for the collisional relaxation particle data corresponding to the Maxwellian equilibrium state.
Bold solid lines correspond to the WBDE method, bold dashed lines correspond to the POD method, and thin dashed lines correspond to the histogram method.](err){width="0.8\columnwidth"}

Collisional guiding center transport in toroidal geometry {#delta_5d_example}
---------------------------------------------------------

The previous example focused on collisional dynamics. However, in addition to collisions, plasma transport involves external and self-consistent electromagnetic fields, and it is of interest to test the particle density reconstruction algorithms in these more complicated settings. As a first step toward this challenging problem we consider a plasma subject to collisions and an externally applied fixed magnetic field in toroidal geometry. The choice of the field geometry and structure was motivated by problems of interest to magnetically confined fusion plasmas. The data was presented and analyzed using the POD method in Ref. [@delCastillo2008]. The phase space of the simulation is five dimensional. However, as in Ref. [@delCastillo2008], we limit attention to the denoising of the particle distribution function along two coordinates corresponding to the poloidal angle $\theta\in[0,2\pi]$ and the cosine of the pitch angle $\mu\in[-1,1]$. The remaining three coordinates have been averaged out for the purpose of this study. The $\theta$ coordinate is periodic, but the pitch coordinate $\mu$ is not. An important issue to consider is that the data was generated using a $\delta f$ code (DELTA5D). Based on an expansion in $\rho/L \ll1$ (where $\rho$ is the characteristic Larmor radius and $L$ a typical equilibrium length scale) the distribution function is decomposed into a Maxwellian part $f_M$ and a first-order perturbation $\delta f$ represented as a collection of particles (markers) $$\delta f(\mathbf{x}) = \sum_n W_n\, \delta (\mathbf{x}-\mathbf{X}_n) \, ,$$ as in Eq. (\[dirac\_estimate\]), except that each marker is assigned a time dependent weight $W_n$ whose time evolution depends on the Maxwellian background [@parker]. The direct use of $\delta f(\mathbf{x})$ is problematic in the WBDE method because $\delta f$ is not a probability density. To circumvent this problem the WBDE method was applied after normalizing the $\delta f$ distribution so that $\int \vert\delta f^{H}\vert =1$ on a $128\times128$ grid. Figure \[fig:d5d\_3d\_comp\] shows contour plots of the histogram $f^H$ corresponding to $N_p=32 \times 10^3$, $128 \times 10^3$, and $1024 \times 10^3$, along with the WBDE and POD reconstructed densities. The POD reconstructions were done using $r=3$ modes, as in Ref. [@delCastillo2008]. It is observed that comparatively high levels of smoothness can be achieved with considerably fewer particles by using either the WBDE or POD reconstruction methods. The WBDE method provides better results for the $\delta f \sim 0$ contours. This is because POD modes are tensor product functions, which have difficulty approximating the triangular shape of these contour lines. Note that the boundary artifacts due to periodization of the Daubechies wavelets do not seem to be very critical. The large wavelet coefficients associated with the discontinuity between the values of $\delta f$ at $\mu=\pm1$ are not thresholded, so that the discontinuity is preserved in the denoised function. Figure \[fig:d5d\_RMS\] compares the reconstruction errors in the WBDE, POD, and histogram methods as functions of the number of particles.
To evaluate the error we used $f^H$ computed using $N_p=1024 \times 10^3$ as the reference density $f^{ref}$. As in the collisional transport problem, the error is reduced roughly by a factor $2$ for both methods compared to the raw histogram. Note that the scaling with $N_p$ is slightly better for WBDE than for POD. ![ \[fig:d5d\_3d\_comp\] Contour plots of estimates of $f$ for the collisional guiding center transport particle data: Histogram method (first row), POD method (second row), and WBDE method (third row). The left, center and right columns correspond to $N_p = 32\cdot 10^3$, $N_p = 128\cdot 10^3$ and $N_p = 1024\cdot 10^3$, respectively. The plots show seventeen isolines equally spaced within the interval $[-0.5,0.5]$. ](paper_delta5){width="1.0\columnwidth"} [![ \[fig:d5d\_RMS\] Error estimate, $\frac{e^{1/2}}{N_g^2} $, for collisional guiding center transport particle data according to the histogram, the POD, and the wavelet methods. ](error_delta5d "fig:"){width="0.8\columnwidth"} ]{}

Collisionless electrostatic instabilities {#PIC_example}
-----------------------------------------

In this section we apply the WBDE and POD methods to reconstruct the single particle distribution function from discrete particle data obtained from PIC simulations of a Vlasov-Poisson plasma. We consider a one-dimensional, electrostatic, collisionless electron plasma with an ion neutralizing background in a finite size domain with periodic boundary conditions. In the continuum limit the dynamics of the distribution function is governed by the system of equations $$\begin{aligned} \partial_t f + v \partial_x f + \partial_x \phi \partial_v f=0 \\ \partial_x^2 \phi = \zeta \int f(x,v,t) dv -1 \, ,\end{aligned}$$ where the variables have been non-dimensionalized using the Debye length as length scale and the plasma frequency as time scale, and $\zeta$ is the length of the system normalized with the Debye length. Following the standard PIC methodology [@Birdsall1985], we solve the Poisson equation on a grid and solve the particle equations using a leap-frog method. The reconstruction of the charge density uses a triangular shape function. We consider two initial conditions: the first one leads to a bump-on-tail instability, and the second one to a two-streams instability.

### Bump on tail instability {#bump_tail}

For the bump-on-tail instability we initialized ensembles of particles by sampling the distribution function $$\label{bump_on_tail_init} f_0(x,v) = \frac{2}{3 \pi \zeta}\frac{1-2 q v+ 2 v^2}{\left( 1 + v^2 \right)^2}\, ,$$ using a pseudo-random number generator. This equilibrium is stable for $q \leq 1$ and unstable for $q>1$. The dispersion relation and linear stability analysis for this equilibrium, studied in Ref. [@delCastillo1998], were used to benchmark the PIC code, as shown in Fig. \[pic\_validation\]. In all the computations presented here $q=1.25$ and $N_p=10^4$, $10^5$ and $10^6$. The spatial domain size was set to $\zeta=16.52$ to fit the wavelength of the most unstable mode. ![ \[pic\_validation\] Electrostatic energy as a function of time in the Vlasov-Poisson PIC simulations of the bump on tail instability for different numbers of particles. The straight lines denote the growth rate predicted by linear stability theory [@delCastillo1998].
](gamma){width="0.9\columnwidth"} Since the value of $q$ is relatively close to the marginal value, the instability grows weakly and is concentrated in a narrow band in phase space centered around the point where the bump is located, $v \approx 1$ in this case. In order to unveil the nontrivial dynamics we focus the analysis on the band $v\in(-3,3)$, and plot the departure of the particle distribution function from the initial background equilibrium. The POD method is applied directly to $\delta f^H=f^H(x,v,t)-f_0(x,v)$, but the WBDE method is applied to the full $f^H(x,v,t)$, and $f_0(x,v)$ is subtracted only for visualization. Note that because we are considering only a subset of phase space, the effective numbers of particles, $N_p=7318$, $N_p=73143$ and $N_p=731472$, are smaller than the nominal numbers of particles, $N_p=10^4$, $N_p=10^5$ and $N_p=10^6$, respectively. Figure \[bump\_tail\_snapshots\] shows contour plots of $\delta f$ for different numbers of particles. Since the instability is seeded only by the random fluctuations in the initial condition, increasing $N_p$ delays the onset of the linear instability, and this leads to a phase shift of the nonlinear saturated regime. To aid the comparison of the saturated regime for different numbers of particles we have eliminated this phase shift by centering the peak of the particle distributions in the middle of the computational domain. A $256 \times 256$ grid was used in the WBDE method, and a $50\times 50$ grid was used for the histogram and the POD methods. The retained POD ranks were $r=1$, $r=2$, and $r=3$ for $N_p=10^4$, $N_p=10^5$ and $N_p=10^6$, respectively. Except for the case where $N_p=10^4$, both the POD and WBDE estimates are very smooth, in agreement with the expected behavior of $f$ for this instability. It is observed that the level of smoothness of the histogram estimated using $10^6$ particles is comparable to the level of smoothness achieved after denoising using only $10^5$ particles. We note that, for the scales between $L$ and $J$ occurring in the WBDE algorithm, none of the wavelet coefficients lie above the corresponding scale-dependent thresholds. In fact, a simple KDE estimate with a large enough smoothing scale would likely perform well for this kind of instability, which does not induce abrupt variations in $f$. Table \[bump\_tail\_error\] shows the POD and WBDE reconstruction errors for $N_p=10^4$ and $N_p=10^5$. The error is computed using formula (\[error\_e0\]), taking for $f^{ref}$ the histogram obtained from the simulation with $N_p=10^6$. ![ \[bump\_tail\_snapshots\] Contour plots of estimates of $\delta f$ for the bump-on-tail instability PIC data at $t=149$: Histogram method (first row), POD method (second row), and WBDE method (third row). The left, center and right columns correspond to $N_p=10^4$, $N_p=10^5$ and $N_p=10^6$ particles, respectively. The plots show thirteen contour lines equally spaced within the interval $[-0.012, 0.012]$. ](snapshots2_bt){width="1.0\columnwidth"} Figure \[pic\_moments\] shows the relative error on the second order moment: $$\frac{\vert \mathcal{M}_{v,2}^W - \mathcal{M}_{v,2}^\delta \vert}{ \mathcal{M}_{v,2}^\delta }$$ where $\mathcal{M}^W_{v,2}$ is defined by (\[moments\_def\]). A similar quantity is also represented for $f^H$ and $f^P$. The time and number of particles are kept fixed at $t=149$ and $N_p=10^6$, and the grid resolution is varied. As expected, $f^H$ and $f^W$ conserve the second order moment with accuracy $O(N_g^{-1})$.
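A minimal sketch of this moment check is given below. Since Eq. (\[moments\_def\]) is not reproduced in this excerpt, the standard definitions are assumed: the particle-based moment is an average over the markers, and the grid-based moment is a quadrature of the estimated density tabulated on the $(x,v)$ grid.

```python
# Sketch of the second-order velocity moment check of Fig. [pic_moments].
# Assumed definitions (the paper's Eq. [moments_def] is not shown here):
# the particle moment is a marker average, the grid moment a quadrature of the
# estimated density f_est, tabulated with shape (N_x, N_v) on cell centers.
import numpy as np

def moment_v2_particles(v_particles):
    """M^delta_{v,2} = (1/N_p) sum_n v_n^2, from the raw particle velocities."""
    return np.mean(v_particles**2)

def moment_v2_grid(f_est, x_edges, v_edges):
    """M^est_{v,2} = sum_ij v_j^2 f_est(x_i, v_j) dx dv, from a gridded density."""
    dx = np.diff(x_edges)[:, None]
    dv = np.diff(v_edges)[None, :]
    v_mid = 0.5 * (v_edges[:-1] + v_edges[1:])[None, :]
    return float(np.sum(v_mid**2 * f_est * dx * dv))

def relative_moment_error(f_est, x_edges, v_edges, v_particles):
    """Relative error |M^est_{v,2} - M^delta_{v,2}| / M^delta_{v,2}."""
    m_grid = moment_v2_grid(f_est, x_edges, v_edges)
    m_part = moment_v2_particles(v_particles)
    return abs(m_grid - m_part) / m_part
```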
The error corresponding to $f^P$ is of the same order of magnitude as that of $f^H$ and $f^W$, but seems to reach a plateau for $N_g \simeq 1024$. This may be due to the fact that for $N_g \geq 1024$, there is less than one particle per cell of the histogram used to compute $f^P$. ![ \[pic\_moments\] Relative error on the second order moment as a function of the grid resolution, $N_g$, in the POD, WBDE, and histogram methods for the bump on tail instability particle data at $t=149$, with $N_p=10^6$ particles. ](moments){width="0.8\columnwidth"}

### Two-streams instability {#two_streams}

As a second example we consider the standard two-streams instability with an initial condition consisting of two counter-propagating cold electron beams initially located at $v=-1$ and $v=1$. This case is conceptually different from the previous one because the initial condition depends trivially on the velocity. Therefore, there is no statistical error in the sampling of the distribution and the noise builds up only due to the self-consistent interactions between particles. In other words, there is initially a strong correlation between particles’ coordinates, which will eventually almost vanish. This situation offers a way to test the robustness of the WBDE method with respect to the underlying decorrelation hypothesis.

          $N_p=10^4$   $N_p=10^5$
  ------- ------------ ------------
  $f^H$   $0.443$      $0.140$
  $f^P$   $0.163$      $0.090$
  $f^W$   $0.173$      $0.086$
  ------- ------------ ------------

The analysis is focused on four stages of the instability corresponding to $t=40$, $60$, $100$, and $400$. Fig. \[two\_streams\_snapshot\] shows a comparison of the raw histogram, the POD and the WBDE reconstructed particle distribution functions at these four instants. Grid sizes were $N_g=1024$ for the WBDE estimate, and $N_g=128$ for the two others. For $t=40$, no noise seems to have affected the particle distribution yet; therefore, a perfect denoising procedure should conserve the full information about the particle positions. Although WBDE introduces some artifacts in regions of phase space that should contain no particles at all, it remarkably preserves the global structure of the two streams. This is possible thanks to the numerous wavelet coefficients close to the sharp features in $f$ that are above the thresholds, in contrast to the bump-on-tail case. In the next snapshot, at $t=60$, the filaments have overlapped and the system is beginning to lose its memory due to numerical round-off errors. The fastest filaments still visible on the histogram are not preserved by WBDE, but the most active regions are well reproduced. At $t=100$, the closeness between the histogram and the WBDE estimate is striking. To put it somewhat subjectively, one may say that WBDE did not consider most of the rough features present at this stage as “noise”, since they are not removed. Only with the last snapshot at $t=400$ does the WBDE estimate begin to be smoother than the histogram, suggesting that the nonlinear interaction between particles has introduced randomization in the system. [ ![image](snapshots2.png){width="1.75\columnwidth"} ]{} The POD method is able to track very well the small and large scale structures of the particle density using a significantly smaller number of modes. In particular, for $t=40$, $60$, $100$, and $400$, only $r=28$, $r=27$, $r=18$, and $r=5$ modes were kept. The decrease of the number of modes with time is a result of the loss of fine-scale features in the distribution function.
Despite this, a limitation of the POD method is the lack of a thresholding algorithm to determine the optimal number of modes a priori.

Summary and Conclusion
======================

Wavelet-based density estimation was investigated as a post-processing tool to reduce the noise in the reconstruction of particle distribution functions starting from discrete particle data. This is a problem of direct relevance to particle-based transport calculations in plasma physics and related fields. In particular, particle methods present many advantages over continuum methods, but have the potential drawback of introducing noise due to statistical sampling. In the context of particle-in-cell methods this problem is typically approached using finite-size particles. However, this approach, which is closely related to the kernel density estimation method in statistics, requires the choice of a smoothing scale, $h$ (e.g., the standard deviation for Gaussian shape functions), whose optimal value is not known a priori. A small $h$ is desirable to fit as many Debye lengths as possible, whereas a large $h$ would lead to smoother distributions. This situation results from the compromise between bias and variance in statistical estimation. To address this problem we proposed a wavelet-based density estimation (WBDE) method that does not require an a priori selection of a global smoothing scale and that is able to adapt locally to the smoothness of the density based on the given discrete data. The WBDE method was introduced in statistics [@Donoho1996]. In this paper we extended the method to higher dimensions and applied it for the first time to particle-based calculations. The resulting method exploits the multiresolution properties of wavelets, has very weak dependence on adjustable parameters, and relies mostly on the raw data to separate the relevant information from the noise. As a first example, we analyzed a plasma collisional relaxation problem modeled by stochastic differential equations. Thanks to the sparsity of the wavelet expansion of the distribution function, we have been able to extract the information out of the statistical fluctuations by nonlinear thresholding of the wavelet coefficients. At late times, when the particle distribution approaches a Maxwellian state, we have been able to quantify the difference between the denoised particle distribution function and its analytical counterpart, thus demonstrating the improvement with respect to the raw histogram. The POD-smoothed and wavelet-smoothed particle distribution functions were shown to be roughly equivalent in this respect. These results were then extended to a more complex situation simulated with a $\delta f$ code. Finally, we have turned to the Vlasov-Poisson problem, which includes interactions between particles via the self-consistent electric field. The POD and WBDE methods were shown to yield quantitatively close results in terms of mean squared error for a particle distribution function resulting from nonlinear saturation after the occurrence of a bump-on-tail instability. We have then studied the denoising algorithm during the nonlinear evolution following the two-streams instability, starting from two counter-streaming cold electron beams. This initial condition violates the decorrelation hypothesis underlying the WBDE algorithm, and thus offers a good way to test its robustness regarding this aspect. The WBDE method was shown to yield qualitatively good results without changing the threshold values.
One limitation of the present work comes from the way denoising quality is measured. We have considered the quadratic error on the distribution function $f$ as a first indicator of the quality of our denoising methods. However, it may be more relevant to compute the error on the force fields, which determine the evolution of the simulated plasma. These forces depend on $f$ through integrals, and statistical analysis of the estimation of $f$ using weak norms, as was done in Ref. [@Victory1991] for the deterministic case, could therefore be of great help in obtaining more effective threshold parameters than those considered in this study. The computational cost of our method scales linearly with the number of particles and with the grid resolution. Therefore, WBDE is an excellent candidate to be performed at each time step during the course of a simulation. Once the wavelet expansion of the denoised particle distribution function is known, it is possible to continue using the wavelet representation to solve the Poisson equation [@Jaffard1992] and to compute the forces. The moment conservation properties that we have demonstrated in this paper should mitigate the unavoidable dissipative effects implied by the smoothing stage. In Ref. [@McMillan2008], a dissipative term was introduced in a global PIC code to avoid unlimited growth of particle weights in $\delta f$ codes, and this was shown to improve the long-time convergence of the simulations. It would be of interest to assess whether the nonlinear dissipation operator corresponding to WBDE has the same effect.

Acknowledgements {#acknowledgements .unnumbered}
----------------

We thank D. Spong for providing the DELTA5D Monte-Carlo guiding center simulation data in Fig. 6, originally published in Ref. [@delCastillo2008]. We also thank Xavier Garbet for his comments on the paper and for pointing out several key references. MF and KS acknowledge financial support by ANR under contract M2TFP, Méthodes multiéchelles pour la turbulence dans les fluides et les plasmas. DCN and GCH acknowledge support from the Oak Ridge National Laboratory, managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725. DCN also gratefully acknowledges the support and hospitality of the École Centrale de Marseille for the three one-month visiting positions held during the elaboration of this work. This work, supported by the European Communities under the contract of Association between EURATOM, CEA and the French Research Federation for fusion studies, was carried out within the framework of the European Fusion Development Agreement. The views and opinions expressed herein do not necessarily reflect those of the European Commission. [99]{} C. K. Birdsall and A. B. Langdon, [*Plasma Physics via Computer Simulation*]{} (McGraw-Hill, New York, 1985). R. W. Hockney and J. W. Eastwood, [*Computer Simulation Using Particles*]{} (IOP, Bristol, Philadelphia, 1988). W. M. Nevins, G. W. Hammett, A. M. Dimits, W. Dorland, and D. E. Shumaker, Phys. Plasmas, [**12**]{}, 122305 (2005). J. A. Krommes, Phys. Plasmas, [**14**]{}, 090501 (2007). B. F. McMillan, S. Jolliet, T. M. Tran, L. Villard, A. Bottino, and P. Angelino, Phys. Plasmas, [**15**]{}, 052308 (2008). J. A. Krommes, Phys. Fluids B, [**5**]{}, 1066-1100 (1993). A. Y. Aydemir, Phys. Plasmas, [**1**]{}, 822-831 (1994). R. W. Hockney, Phys. Fluids, [**9**]{}, 1826-1835 (1966). A. B. Langdon and C. K. Birdsall, Phys. Fluids, [**13**]{}, 2115 (1970). S. Jolliet, A. Bottino, P. Angelino, R. Hatzky, T. M.
Tran, B. F. McMillan, O. Sauter, K. Appert, Y. Idomura, and L. Villard, Computer Physics Communications, [**177**]{}, 409 (2007). Y. Chen and S. E. Parker, Phys. Plasmas, [**14**]{}, 082301 (2007). E. Cormier-Michel, B. A. Shadwick, C. G. R. Geddes, E. Esarey, C. B. Schroeder, and W. P. Leemans, Phys. Rev. E, [**78**]{}, 016404 (2008). J. L. V. Lewandowski, Phys. Plasmas, [**12**]{}, 052322 (2005). D. del-Castillo-Negrete, D. A. Spong, and S. P. Hirshman, Phys. Plasmas, [**15**]{}, 092308 (2008). D. L. Donoho, I. M. Johnstone, G. Kerkyacharian, and D. Picard, The Annals of Statistics, [**24**]{}, 508-539 (1996). B. W. Silverman, [*Density Estimation for Statistics and Data Analysis*]{} (Chapman and Hall, 1986). E. Parzen, Annals of Mathematical Statistics, [**33**]{}, 1065-1076 (1962). S.-T. Chiu, The Annals of Statistics, [**19**]{}, 1883-1905 (1991). M. Farge, Ann. Rev. Fluid Mech., [**24**]{}, 395-457 (1992). S. Jaffard, Publicacions Matemàtiques, [**35**]{}, 155-168 (1991). S. Mallat, [*A Wavelet Tour of Signal Processing*]{} (Academic Press, 1999). A. Cohen, I. Daubechies, and P. Vial, Applied and Computational Harmonic Analysis, [**1**]{}, 54-81 (1993). D. Donoho and I. Johnstone, Biometrika, [**81**]{}, 425-455 (1994). M. Farge, K. Schneider, and P. Devynck, Phys. Plasmas, [**13**]{}, 042304 (2006). R. von Sachs and K. Schneider, Appl. Comput. Harmon. Anal., [**3**]{}, 268-282 (1996). M. Vannucci and B. Vidakovic, Journal of the Italian Statistical Society, [**6**]{}, 15-19 (1998). B. Vidakovic, [*Statistical Modeling by Wavelets*]{} (Wiley, 1999). S. Gassama, E. Sonnendrücker, K. Schneider, M. Farge, and M. O. Domingues, ESAIM: Proceedings, [**16**]{}, 196-210 (2007). A. Azzalini, M. Farge, and K. Schneider, Appl. Comput. Harmon. Anal., [**18**]{}, 177-185 (2004). I. Daubechies, [*Ten Lectures on Wavelets*]{} (SIAM, 1992). I. Daubechies, SIAM Journal on Mathematical Analysis, [**24**]{}, 499-519 (1993). J. Denavit, Journal of Computational Physics, [**9**]{}, 75-98 (1972). A. Ihler, [*KDE toolbox for Matlab*]{}, http://www.ics.uci.edu/\~ihler/code/kde.html. N. Kingsbury, Applied and Computational Harmonic Analysis, [**10**]{}, 234-253 (2001). D. del-Castillo-Negrete, Phys. Plasmas, [**5**]{}, 3886 (1998). G. H. Golub and C. F. Van Loan, [*Matrix Computations*]{}, 3rd ed. (The Johns Hopkins University Press, London, 1996). S. E. Parker and W. W. Lee, Phys. Fluids B, [**5**]{}, 77 (1993). H. D. Victory and E. J. Allen, SIAM Journal on Numerical Analysis, [**28**]{}, 1207-1241 (1991). S. Jaffard, SIAM Journal on Numerical Analysis, [**29**]{}, 965-986 (1992).
--- abstract: 'We demonstrate that a two-dimensional (2D) atomic array can be used as a novel platform for quantum optomechanics. Such arrays feature both nearly-perfect reflectivity and ultra-light mass, leading to significantly-enhanced optomechanical phenomena. Considering the collective atom-array motion under continuous laser illumination, we study the nonlinear optical response of the array. We find that the spectrum of light scattered by the array develops multiple sidebands, corresponding to collective mechanical resonances, and exhibits nearly perfect quantum-noise squeezing. Possible extensions and applications for quantum nonlinear optomechanics are discussed.' author: - Ephraim Shahmoon - 'Mikhail D. Lukin' - 'Susanne F. Yelin' title: '**Quantum optomechanics of a two-dimensional atomic array**' --- Introduction ============ The study of radiation pressure plays an important role in science and emerging technologies, from the manipulation of ions in quantum information processing [@CZ; @MS], to cooling and monitoring the motion of solid mirrors. [@AKM]. These examples demonstrate the two extreme limits of light-induced motion, which are typically studied; namely, that of single atoms, and that of bulk objects. Situated in between these two extremes, this work deals with the optomechanics of a nearly-perfect mirror made of a single dilute layer of optically-trapped atoms. It is well known that light can dramatically influence the motion of individual atoms, as demonstrated by laser-cooling of atoms [@CCT]. However, due to the small absorption cross-section of individual atoms, efficient optomechanical coupling typically requires interfacing light with highly reflective objects, such as optical cavities [@SK1; @SK; @ESS; @CAM; @RES]. Most optomechanical systems involve the motion of bulk solid objects, such as a movable mirror or membrane inside a cavity, that are coupled to light via radiation pressure [@AKM; @MEY; @DOR; @HAR]. While light can be strongly scattered in this way, its effect on the motion of such macroscopic objects is very limited, due to the extremely small zero-point motion of the latter. Although ground-state cooling of the mechanical state [@MAR; @WIL; @SCH; @CHAN; @TEU] and the generation of squeezed light [@HAM; @SK3; @REG; @SAF] were recently achieved, reaching the single-photon optomechanical regime [@RAB; @GIR] remains an outstanding challenge. ![[]{data-label="fig1"}](fig1.pdf){width="\columnwidth"} In this work, we explore the optomechanics of a single 2D ordered array of optically-trapped atoms, as can be realized e.g. in optical lattices, in a cavity-free environment. It was recently shown, that such a 2D atom array can act as a nearly-perfect mirror, for light whose frequency matches the cooperative dipolar resonance supported by the array [@coop; @ADM]. The mirror formed by such an array is easily pushed by the reflected light. Its zero-point motion is set by the depth of the atomic traps, which even for tight trapping (Lamb-Dicke regime), becomes $10^{-8}$m to $10^{-7}$m, much larger than the $10^{-15}$m to $10^{-13}$m zero-point motion of suspended bulk mirrors or membranes [@AKM; @HAR; @BAC]. Therefore, by combining nearly-perfect reflectivity with a high mechanical susceptibility, 2D atomic arrays could lead to very large optomechanical couplings. 
We use a quantum-mechanical treatment to study the motion of atoms close to their equilibrium trap positions, under a continuous-wave laser illumination, which is weak enough to neglect internal-state saturation (Fig. 1). Cooperative effects due to dipole-dipole interactions play a central role in this system. First, they lead to a collective dipolar resonance of the *internal* state of the atoms; and second, laser-induced dipolar forces between atoms lead to the formation of collective *mechanical* modes. We show that the light-induced motion of this cavity-free many-atom system can be characterized by its mapping to a standard cavity optomechanics model in its bad-cavity, unresolved sideband regime. We then consider the back-action of this motion on the light, due to the optomechanical response of the array. In particular, we find that the collective mechanical modes imprint multiple sidebands on the spectrum of the light scattered by the array, and that this output light contains quantum correlations both in space and time, exhibiting large spatio-temporal squeezing. These results provide a promising starting point and benchmark for further studies of optomechanics using ordered arrays of trapped atoms. They reveal that significant optomechanical couplings are achievable already at the level of a “bare", cavity-free system of a single 2D array of dozens of atoms. More elaborate schemes may therefore enable reaching novel regimes of nonlinear and few-photon quantum optomechanics, as discussed below. The article is organized as follows. Our theory of optomechanics of 2D atom arrays is presented in Secs. II and III. This includes the description of the system and its collective motion induced by light (Sec. II), and the characterization of the atom array system, via its mapping to the standard cavity optomechanical model (Sec. III). The theory is then applied to predict nonlinear optical phenomena, resulting from light-induced atomic motion: Sec. IV presents the analysis of the intensity spectrum of the output light, whereas Sec. V studies its quantum noise and correlation properties. Finally, we discuss some conclusions and future prospects in Sec. VI. Light-induced collective motion =============================== We consider a 2D array of trapped atoms $n=1,...,N$ at positions $\hat{\mathbf{r}}_n=(\mathbf{r}_n^{\bot},\hat{z}_n)$, illuminated by a right-propagating continuous-wave laser (Fig. 1). Motion is considered only along the longitudinal axis $z$, with $\hat{z}_n$ around the equilibrium position $z=0$, whereas the transverse positions $\mathbf{r}_n^{\bot}$ are assumed to be fixed (deep transverse trapping), forming a 2D lattice in the $xy$ space, e.g. a square lattice with lattice spacing $a$. Our theory below assumes an infinite array, but in practice it is valid for finite mesoscopic arrays ($\sqrt{N}\gg 1$, e.g. $N\sim10^2$) [@notes]. The atoms are modelled as two-level systems with transition frequency $\omega_a$ and radiative width $\gamma$. Dipolar interactions between the array atoms, however, lead to a cooperative shift $\Delta$ and width $\Gamma$ of the atomic transition, reflecting the fact that the atomic dipoles respond collectively to light [@coop]. Nevertheless, for our purposes, these collective dipole modes effectively behave as individual atoms with a “renormalized" (cooperative) resonance frequency $\omega_a+\Delta$ and width $\gamma+\Gamma$ [@notes]. In the following, we discuss the light-induced collective motion of the array atoms. 
This discussion derives largely from Ref. [@notes], briefly reviewed in Appendix A. ![[]{data-label="fig2"}](fig2a.pdf "fig:") ![[]{data-label="fig2"}](fig2b.jpg "fig:") ![[]{data-label="fig2"}](fig2c.jpg "fig:") ![[]{data-label="fig2"}](fig2d.jpg "fig:") The derivation of the governing equation of atomic motion is based on the following considerations. First, we take advantage of the separation of timescales between the fast internal and slow external atomic degrees of freedom, given by the cooperative decay rate $\gamma+\Gamma=\gamma\frac{3}{4\pi}(\lambda^2/a^2)$ [@coop] and the recoil energy $E_R=\hbar^2q^2/m$, respectively ($q=\omega_L/c=2\pi/\lambda$ being the laser wavenumber and $m$ the atom mass). This allows to adiabatically eliminate the internal degrees of freedom, obtaining a dynamical equation for the external, motional degrees of freedom $\hat{z}_n$. Second, we assume that the atoms remain inside the optical traps of length $<\lambda$, allowing to approximate $|\hat{z}_n|\ll \lambda$. Considering also atoms far from saturation (linearly responding, $\gamma+\Gamma\gg \Omega$, $\Omega$ being the Rabi frequency), we finally obtain (Appendix A): $$\begin{aligned} \dot{\hat{p}}_n&=&-m\nu^2\hat{z}_n+\bar{f}_n-\alpha_n\hat{p}_n+\hat{f}_n(t)+\sum_{m \neq n} K_{nm}(\hat{z}_n-\hat{z}_m), \nonumber\\ \dot{\hat{z}}_n&=&\hat{p}_n/m, \label{EOM}\end{aligned}$$ with $\hat{p}_n$ the momentum of atom $n$. This equation describes a collective Brownian motion, with the explicit expressions for the coefficients $\bar{f}_n,\alpha_n,\hat{f}_n(t),K_{nm}$ given in Appendix A. The first term in Eq. (\[EOM\]) is the restoring force due to the individual trap of an atom (longitudinal trap frequency $\nu$), whereas the next three terms account for light-induced forces including the average force $\bar{f}_n$, and the scattering-induced friction $\alpha_n$ and corresponding Langevin force $\hat{f}_n(t)$. The expressions for $\bar{f}_n$, $\alpha_n$ and $\hat{f}_n(t)$ resemble those from known single-atom theories of light-induced motion [@CCT], except that here the atom-laser detuning $\delta_L$ and width $\gamma$ are modified by their cooperative counterparts $\delta_L-\Delta$ and $\gamma+\Gamma$, respectively. The term with coefficient $K_{nm}$ gives rise to a mechanical coupling between the atoms originating in the laser-induced dipole-dipole forces between pairs of atoms [@LIDDI]. It reflects that the motion of individual atoms is not independent, resulting in collective mechanical modes. Since $K_{nm}\propto \Omega_n^{\ast}\Omega_m$, with $\Omega_n$ the Rabi frequency on atom $n$, the collective mechanical modes crucially depend on the spatial profile of the incident light. To find the modes, we diagonalize Eq. (\[EOM\]) in the absence of forces $\bar{f}_n,\hat{f}_n$ and friction $\alpha_n$, which amounts to the system of coupled oscillators from Fig. 2a. The collective mechanical normal modes of a square array with $N=14^2$ atoms, illuminated by a normal-incident Gaussian beam with waist smaller than the array size, are shown in Fig. 2b (eigenfrequencies) and Figs. 2c,d (spatial profiles). For times $t$ longer than $1/\alpha_n$, the atomic motion in frequency domain becomes (Appendix A), $$\begin{aligned} \hat{z}_n(\omega)&=&\sum_j U_{jn} \hat{z}_j(\omega), \nonumber\\ \hat{z}_j(\omega)&=&\bar{z}_j 2\pi\delta(\omega)+\frac{1}{m\nu_j^2}\chi_j(\omega)\hat{f}_j(\omega), \nonumber\\ \chi_j(\omega)&=&-\frac{\nu_j^2}{\omega^2-\nu_j^2+i\alpha_j\omega}, \quad \bar{z}_j=\frac{\bar{f}_j}{m \nu_j^2}. 
\label{zo}\end{aligned}$$ Here $U_{jn}$ is the matrix element of the unitary transformation from the real-space lattice basis $n$ to the collective normal mode basis $j$ with eigenfrequencies $\nu_j$, $X_j=\sum_n U^{\ast}_{jn} X_n$ for $X=\bar{f},\hat{f},\hat{z}$, and $\alpha_j=\sum_n |U_{jn}|^2\alpha_n$. The solution $\hat{z}_j(\omega)$ for each normal mechanical mode $j$ consists of an average static shift $\bar{z}_j$ due to the static force $\bar{f}_n$ and a fluctuating part due to the linear mechanical response $\chi_j(\omega)$ to the corresponding Langevin force $\hat{f}_j(\omega)$. Throughout this work, we assume that the atoms remain trapped, requiring that the potential depth $V$ of the traps is larger than the effective temperature $T_e$ associated with the Langevin force (Appendix A), $$T_e=\frac{\hbar\gamma}{2}\frac{(\delta_L-\Delta)^2+\left(\frac{\gamma+\Gamma}{2}\right)^2}{(\Delta-\delta_L)(\gamma+\Gamma)}. \label{Te}$$ We note that for the atoms to remain trapped, $T_e$ has to be positive (and lower than the trapping potential), leading to the requirement of red cooperative detuning, $\delta_L<\Delta$.

Mapping to cavity optomechanics
===============================

Typical optomechanical systems can be modeled by a single optical cavity (boson mode $\hat{c}$) whose resonant frequency linearly depends on the position of a moving mirror (coordinate $\hat{z}\propto\hat{b}+\hat{b}^{\dag}$), as depicted in Fig. 3a, and with the Hamiltonian $$H=\hbar\omega_c\hat{c}^{\dag}\hat{c}+\hbar\nu \hat{b}^{\dag}\hat{b}+\hbar g\hat{c}^{\dag}\hat{c}(\hat{b}+\hat{b}^{\dag})+(\hat{c}^{\dag}\Omega e^{-i\omega_L t}+\mathrm{h.c.}), \label{Hc}$$ $g$ being the bare optomechanical coupling and $\Omega$ the input field. It is therefore instructive to relate this simple standard model to the optomechanics of the atom array: Although the latter system does not include an optical cavity, it does include a resonator in the form of the internal degrees of freedom of the atoms. To this end, we consider the linearized regime of the cavity optomechanics model, wherein the quantum fluctuations in the cavity and the motion are assumed to be much smaller than their corresponding classical steady-state values, and the Hamiltonian in a laser-rotated frame becomes [@AKM; @MEY] $$H\approx -\hbar\delta_c\hat{c}^{\dag}\hat{c}+\hbar\nu \hat{b}^{\dag}\hat{b}+\hbar (\bar{g}^{\ast}\hat{c}+\bar{g}\hat{c}^{\dag})(\hat{b}+\hat{b}^{\dag}), \label{Hcl}$$ where $\delta_c$ is a shifted laser-cavity detuning [@AKM], $\bar{g}=g\bar{c}$ with $\bar{c}$ the classical steady-state value of the cavity field (c-number), and $\hat{c}$ and $\hat{b}$ the quantum fluctuations of the field and motion, respectively. ![[]{data-label="fig3"}](fig3.pdf){width="\columnwidth"} In contrast, consider now a light-matter interaction Hamiltonian, such as that used to derive Eq. (\[EOM\]), $H_{\mathrm{int}}\sim \hat{\sigma}_n^{\dag} \hbar\Omega_n e^{iq \hat{z}_n}+\mathrm{h.c.}$, with $\hat{\sigma}_n$ the atomic-transition lowering operator and $\Omega_n$ the Rabi frequency at atom $n$. For $|\hat{z}_n|\ll \lambda$, its relevant optomechanical coupling becomes $$H_{\mathrm{int}}\sim \hbar\hat{\sigma}_n^{\dag} \Omega_n q\hat{z}_n+\mathrm{h.c.}=\hbar(\eta\Omega_n^{\ast}\hat{\sigma}_n+\mathrm{h.c.})(\hat{b}_n+\hat{b}_n^{\dag}), \label{Hint}$$ where $\eta=q x_0$ is the Lamb-Dicke parameter, with $x_0=\sqrt{\hbar/(2m\nu)}$ the zero-point motion of an atom inside the trap.
The form of $H_{\mathrm{int}}$ is identical to that of the interaction term in (\[Hcl\]), with the internal, dipolar resonances of the linearly-responding atoms in the former replacing the optical cavity resonator in the latter. Focusing, for now, on a single motional degree of freedom of the array (e.g. a single atom), this suggests the following mapping between the 2D atom array and the cavity optomechanics models (Fig. 3b):

  --------------------------- -------------------- ---------------------
                              **cavity model**     **atom array**
  resonator mode              $\hat{c}$            $\hat{\sigma}$
  optomechanical coupling     $\bar{g}$            $\eta \Omega$
  laser detuning              $\delta_c$           $\delta_L-\Delta$
  resonator damping rate      $\kappa$             $\gamma+\Gamma$
  --------------------------- -------------------- ---------------------

  : Mapping to cavity optomechanics[]{data-label="table"}

Here, we recall the renormalized (cooperative) resonance of the array atoms with frequency $\omega_a+\Delta$ and width $\gamma+\Gamma$ ($\delta_L=\omega_L-\omega_a$ being the “bare" laser-atom detuning). More formally, the above mapping can be justified by deriving the equations of motion for the coordinate $\hat{z}$ and resonator $\hat{c}$ of the standard cavity model, and comparing them with the analogous equations for $\hat{z}_n$ and $\hat{\sigma}_n$ of the atom array. In Appendix B, we show that these two sets of equations are indeed equivalent, by considering the mapping from Table I. Moreover, for the specific case of a bad cavity in the weak-coupling and unresolved sideband regimes, $\kappa\gg \bar{g},\nu$, the resonator mode $\hat{c}$ can be adiabatically eliminated, and the resulting equation of motion for $\hat{z}$ is essentially identical to Eq. (\[EOM\]), for a single atom. The multimode, many-atom collective mechanics, i.e. including the term $K_{nm}$ in Eq. (\[EOM\]), can also be captured by the cavity model: it requires including multiple mechanical modes in the Hamiltonian (\[Hc\]) via an interaction term $\hbar\sum_n g_n\hat{c}^{\dag}\hat{c}(\hat{b}_n+\hat{b}_n^{\dag})$, resulting in an effective coupling parameter $K'_{nm}\propto \bar{g}_n \bar{g}_m$ (Appendix B). In contrast to the cavity model, however, the multimode character of the 2D atom array also extends to the output field, resulting in qualitatively new features, as explored in the following.

Mechanical sidebands in output light
====================================

We now turn to study the optomechanical backaction on the light, in the form of nonlinear optical phenomena. For non-saturated atoms, for which the polarizability is linear, optical nonlinearity originates only in the motion, via the following mechanism: The light pushes the atoms, whose positions are then determined by the intensity of light. In turn, the phase of the light that is scattered off the atoms depends on their positions. This leads to an intensity-dependent phase, as in an optical Kerr medium. More formally, the reflected field from an atomic array is given by the scattered fields from all atoms, each of which is proportional to a phase factor $e^{i 2q z_n}$. For an incident field $E$, radiation pressure leads to $z_n\propto |E|^2$ and hence to intensity-dependent phase factors. In this section we show that the multimode nature of the atom array optomechanics discussed above manifests itself in the form of sidebands in the spectrum of the output light.
The sidebands are located at the resonant frequencies $\nu_j$ of the collective mechanical modes $j$ at which the motion $\hat{z}_n$, and hence the phase factors $e^{i 2q z_n}$, are modulated; and the corresponding weights of these sidebands depend on the spatial profiles of these modes. Output light and nonlinearity ----------------------------- The field scattered off an array of atoms has the form $\sum_n e^{-ik_z \hat{z}_n} \hat{\sigma}_n$. Using the adiabatic solution for the linearly-responding atomic dipoles, $\hat{\sigma}_n(t)$, we obtain the output field in the paraxial approximation (Appendix C) $$\begin{aligned} \widetilde{a}_{\mathbf{k}_{\bot}ks}&=&\beta_{\mathbf{k}_{\bot}}\delta_{s,+}\delta_{kq}+\hat{a}_{\mathbf{k}_{\bot}ks} -g^{\ast}_0 \int_{-\infty}^{\infty}dt e^{i(ck-\omega_L)t} \nonumber\\ &&\times\sum_n e^{-i\mathbf{k}_{\bot}\cdot\mathbf{r}_{n}^{\bot}}\sum_{s'=\pm}e^{-i(s-s')q\hat{z}_n(t)}\frac{\hat{\Omega}_{ns'}(t)}{\delta_L-\Delta+i\frac{\gamma+\Gamma}{2}}. \nonumber\\ \label{aout}\end{aligned}$$ Here, the “output field", $\widetilde{a}_{\mathbf{k}_{\bot}ks}\equiv\hat{a}_{\mathbf{k}_{\bot}ks}(t=\tau) e^{i kc \tau}$, is the slow envelope of the lowering operator of the right/left-propagating ($s \rightarrow \pm$) photon mode with wavevector $\mathbf{k}=(\mathbf{k}_{\bot},k_z=sk)$ ($k\gg|\mathbf{k}_{\bot}|)$, evaluated at the final time $\tau\rightarrow \infty$, much after the atom-laser interaction ends. The “input fields", $\hat{a}_{\mathbf{k}_{\bot}ks}\equiv \hat{a}_{\mathbf{k}_{\bot}ks}(t=-\tau) e^{-i kc \tau}$, are in turn evaluated at the initial time $-\tau\rightarrow -\infty$ before the atom-laser interaction begins, and are hence equal to vacuum fields satisfying, $\hat{a}_{\mathbf{k}_{\bot}ks}|0\rangle=0$. The coherent laser input is represented by the average amplitude (c-number) $\beta_{\mathbf{k}_{\bot}}=(1/N)\sum_n e^{-i\mathbf{k}_{\bot}\cdot\mathbf{r}_{n}^{\bot}}\beta_{n}$, which is related to the Rabi frequency via $\beta_{n}=-i\Omega_{n}/g_0$, with $g_0=\sqrt{\omega_L/(2\varepsilon_0\hbar A L)}d$ the atom-field coupling in the paraxial approximation ($d$ is the dipole matrix element, and $A$ and $L$ the quantization area and length). The Kronecker deltas $\delta_{s,+}$ and $\delta_{kq}$ represent a right-moving laser with frequency $\omega_L=qc$, and $\hat{\Omega}_{ns'}$ is the total Rabi frequency operator (including the input vacuum fluctuations) of the right/left-propagating ($s'\rightarrow \pm$) incident field. In the absence of motion, $\hat{z}_n\rightarrow 0$, the output field is that due to the mirror-like linear response of the ordered atom array [@coop; @ADM] (Appendix C). Frequency components other than that of the laser, appear in the output field due to the motion-induced phase factors $e^{-i(s-s')q\hat{z}_n(t)}$, and originate in a nonlinear optomechanical effect, $\hat{z}_n(t)\sim \hat{f}_n(t)\sim \hat{\Omega}_n^{\dag}(t)\hat{\Omega}_n(t)$. They are most dominant in the *reflected* field, since the phase factors exist only for $s\neq s'$ (oppositely-propagating input and output). We note that this analysis is valid only for the paraxial part of the output field, and can be therefore understood by considering the energy-momentum conservation of a photon colliding with an atom in 1D, where a forward-scattered photon cannot change its energy [@comment1]. Intensity spectrum of reflected light ------------------------------------- Consider now the detection of the left-propagating output field $s\rightarrow-$. 
Its dominant, average component is the linear reflection of the normally incident laser ($\mathbf{k}_{\bot}\approx 0$). In addition, there exist nonlinearly-scattered field fluctuations at various transverse wavevectors $|\mathbf{k}_{\bot}|>0$, which can be detected at the corresponding far-field angles (Fig. 4b). These components originate in fluctuations in $\hat{z}_n$, which result in an effectively disordered array and therefore in scattering angles beyond that of a flat mirror. The relevant spectrum of this detected total field is defined by $$\begin{aligned} I_{\mathbf{k}_{\bot}}(\omega)&=&\langle \widetilde{a}_{\mathbf{k}_{\bot}k-}^{\dag} \widetilde{a}_{\mathbf{k}_{\bot}k-}\rangle\frac{1}{|\beta_{\mathbf{k}_{\bot}=0}|^2}\frac{L}{c}, \nonumber\\ \label{Idef}\end{aligned}$$ where $\omega=ck-\omega_L$ is the frequency of the field envelope around $\omega_L$, and the averaging is performed with respect to the field vacuum $|0\rangle$. The normalization is with respect to the dominant $\mathbf{k}_{\bot}=0$ component of the normally incident field $\beta_{\mathbf{k}_{\bot}}$, and with $L/c\rightarrow 2\tau$ (experiment time). We note that this definition coincides with the standard definition, $\propto\langle \hat{E}^{\dag}(\omega)\hat{E}(\omega)\rangle$, for the field component $\mathbf{k}_{\bot}$ (Appendix C). Inserting the output field, Eq. (\[aout\]) with $s\rightarrow-$, into Eq. (\[Idef\]), and expanding to lowest order in $q\hat{z}_n$ (atoms contained in traps), we find that the nonlinear part of the spectrum originates from the correlator $\langle\hat{z}_n(-\omega)\hat{z}_m(\omega)\rangle$, which is evaluated using the solution (\[zo\]). Finally, we obtain (Appendix C) $$\begin{aligned} I_{\mathbf{k}_{\bot}}(\omega)&=&|r|^2 \frac{|\beta_{\mathbf{k}_{\bot}}|^2}{|\beta_{\mathbf{k}_{\bot}=0}|^2} 2\pi \delta(\omega) \nonumber\\ &+&|r|^2 32\eta^4\frac{T_e}{E_R}\sum_{jj'}M_{jj'}\frac{\alpha_0}{\nu^2}\chi^{\ast}_j(\omega)\chi_{j'}(\omega). \label{I}\end{aligned}$$ The first term is the linear reflection with reflection coefficient $$r=-\frac{i(\gamma+\Gamma)/2}{i(\gamma+\Gamma)/2+\delta_L-\Delta}. \label{r}$$ At cooperative resonance, $\delta_L=\Delta$, the reflection is perfect [@coop]. However, realistically, for the atoms to thermalize inside the traps, we require a small red detuning $0<\Delta-\delta_L\ll \gamma+\Gamma$, which may slightly reduce the reflection \[Eq. (\[Te\]), for finite $T_e>0$\]. The second term describes a nonlinearly scattered component ($\omega\neq 0$), originating in motion fluctuations inside the traps, which are in turn caused by the light-induced Langevin force $\hat{f}_n(t)$, with an effective temperature $T_e$. Indeed, the frequency dependence of this component derives from the overlap of the collective mechanical responses $\chi_j(\omega)$; its intensity is proportional to $T_e$ and to $\eta^4\propto1/V$, with $\alpha_0=\alpha_{n=0}$ being the friction at the center of the array (atom $n=0$). Since the $\chi_j(\omega)$ are centered around the collective mechanical resonances $\nu_j$, this gives rise to sidebands, whose weights are determined by the spatial structure of the modes, contained in the overlap factors $M_{jj'}$ (see Appendix C, Eq. \[Mjj\]).

![[]{data-label="fig4"}](fig4a.jpg "fig:"){width="\columnwidth"} ![[]{data-label="fig4"}](fig4b.pdf "fig:") ![[]{data-label="fig4"}](fig4c.jpg "fig:")

Figure 4a, plotted for detection at $\mathbf{k}_{\bot}=0$ with the atom array realization of Fig. 2, clearly exhibits these spectral features.
It contains a narrow peak at $\omega=0$ due to the linear reflection, and two broad sidebands centered around the trap frequency $\pm\nu$. Each sideband, however, exhibits multiple peaks, with the frequencies $\nu_j$ from Fig. 2b, as clearly seen in the zoom-in, Fig. 4c (blue curve). We recall that higher $\nu_j$ entail higher spatial frequencies in the structure of the collective mode function (Fig. 2c,d). Indeed, Fig. 4c (green curve) shows that, by increasing the detection angle, $\mathbf{k}_{\bot}>0$, the sideband components $\nu_j$ beyond $\pm\nu$ become much more prominent.

Quantum squeezing of output light
=================================

Optical nonlinear phenomena, such as the one revealed in the previous section, may in general lead to quantum squeezing in the scattered light; namely, to the reduction of its quadrature quantum noise below that of the vacuum, which is associated with the generation of entangled photon pairs [@MW; @DF]. The generation of squeezing in the atom-array system can be understood by considering the light-induced motion, $z_n\propto |E|^2$, driven by a field $E$ containing a coherent part $\bar{E}$ and a small vacuum-fluctuation component $\hat{\mathcal{E}}$. Then, $z_n\sim (\bar{E}^{\ast}+\hat{\mathcal{E}}^{\dag})(\bar{E}+\hat{\mathcal{E}})\approx |\bar{E}|^2 +\bar{E}^{\ast}\hat{\mathcal{E}}+ \bar{E}\hat{\mathcal{E}}^{\dag}$, so that the phase factor of the output field, $e^{i 2q z_n}$, has the Bogoliubov-transformation form of a squeezed field. Since the system is inherently multimode along the transverse array plane, entanglement is generated not only between different longitudinal ($\omega=ck-\omega_L$), but also between different transverse ($\mathbf{k}_{\bot}$) photon modes, giving rise to spatio-temporal quantum squeezing [@KOL; @LUG]. In this section, we analyze the quantum noise of the output field, and reveal the possibility of nearly perfect quantum squeezing. To this end, we consider the quantum fluctuations of the reflected field ($s\rightarrow-$) from Eq. (\[aout\]), assuming uniform illumination, $\Omega_n=\Omega$, for which the collective mechanical modes are lattice Fourier modes, $j\rightarrow\mathbf{k}_{\bot}$, with eigenfrequencies $\nu_{\mathbf{k}_{\bot}}$ and corresponding friction $\alpha_{\mathbf{k}_{\bot}}=\alpha$ (Appendix A). Working near cooperative resonance, where the reflection $r\approx -1$, and expanding to lowest order in $q\hat{z}_n$ and in the vacuum field (Bogoliubov approximation), we obtain (Appendix D) $$\widetilde{a}_{\mathbf{k}_{\bot}}(\omega)=u_{\mathbf{k}_{\bot}}(\omega)\hat{a}_{\mathbf{k}_{\bot}}(\omega)+v_{\mathbf{k}_{\bot}}(\omega)\hat{a}^{\dag}_{-\mathbf{k}_{\bot}}(-\omega), \label{aout2}$$ with the coefficients $$\begin{aligned} u_{\mathbf{k}_{\bot}}(\omega)&=&-1+v_{\mathbf{k}_{\bot}}(\omega), \nonumber\\ v_{\mathbf{k}_{\bot}}(\omega)&=&i8\eta^4\frac{4|\Omega|^2}{(\gamma+\Gamma)^2}\frac{\hbar(\gamma+\Gamma)}{E_R}\frac{\nu^2}{\nu_{\mathbf{k}_{\bot}}^2}\chi_{\mathbf{k}_{\bot}}(\omega). \label{bog}\end{aligned}$$ Here the output field fluctuation $\widetilde{a}_{\mathbf{k}_{\bot}}(\omega)\equiv\widetilde{a}_{\mathbf{k}_{\bot}k-}-r\beta\delta_{\mathbf{k}_{\bot}0}\delta_{kq}$ is given in terms of the incident vacuum fields of the right-propagating modes, $\hat{a}_{\mathbf{k}_{\bot}}(\omega)=\hat{a}_{\mathbf{k}_{\bot},k,+}$ and $\hat{a}_{\mathbf{k}_{\bot}}(-\omega)=\hat{a}_{\mathbf{k}_{\bot},2q-k,+}$, with $\omega=ck-\omega_L$.
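The structure of Eqs. (\[aout2\]) and (\[bog\]) can be made concrete with a short numerical sketch. The Python snippet below evaluates $v_{\mathbf{k}_{\bot}}(\omega)$ from Eq. (\[bog\]) and the combination $\left(|u_{-\mathbf{k}_{\bot}}(-\omega)|-|v_{\mathbf{k}_{\bot}}(\omega)|\right)^2$, which is identified below as the squeezing spectrum. All parameter values are illustrative placeholders rather than the values used in the figures, and a damped-oscillator form is assumed for the mechanical response $\chi_{\mathbf{k}_{\bot}}(\omega)$, since Eq. (\[zo\]) is not reproduced in this section.

```python
import numpy as np

# Illustrative parameters only (hypothetical values, chosen to respect the
# regime assumed in the text: gamma+Gamma is the fastest scale, alpha << nu).
nu     = 2*np.pi*100e3   # trap frequency
nu_k   = nu              # collective mode frequency at k_perp = 0
alpha  = 1e-4*nu         # optically induced friction (mechanical linewidth)
gamGam = 300*nu          # cooperative linewidth gamma + Gamma
eta    = 0.2             # Lamb-Dicke parameter eta = q*x_0
E_R    = 2*eta**2*nu     # recoil frequency E_R/hbar (assumed relation to eta and nu)
Omega  = 0.05*gamGam     # Rabi frequency (non-saturating)

def chi(w):
    # Assumed damped-oscillator form of the collective mechanical response chi_k(omega).
    return nu_k**2/(nu_k**2 - w**2 - 1j*alpha*w)

def v(w):
    # Bogoliubov coefficient v_k(omega) of Eq. (bog); E_R enters as a frequency (E_R/hbar).
    return 1j*8*eta**4*(4*Omega**2/gamGam**2)*(gamGam/E_R)*(nu**2/nu_k**2)*chi(w)

def S(w):
    # (|u(-omega)| - |v(omega)|)^2 with u = -1 + v: the minimal quadrature noise,
    # identified below with the squeezing spectrum S_k(omega).
    return (abs(-1 + v(-w)) - abs(v(w)))**2

for w in (0.0, 0.9*nu, nu - 100*alpha, nu - 10*alpha):
    print(f"omega/nu = {w/nu:.4f}   |v| = {abs(v(w)):10.3e}   S = {S(w):10.3e}")
```

For these placeholder numbers the printed noise drops far below the vacuum level $S=1$ close to, though slightly detuned from, the collective mechanical resonance, in line with the analysis that follows.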
In general, the output field depends on the vacua of both right- and left-propagating photon modes; however, here we assumed nearly perfect reflection, $r\approx-1$, so that the vacuum fields incident from the right (left-propagating $s\rightarrow-$) are reflected back and do not reach the detector at the left. The general case, beyond nearly-perfect reflection, is discussed in Appendix D, and yields similar results. The output field fluctuations from Eq. (\[aout2\]) have the typical form of a squeezed vacuum field, whose reduced quantum fluctuations can be measured via homodyne detection, wherein the output field at the correlated angles $\pm\mathbf{k}_{\bot}$ interferes with a strong coherent local oscillator field with phase $\theta$ (Fig. 5a). The relevant fluctuating part of the detected signal is given by the quadrature operator, $\hat{X}^{\theta}_{\mathbf{k}_{\bot}}(\omega)=e^{-i\theta}\widetilde{a}_{\mathbf{k}_{\bot}}(\omega)+ e^{i\theta}\widetilde{a}^{\dag}_{-\mathbf{k}_{\bot}}(-\omega)$, with the corresponding spatio-temporal noise spectrum, $S^{\theta}_{\mathbf{k}_{\bot}}(\omega)\equiv \langle \hat{X}^{\theta}_{-\mathbf{k}_{\bot}}(-\omega) \hat{X}^{\theta}_{\mathbf{k}_{\bot}}(\omega)\rangle$ [@DF; @KOL]. For each spatio-temporal frequency, $(\mathbf{k}_{\bot},\omega)$, there exists a local oscillator phase that minimizes the noise $S^{\theta}_{\mathbf{k}_{\bot}}(\omega)$ [@DF; @YP]. The resulting spectrum of minimal noise level, the so-called squeezing spectrum, is given by $$\begin{aligned} S_{\mathbf{k}_{\bot}}(\omega)&=&\left(|u_{-\mathbf{k}_{\bot}}(-\omega)|-|v_{\mathbf{k}_{\bot}}(\omega)|\right)^2 \nonumber\\ &=&\left(\sqrt{|v_{\mathbf{k}_{\bot}}(\omega)|^2+1+2\mathrm{Re}[v_{\mathbf{k}_{\bot}}(\omega)]}-|v_{\mathbf{k}_{\bot}}(\omega)|\right)^2. \nonumber\\ \label{Sk}\end{aligned}$$ In the absence of motion, $\chi_{\mathbf{k}_{\bot}}=0$, we have $v_{\mathbf{k}_{\bot}}=0$ and $u_{\mathbf{k}_{\bot}}=-1$, such that the output is just the reflected vacuum with the standard vacuum noise level $S_{\mathbf{k}_{\bot}}(\omega)=1$. When motion, and hence nonlinearity, is present, we may obtain noise reduction (squeezing), $S_{\mathbf{k}_{\bot}}(\omega)<1$.

![[]{data-label="fig5"}](fig5a.pdf "fig:") ![[]{data-label="fig5"}](fig5b.jpg "fig:") ![[]{data-label="fig5"}](fig5c.jpg "fig:") ![[]{data-label="fig5"}](fig5d.jpg "fig:")

Squeezing bandwidth and strength
--------------------------------

We observe that maximal squeezing (minimal $S$) is obtained for a large coefficient $|v_{\mathbf{k}_{\bot}}|\gg 1$, i.e. near the collective mechanical resonance $\omega=\pm\nu_{\mathbf{k}_{\bot}}$ where $\chi_{\mathbf{k}_{\bot}}(\omega)$ is very large [@comment2], and where the spectrum (\[Sk\]) can be approximated as $S_{\mathbf{k}_{\bot}}(\omega)\approx 1/(4|v_{\mathbf{k}_{\bot}}(\omega)|^2)$, yielding $$S_{\mathbf{k}_{\bot}}(\omega)\approx \frac{1}{W_{\mathbf{k}_{\bot}}^2}\left[\left(\frac{\omega}{\nu_{\mathbf{k}_{\bot}}}\pm1\right)^2+\left(\frac{\alpha}{2\nu_{\mathbf{k}_{\bot}}}\right)^2\right]. \label{peak}$$ The quadratic, power-law frequency dependence means that the bandwidth of the squeezing spectrum near mechanical resonance $\pm \nu_{\mathbf{k}_{\bot}}$ scales as $W_{\mathbf{k}_{\bot}}\nu_{\mathbf{k}_{\bot}}$, with $$W_{\mathbf{k}_{\bot}}=\frac{\nu^2}{\nu_{\mathbf{k}_{\bot}}^2}B, \quad B=\frac{16|\eta\Omega|^2}{(\gamma+\Gamma)\nu}\Rightarrow \frac{16|\bar{g}|^2}{\kappa\nu}.
\label{W}$$ As the parameter $B$ increases, so do the bandwidth and strength of the squeezing, suggesting that this parameter is related to the motion-induced optical nonlinearity. Indeed, by using the mapping to cavity optomechanics from Table I (rightwards double arrow), we note that $B$ is equal to the nonlinear frequency shift of the cavity/resonator, $\sim|\bar{g}|^2/\nu$, in units of its linewidth $\kappa$ [@RAB; @GIR]. For the atom array, we note that the squeezing bandwidth $\sim B$ can be enlarged by increasing e.g. the Rabi frequency $\Omega$ (avoiding saturation) or the lattice spacing $a$ (recalling that $\gamma+\Gamma\propto \lambda^2/a^2$). The squeezing strength can become arbitrarily large in principle, with a minimal noise level of order $W_{\mathbf{k}_{\bot}}^{-2}(\alpha/\nu_{\mathbf{k}_{\bot}})^2$. This is typically a very small number, reflecting the mechanical quality factor $\nu/\alpha$ of trapped atoms, or, equivalently, the optomechanical cooperativity $|\bar{g}|^2/(\kappa\alpha)$.

Temporal and spatial squeezing spectra
--------------------------------------

Fixing the detection angle to $\mathbf{k}_{\bot}=0$, we study the dependence of the squeezing on the frequency $\omega$ (temporal squeezing spectrum) in Fig. 5b. Considering realistic parameters, we observe very strong noise reduction at the corresponding pair of mechanical resonance dips, $\pm \nu_{\mathbf{k}_{\bot}=0}=\pm\nu$, as anticipated above. In Fig. 5c we focus on one of the resonance dips and observe that its bandwidth is much wider than the mechanical linewidth $\alpha/\nu\sim 10^{-4}$ (blue solid curve). Furthermore, it is seen that the squeezing bandwidth widens by increasing either the lattice spacing (red dashed curve) or the Rabi frequency (green dash-dot curve), in agreement with the analysis of Eqs. (\[peak\]) and (\[W\]). Figure 5d displays the dependence of the squeezing on the spatial frequency $k_x$ (spatial squeezing spectrum), by setting $\omega=\nu-14\alpha$ and $k_y=0$. The dependence on $k_x$ is a signature of an effectively *nonlocal* optical nonlinearity of the atom array [@KRO; @EITNLO], whose nonlocality originates in the dipole-dipole interactions between the atoms. Finally, we address how these results compare with previous studies. Squeezed light generation via optomechanical nonlinearities was studied theoretically within the standard cavity model [@TOM; @FAB; @CLE] and experimentally with an atom cloud or a membrane inside a cavity [@SK3; @REG; @SAF]. Both the cavity model and the above analysis of the atom array predict arbitrarily strong squeezing, in principle, with similar scalings of strength and bandwidth (within the bad-cavity regime relevant here) [@HAM; @CLE]. Interestingly, in our case this is achieved without a cavity and for dozens of atoms (e.g. $\sqrt{N}\sim10\gg 1$). This is in contrast to cavity-confined and macroscopic objects, such as a membrane or thousands of atoms. More qualitatively, the atom array optomechanics naturally exhibits spatial squeezing, in addition to the temporal squeezing discussed in previous works.

Discussion
==========

This study establishes the first step in a new direction in the field of quantum optomechanics, wherein the unique collective properties of ordered 2D atomic arrays are harnessed. We stress that the mechanical interactions between the atoms, and the subsequent formation of collective mechanical modes, are not inherent to the atoms, but are rather induced by light (Fig. 2a).
Our findings demonstrate that 2D atomic arrays exhibit rich and qualitatively new multimode optomechanical phenomena, already at the level of the “bare”, cavity-free system. More quantitatively, the results for squeezing suggest that the optomechanical response of a single mesoscopic atomic array can become comparable to, or even exceed, that of current macroscopic cavity systems. As an extension of this work, one can interface the atomic array with a mirror or cavity. This may offer several advantages. First, the *ordered* array scatters only into the paraxial cavity mode, unlike the case of a disordered cloud [@SK3; @SK1; @SK; @CAM], for which scattering to all directions effectively increases $\kappa$. Second, an atom array inside a cavity or in the vicinity of a mirror may exhibit a drastic reduction of its radiative linewidth $\gamma+\Gamma$. Considering the mapping to the cavity model, this means that $\kappa$ can become sufficiently small to reach the resolved sideband regime. Combined with the strong optomechanical coupling of the atoms, this may pave the way to observe optomechanical effects at the few-photon level, such as photon-blockade and non-Gaussian states [@RAB; @GIR]. Finally, we point out that ordered atomic arrays were recently proposed as a quantum optical platform enabling various applications, such as tunable scattering properties [@coop], topological quantum photonics [@janos; @ADM2; @janos2], lasing [@ABA], and enhanced quantum memories and clocks [@CHA; @ANA; @HEN], all of which are based on their collective internal dipolar response. The current study thus opens the way to explore new possibilities by considering and designing both the collective internal and optomechanical responses of atomic arrays. We acknowledge fruitful discussions with Peter Zoller, Dominik Wild, Darrick Chang, Igor Pikovski and János Perczel, and financial support from the NSF, the MIT-Harvard Center for Ultracold Atoms, and the Vannevar Bush Faculty Fellowship.

LIGHT-INDUCED MOTION
====================

The discussion and derivation of Eq. (\[EOM\]) is presented in detail in Ref. [@notes]. Here, we review the main results leading to this equation, and provide the expressions for its coefficients. Beginning with the Hamiltonian for photons and atoms and eliminating the photon operators (Markov approximation), we consider the assumptions mentioned in the main text (non-saturated two-level atoms, small-amplitude motion, and paraxial illumination), and obtain [@notes] $$\begin{aligned} \dot{\tilde{\sigma}}_{n}&=&\left[i(\delta_L-\Delta)-\frac{\gamma+\Gamma}{2}\right]\tilde{\sigma}_{n}+i\sum_{s=\pm}e^{isq\hat{z}_n}\left[\Omega_{ns}+\delta\hat{\Omega}_{ns}(t)\right], \nonumber\\ \dot{\hat{p}}_n&=&-m\nu^2\hat{z}_n+\hbar q\sum_{s=\pm}\left[i s e^{isq\hat{z}_n}\left(\Omega_{ns}+\delta\hat{\Omega}_{ns}(t)\right)\tilde{\sigma}_n^{\dag}+\mathrm{h.c.}\right] \nonumber\\ &&+\frac{3}{4}\hbar q\gamma \sum_{m\neq n}\left[\tilde{\sigma}_n^{\dag}F_{nm}q(\hat{z}_n-\hat{z}_m)\tilde{\sigma}_m+\mathrm{h.c.}\right].
\label{HL}\end{aligned}$$ Here $\tilde{\sigma}_n=\hat{\sigma}_n e^{i\omega_L t}$ is the envelope of the lowering operator of the internal state of atom $n$, $\Omega_{ns}$ is the Rabi frequency of the $s\rightarrow\pm$ propagating incident laser ($s\rightarrow+$ for single-sided illumination), and $$\delta\hat{\Omega}_{ns}(t)=\sum_{k>0}\sum_{\mathbf{k}_{\bot}}ig_0 e^{i\mathbf{k}_{\bot}\cdot\mathbf{r}^{\bot}_n}e^{-i(ck-\omega_L)t}\hat{a}_{\mathbf{k}_{\bot}ks}, \label{vac}$$ is the corresponding Rabi frequency of the vacuum fluctuations \[in the paraxial approximation with $g_0=\sqrt{\omega_L/(2\varepsilon_0\hbar A L)}d$\]. The cooperative shift and width are given by $\Delta-i\Gamma/2=-(3/2)\gamma\lambda\sum_{n\neq 0} \mathbf{e}_d^{\dag}\cdot\overline{\overline{G}}(\omega_L,\mathbf{r}_{0}^{\bot},\mathbf{r}_{n}^{\bot})\cdot\mathbf{e}_d$ (finding $\Gamma+\gamma=\gamma\frac{3}{4\pi} \frac{\lambda^2}{a^2}$), where $\overline{\overline{G}}(\omega,\mathbf{r},\mathbf{r}')$ is the dyadic Green’s function [@NH], $\mathbf{e}_d$ the orientation of the dipole element of the atomic transition (taken as circular polarization) and “$n=0$” is the atom at the array center. In the last line, $F_{nm}=\mathbf{e}_d^{\dag}\cdot \overline{\overline{F}}(q\mathbf{r}^{\bot}_n-q\mathbf{r}^{\bot}_m)\cdot\mathbf{e}_d$, $\overline{\overline{F}}$ being the dimensionless tensor $$\begin{aligned} &&F_{ij}(\mathbf{u}) \nonumber\\ &&=\delta_{ij}\frac{e^{iu}}{u^2}\left[\left(i-\frac{1}{u}\right)\left(1+\frac{iu-1}{u^2}\right)+\left(\frac{i}{u^2}-2\frac{iu-1}{u^3}\right)\right] \nonumber\\ &&+\frac{u_i u_j}{u^2}\frac{e^{iu}}{u^2} \nonumber\\ &&\times\left[\left(i-\frac{3}{u}\right)\left(-1+\frac{3-i3u}{u^2}\right) +3\left(-\frac{i}{u^2}-2\frac{1-iu}{u^3}\right)\right], \nonumber\\ \label{F}\end{aligned}$$ with $i,j\in\{x,y,z\}$, $F_{ij}=\mathbf{e}_i^{\dag}\cdot \overline{\overline{F}}\cdot\mathbf{e}_j$, $u_i=\mathbf{e}_i\cdot \mathbf{u}$ and $u=|\mathbf{u}|$. By considering the separation of time scales, $\gamma+\Gamma\gg E_R/\hbar,\nu$, the adiabatic steady-state solution of the internal state is found to be [@notes], $$\tilde{\sigma}_n(t)=-\sum_{s=\pm}e^{isq\hat{z}_n}\left[\frac{\Omega_{ns}+\delta\bar{\Omega}_{ns}(t)}{\delta-\Delta+i\frac{\gamma+\Gamma}{2}}+\frac{\Omega_{ns}(sq/m)\hat{p}_n}{\left(\delta-\Delta+i\frac{\gamma+\Gamma}{2}\right)^2}\right], \label{sig}$$ where the last term is a lowest-order correction due to the Doppler effect. Here, $\delta\bar{\Omega}_{ns}(t)\approx \delta\hat{\Omega}_{ns}(t)-\frac{i}{\delta_L-\Delta+i(\gamma+\Gamma)/2}\partial_t\delta\hat{\Omega}_{ns}(t)$ is the vacuum noise in the adiabatic, coarse-grained dynamical picture. The correction, $\partial_t \delta\hat{\Omega}_{ns}$, is required here to guarantee proper quantum dynamics (see below) [@notes]. Inserting Eq. (\[sig\]) into the equation for $\hat{p}_n$ in (\[HL\]), to lowest order in $q\hat{z}_n$, we arrive at Eq.
(\[EOM\]) from the main text, with the coefficients (illumination from the left, $\Omega_{ns}=\Omega_n\delta_{s+}$) [@notes]: $$\begin{aligned} \bar{f}_n&=&\hbar q|\Omega_n|^2\frac{\gamma+\Gamma}{(\delta_L-\Delta)^2+\left(\frac{\gamma+\Gamma}{2}\right)^2}, \nonumber\\ \alpha_n&=&\frac{E_R}{\hbar}|\Omega_n|^2\frac{-2(\delta_L-\Delta)(\gamma+\Gamma)}{\left[(\delta_L-\Delta)^2+\left(\frac{\gamma+\Gamma}{2}\right)^2\right]^2}, \nonumber\\ \hat{f}_n(t)&=&\hbar q\sum_{s=\pm}\left[\frac{i\delta\bar{\Omega}_{ns}^{\dag}\Omega_n+i s\delta\hat{\Omega}_{ns}\Omega^{\ast}_n}{\delta_L-\Delta-i\frac{\gamma+\Gamma}{2}} + \mathrm{h.c.}\right], \nonumber\\ K_{nm}&=&\frac{3}{4}\hbar q^2\gamma\left[F_{nm}\frac{\Omega_n^{\ast}\Omega_m}{(\delta_L-\Delta)^2+\left(\frac{\gamma+\Gamma}{2}\right)^2} + \mathrm{c.c.}\right]. \nonumber\\ \label{92}\end{aligned}$$ The Langevin forces $\hat{f}_n(t)$ on different atoms $n,m$ are not independent, since they are originated in the vacuum field and its spatial correlations. Their cross-correlation is found to be $$\begin{aligned} &&\langle\hat{f}_n(t)\hat{f}_m(t')\rangle=2D^{nm}_p\delta(t-t')+i2\tilde{D}^{nm}_p\delta'(t-t'), \nonumber\\ &&D^{nm}_p=(\hbar q)^2\Gamma_{nm}\frac{2\Omega_n^{\ast}\Omega_m}{(\delta_L-\Delta)^2+\left(\frac{\gamma+\Gamma}{2}\right)^2}, \label{fn}\end{aligned}$$ with $\Gamma_{nm}=3\gamma\lambda \mathrm{Im}[\mathbf{e}_d^{\dag}\cdot \overline{\overline{G}}(\omega_L,\mathbf{r}^{\bot}_n,\mathbf{r}^{\bot}_m)\cdot\mathbf{e}_d]$ being the cooperative decay kernel [@LEH]. The term with $\tilde{D}_p^{nm}=-D_p^{nm}\frac{\delta_L-\Delta}{(\delta_L-\Delta)^2+(\gamma+\Gamma)^2/4}$ and $\delta'(t)=\partial_t\delta(t)$, is due to the correction $\partial_t\delta\hat{\Omega}_{ns}(t)$ discussed below Eq. (\[sig\]), and it guarantees that the dynamics of Eq. (\[EOM\]) preserve commutation relations and describe genuine quantum Brownian motion [@notes; @QN]. For the calculations in this paper, however, this correction term is negligible. For a single atom $n$, $D_p^{nn}$ is the momentum diffusion coefficient [@CCT; @notes], which can be associated with an effective temperature of a heat bath formed by the scattering, $$T_e=\frac{D_p^{nn}}{m\alpha_n}. \label{Ted}$$ Using $D_p^{nn}$ and $\alpha_n$ from Eqs. (\[92\]) and (\[fn\]) (noting $\Gamma_{nn}=\gamma$), we obtain $T_e$ from Eq. (\[Te\]). Performing the transformation to the collective mechanical modes $j$, $\hat{z}_j=\sum_n U_{jn}^{\ast} \hat{z}_n$, on Eq. (\[EOM\]), and neglecting the typically very small off-diagonal friction $\alpha_{jj'}\approx \alpha_{j}\delta_{jj'}$ (verified numerically for a variety of incident Gaussian beams), we obtain $$\dot{\hat{p}}_j=-m\nu^2\hat{z}_j+\bar{f}_j-\alpha_j\hat{p}_j+\hat{f}_j(t), \quad \dot{\hat{z}}_j=\hat{p}_j/m. \label{EOMk}$$ For long times $t\gg 1/\alpha_j$ (assuming $\alpha_j>0$, i.e. $\delta_L<\Delta$), the solution in Fourier space, $\hat{z}_n(\omega)=\int_{-\infty}^{\infty}dt e^{i\omega t}\hat{z}_n(t)$, yields Eq. (\[zo\]). The analysis of the general time-dependent solution is discussed in Ref. [@notes]. Finally, consider the case of uniform illumination, $\Omega_n=\Omega$. 
The collective mechanical modes $j$ then become 2D lattice Fourier modes, $\mathbf{k}_{\bot}$, with $\mathbf{k}_{\bot}=(k_x,k_y)$ inside the Brillouin zone associated with the 2D lattice, $k_{x,y}\in[-\pi/a,\pi/a]$, and the corresponding eigenmodes and eigenfrequencies [@notes] $$\hat{z}_{\mathbf{k}_{\bot}}=\frac{1}{N}\sum_n e^{-i\mathbf{k}_{\bot}\cdot \mathbf{r}_{n}^{\bot}}\hat{z}_n, \quad \nu_{\mathbf{k}_{\bot}}=\sqrt{\nu^2+(K_{\mathbf{k}_{\bot}}-K_0)/m}. \label{zk}$$ Here, $K_{\mathbf{k}_{\bot}}=\sum_{n\neq 0}K_{n0}e^{-i\mathbf{k}_{\bot}\cdot \mathbf{r}_{n}^{\bot}}$ and $\nu_{\mathbf{k}_{\bot}}$ can be evaluated by performing the sum $K_{\mathbf{k}_{\bot}}$ numerically. The same transformation, $U_{n\mathbf{k}_{\bot}}=(1/\sqrt{N})e^{i\mathbf{k}_{\bot}\cdot\mathbf{r}_n^{\bot}}$, also applies to $\bar{f}_{\mathbf{k}_{\bot}}$ and $\hat{f}_{\mathbf{k}_{\bot}}(t)$. The friction coefficient $\alpha$ is equal to $\alpha_n$ from Eq. (\[92\]) with $\Omega_n=\Omega$ and is therefore independent of $\mathbf{k}_{\bot}$.

ANALOGY TO CAVITY OPTOMECHANICS
===============================

In the following, we derive the equations of motion for the standard cavity optomechanics model in the linearized regime and discuss the analogy of this model to the atom-array case.

Optomechanics in the linearized regime
--------------------------------------

Beginning from the linearized Hamiltonian (\[Hcl\]), we find the equations of motion for the cavity mode $\hat{c}$ and mirror momentum $\hat{p}=im\nu x_0(\hat{b}^{\dag}-\hat{b})$, $$\begin{aligned} \dot{\hat{c}}&=&\left[i\delta_c-\frac{\kappa}{2}\right]\hat{c}-i\frac{\bar{g}}{x_0}\hat{z}+ i\delta\hat{\Omega}(t), \nonumber\\ \dot{\hat{p}}&=&-m\nu^2\hat{z}-2m\nu x_0\left(\bar{g}^{\ast}\hat{c}+\bar{g}\hat{c}^{\dag}\right), \label{cav1}\end{aligned}$$ with $\dot{\hat{z}}=\hat{p}/m$ and $\hat{z}=x_0(\hat{b}+\hat{b}^{\dag})$. The cavity damping $\kappa$ and corresponding quantum-noise field $\delta\hat{\Omega}(t)$ are due to the out-coupling from the cavity mirror to outside propagating modes, satisfying $[\delta\hat{\Omega}(t),\delta\hat{\Omega}^{\dag}(t')]=\kappa\delta(t-t')$. Turning to the atom array, and in analogy to the cavity optomechanics case, we wish to linearize the coupled equations of motion, (\[HL\]), around the classical steady-state solution. To this end, we consider the classical part of the linear-response solution from (\[sig\]), $\overline{\sigma}_n=-\frac{\Omega_n}{\delta_L-\Delta+i(\gamma+\Gamma)/2}$, together with $q\hat{z}_n\ll 1$, and write Eqs. (\[HL\]) to linear order in the operators: $$\begin{aligned} \dot{\check{\sigma}}_n&=&\left[i(\delta_L-\Delta)-\frac{\gamma+\Gamma}{2}\right]\check{\sigma}_n-q\Omega_n \hat{z}_n+i\sum_{s=\pm}\delta \hat{\Omega}_{ns}(t), \nonumber\\ \dot{\hat{p}}_n&=&-m\nu^2\hat{z}_n-\hbar q\left(i\Omega_n^{\ast}\check{\sigma}_n-i\Omega_n\check{\sigma}_n^{\dag}\right) \nonumber\\ &&+\bar{f}_n+\sum_{m\neq n}K_{nm}\left(\hat{z}_n-\hat{z}_m\right)+\hat{f}_n^{(1)}(t), \label{atom1}\end{aligned}$$ where $\check{\sigma}_n(t)=\tilde{\sigma}_n(t)-\overline{\sigma}_n$ is the small amplitude of $\tilde{\sigma}_n(t)$ around its steady-state linear solution $\overline{\sigma}_n$. The formal equivalence of the equations for $\hat{c}$ and $\check{\sigma}_n$ from (\[cav1\]) and (\[atom1\]) is apparent, considering $\bar{g}=-i\eta\Omega_n$ and the rest of the mapping from table I \[recalling $\eta=q x_0$ and noting that a phase factor $(-i)$ was dropped in the main text, for simplicity\].
This equivalence holds also by comparing the equation for $\hat{p}$ from (\[cav1\]) with the first line of the equation for $\hat{p}_n$, using $E_R/(\hbar\nu)=2\eta^2$ (we note that a correction to $\nu$ was neglected here). The first term in the second line in the equation for $\hat{p}_n$ is the average force $\bar{f}_n$ which implicitly exists also in the dynamics for $\hat{p}$ in Eq. (\[cav1\]), since the latter is written for fluctuations around the average motion \[originated in the linearized Hamiltonian (\[Hcl\])\]. The collective mechanical term $K_{nm}$ from the equation for $\hat{p}_n$ does not appear in the Eqs. (\[cav1\]) for the simple cavity model, however, it can be accounted for by considering a modified cavity model (see subsection 3 below). The last term in Eq. (\[atom1\]) for $\hat{p}_n$ is a Langevin force, which is absent in the cavity model, $$\hat{f}_n^{(1)}(t)=\hbar q\sum_{s=\pm}\left[\frac{i s\delta\hat{\Omega}_{ns}\Omega^{\ast}_n}{\delta_L-\Delta-i\frac{\gamma+\Gamma}{2}} + \mathrm{h.c.}\right]. \label{fn1}$$ This extra Langevin force originates in the direct coupling between motion and the vacuum field, via the phases $e^{i k_z \hat{z}_n}$ of the photon-atom Hamiltonian. This is in contrast to the cavity model wherein only the cavity mode is directly coupled to the vacuum of the outside modes. This means that for the consideration of quantum noise in the output fields, the two models may not be exactly equivalent. We note that the force $\hat{f}_n^{(1)}(t)$ appears as a component of the Langevin force from Eq. (\[92\]). Bad-cavity limit ---------------- We shall now consider the bad-cavity limit, where $\kappa$ is the fastest time scale, and obtain a diffusion equation for the cavity model, in analogy to Eq. (\[EOM\]). Formally solving the equation for $\hat{c}$ from (\[cav1\]), within a time interval $\Delta t$ ending at $t$, and denoting the mechanical-mode envelope, $\tilde{b}(t)=e^{i\nu t}\hat{b}(t)$ \[recalling $\hat{z}=x_0(\hat{b}+\hat{b}^{\dag})$\], we have $$\begin{aligned} \hat{c}(t)&=&\hat{c}(t-\Delta t)e^{(i\delta_c-\frac{\kappa}{2})\Delta t}+e^{(i\delta_c-\frac{\kappa}{2}) t}\int_{t-\Delta t}^t dt'e^{-(i\delta_c-\frac{\kappa}{2}) t'} \nonumber\\ &\times&\left[-i\bar{g}\left(\tilde{b}(t')e^{-i\nu t'}+\tilde{b}^{\dag}(t')e^{i\nu t'}\right)+i\delta\hat{\Omega}(t')\right]. \label{31b}\end{aligned}$$ Next, we assume the separation of time scales between the fast cavity damping $\kappa$ and the slow mechanical envelope dynamics $\tau_m^{-1}\equiv \dot{\tilde{b}}/\tilde{b}$. This allows to move to coarse-grained dynamics with resolution $\Delta t$ satisfying $\kappa^{-1}\ll \Delta t\ll\tau_m$, where the envelope $\tilde{b}(t')\approx \tilde{b}(t)$ can be pulled outside of the integral, obtaining, $$\hat{c}(t)\approx \frac{\bar{g} \hat{b}(t)}{\delta_c+\nu+i\kappa/2}+\frac{\bar{g} \hat{b}^{\dag}(t)}{\delta_c-\nu+i\kappa/2}-\frac{\delta\hat{\Omega}(t)}{\delta_c+i\kappa/2}. \label{31i}$$ Here the Langevin noise is taken within a bandwidth $2\pi/\Delta t$ of the coarse-grained time resolution. Finally, inserting this result into the equation for $\hat{p}$ from (\[cav1\]), we obtain $$\dot{\hat{p}}\approx-m\nu^2\hat{z}-\alpha_{\mathrm{opt}}\hat{p}+\hat{f}_{\mathrm{opt}}(t), \label{EOMc}$$ where a correction to $\nu$ is neglected here [@AKM]. 
The resulting optically-induced friction and Langevin force read $$\begin{aligned} \alpha_{\mathrm{opt}}&=&|\bar{g}|^2 \left[\frac{\kappa}{(\delta_c+\nu)^2+(\kappa/2)^2}+\frac{\kappa}{(\delta_c-\nu)^2+(\kappa/2)^2}\right] \nonumber\\ &\approx& - |\bar{g}|^2 2\nu\frac{2\delta_c\kappa}{\left[\delta_c^2+(\kappa/2)^2\right]^2}, \nonumber\\ \hat{f}_{\mathrm{opt}}(t)&=&\frac{\hbar}{x_0}\left[\frac{\bar{g}\delta\hat{\Omega}^{\dag}(t)}{\delta_c-i\kappa/2}+\frac{\bar{g}^{\ast}\delta\hat{\Omega}(t)}{\delta_c+i\kappa/2}\right], \label{32a}\end{aligned}$$ The second approximate equality in $\alpha_{\mathrm{opt}}$ is valid within the unresolved sideband limit, $\kappa\gg \nu$. Coming back to the condition $\tau_m^{-1}\ll \kappa$ for the existence of a separation of time scales (and coarse-grained dynamics), we can identify from Eq. (\[EOMc\]) and the expression for $\alpha_{\mathrm{opt}}$ (e.g. for $\kappa \gtrsim \nu,\delta_c$) that $\tau_m^{-1}\sim \alpha_{\mathrm{opt}}\lesssim |\bar{g}|^2/\kappa$, so that the separation of time scales requires the so-called weak coupling regime, $\kappa\gg \bar{g}$. Considering the mapping from table I, it is easy to verify that the friction coefficients $\alpha_n$ \[Eq. (\[92\]), atom-array model\] and $\alpha_{\mathrm{opt}}$ \[Eq. (\[32a\]), cavity model\] are identical within the bad-cavity limit $\kappa\gg \bar{g},\nu$, wherein $\kappa$ is the fastest time scale (in analogy to $\gamma+\Gamma$, the fastest time scale assumed for the atom array). The analogy between the Langevin forces, $\hat{f}_{\mathrm{opt}}(t)$ from Eq. (\[32a\]), and $\hat{f}_{n}(t)$ from Eq. (\[92\]), is apparent if we identify the input vacuum fluctuations $\delta\hat{\Omega}$ with the vacuum field on a single atom, $\delta\hat{\Omega}_n$. We recall that the absence of an average-force term, $\bar{f}$, in Eq. (\[EOMc\]) is merely due to the fact that it is already written for fluctuations around the average motion.

Collective mechanical coupling
------------------------------

In order to account for the multimode mechanics of the atom array, we replace the single mode $\hat{b}$ of the cavity-optomechanics model by the modes $\hat{b}_n$, such that the corresponding mechanical and interaction terms in the Hamiltonian (\[Hc\]) become $\hbar\nu\sum_n \hat{b}_n^{\dag} \hat{b}_n$ and $\hbar\sum_n g_n\hat{c}^{\dag}\hat{c}(\hat{b}_n+\hat{b}_n^{\dag})$, respectively, with $g_n$ the optomechanical coupling between the mode $n$ and the cavity. The interaction term in Eq. (\[cav1\]) for $\hat{c}$ then becomes $-i\sum_n\bar{g}_n (\hat{b}_n^{\dag}+\hat{b}_n)$, with $\bar{g}_n=g_n\bar{c}$. This results in an equation of motion for the momentum $\hat{p}_n$ of the mechanical mode $n$, in the form of Eq. (\[EOMc\]), but with an additional interaction term $-\sum_{m\neq n} K'_{nm} \hat{z}_m$. The resulting mechanical coupling coefficient is found to be $$\begin{aligned} K'_{nm}&=&\frac{\hbar}{x_0^2}\left[\frac{\bar{g}_n^{\ast}\bar{g}_m}{\delta_c-i\kappa/2}+\frac{\bar{g}_n\bar{g}_m^{\ast}}{\delta_c+i\kappa/2}\right] \nonumber\\ &\rightarrow& 2\hbar q^2(\delta_L-\Delta) \frac{\Omega_n \Omega_m }{(\delta_L-\Delta)^2+\left(\frac{\gamma+\Gamma}{2}\right)^2}, \label{44b}\end{aligned}$$ where the expression in the second line is obtained via the mapping $\eta\Omega_n=\bar{g}_n$ and by taking real $\bar{g}_n$. The above coefficient $K'_{nm}$, though not identical to $K_{nm}$ from Eq.
(\[92\]), has a similar structure, suggesting that the multimode cavity optomechanical model can indeed capture the multimode motion of the atom array from Eq. (\[EOM\]).

INTENSITY SPECTRUM
==================

Here we provide more details on the derivation of the output field, Eq. (\[aout\]), and on the definition and calculation of the intensity spectrum from Eqs. (\[Idef\]) and (\[I\]).

Output field
------------

The formal solution for the paraxial photon modes at time $t$, evolved from initial time $t_0$, is found as usual from the original atom-photon Hamiltonian [@notes], as $$\begin{aligned} \widetilde{a}_{\mathbf{k}_{\bot}ks}(t)&=&\widetilde{a}_{\mathbf{k}_{\bot}k s}(t_0)+\sum_n g_0^{\ast}e^{-i\mathbf{k}_{\bot}\cdot\mathbf{r}_n^{\bot}} \nonumber\\ &&\times\int_{t_0}^{t}dt' e^{i(ck-\omega_L)t'}e^{-isq \hat{z}_n(t')}\tilde{\sigma}_n(t'), \label{3}\end{aligned}$$ with $\widetilde{a}_{\mathbf{k}_{\bot}ks}(t)=\hat{a}_{\mathbf{k}_{\bot}ks}(t)e^{ick t}$, and where the field is initially in the vacuum state, $\hat{a}_{\mathbf{k}_{\bot}ks}(t_0)|0\rangle=0$. Since we are interested in a steady-state solution for the fields, we take the initial time $t_0$ to be in the far past, $t_0=-\tau\rightarrow-\infty$, whereas the relevant observation time $t$ is taken at the end of the experiment, $t=\tau\rightarrow \infty$. Inserting the steady-state solution for $\tilde{\sigma}_n$ from (\[sig\]) into Eq. (\[3\]) (neglecting the Doppler correction and taking $\delta\bar{\Omega}_{ns}\approx \delta\hat{\Omega}_{ns}$), and adding the laser input $\beta_{\mathbf{k}_{\bot}}$, we arrive at Eq. (\[aout\]) from the main text. The laser input is added separately since it was taken here as an external input, which is nevertheless equivalent to considering an initial coherent state. If we neglect the motion, taking $\hat{z}_n\rightarrow 0$ in Eq. (\[aout\]), we arrive at the result $$\widetilde{a}_{\mathbf{k}_{\bot}ks}=\left(\beta_{\mathbf{k}_{\bot}s}\delta_{kq}+\hat{a}_{\mathbf{k}_{\bot}ks}\right)+r\sum_{s'=\pm}\left(\beta_{\mathbf{k}_{\bot}s'}\delta_{kq}+\hat{a}_{\mathbf{k}_{\bot}ks'}\right). \label{alin}$$ Here, we used the expressions for $r$ and $\delta\hat{\Omega}_{ns}$ \[Eqs. (\[r\]) and (\[vac\])\], together with $\beta_{n}=-i\Omega_{n}/g_0$ and $g_0=\sqrt{\omega_L/(2\varepsilon_0\hbar A L)}d=\sqrt{(c/L)(\gamma+\Gamma)/(2N)}$ (recalling $\gamma+\Gamma=\gamma\frac{3}{4\pi}(\lambda^2/a^2)$ [@coop]), and considering an incident field from both sides $s$, for generality. This result reflects the linear response of the mirror to the input from both sides $s\rightarrow\pm$ (average + vacuum fluctuations), with reflection and transmission coefficients $r$ and $t=1+r$.

Intensity spectrum
------------------

The standard definition of the intensity spectrum is given by the intensity in frequency space, $$\begin{aligned} G^{(1)}_{\mathbf{k}_{\bot}s}&=&\langle \hat{E}_{\mathbf{k}_{\bot}s}^{\dag}(\omega)\hat{E}_{\mathbf{k}_{\bot}s}(\omega)\rangle \nonumber\\ &=&\int dt \int dt' e^{-i\omega(t-t')}\langle \hat{E}_{\mathbf{k}_{\bot}s}^{\dag}(t)\hat{E}_{\mathbf{k}_{\bot}s}(t')\rangle, \label{G}\end{aligned}$$ where here the detection of a field propagating in the $\mathbf{k}_{\bot}s$ direction is considered. The general expression for the electric field operator in the paraxial approximation is given by $$\hat{E}_{\mathbf{k}_{\bot}s}(z,t)=\sum_{k>0}E_V e^{isk z}\hat{a}_{\mathbf{k}_{\bot}ks}(t), \quad E_V=\sqrt{\frac{\hbar kc}{2\varepsilon_0 AL}}.
\label{E}$$ The field operator that enters into the spectrum (\[G\]), however, is the one detected far from the atom array, *after* the interaction between the laser pulse, of duration $\sim\tau$, and the atom array is over, i.e. for $t>\tau$. Therefore, for the duration $t-\tau$ after the “passage time” $\tau$, the field propagates freely, and we can write $$\hat{a}_{\mathbf{k}_{\bot}ks}(t)=e^{-ick(t-\tau)}\hat{a}_{\mathbf{k}_{\bot}ks}(\tau)=e^{-ick t}\widetilde{a}_{\mathbf{k}_{\bot}ks}, \label{7}$$ recalling the notation $\widetilde{a}_{\mathbf{k}_{\bot}ks}=\widetilde{a}_{\mathbf{k}_{\bot}ks}(\tau)=e^{ick \tau}\hat{a}_{\mathbf{k}_{\bot}ks}(\tau)$. Substituting (\[7\]) for $\hat{a}_{\mathbf{k}_{\bot}ks}(t)$ inside the expression for the field in (\[E\]), and inserting the latter into the spectrum definition (\[G\]) \[with $\sum_k\rightarrow \frac{L}{2\pi c}\int d\omega$ and $2\pi\delta(ck-ck')=\delta_{kk'}L/c$\], we obtain $G^{(1)}_{\mathbf{k}_{\bot}s}=|E_V|^2\langle \widetilde{a}_{\mathbf{k}_{\bot}ks}^{\dag}\widetilde{a}_{\mathbf{k}_{\bot}ks}\rangle$, which is identical to the definition from Eq. (\[Idef\]), up to a normalization factor. *Calculation of the spectrum from Eq. (\[I\]).—* Inserting the output field (\[aout\]) into the spectrum (\[Idef\]) and expanding to second order in $q\hat{z}_n$, we need to evaluate the correlator $\langle\hat{z}_n(-\omega)\hat{z}_m(\omega)\rangle$. Using the solution (\[zo\]) for $\hat{z}_n(\omega)$, this requires the calculation of $\langle\hat{f}_n(-\omega)\hat{f}_{m}(\omega)\rangle$, which is found from Eq. (\[fn\]) as $$\begin{aligned} &&\langle\hat{f}_n(-\omega)\hat{f}_{m}(\omega)\rangle= \nonumber\\ &&2\frac{L}{c} D_p^{nm}\left[1+\frac{\omega(\delta_L-\Delta)}{(\delta_L-\Delta)^2+\left(\frac{\gamma+\Gamma}{2}\right)^2}\right]\approx 2\frac{L}{c} D_p^{nm}. \nonumber\\ \label{corf}\end{aligned}$$ The second approximate equality is valid for the frequency bandwidth of our slow dynamics, wherein $\omega\ll \gamma+\Gamma$, and amounts to neglecting the $\propto\delta'(t-t')$ correction in the correlation of $\hat{f}_n(t)$ from Eq. (\[fn\]). By further neglecting small corrections of order $|r|^2q^2\langle\hat{z}_n^2\rangle$ to the amplitude of the linear spectral peak, we finally obtain the result from Eq. (\[I\]), with $$M_{jj'}=\frac{\tilde{\beta}_{\mathbf{k}_{\bot}j}\beta_{\mathbf{k}_{\bot}j'}}{|\beta_{\mathbf{k}_{\bot}=0}|^2}\frac{\nu^4}{\nu_j^2\nu_{j'}^2}\sum_{n,m}U_{jn}^{\ast}U_{j'm}\frac{\Gamma_{nm}}{\gamma}\frac{\Omega_n^{\ast}\Omega_m}{|\Omega_0|^2}, \label{Mjj}$$ and where $\beta_{\mathbf{k}_{\bot}j}=(1/N)\sum_ne^{-i\mathbf{k}_{\bot}\cdot\mathbf{r}_n^{\bot}}U_{jn}\beta_n$, and $\tilde{\beta}_{\mathbf{k}_{\bot}j}=(1/N)\sum_ne^{i\mathbf{k}_{\bot}\cdot\mathbf{r}_n^{\bot}}U_{jn}\beta_n^{\ast}$.

QUANTUM SQUEEZING
=================

In the following, we elaborate on several topics related to the analysis of the quantum squeezing from Sec. V.

Output field fluctuations
-------------------------

In order to arrive at Eq. (\[aout2\]) for the quantum fluctuations of the output field, we first expand Eq. (\[aout\]) to lowest order in $q\hat{z}_n$. Next, we neglect the term proportional to the product of the motion and field fluctuations, $\propto \hat{f}_n \delta\hat{\Omega}_n$, since it is second order in the vacuum fluctuations (Bogoliubov-like approximation/linearization).
By considering uniform illumination, $\beta_s=|\beta_s|e^{i\phi_s}$ (from both sides of the array $s\rightarrow\pm$), we then obtain $$\begin{aligned} \widetilde{a}_{\mathbf{k}_{\bot}ks}&=& \left(\beta_s \delta_{\mathbf{k}_{\bot}0}\delta_{kq}+\hat{a}_{\mathbf{k}_{\bot}ks}\right)+r\sum_{s'=\pm}\left(\beta_{s'} \delta_{\mathbf{k}_{\bot}0}\delta_{kq}+\hat{a}_{\mathbf{k}_{\bot}ks'}\right) \nonumber\\ &-&ir\sum_{s'=\pm}\left[\tilde{\mu}_{\mathbf{k}_{\bot}kss'}\hat{a}^{\dag}_{-\mathbf{k}_{\bot},2q-k,s'}+\bar{\mu}_{\mathbf{k}_{\bot}kss'}\hat{a}_{\mathbf{k}_{\bot},k,s'}\right], \nonumber\\ \label{42}\end{aligned}$$ with $$\begin{aligned} \tilde{\mu}_{\mathbf{k}_{\bot}kss'}&=&\eta^2\frac{\nu^2}{\nu^2_{\mathbf{k}_{\bot}}}\chi_{\mathbf{k}_{\bot}k}\sum_{pp'}\frac{\beta_p\beta^{\ast}_{p'}c}{L N\nu}(s-p)(p'r^{\ast}+s'r)e^{i2\phi_{p'}}, \nonumber\\ \bar{\mu}_{\mathbf{k}_{\bot}kss'}&=&\eta^2\frac{\nu^2}{\nu^2_{\mathbf{k}_{\bot}}}\chi_{\mathbf{k}_{\bot}k}\sum_{pp'}\frac{\beta_p\beta^{\ast}_{p'}c}{L N\nu}(s-p)(p'r+s'r^{\ast}), \label{43}\end{aligned}$$ and where we denoted $\chi_{\mathbf{k}_{\bot}k}=\chi_{\mathbf{k}_{\bot}}(kc-\omega_L)$. The first line is the linear mirror response \[Eq. (\[alin\])\], whereas the nonlinear, motion-induced response is described by the second line, which contains the Bogoliubov-type coupling between annihilation and creation field operators. For illumination only from the left ($\beta_s=\beta \delta_{s+}$), the above expression for the reflected field ($s\rightarrow-$) becomes \[using $|\beta|^2 c/(LN)=2|\Omega|^2/(\gamma+\Gamma)$\], $$\widetilde{a}_{\mathbf{k}_{\bot}ks}=r\beta\delta_{\mathbf{k}_{\bot}0}\delta_{kq}+\sum_{s=\pm}\left[u_{\mathbf{k}_{\bot}ks}\hat{a}_{\mathbf{k}_{\bot},k,s}+v_{\mathbf{k}_{\bot}ks}\hat{a}^{\dag}_{-\mathbf{k}_{\bot},2q-k,s}\right] \label{47a}$$ with $$\begin{aligned} &&u_{\mathbf{k}_{\bot}k+}=r+ir'\mu_{\mathbf{k}_{\bot}k}, \quad\quad\quad v_{\mathbf{k}_{\bot}k+}=ir'e^{i2\phi}\mu_{\mathbf{k}_{\bot}k}, \nonumber\\ &&u_{\mathbf{k}_{\bot}k-}=1+r-ir''\mu_{\mathbf{k}_{\bot}k}, \quad v_{\mathbf{k}_{\bot}k-}=r''e^{i2\phi}\mu_{\mathbf{k}_{\bot}k}, \nonumber\\ \label{47b}\end{aligned}$$ where $r'=\mathrm{Re}[r]$, $r''=\mathrm{Im}[r]$, and $\mu_{\mathbf{k}_{\bot}k}=-ir v_{\mathbf{k}_{\bot}}(\omega)$ with $v_{\mathbf{k}_{\bot}}(\omega)$ from Eq. (\[bog\]). At cooperative resonance, $\delta_L=\Delta$, we have $r'=r=-1$ and $r''=0$, so that $u_{\mathbf{k}_{\bot}k-},v_{\mathbf{k}_{\bot}k-}=0$ and the output field depends only on the $s\rightarrow+$ fluctuations. However, in practice, for the atoms to thermalize, we need a non-vanishing friction $\alpha>0$, which requires $\delta_L-\Delta<0$ \[Eq. (\[92\])\]. In the main text, we simplify the presentation by considering the regime $|\delta_L-\Delta|\ll\gamma+\Gamma$ for which $r'\approx r \approx -1$ and $r''\ll 1$, taking $r''\rightarrow 0$ in Eq. (\[47b\]), thus obtaining the field fluctuations in Eq. (\[aout2\]) and the resulting nearly-perfect squeezing. Allowing for a finite value for $r''$ and $1-r$, leads to extra noise inserted by the vacuum modes $s\rightarrow-$ transmitted from the right, which may slightly degrade the squeezing. This can be avoided however, by considering a modified detection scheme, as discussed in subsection 3 below. Squeezing at mechanical resonance: Discussion --------------------------------------------- The analysis of the squeezing around the mechanical resonance $\omega=\pm\nu_{\mathbf{k}_{\bot}}$ in Sec. V \[Eqs. (\[peak\]), (\[W\]) and Fig. 
5c\], revealed that its bandwidth is typically much greater than the mechanical width $\alpha$. Therefore, the value of the squeezing exactly on mechanical resonance (e.g. within a width $\alpha$ around it) is unimportant, and the expression from (\[peak\]) suffices to discuss the squeezing at the resonance for any practical purpose. Nevertheless, from a purely formal standpoint, we now briefly elaborate on the quantum noise of the field within a width $\alpha$ from $\pm\nu_{\mathbf{k}_{\bot}}$, where Eq. (\[peak\]) is supposedly invalid (since $\mathrm{Im}[\chi_{\mathbf{k}_{\bot}}(\omega)]$ is large, see comment [@comment2]). We first note the commutation relation of the output field from Eq. (\[aout2\]): $[\widetilde{a}_{\mathbf{k}_{\bot}}(\omega),\widetilde{a}^{\dag}_{\mathbf{k}_{\bot}}(\omega)]=1-2\mathrm{Re}[v_{\mathbf{k}_{\bot}}(\omega)]$. This expression is equal to $1$, as it should be, for all $\omega$ except in a region of width $\sim \alpha$ around the mechanical resonance, where $\mathrm{Re}[v_{\mathbf{k}_{\bot}}(\omega)]\propto\mathrm{Im}[\chi_{\mathbf{k}_{\bot}}(\omega)]$ does not vanish. Formally, this means that any statement on quantum noise in this narrow (and practically irrelevant) region is meaningless, since the commutation relations are wrong. This is an artifact of the adiabatic elimination (coarse-grained dynamics) we employed, where high frequencies of quantum noise are ignored. In principle, this can be fixed by using a more careful treatment of the output field [@optomech].

![[]{data-label="fig6"}](fig6.pdf){width="\columnwidth"}

Beyond perfect reflection
-------------------------

As explained above, the output field (\[aout2\]) and the resulting squeezing discussed in the main text are obtained from the more general result of Eq. (\[43\]), by assuming single-sided illumination and nearly perfect reflection. The latter assumption amounts to $|\delta_L-\Delta|\ll\gamma+\Gamma$, and is used above to neglect the influence of the left-propagating vacuum. This way, one remains with a single output port ($s\rightarrow-$) and a single input port ($s\rightarrow+$), avoiding an additional input port whose noise can degrade the squeezing. Nevertheless, we demonstrate in the following that, even if one gives up nearly-perfect reflection, such that two input ports with their vacuum noises exist, the same optimal squeezing can be achieved by considering a balanced scheme with two output ports [@VUL]. To this end, we consider the scheme from Fig. 6: a uniform incident laser propagates from both sides, with equal magnitude $\Omega$ and a phase difference $\phi$. The detected output field is given by a superposition of the outputs from both sides, $\widetilde{a}_{\mathbf{k}_{\bot}k}=\frac{1}{\sqrt{2}}[\widetilde{a}_{\mathbf{k}_{\bot}k+}+e^{i\varphi}\widetilde{a}_{\mathbf{k}_{\bot}k-}]$, with an adjustable interference phase $\varphi$. Choosing $\phi=\pi$ and $\varphi=0$, and using the expression for the output fields $s\rightarrow\pm$ from Eq.
(\[43\]), we obtain the detected output field fluctuations (subtracting the average), $$\widetilde{a}_{\mathbf{k}_{\bot}k}=u_{\mathbf{k}_{\bot}k}\check{a}_{\mathbf{k}_{\bot}k}+v_{\mathbf{k}_{\bot}k}\check{a}^{\dag}_{-\mathbf{k}_{\bot},2q-k}, \label{10a}$$ with $\check{a}_{\mathbf{k}_{\bot}k}=\frac{1}{\sqrt{2}}[\hat{a}_{\mathbf{k}_{\bot}k+}+\hat{a}_{\mathbf{k}_{\bot}k-}]$ (satisfying $[\check{a}_{\mathbf{k}_{\bot}k},\check{a}^{\dag}_{\mathbf{k}_{\bot}k}]=1$), and the Bogoliubov coefficients $$\begin{aligned} u_{\mathbf{k}_{\bot}k}&=&1+2r+\frac{r}{r^{\ast}}v_{\mathbf{k}_{\bot}k} \nonumber\\ v_{\mathbf{k}_{\bot}k}&=&i|r|^2 8\eta^4\frac{4|\Omega|^2}{(\gamma+\Gamma)^2}\frac{\hbar(\gamma+\Gamma)}{E_R}\frac{\nu^2}{\nu_{\mathbf{k}_{\bot}}^2}\chi_{\mathbf{k}_{\bot}k}. \label{10c}\end{aligned}$$ This output field has the same form as that from Eq. (\[aout2\]), with the vacuum of the superposition mode $\check{a}_{\mathbf{k}_{\bot}k}$ in the former, replacing that of the $\hat{a}_{\mathbf{k}_{\bot}k+}$ mode in the latter. For $r\rightarrow -1$, the Bogoliubov coefficients in (\[10c\]) become identical to those from Eq. (\[bog\]), this time without the need to ignore the noise from any input port. Moreover, even for smaller $|r|$, $|v_{\mathbf{k}_{\bot}k}|$ can still get very large and lead to nearly-perfect squeezing as before. J. I. Cirac and P. Zoller, Phys. Rev. Lett. **74**, 4091 (1995). K. M[ø]{}lmer and A. S[ø]{}rensen, Phys. Rev. Lett. **82**, 1835 (1999). M. Aspelmeyer, T. J. Kippenberg and F. Marquardt, Rev. Mod. Phys. **86**, 1391 (2014). C. Cohen-Tannoudji, “Atomic Motion in Laser Light“, in *Fundamental Systems in Quantum Optics*, Les Houches, Session LIII, 1990, pp. 1-164 (Elsevier Science Publisher B.V., 1992). D. Stamper-Kurn, arXiv:1204.4351 (2012). S. Gupta, K. L. Moore, K. W. Murch and D. Stamper-Kurn, Phys. Rev. Lett. **99**, 213601 (2007). F. Brennecke, S. Ritter, T. Donner and T. Esslinger, Science **322**, 235 (2008). S. Camerer, M. Korppi, A. Jöckel, D. Hunger, T. W. Hänsch and P. Treutlein, Phys. Rev. Lett. **107**, 223001 (2011). J. Restrepo, C. Ciuti and I. Favero, Phys. Rev. Lett. **112**, 013601 (2014). P. Meystre, Ann. Phys. **525**, 215 (2013). A. Dorsel, J. D. McCullen, P. Meystre, E. Vignes, and H. Walther, Phys. Rev. Lett. **51**, 1550 (1983). J. D. Thompson, B. M. Zwickl, A. M. Jayich, F. Marquardt, S. M. Girvin and J. G. E. Harris, Nature **452**, 72 (2008). F. Marquardt, J. P. Chen, A. A. Clerk, and S. M. Girvin, Phys. Rev. Lett. **99**, 093902 (2007). I. Wilson-Rae, N. Nooshi, W. Zwerger, and T. J. Kippenberg, Phys. Rev. Lett. **99**, 093901 (2007). A. Schliesser, O. Arcizet, R. Rivière, G. Anetsberger and T. J. Kippenberg, Nat. Phys. **5**, 509 (2009). J. Chan, T. P. Mayer Alegre, A. H. Safavi-Naeini, J.T. Hill, A. Krause, S. Gröblacher, M. Aspelmeyer and O. Painter, Nature **478**, 89 (2011). J. D. Teufel, T. Donner, D. Li, J. W. Harlow, M. S. Allman, K. Cicak, A. J. Sirois, J. D. Whittaker, K. W. Lehnert and R. W. Simmonds, Nature **475**, 359 (2011). K. Hammerer, C. Genes, D. Vitali, P. Tombesi, G. Milburn, C. Simon and D. Bouwmeester, arXiv:1211.2594 (2012). D. W. C. Brooks, T. Botter, S. Schreppler, T. P. Purdy, N. Brahms and D. M. Stamper-Kurn, Nature **488**, 476 (2012). T. P. Purdy, P.-L. Yu, R. W. Peterson, N. S. Kampel, and C. A. Regal, Phys. Rev. X **3**, 031012 (2013). A. H. Safavi-Naeini, S. Gröblacher, J. T. Hill, J. Chan, M. Aspelmeyer and O. Painter, Nature **500**, 185 (2013). P. Rabl, Phys. Rev. Lett. **107**, 063601 (2011). A. Nunnenkamp, K. 
B[ø]{}rkje and S. M. Girvin, Phys. Rev. Lett. **107**, 063602 (2011). E. Shahmoon, D. Wild, M. Lukin and S. Yelin, Phys. Rev. Lett. **118**, 113601 (2017). R. J. Bettles, S. A. Gardiner and C. S. Adams, Phys. Rev. Lett. **116**, 103602 (2016). P. Weber, J. Güttinger, A. Noury, J. Vergara-Cruz and A. Bachtold, Nat. Comm. **7**, 12496 (2016). E. Shahmoon, M. D. Lukin and S. F. Yelin, ”Collective motion of an atom array under laser illumination", manuscript in preparation. E. Shahmoon and G. Kurizki, Phys. Rev. A **89**, 043419 (2014). For photon energies below twice the atomic rest-mass $2mc^2$. D. F. Walls and G. J. Milburn, *Quantum Optics*, (Springer, 1995). P. D. Drummond and Z. Ficek (Eds.), *Quantum Squeezing* (Springer-Verlag, Berlin Heidelberg, 2004). M. I. Kolobov, Rev. Mod. Phys. **71**, 1539 (1999). A. Gatti, E. Brambilla and L. Lugiato, Prog. Optics **51**, 251 (2008). M. J. Potasek and B. Yurke, Phys. Rev. A **35**, 3974(R) (1987). A more precise condition appears to be $|v_{\mathbf{k}_{\bot}}|\gg 1,\mathrm{Re}[v_{\mathbf{k}_{\bot}}(\omega)]$, so that strong squeezing should be observed for frequencies where $\mathrm{Re}[\chi_{\mathbf{k}_{\bot}}(\omega)]\gg \mathrm{Im}[\chi_{\mathbf{k}_{\bot}}(\omega)]$, i.e. close to the mechanical resonance, but slightly detuned (e.g. by a mechanical linewidth $\alpha$). This subtle point has no practical importance, however, considering that the squeezing bandwidth is much wider than $\alpha$ \[see Eqs. (\[peak\]) and (\[W\])\], as further discussed in Appendix D. W. Królikowski, O. Bang, N. I. Nikolov, D. Neshev, J. Wyller, J. J. Rasmussen and D. Edmundson, J. Opt. B: Quantum Semiclass. Opt. **6**,S288 (2004). E. Shahmoon, P. Grisins, H. P. Stimming, I. Mazets and G. Kurizki, Optica **3**, 725 (2016). S. Mancini and P. Tombesi, Phys. Rev. A **49**, 4055 (1994). C. Fabre, M. Pinard, S. Bourzeix, A. Heidmann, E. Giacobino, and S. Reynaud, Phys. Rev. A **49**, 1337 (1994). A. Kronwald, F. Marquardt and A. A. Clerk, New J. Phys. **16**, 063058 (2014). J. Perczel, J. Borregaard, D. E. Chang, H. Pichler, S. F. Yelin, P. Zoller and M. D. Lukin, Phys. Rev. Lett. **119**, 023603 (2017). R. J. Bettles, J. Minář, C. S. Adams, I. Lesanovsky and B. Olmos, Phys. Rev. A **96**, 041603(R) (2017). J. Perczel, J. Borregaard, D. E. Chang, H. Pichler, S. F. Yelin, P. Zoller and M. D. Lukin, Phys. Rev. A **96**, 063801 (2017). V. Mkhitaryan, L. Meng, A. Marini, F. J. Garcia de Abajo, arXiv:1807.03231 (2018). M. T. Manzoni, M. Moreno-Cardoner and A. Asenjo-Garcia, J. V. Porto, A. V. Gorshkov, and D. E. Chang, N. J. Phys. **20**, 083048 (2018). A. Asenjo-Garcia, M. Moreno-Cardoner, A. Albrecht, H. J. Kimble, and D. E. Chang, Phys. Rev. X **7**, 031024 (2017). L. Henriet, J. S. Douglas, D. E. Chang and A. Albrecht, arXiv:1808.01138 (2018). L. Novotny and B. Hecht, *Principles of Nano-Optics* (Cambridge University Press, 2006). R. H. Lehmberg, Phys. Rev. A 2, 883 (1970). C. W. Gardiner and P. Zoller, *Quantum Noise*, 2nd Edition (Springer-Verlag, Berlin Heidelberg, 2000). E. Shahmoon and S. F. Yelin (unpublished). I. D. Leroux, M. H. Schleier-Smith, H. Zhang and V. Vuletić, Phys. Rev. A **85**, 013803 (2012).
---
abstract: 'We investigate the sound propagation in an air-filled tube periodically loaded with Helmholtz resonators. By tuning the Helmholtz resonance to the Bragg resonance, we study the efficiency of slow sound propagation in the presence of the intrinsic viscothermal losses of the system. While in the lossless case the overlapping of the resonances results in slow sound induced transparency of a narrow frequency band surrounded by a strong and broadband gap, the inclusion of the unavoidable losses imposes limits to the slowdown factor and the maximum transmission. Experiments, theory and finite element simulations have been used for the characterization of acoustic wave propagation. Experiments, in good agreement with the lossy theory, reveal the possibility of slowing sound at low frequencies by 20 times. A trade-off among the relevant parameters (delay time, maximum transmission, bandwidth) as a function of the tuning between Bragg and Helmholtz resonance frequency is also presented.'
author:
- 'G. Theocharis'
- 'O. Richoux'
- 'V. Romero-García'
- 'V. Tournat'
title: Slow sound propagation in lossy locally resonant periodic structures
---

Introduction
============

Locally resonant acoustic metamaterials [@Lu] derive their unique properties, e.g. negative effective mass density [@Ping] and negative bulk modulus [@Fang], from local resonators contained within each unit cell of engineered structures. Due to these effective parameters, a plethora of fascinating phenomena have been proposed over recent years, including negative refraction, super-absorbing sound materials, acoustic focusing, and cloaking (see Ref. \[\] and references therein). Although the inclusion of losses in locally resonant structures is very important, their role has been underestimated and in some studies totally ignored. Loss is not only an unavoidable feature, but it may also have deleterious consequences on some of the novel features of metamaterials[@meta], including double negativity and cloaking. Recent works on both photonic[@Huang04; @Pedersen; @McPhedran] and phononic[@Moiseyenko11; @Hussein09; @Andreassen13; @Laude] periodic structures show that the dispersion relation can be dramatically altered. In particular, flat propagating bands corresponding to slow-wave propagation acquire an enhanced damping as compared to bands with larger group velocities[@Pedersen; @Moiseyenko11]. The aim of this work is to study the influence of losses on slow sound propagation in periodic locally resonant structures. For this reason, we theoretically and experimentally analyze the sound propagation in a tube periodically loaded with Helmholtz resonators (HRs), taking into account the viscothermal losses[@Zwikker]. In particular, we investigate configurations where the Bragg resonance frequency due to periodicity and the frequency of the Helmholtz resonators either coincide or are very close to each other. In the first case, a super-wide and strongly attenuating band gap is created. This property has attracted renewed interest in recent years in different branches of science, including elastic waves[@Xiao], split-ring microwave propagation [@Paris], sonic crystals [@Page], and duct acoustics [@Wang], among others. In acoustics, this tuning, first studied by Sugimoto[@Sugimoto], is of great importance for sound and vibration isolation[@Wang; @Seo]. In the case of slightly detuned resonances, i.e.
when the Bragg and the local resonances are slightly different, an almost flat band appears, a feature which is particularly useful for slow-wave applications[@Boudouti]. Here, we make use of this detuning to theoretically and experimentally examine the effect of losses in the slow sound band. We focus on both periodic systems and finite periodic arrays with $N$ side HRs. Using the transmission matrix method, we characterize the group index, the bandwidth, and the slow-wave limits of these structures, showing good agreement with experiments. The limit of the slow sound due to losses is of particular importance for the design of narrow-band transmission filters and switches. Moreover, it could also open perspectives for controlling the nonlinear effects at the local resonances [@PRERichoux], which could increase the functionality of acoustic metamaterials, leading to novel acoustic devices for sound control at low frequencies. Theory ====== The propagation of linear, time-harmonic acoustic waves in a waveguide periodically loaded by side branches was first studied in Ref. \[\]. Using Bloch theory and the transfer matrix method, one can derive the following dispersion relation (see also Refs. \[\], \[\]): $$\label{eq2} \cos (qL) = \cos(kL)+ j\frac{Z_{0}}{2Z_{b}}\sin(kL),$$ where $q$ is the Bloch wave number, $k$ is the wave number in air, $L$ the lattice constant, $Z_{b}$ the input impedance of the branch (see Ref. \[\] for the case of HR branch), and $Z_{0}=\rho_{0}c_{0}/S$ the acoustic impedance of the waveguide where $S$ is its cross-sectional area; $\rho_{0}, c_{0}$ the density and the speed of sound in the air respectively, and $j=\sqrt{-1}$. The transmission coefficient through a finite lattice can be derived using the transmission matrix method. For the case of $N$ side branches, the total transmission matrix can be expressed as follows [@Pierce; @Seo] $$\begin{aligned} \label{eq1} \left( \begin{array}{c} P_{1} \\ U_{1} \end{array} \right) & = & (M_b M_T)^{N-1}M_b \left( \begin{array}{c} P_2 \\ U_2 \end{array} \right) , \nonumber \\ & = & \left( \begin{array}{cc} T_{11} & T_{12} \\ T_{21} & T_{22} \end{array} \right)\left( \begin{array}{c} P_2 \\ U_2 \end{array} \right),\end{aligned}$$ where $$\begin{aligned} \label{eq3} M_T &=& \left( \begin{array}{cc} \cos(kL) & jZ_0 \sin(kL) \\ \frac{j}{Z_0} \sin(kL) & \cos(kL) \end{array} \right),\\ M_b &= & \left( \begin{array}{cc} 1 & 0 \\ 1/Z_b & 1 \end{array} \right) ,\end{aligned}$$ represent the transmission matrices for the propagation through a length $L$ in the waveguide and through a resonant branch, respectively. $P_1$ ($U_1$) and $P_2$ ($U_2$) are the pressure (respectively, the volume velocity) at the entrance and at the end of the system. Considering the previous equations, the pressure complex transmission coefficient can then be calculated [@Seo] as $$\label{Trans} t =\frac{2}{T_{11}+T_{12}/Z_0+T_{21}Z_0+T_{22}}.$$ The sound waves are always subjected to viscothermal losses on the wall and to radiation losses. Viscothermal losses are taken into account by considering a complex expression for the wave number. In our case, we used the model of losses from Ref.
\[\], namely we replace the wave number and the impedances by the following expressions $$\begin{aligned} k=\frac{\omega}{c_0}(1+\frac{\beta}{s}(1+(\gamma -1)/\chi))\\ Z_c = \rho_0 c_0( 1+\frac{\beta}{s}(1-(\gamma -1)/\chi))\end{aligned}$$ by setting $s=r/\delta$ where $\delta=\sqrt{\frac{2\mu}{\rho_0\omega}}$ is the viscous boundary layer thickness, $\mu$ being the viscosity of air, $\chi=\sqrt{P_r}$ with $P_r$ the Prandtl number, $\beta=(1-j)/\sqrt{2}$, $\gamma$ the heat capacity ratio of air and $r$ the radius of the considered tube. Radiation losses, which appear at each connection between the waveguide and the HRs, are accounted for through a length correction of the HR necks defined in the description of the experimental set-up. Experimental apparatus ====================== The experimental apparatus that we used in this work to calculate the dispersion relation of periodic systems and the transmission coefficient of a finite periodic locally resonant system is shown in Fig. \[Schem\]. Each HR is made of a neck (cylindrical tube with an inner radius $R_{n}=1$ cm and a length $l_n=2$ cm) and a cavity (cylindrical tube with an inner radius $R_{c}=2.15$ cm and a variable length, $l_c$). We use different configurations throughout this work, from $3$ to $6$ HRs, loaded periodically along a cylindrical waveguide with an inner radius $R=2.5$ cm, $0.5$ cm wall thickness, and total length of $3$ m. The last HR is always connected at a distance of $L/2=15$ cm from the end of the set-up, $x_{end}$, where $L=30$ cm is the constant distance between adjacent resonators. The sound source is a piezo-electric buzzer embedded in the impedance sensor [@ImpedanceSensor] which is placed at $x_{in}$. One BK 4136 microphone, carefully calibrated, is placed at the other end (rigid termination) of the cylindrical waveguide. The frequency range of the applied signal is below the first cutoff frequency of the waveguide, $f_{c01}=4061$ Hz, and thus the propagation can be considered one-dimensional. The end correction of the neck was experimentally measured to be $1$ cm by comparing the input impedances (experimentally and theoretically) for different volumes of the HR cavity. For that purpose, one HR connected to a rigidly closed tube of length $15$ cm has been used. ![(Color online) Schematic of the experimental apparatus.[]{data-label="Schem"}](setup.pdf){width="8cm"} ![image](Dispersion_Relation_Group_Index.pdf){width="15cm"} The input impedance measurement setup (see Fig. \[Schem\]), together with the transmission matrix method, allows us to experimentally evaluate the dispersion relation of periodic systems. To do that we use the measured input impedance $Z=\frac{P_1}{U_1}$ and the transfer impedance $Z_{T}=\frac{P_2}{U_1}$. From $Z$, one can calculate the acoustic impedance at the position $x_{0}$, $\widetilde{Z}=\frac{\widetilde{P}_1}{\widetilde{U}_1}$, which is located $L/2=15$ cm from the first HR, as well as $\widetilde{Z}_{T}=\frac{P_2}{\widetilde{U}_1}$ (see Fig. \[Schem\]). Then, the impedance matrix of the symmetric structure from $x_{0}$ until the rigid end is given by: $$\begin{aligned} \label{Zmatrix} \left( \begin{array}{cc} \widetilde{Z} & \widetilde{Z}_{T} \\ \widetilde{Z}_{T} & \widetilde{Z} \end{array} \right), \end{aligned}$$ from which the transmission matrix of the symmetric system is deduced.
Once we have the transmission matrix of the symmetric periodic structure, we can calculate the Bloch wave number $q$, using $q = \frac{1}{NL}\arccos(\frac{\widetilde{Z}}{\widetilde{Z}_{T}})$[@Lissek; @Caloz] and, as a consequence, the dispersion relation of the system. The exact shape of the dispersion curve is obtained by phase unwrapping and by restoring the phase origin [@Caloz]. Due to the presence of imperfect matching at the end caused by the rigid termination, some finite size effects are expected in the experimental characterization. This setup can also be used for the experimental calculation of the transmission coefficient. To do that we replace the rigid end termination by an anechoic termination made of a $10$ m long waveguide partially filled with porous plastic foam to suppress the backward propagating waves as much as possible. In this case, the microphone is placed at a distance of $L=30$ cm after the last HR, namely the same distance as between the source and the first HR. Using the transmission matrix method, considering anechoic termination, and assuming the symmetry of the structure, the complex transmission coefficient $t$ reads as follows $$t=\frac{2 Z_T}{Z+Z_0}.$$ Results and discussion ====================== We start by studying the coupling between the Bragg and the resonance band gaps, taking into account the presence of the viscothermal losses. In Fig. \[DR\](a)-\[DR\](d), one can see both the experimental (red continuous line) and theoretical (black dashed line) complex dispersion relations for two different resonant frequencies of the HRs in comparison with the theoretical lossless case (green dotted line). The experimental setup is composed of $3$ HRs. The slight differences between theory and experiments are due to finite-size effects (see for example Fig. \[DR\](a)-(d) at $f\simeq323$ Hz). For the fixed lattice distance of $L=30$ cm, the first Bragg resonance appears at $f_{B}=\frac{c_0}{2L}\approx 571$ Hz. The resonance frequency of the HRs $f_{0}$ is in general unknown. One can use the traditional lumped-parameter model [@Pierce] to obtain an analytical expression. However, this is valid only at very low frequencies and it requires the knowledge of the end corrections. Therefore, in order to tune the resonant frequency to the Bragg frequency, we experimentally calculate the dependence of the imaginary part of the complex dispersion relation on the length of the cavity $l_c$ of the HR (see Fig. \[DR\](e)). We define a detuning length parameter, $\Delta l=l_0-l_c$, where $l_0$ corresponds to the cavity length at which $f_0=f_B$. Thus, $\Delta l$ measures how far we are from the complete overlap between the Bragg and the HR resonance. As shown in Fig. \[DR\](e), if $\Delta l<0$ ($\Delta l>0$) the HR resonance approaches the Bragg resonance from lower (higher) frequencies. According to Ref. \[\], for the case of $\Delta l=0$ a wide band-gap appears in the region $f_{B}(1-(\kappa/2)^{1/2})<f<f_{B}(1+(\kappa/2)^{1/2})$, where $\kappa=\frac{S_c l_c}{SL}$ measures the smallness of the cavity’s volume relative to the unit-cell’s volume. For our case (white dashed line in Fig. \[DR\](e)), the above expression predicts a band gap for $416.5<f<727.7$ Hz, which is in very good agreement with the experiments (Fig. \[DR\](a)-(b)) for the case of $\Delta l\simeq0$ cm. However, it is very difficult to realize in practice the case of $\Delta l=0$ because one needs to control either the length of the cavity or the lattice constant with high precision.
When $\Delta l \approx0$ one can observe that the lossless theory (green dotted line in Fig. \[DR\](a)) predicts a flat branch inside the band gap. This branch is drastically reduced once losses are introduced. We continue by studying the detuned case, i.e., the case $\Delta l\neq0$. In this situation, as shown in Fig. \[DR\](e), between the Bragg and the resonance bands there is a range of frequencies with small attenuation (small Im($qL$)). For example, the real part of the complex dispersion relation for the case $\Delta l=-0.4$ cm (see Fig. \[DR\](c)) shows an almost flat band, which means slow sound propagation. We introduce the group index as a slowdown factor from the speed of sound $c$, defined as $n_g\equiv c/\upsilon_{g}$ where $\upsilon_{g}= \left(\frac{\partial \omega}{\partial Re(q)}\right)$ is the group velocity. In lossy periodic structures, the real and imaginary parts of the group velocity correspond to propagation velocity and pulse reshaping respectively (see [@McPhedran] and references therein). Negative values of $n_g$ correspond to negative group velocity induced by the losses, as has also been reported in Ref. \[\]. Figure \[DR\](f) shows the theoretical and experimental group index obtained from the data shown in Fig. \[DR\](c). It is worth noting that throughout the bandwidth of the transmitted frequencies (marked in Fig. \[DR\](f) with the double arrow) the group index is $n_g>20$. This slowdown factor is comparable to the results of previously reported experiments [@slowsound]. Figure \[DR\](e) also shows that the bandwidth of the transmission band depends on $\Delta l$. Now, we investigate in more detail this slow sound propagation in a lossy finite lattice of HRs. In particular, the slow sound propagation is characterized using a delay time and we study the trade-offs between this delay and the transmission losses in a finite lattice. ![(Color online)(a) Transmission coefficient vs. frequency for a lattice of 6 HRs with $L=30$ cm and $\Delta l=-0.8$ cm. Experimental data (red solid line) and their comparison with the lossless (black dashed line) and lossy (green dotted line) theory. (b) Trade-offs between delay (solid blue) and maximum transmission (dashed green) for a lattice of 6 HRs as a function of the bandwidth and $\Delta l$.[]{data-label="TGT"}](Trans_groupIndex_Tradeoffs.pdf){width="8.5cm"} Figure \[TGT\](a) shows the experimental transmission amplitude in comparison with both the lossless and the lossy theoretical predictions for a lattice of $N=6$ HRs for the detuned case of $\Delta l=-0.8$ cm. For the lossless case, there is not a wide transmission band, but $N-1$ narrow transparent ($T=|t|^2=0$ dB) peaks as shown in the inset of Fig. \[TGT\](a). Comparing the lossy with the lossless case, one can observe that the losses reduce and smooth the transmission amplitude of the $N-1$ peaks, creating a broad band of transmitted frequencies, in good agreement with the experiments. Thus, losses reduce the transparency but create a bandwidth of transmitted modes with similar values of transmission. In order to analyze the slowing down of sound in this transmitted range of frequencies, we define the time delay of a pulse propagating through the whole length $(N-1)L$ as $\tau = \frac{(N-1)L}{\upsilon_{g}^{0}}$. $\upsilon_{g}^{0}$ is the group velocity at the frequency, denoted by the circle in Fig. \[DR\](f), at which the group velocity dispersion (GVD)[@GVD] is minimum, in order to reduce pulse distortion.
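For readers who wish to reproduce the trends discussed above, the following Python sketch evaluates the lossy dispersion relation of Eq. (\[eq2\]) and the finite-lattice transmission of Eq. (\[Trans\]). It is not the authors' code: the branch impedance $Z_b$ is replaced by a standard lumped neck-mass/cavity-compliance approximation (the exact expression is only referenced in the text), the cavity length in the script is an assumed value, and the arccos branch is not unwrapped.

```python
import numpy as np

# Minimal sketch (not the authors' code) of the lossy dispersion relation
# and the transmission through N Helmholtz resonators. The branch impedance
# Z_b uses an assumed lumped-element model (neck inertance + cavity
# compliance); geometric values are the nominal ones quoted in the text,
# except the cavity length lc, which is an assumed placeholder.
rho0, c0, gamma, mu, Pr = 1.2, 343.0, 1.4, 1.81e-5, 0.71
L = 0.30                        # lattice constant [m]
R, Rn = 2.5e-2, 1.0e-2          # waveguide and neck radii [m]
Rc, lc = 2.15e-2, 6.0e-2        # cavity radius [m] and assumed length [m]
ln_eff = 2.0e-2 + 1.0e-2        # neck length plus measured end correction [m]
S, Sn, Vc = np.pi*R**2, np.pi*Rn**2, np.pi*Rc**2*lc
Z0 = rho0*c0/S

def lossy_k(omega, r=R):
    """Viscothermal wave number in a tube of radius r (Zwikker-Kosten model)."""
    s = r/np.sqrt(2*mu/(rho0*omega))
    beta, chi = (1 - 1j)/np.sqrt(2), np.sqrt(Pr)
    return (omega/c0)*(1 + (beta/s)*(1 + (gamma - 1)/chi))

def Zb(omega):
    """Assumed lumped HR branch impedance: neck mass + cavity stiffness."""
    return 1j*omega*rho0*ln_eff/Sn - 1j*rho0*c0**2/(omega*Vc)

def bloch_qL(omega):
    """Complex Bloch phase qL from the dispersion relation (branch not unwrapped)."""
    k = lossy_k(omega)
    return np.arccos(np.cos(k*L) + 1j*Z0/(2*Zb(omega))*np.sin(k*L))

def transmission(omega, N=6):
    """Pressure transmission coefficient t for N side resonators."""
    k = lossy_k(omega)
    MT = np.array([[np.cos(k*L), 1j*Z0*np.sin(k*L)],
                   [1j*np.sin(k*L)/Z0, np.cos(k*L)]])
    Mb = np.array([[1.0, 0.0], [1.0/Zb(omega), 1.0]])
    T = Mb
    for _ in range(N - 1):       # total matrix (M_b M_T)^(N-1) M_b
        T = T @ MT @ Mb
    return 2.0/(T[0, 0] + T[0, 1]/Z0 + T[1, 0]*Z0 + T[1, 1])

freqs = np.linspace(100.0, 1000.0, 901)
qL = np.array([bloch_qL(2*np.pi*f) for f in freqs])
t = np.array([transmission(2*np.pi*f) for f in freqs])
# Re(qL) and Im(qL) correspond to the real and imaginary branches of the
# dispersion relation; the group index follows from n_g = c0 d(Re q)/d(omega).
```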
Figure \[TGT\](b) summarizes the trade-offs among the relevant parameters, i.e., $\tau$, $\Delta l$, bandwidth and maximum transmission for the analyzed case of $N=6$ HRs. The maximum transmission in dB has been calculated using Eq. (\[Trans\]). As one can observe, to obtain a large time delay using a fixed number of identical resonators, a very small detuning is needed, i.e., $\Delta l\simeq0$. However, as the detuning is decreased, the bandwidth of the propagating frequencies becomes smaller and the overall losses of the structure become larger. As is well established in lossy photonic [@Pedersen] and phononic [@Laude] crystals, losses particularly impact slow-wave modes. It is worth noting that modes with group velocities near zero can disappear when losses are considered [@Reza]. ![image](lossless_vs_losses_extrusion.pdf){width="15cm"} Numerical simulations using the Finite Element Method (FEM) have been performed to highlight the effect of losses in the field distribution inside the waveguide loaded with 6 HRs with $\Delta l =-0.8$ cm (as in Fig. \[TGT\]). A plane wave travelling from left to right is considered, with the ends of the tube in the numerical domain surrounded by perfectly matched layers in order to numerically approximate the Sommerfeld radiation condition. Figures \[6HR\](a) and \[6HR\](b) show frequency-position maps of the sound pressure level for the lossless and lossy cases respectively. In the lossless case, the $N-1$ transmitted peaks with the corresponding $N-1$ Fabry-Pérot resonances inside the finite structure are clearly observed. For these resonant frequencies there is no reflection at the entrance of the tube. However, for the lossy case, Fig. \[6HR\](b), the behavior is dramatically modified: the resonances are destroyed, reflections appear at the entrance, and thus standing waves are generated. Therefore the system is no longer completely transparent. Conclusions =========== In conclusion, we have shown experimentally and theoretically that the presence of losses can drastically influence the slow sound propagation through a periodic locally resonant structure. For the tuned case, i.e., when the Helmholtz resonance frequency and the Bragg frequency are identical, a super-wide and strongly attenuated band gap appears. For the detuned case, we have shown an experimental group index larger than $20$. We have also investigated in detail the slow sound propagation in a finite lattice of HRs with losses, showing that losses reduce and smooth the transmission amplitude of the peaks, creating a broad band of transmitted frequencies, in good agreement with the experiments. A trade-off among the relevant parameters (delay time, maximum transmission, bandwidth and detuning) has been presented, showing that the theoretically predicted near-zero group velocity disappears due to losses. Finally, using simulated acoustic wave fields inside the structure, we have pointed out the presence of reflected waves in the lossy case, in contrast to the lossless case. We believe that this experimental and theoretical study shows the great importance of losses in acoustic wave propagation through periodic locally resonant structures and contributes to very promising research in the field of acoustic metamaterials, acoustic transmission filters and slow-wave applications. We acknowledge V. Pagneux and A. Maurel for useful discussions. GT acknowledges financial support from the FP7-People-2013-CIG grant, Project 618322 ComGranSol.
VRG acknowledges financial support from the “Pays de la Loire” through the post-doctoral programme. [99]{} M.-H. Lu, L. Feng and Y.-F. Chen, Materials Today [**12**]{} (12), 34 (2009). Z. Liu, X. Zhang, Y. Mao, Y. Y. Zhu, Z. Yang, C. T. Chan and P. Sheng, Science [**289**]{}, 1734 (2000). N. Fang, D. J. Xi, J. Y. Xu, M. Ambati, W. Srituravanich, C. Sun and X. Zhang, Nat. Mater. [**5**]{}, 452 (2006). P. A. Deymier, [*Acoustic Metamaterials and Phononic Crystals*]{} (Springer, Heidelberg, 2013); R. V. Craster and S. Guenneau, [*Acoustic Metamaterials: Negative Refraction, Imaging, Lensing and Cloaking*]{} (Springer, Heidelberg, 2013). L. Solymar and E. Shamonina, [*Waves in Metamaterials*]{} (Oxford University Press, New York, 2009). K. C. Huang, E. Lidorikis, X. Jiang, J. D. Joannopoulos and K. Nelson, Phys. Rev. B [**69**]{}, 195111 (2004). J. G. Pedersen, S. Xiao, N. A. Mortensen, Phys. Rev. B [**78**]{}, 153101 (2008). P. Y. Chen et al., Phys. Rev. A [**82**]{}, 053825 (2010). R. P. Moiseyenko and V. Laude, Phys. Rev. B [**83**]{}, 064301 (2011). M. I. Hussein, Phys. Rev. E [**80**]{}, 212301 (2009). R. P. Moiseyenko, V. Laude, Phys. Rev. B [**83**]{}, 064301 (2011). E. Andreassen and J. S. Jensen, J. Sound Vib. [**135**]{}, 041015 (2013). C. Zwikker and C. W. Kosten, [*Sound Absorbing Materials*]{} (Elsevier Publishing Company, Inc., Amsterdam, 1949). Y. Xiao, B. R. Mace, J. Wen, X. Wen, Phys. Lett. A [**375**]{}, 1485 (2011). N. Kaina, M. Fink, G. Lerosey, submitted. C. Cro[ë]{}nne, E. J. S. Lee, H. Hu, J. H. Page, AIP Advances [**1**]{}, 041401 (2011). X. Weng, C. Mak, J. Acoust. Soc. Am. [**131**]{}(2), 1172 (2011). N. Sugimoto and T. Horioka, J. Acoust. Soc. Am. [**97**]{}(3), 1446 (1995). S.-H. Seo, Y.-H. Kim, J. Acoust. Soc. Am. [**118**]{}(4), 2332 (2005). E. H. El Boudouti et al., J. Phys.: Condens. Matter [**20**]{}, 255212 (2008). O. Richoux, V. Tournat, T. Le Van Suu, Phys. Rev. E [**75**]{}, 026615 (2007). C. E. Bradley, J. Acoust. Soc. Am. [**96**]{}(3), 1844 (1994). O. Richoux, V. Pagneux, Europhys. Lett. [**59**]{}(1), 34 (2002). A. D. Pierce, [*Acoustics: An introduction to its physical principles and applications*]{} (McGraw-Hill, 1981). C. A. Macaluso, J.-P. Dalmont, J. Acoust. Soc. Am. [**129**]{}(1), 404 (2011). F. Bongard, H. Lissek, and J. R. Mosig, Phys. Rev. B [**82**]{}, 094306 (2010). C. Caloz and T. Itoh, [*Electromagnetic Metamaterials: Transmission Line Theory and Microwave Applications*]{} (Wiley-Interscience and IEEE Press, Hoboken, NJ, 2006). A. Santillán and S. I. Bozhevolnyi, Phys. Rev. B [**84**]{}, 064304 (2011). When the group velocity depends on frequency in a medium, one can introduce the group velocity dispersion (GVD), GVD$=\partial^2k/\partial\omega^2$. This parameter is for example used in optics for the analysis of the dispersive temporal broadening or compression of pulses. A. Reza, M. M. Dignam, and S. Hughes, Nature (London) [**455**]{}, E10 (2008).
**Some remarks on structural matrix rings** **and matrices with ideal entries** Stephan Foldes Tampere University of Technology, PL 553, 33101 Tampere, Finland [email protected] Gerasimos Meletiou TEI of Epirus, PO Box 110, 47100 Arta, Greece [email protected] December 2010 **Abstract** *Associating to each pre-order on the indices $1,...,n$ the corresponding structural matrix ring, or incidence algebra, embeds the lattice of $n$-element pre-orders into the lattice of $n\times n$ matrix rings. Rings within the order-convex hull of the embedding, i.e. matrix rings that contain the ring of diagonal matrices, can be viewed as incidence algebras of ideal-valued, generalized pre-order relations. Certain conjugates of the upper or lower triangular matrix rings correspond to the various linear orderings of the indices, and the incidence algebras of partial orderings arise as intersections of such conjugate matrix rings.* Keywords: structural matrix ring, incidence algebra, pre-order, quasi-order, triangular matrix, conjugation, semiring, ideal lattice, subring lattice **1 Conjugate subrings** *Rings* are understood to be possibly non-commutative, and to have a *unit* (multiplicatively neutral) element, which is assumed to be distinct from the zero (null) element and is denoted by $1$ or $I$ or a similar symbol. *Subrings* are understood to contain the unit element. An $n\times n$ *matrix* is viewed as a map defined on the set $\mathbf{n}^{2}=\left\{ 1,...,n\right\} ^{2}$. (Here $n\geq 1$ is assumed.) The ring of $n\times n$ matrices over a ring $R$ is denoted by $M_{n}(R)$. A *pre-order*, also called *quasi-order*, is a reflexive and transitive binary relation $\lesssim$ on a set $S$; an *order* (or *partial order*) is an anti-symmetric pre-order; and a *linear* (or *total*) *order* is an order in which any two elements are comparable. Instead of the generic notation $\lesssim$, specific pre-orders may be denoted by other symbols such as $\theta$. Given a ring $R$ and a pre-order $\lesssim$ on $\mathbf{n}$, the *structural matrix ring* $M_{n}(\lesssim ,R)$ over $R$ is defined by $$M_{n}(\lesssim ,R)=\left\{ \text{ }A\in M_{n}(R):\text{ }\forall i,j\text{ \ \ }A(i,j)=0\text{ unless }i\lesssim j\text{ }\right\}$$ The full matrix ring $M_{n}(R)$, the subrings of all upper triangular (respectively lower triangular) matrices, and the subring of all diagonal matrices are examples of structural matrix rings. Structural matrix rings are essentially the same as incidence algebras of finite pre-ordered sets, although the latter term is sometimes used under the assumption that the base ring $R$ is a field \[F\] or that the pre-order in question is an order, possibly on an infinite set, as in \[R\]. Ring-theoretical properties of incidence algebras most relevant to the present context were studied in \[DW, F, MSW, W1, W2\]. The set of all pre-orders defined on any given set constitutes a lattice whose minimum is the equality relation. **Proposition 1** *For any ring $R$, the map associating to each pre-order $\lesssim$ on $\mathbf{n}$ the corresponding structural matrix ring $M_{n}(\lesssim ,R)$ provides an embedding of the lattice of pre-orders on $\mathbf{n}$ into the lattice of subrings of the matrix ring $M_{n}(R)$. The embedding also preserves infinite greatest lower and least upper bounds.* **Proof** Preservation of greatest lower bounds is obvious.
The preservation of upper bounds is a consequence of the description of the least upper bound of a family of pre-orders as the transitive closure of the least binary relation that is implied by the pre-orders in the family. $\square$ The lattice embedding described in the proposition above is generally not surjective, and for $n\geq 2$ its range is order-convex if and only if $R$ is a division ring. Section 2 will provide a description of the order-convex hull of the embedding’s range. Subrings $S$ and $T$ of $M_{n}(R)$ are said to be *permutation conjugates* if there is a permutation matrix $P$ such that $$S=\left\{ PAP^{-1}:A\in T\right\}$$ Such subrings are obviously isomorphic under the automorphism $A\mapsto PAP^{-1}$ of $M_{n}(R)$. All the $n!$ linear orders on the finite set $\mathbf{n}$ are isomorphic, and the isomorphisms among them are precisely the self-bijections of the underlying set $\mathbf{n}$. Consequently we have: **Proposition 2** *For any pre-order $\lesssim$ on $\mathbf{n}$ and any ring $R$, the following conditions are equivalent*: (i) $\lesssim$ *is a linear order*, (ii) $M_{n}(\lesssim ,R)$ *is a permutation conjugate of the ring of upper triangular matrices,* (iii) $M_{n}(\lesssim ,R)$ *is a permutation conjugate of the ring of lower triangular matrices.* $\square$ The well-known fact that every partial order is the intersection of its linear extensions yields: **Proposition 3** *For any pre-order $\lesssim$ on $\mathbf{n}$ and any ring $R$, the following conditions are equivalent*: (i) $\lesssim$ *is an order*, (ii) $M_{n}(\lesssim ,R)$ *is the intersection of some permutation conjugates of the ring of upper triangular matrices,* (iii) $M_{n}(\lesssim ,R)$ *is the intersection of some permutation conjugates of the ring of lower triangular matrices.* $\square$ These considerations were first developed, in the context of fields, in \[FM1\]. They were related to a ring property concerning one-sided and two-sided inverses in \[FSW\], addressing questions originating in \[C\] and \[SW\]. **2 Matrices with ideal entries** The order-convex hull of the embedding provided by Proposition 1 consists obviously of those matrix rings that contain the ring of diagonal matrices. We now show that these rings can be viewed as generalized incidence algebras, corresponding to generalized relations that are the analogues of pre-order relations in a reticulated semiring-valued framework. By a *semiring* we understand a set endowed with a binary operation called *sum* (denoted additively) and a binary operation called *product* (denoted multiplicatively) such that: (i) the sum operation defines a commutative semigroup, (ii) the product operation defines a semigroup, (iii) both distributivity laws $a(b+c)=ab+ac$ and $(b+c)a=ba+ca$ hold for all members $a,b,c$ of the underlying set. The set $\mathcal{I}(R)$ of (bilateral) ideals of any ring $R$ is a semiring under ideal sum and product of ideals. This semiring is lattice-ordered by inclusion.
The set $M_{n}(\mathcal{I}(R))$ of $n\times n$ matrices with entries in $\mathcal{I}(R)$ is again a semiring under the obvious sum and product operations, also lattice-ordered by entry-wise inclusion: lattice join coincides with semiring sum, denoted $+$, while the lattice meet operation $\wedge$ is entry-wise intersection. The lattice $M_{n}(\mathcal{I}(R))$ is complete; it is isomorphic to the $n^{2}$-th Cartesian power of the complete lattice $\mathcal{I}(R)$. The matrix $\mathbf{I}$, with the improper ideal $(1)=R$ in diagonal positions and the trivial ideal $(0)$ in off-diagonal positions, is multiplicatively neutral in the semiring $M_{n}(\mathcal{I}(R))$. For every matrix $\mathbf{U}$ with ideal entries, $\mathbf{U}\in M_{n}(\mathcal{I}(R))$, consider the set of matrices $$G=\left\{ A\in M_{n}(R):\text{ }\forall i,j\text{ \ \ }A(i,j)\in \mathbf{U}(i,j)\right\}$$ This set is always an additive subgroup of $M_{n}(R)$, and it is a subring if and only if $\mathbf{U}^{2}+\mathbf{I}\leq \mathbf{U}$ holds in the ordered semiring of $n\times n$ matrices with ideal entries. In that case, we say that the subring $G$ is *defined by* $\mathbf{U}$. There is a natural lattice embedding from the lattice of all pre-orders on $\mathbf{n}$ into the lattice $M_{n}(\mathcal{I}(R))$, namely the map $\theta \mapsto \mathbf{U}$ where $\mathbf{U}(i,j)$ is the improper or trivial ideal of $R$ according to whether the relation $i\theta j$ holds or not. The matrix $\mathbf{U}$ so obtained always satisfies $\mathbf{U}^{2}+\mathbf{I}\leq \mathbf{U}$, and the subring of $M_{n}(R)$ defined by it coincides with the structural matrix ring $M_{n}(\theta ,R)$. For this reason we denote by $M_{n}(R,\mathbf{U})$ the matrix ring defined by any $\mathbf{U}\in M_{n}(\mathcal{I}(R))$ satisfying $\mathbf{U}^{2}+\mathbf{I}\leq \mathbf{U}$. Any $\mathbf{U}\in M_{n}(\mathcal{I}(R))$ is viewed as an $\mathcal{I}(R)$-valued relation on the $n$-element set $\mathbf{n}$; the inequality $\mathbf{U}^{2}+\mathbf{I}\leq \mathbf{U}$ generalizes transitivity and reflexivity of relations, and $M_{n}(R,\mathbf{U})$ may then be thought of as the incidence algebra of the generalized pre-order $\mathbf{U}$. Matrices with ideal entries $\mathbf{U}\in M_{n}(\mathcal{I}(R))$ satisfying only the generalized transitivity condition $\mathbf{U}^{2}\leq \mathbf{U}$ were used by van Wyk \[W1\], and recently again by Meyer, Szigeti and van Wyk \[MSW\], in the description of ideals of structural matrix rings. Any matrix with ideal entries $\mathbf{U}\in M_{n}(\mathcal{I}(R))$ satisfying $\mathbf{U}^{2}+\mathbf{I}\leq \mathbf{U}$ is said to be *reflexive-transitive*. If $R$ is a division ring, then the range of the map $\theta \mapsto \mathbf{U}$ described above, embedding the pre-order lattice into $M_{n}(\mathcal{I}(R))$, is exactly the set of reflexive-transitive matrices.
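As a small illustration (this example is ours and not part of the original note, but it follows directly from the definitions), take $R=\mathbb{Z}$, $n=2$ and $$\mathbf{U}=\left( \begin{array}{cc} (1) & 2\mathbb{Z} \\ (0) & (1) \end{array} \right).$$ A direct computation gives $\mathbf{U}^{2}=\mathbf{U}$ and $\mathbf{U}^{2}+\mathbf{I}=\mathbf{U}$, so $\mathbf{U}$ is reflexive-transitive, and $$M_{2}(\mathbb{Z},\mathbf{U})=\left\{ \left( \begin{array}{cc} a & 2b \\ 0 & c \end{array} \right) :a,b,c\in \mathbb{Z}\right\}$$ is a subring of $M_{2}(\mathbb{Z})$ containing all diagonal matrices. It is not a structural matrix ring $M_{2}(\lesssim ,\mathbb{Z})$ for any pre-order $\lesssim$, since its $(1,2)$ entries range over an ideal strictly between $(0)$ and $(1)$; over a division ring no such example exists, in accordance with the preceding remark.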
For any ring $R$, the set of reflexive-transitive matrices with ideal entries constitutes a complete lattice under the ordering of $M_{n}(\mathcal{I}(R))$ (but not a sublattice of $M_{n}(\mathcal{I}(R))$ in general). **Proposition 4** *For any ring $R$, the map $\mathbf{U}\mapsto M_{n}(R,\mathbf{U})$ establishes a lattice isomorphism between:* (i) *the lattice $\left\{ \mathbf{U}\in M_{n}(\mathcal{I}(R)):\mathbf{U}^{2}+\mathbf{I}\leq \mathbf{U}\right\}$ of $n\times n$ reflexive-transitive matrices with ideal entries,* (ii) *the lattice of subrings of $M_{n}(R)$ containing all diagonal matrices.* **Proof** Obviously if $\mathbf{U}^{2}+\mathbf{I}\leq \mathbf{U}$ then $M_{n}(R,\mathbf{U})$ contains all diagonal matrices, and if $\mathbf{U\subseteq V}$ then $M_{n}(R,\mathbf{U})\subseteq M_{n}(R,\mathbf{V})$. Conversely, let $N\subseteq M_{n}(R)$ be any matrix ring including all diagonal matrices. For every $1\leq i,j\leq n$ the set $U_{ij}=\left\{ A(i,j):A\in N\right\}$ is an ideal of $R$, the matrix $\mathbf{U}=(U_{ij})$ with ideal entries can be seen to satisfy $\mathbf{U}^{2}+\mathbf{I}\leq \mathbf{U}$, and $M_{n}(R,\mathbf{U})=N$. $\square$ The proof above is presented in \[FM2\] in somewhat greater detail, together with some consequences. For division rings, where there is only the trivial and the improper ideal, the structure of the lattice of subrings of $M_{n}(R)$ containing all diagonal matrices is independent of the choice of the particular division ring $R$. In the general case, it is the ideal lattice structure of $R$ that determines the structure of this upper section of the subring lattice of $M_{n}(R)$. **References** \[C\] P.M. Cohn, Reversible rings, Bull. London Math. Soc. 31 (1999) 641–648 \[DW\] S. Dascalescu, L. van Wyk, Do isomorphic structural matrix rings have isomorphic graphs? Proc. Amer. Math. Soc. 124 (1996) 1385–1391. \[F\] R.B. Feinberg, Polynomial identities of incidence algebras, Proc. Amer. Math. Soc. 55 (1976) 25–28. \[FM1\] S. Foldes, G. Meletiou, On incidence algebras and triangular matrices, Rutcor Res. Report 35-2002, Rutgers University, 2002. Available at rutcor.rutgers.edu/rrr \[FM2\] S. Foldes, G. Meletiou, On matrix rings containing all diagonal matrices, Tampere University of Technology, August 2007 http://math.tut.fi/algebra/ \[FSW\] S. Foldes, J. Szigeti, L. van Wyk, Invertibility and Dedekind finiteness in structural matrix rings. Forthcoming in Linear and Multilin. Algebra 2011. Manuscript iFirst 2010, 1–7, http://dx.doi.org/10.1080/03081080903357653 \[MSW\] J. Meyer, J. Szigeti, L. van Wyk: On ideals of triangular matrix rings, Periodica Math. Hung. Vol. 59 (1) (2009), 109-115 \[R\] G.-C. Rota, On the foundations of combinatorial theory I. Theory of Möbius functions, Zeitschrift Wahrscheinlichkeitstheorie 2 (1964) 340–368 \[SW\] J. Szigeti, L. van Wyk, Subrings which are closed with respect to taking the inverse, J. Algebra 318 (2007) 1068–1076 \[W1\] L. van Wyk, Special radicals in structural matrix rings, Communications Alg. 16 (1988) 421-435 \[W2\] L. van Wyk, Matrix rings satisfying column sum conditions versus structural matrix rings, Linear Algebra Appl. 249 (1996) 15–28
--- abstract: 'The dynamics of the jamming transition in a three-dimensional granular system under vertical vibration is studied using diffusing-wave spectroscopy. When the maximum acceleration of the external vibration is large, the granular system behaves like a fluid, with the dynamic correlation function $G(t)$ relaxing rapidly. As the acceleration of vibration approaches the gravitational acceleration $g$, the relaxation of $G(t)$ slows down dramatically, and eventually stops. Thus the system undergoes a phase transition and behaves like a solid. Near the transition point, we find that the structural relaxation shows a stretched exponential behavior. This behavior is analogous to the behavior of supercooled liquids close to the glass transition.' author: - Kipom Kim - Jong Kyun Moon - Jong Jin Park - Hyung Kook Kim - Hyuk Kyu Pak date: 'January 12, 2005' title: Jamming transition in a highly dense granular system under vertical vibration --- Introduction ============ Granular materials are particle systems in which the particle size is large and the effect of thermal agitation is negligible[@Review]. Recently there has been much interest in the physics of noncohesive granular materials lying on a vertically vibrating surface. When the vibration intensity is large, the granular systems show the properties of fluids, such as convection[@Convection1; @Convection2], heaping[@Heaping; @Gas], traveling surface waves[@Pak], pattern formation[@Swinney; @Kim], and size segregation[@Segre]. When the vibration intensity is small, disordered granular materials become jammed, behaving like a system with infinite viscosity. After Liu and Nagel proposed an idea unifying the glass transition and the jamming behavior[@Liu], the behavior of the jamming transition has been studied extensively[@Weitz; @OHern; @Coniglio; @Nicodemi; @Bideau; @DAnna1; @Bonn; @Powders]. In this model, the inverse density of the system $\rho^{-1}$, temperature $T$, and stress $\sigma$ form the axes of a three-dimensional phase diagram, with the jammed state in the inner octant and the unjammed state outside. In athermal macroscopic systems like granular materials, the thermodynamic temperature does not play any important role. Instead, an effective temperature associated with the random motion of the particles plays this role. When jammed by lowering the effective temperature, the system is caught in a small region of phase space with no possibility of escape. For thermal systems, if the molecules are bulky and of irregular shape, or if the liquid is cooled too rapidly for the crystalline structure to form, at low temperature it vitrifies into a rigid phase that retains the disordered molecular arrangements of the liquid, creating a glass state. If the idea of the jamming phase diagram is correct, one can apply many ideas of the glass transition to explain the jamming behavior in athermal systems. In order to study the dynamics of granular systems, one might want to track the motion of the individual particles. However, due to the opacity of the granular systems, most experimental work has been limited to the study of the external features of the granular flow or the motion of tracer particles. Recently, noninvasive experimental techniques using magnetic resonance imaging (MRI), positron emission particle tracking (PEPT), and diffusing-wave spectroscopy (DWS) have overcome the problem of opacity in three-dimensional granular systems and have made observation of internal features of the flow possible[@Convection1; @Pept; @Durian; @You].
Since the spatiotemporal scale of the particle fluctuations is much smaller than the resolution of MRI and PEPT, only DWS can provide a statistical description of a highly dense system of small particles with adequate resolution[@DWS]. To briefly explain the DWS technique: photons are scattered consecutively by many particles in a highly dense medium. This diffusive nature of light in a strongly scattering medium results in a scattered light intensity that fluctuates with time. Thus the temporal decay of the light intensity autocorrelation function is used to study very small relative motions of the scatterers in the medium. The DWS technique has been successfully applied to study the motion of granular particles in a channel flow, a gas-fluidized bed, an avalanche flow, and a vibro-fluidized bed[@Durian; @You]. In this paper, the fluidization and jamming process of a thick and highly dense three-dimensional vibro-fluidized granular bed is studied using DWS. Exploring the temperature axis of the jamming phase diagram[@Liu], we compare our experimental result with theoretical concepts developed in the study of supercooled liquids close to the glass transition. Our result provides strong evidence of the analogy between the dynamics of granular materials and the behavior of supercooled liquids close to the glass transition. Experimental Setup ================== Figure 1 is a simple schematic diagram of the experimental setup. A 180 mm high, 90 mm wide ($L_x$), and 10 mm thick ($L_y$) rectangular glass vessel is filled with glass beads of diameter $270 \pm 20 \mu$m to a total depth of $L_z=100$ mm. An electromagnetic shaker vibrates the vessel vertically with the form $A~{\rm sin}\left(2\pi f t\right)$, where $A$ is the amplitude and $f$ is the frequency of the vibration. One can construct a dimensionless acceleration amplitude, whose maximum value is $$\Gamma = 4 \pi^2 f^2 A/g ,$$ where $g$ is the gravitational acceleration[@Heaping]. The vessel is much heavier than the glass beads it contains, so that the collisions of the vessel with the granular medium do not disturb its vertical vibration. The air inside the vessel is evacuated below a pressure of 0.1 Torr, where the volumetric effect of the gas can be neglected[@Gas]. The acceleration amplitude $\Gamma$ and the vibration frequency $f$ are monitored by an accelerometer positioned on the vessel. All the data are taken in a steady state condition where the system is well compacted[@Bideau]. For the DWS measurements, an Ar laser beam with a wavelength of 488 nm is directed onto the wide side of the vessel, and the transmitted light intensity is detected by a photomultiplier tube (PMT). The center of the scattering volume is located at the position ($x,z$), where $x$ is the horizontal distance from the central-vertical axis of the vessel and $z$ is the vertical distance below the free surface. The intensity output $I$ of the PMT is fed to a computer-controlled digital correlator (BI9000AT; Brookhaven Inst.). The intensity autocorrelation function, which is calculated by the digital correlator, is defined as $$g_2 (t) = \langle I(t) I^* (0)\rangle / \langle I(0)\rangle^2 = 1 + \kappa G (t) ,$$ where $t$ is a delay time, $\kappa$ is a factor determined by the optical geometry of the experiment[@Chu], and the brackets indicate a time average. In a highly dense granular medium, incident photons are multiply scattered by the glass beads. When the beads in the medium move, the intensity $I$ measured at the detector fluctuates with time.
Thus the intensity autocorrelation function contains information about the movement of the beads in the medium. When $I$ is periodic, there are echoes in the correlation function at the integral multiples of the oscillation period[@Echo]. In this experiment, the position of the incident laser beam is fixed and the scattering medium vibrates with the form of $A~{\rm sin}\left(2\pi f t\right)$. Therefore when all the particles in the scattering volume exhibit perfectly reversible periodic motion, the correlation function will return to its initial value 1\[=$G(0)$\] at each multiple of the oscillation period. When only a fraction of the scatterers undergoes a reversible motion in the scattering volume, the height of the echoes will decay in time. Lastly, when all the scatterers exhibit an irreversible motion, there is no echo in the correlation function. Therefore the degree of fluidization can be characterized by the decay time of the height of the echoes in $G(t)$. Results and Discussion ====================== Fluidization point ------------------ Figure 2(a) shows the correlation function at various dimensionless accelerations at the position ($x=0$ mm, $z=50$ mm) for a vibration frequency of $f=50$ Hz and a granular bed of depth $L_z=100$ mm. When the maximum acceleration is smaller than the gravitational acceleration, $\Gamma<1$, the correlation function oscillates with echoes at every multiple of the period $T$ ($\equiv f^{-1}$). Here, the height of each echo does not change, that is $G_{echo}(t)=1$. Since the photons encounter the same scattering volume at every vibration period, $G_{echo}(t)=1$ implies that there is no relative movement of the particles in the scattering volume. This suggests that the granular system is solidlike at $\Gamma<1$, where the viscosity of the system appears to diverge. As the dimensionless acceleration is increased above $\Gamma = 1$, however, the height of the echoes in the correlation function relaxes in time. At a certain critical dimensionless acceleration $\Gamma_m$, the height of the first echo disappears completely \[$G(T)=0$\]. For $\Gamma > \Gamma_m$, $G(t)$ decays very rapidly without any echoes. This implies that at $\Gamma > \Gamma_m$, the particle positions become completely randomized by the external vibrations and the statistical description of the internal motion becomes important. In this sense, $\Gamma_m$ is the critical dimensionless acceleration where the scattering volume becomes fluidized. This fast decay in $G(t)$ at $\Gamma > \Gamma_m$ contains information about the short-time dynamics of the granular particles[@You; @DAnna2]. Figure 2(b) shows the height of the first echo $G (T)$ as a function of $\Gamma$ at different vertical positions $z$. For a given position inside the vessel, the height of the first echo decreases monotonically with $\Gamma$. Thus the fraction of the particles which undergo an irreversible motion increases with $\Gamma$. To determine $\Gamma_m$, the data points are fitted with the function $\tanh [\alpha (\Gamma - \Gamma_m)]$. Figure 3(a) shows the critical dimensionless acceleration $\Gamma_m$ at various horizontal positions $x$ with $z=50$ mm and $f=50$ Hz. Note that $\Gamma_m$ is independent of the horizontal position. Figure 3(b) plots $\Gamma_m$ as a function of the vertical position $z$ with $x=0$ mm in the case of $f=50$ Hz and $L_z=100$ mm. The critical value of the dimensionless acceleration increases linearly with the distance from the free surface, except near the free surface of the granular system.
The solid line in this figure is a straight line with a slope of 0.0034. The result that $\Gamma_m$ is nearly $1$ near the free surface of the granular system agrees with many previous experimental studies[@Convection2; @Heaping; @Gas; @Pak]. Figure 3(c) shows $\Gamma_m$ at various total depths $L_z$ at the fixed position ($x=0$ mm, $z=50$ mm). The absence of any dependence of $\Gamma_m$ on the total depth $L_z$ suggests that the relevant parameters in the fluidization process are just the distance from the free surface, $z$, and $\Gamma_m$. According to this picture, when the maximum acceleration is larger than the gravitational acceleration ($\Gamma>1$), there is a liquid-solid interface in the granular system. The upper part of the system is liquidlike and the lower part is solidlike, and the position of the interface depends on $\Gamma$. Relaxation near the transition point ------------------------------------ At $1 < \Gamma < \Gamma_m $, we observe a dramatic slowing down of the relaxation in the height of the echoes with decreasing $\Gamma$. Figure 4(a) shows the height of the echoes of the correlation function $G_{echo}(t)$ up to a delay time of several hundred vibration periods. The unit of time in this figure is the vibration period $T$. This shows that the dynamics of granular particles undergoes structural relaxation below $\Gamma_m$. The data points are fitted with a stretched exponential decay function, $$G_{echo}(t) = G_o\exp[-(t/\tau)^{\beta}].$$ Here, the characteristic relaxation time $\tau$ depends on $\Gamma$. The fitting parameters of the stretched exponential decay function are $G_o$, $\tau$, and $\beta$. During the fitting process, the value of $G_o$ is fixed at $G_o=0.89$, which is chosen from the data at $\Gamma=1.06$ as the delay time approaches $0$. In the theory of the glass transition, the value of $G_o$ depends on the scattering geometry and depends little on temperature and density[@MCT]. The fixed value of $G_o$ is the major reason for the poor fitting at $\Gamma=1.08$. Figure 4(b) shows the same data with the scaled horizontal axis of $(t/\tau)^{\beta}$, where $\beta=0.50\pm0.07$. All the data collapse well on a straight line with a small value of the parameter $\beta$ in Eq. (3) (here, $\beta =1$ implies a simple exponential relaxation[@Log; @Glass]). This collapse is analogous to the behavior of supercooled liquids close to the glass transition. Figure 5 shows $\tau$ at various $\Gamma$ from the same data in Fig. 4. Here, the characteristic relaxation time $\tau$ increases rapidly as $\Gamma$ approaches a specific value. The solid line in this figure is a fit of the form $\tau \sim (\Gamma-\Gamma_c)^{-\gamma}$, where $\Gamma_c=1.048\pm0.001$ and $\gamma=4.63\pm0.06$. The power law divergence of the characteristic relaxation time is similar to that of mode coupling theory (MCT). According to MCT, the characteristic relaxation time $\tau$ at a temperature $T$ in the glassy relaxation process is of the form $\tau \sim (T-T_c)^{-\gamma}$[@MCT]. Here, $T_{c}$ is a dynamical crossover temperature from a liquidlike to a solidlike regime, and is located between the glass transition temperature $T_{g}$ and the melting transition temperature $T_{m}$[@Nicodemi; @MCT]. From the analogy between the dynamics of the granular materials under vertical vibration and that of supercooled liquids close to the glass transition, one expects that the effective temperature of the granular system under vertical vibration is related to the dimensionless acceleration.
Therefore we conjecture that $\Gamma_m$, $\Gamma_c$, and $\Gamma=1$ in the dynamics of the granular materials under vertical vibration correspond to $T_{m}$, $T_{c}$, and $T_{g}$ in the behavior of supercooled liquids close to the glass transition. Conclusion ========== The jamming process of a highly dense three-dimensional granular system under vertical vibration is experimentally studied using the DWS technique. At $\Gamma > \Gamma_m $, the granular system behaves like a fluid. At $\Gamma < 1 $, the system is arrested in the glassy state, where the viscosity of the system appears to diverge. At $1 < \Gamma < \Gamma_m $, the structural relaxation shows a stretched exponential behavior and the divergence of the characteristic relaxation time $\tau$ can be described by a power law similar to that of MCT. This behavior provides evidence of the analogy between the dynamics of granular materials and the behavior of supercooled liquids close to the glass transition. We would like to thank Y. H. Hwang, B. Kim, J. Lee, J. A. Seo, F. Shan, K. W. To, and W. Goldburg for valuable discussions. This work was supported by Grant No. R01-2002-000-00038-0 from the Basic Research Program of the Korean Science $\&$ Engineering Foundation, by Grant No. KRF2004-005-C00065 from the Korea Research Foundation, and by Pusan National University in the program Post-Doc. 2005. H. M. Jaeger and S. R. Nagel, Science [**255**]{}, 1523 (1992); H. M. Jaeger, S. Nagel, and R. P. Behringer, Rev. Mod. Phys. [**68**]{}, 1259 (1996); P. G. de Gennes, Rev. Mod. Phys. [**71**]{}, S374 (1999). E. E. Ehrichs et al., Science [**267**]{}, 1632 (1995). J. B. Knight et al., Phys. Rev. E [**54**]{}, 5726 (1996). P. Evesque and J. Rajchenbach, Phys. Rev. Lett. [**62**]{}, 44 (1989); S. B. Savage, J. Fluid Mech. [**194**]{}, 457 (1988); E. Clément, J. Duran, and J. Rajchenbach, Phys. Rev. Lett. [**69**]{}, 1189 (1992). H. K. Pak, E. Van Doorn, and R. P. Behringer, Phys. Rev. Lett. [**74**]{}, 4643 (1995). H. K. Pak and R. P. Behringer, Phys. Rev. Lett. [**71**]{}, 1832 (1993). F. Melo, P. B. Umbanhowar, and H. L. Swinney, Phys. Rev. Lett. [**75**]{}, 3838 (1995); P. B. Umbanhowar, F. Melo, and H. L. Swinney, Nature (London) [**382**]{}, 793 (1996). K. Kim and H. K. Pak, Phys. Rev. Lett. [**88**]{}, 204303 (2002). J. B. Knight, H. M. Jaeger, and S. R. Nagel, Phys. Rev. Lett. [**70**]{}, 3728 (1993); D. C. Hong, P. V. Quinn, and S. Luding, Phys. Rev. Lett. [**86**]{}, 3423 (2001). A. J. Liu and S. R. Nagel, Nature (London) [**396**]{}, 21 (1998). V. Trappe, V. Prasad, L. Cipelletti, P. M. Segre, and D. A. Weitz, Nature (London) [**411**]{}, 772 (2001). C. S. O’Hern, S. A. Langer, A. J. Liu, and S. R. Nagel, Phys. Rev. Lett. [**86**]{}, 111 (2001); C. S. O’Hern, L. E. Silbert, A. J. Liu, and S. R. Nagel, Phys. Rev. E [**68**]{}, 011306 (2003). M. Nicodemi and A. Coniglio, Phys. Rev. Lett. [**82**]{}, 916 (1999). M. Tarzia, A. de Candia, A. Fierro, M. Nicodemi, and A. Coniglio, Europhys. Lett. [**66**]{}, 531 (2004); A. Coniglio, A. de Candia, A. Fierro, M. Nicodemi, and M. Tarzia, Physica A [**344**]{}, 431 (2004); P. Richard, M. Nicodemi, R. Delannay, P. Ribière, and D. Bideau, Nat. Mater. [**4**]{}, 121 (2005). P. Philippe and D. Bideau, Phys. Rev. Lett. [**91**]{}, 104302 (2003). G. D’Anna and G. Gremaud, Nature (London) [**413**]{}, 407 (2001). P. Coussot, Q. D. Nguyen, H. T. Huynh, and D. Bonn, Phys. Rev. Lett. [**88**]{}, 175501 (2002); P. Coussot et al., Phys. Rev. Lett. [**88**]{}, 218301 (2002); J. C.
Baudez and P. Coussot, Phys. Rev. Lett. [**93**]{}, 128302 (2004). J. M. Valverde, M. A. S. Quintanilla, and A. Castellanos, Phys. Rev. Lett. [**92**]{}, 258303 (2004); R. D. Wildman, J. M. Huntley, and D. J. Parker, Phys. Rev. Lett. [**86**]{}, 3304 (2001). N. Menon and D. J. Durian, Science [**275**]{}, 1920 (1997); N. Menon and D. J. Durian, Phys. Rev. Lett. [**79**]{}, 3407 (1997); P. -A. Lemieux and D. J. Durian, Phys. Rev. Lett. [**85**]{}, 4273 (2000); P. K. Dixon and D. J. Durian, Phys. Rev. Lett. [**90**]{}, 184302 (2003). S. Y. You and H. K. Pak, J. Korean Phys. Soc. [**38**]{}, 577 (2001). G. Maret and P. E. Wolf, Z. Phys. B [**65**]{}, 409 (1987); D. J. Pine, D. A. Weitz, P. M. Chaikin, and E. Herbolzheimer, Phys. Rev. Lett. [**60**]{}, 1134 (1988). B. Chu, [*Dynamic Light Scattering: Basic Principles and Practice*]{} (Academic Press, New York, 1991). P. Hébraud, F. Lequeux, J. P. Munch, and D. J. Pine, Phys. Rev. Lett. [**78**]{}, 4657 (1997). G. D’Anna, P. Mayor, A. Barrat, V. Loreto, and F. Nori, Nature (London) [**424**]{}, 909 (2003). W. Götze and L. Sjögren, Rep. Prog. Phys. [**55**]{}, 241 (1992). At nonsteady state, the relaxation can be well fitted by an inverse-logarithmic function. However, at steady state, the relaxation is better fitted by a stretched exponential function. J. B. Knight et al., Phys. Rev. E [**51**]{}, 3957 (1995). G. Tarjus and D. Kivelson, in [*Jamming and Rheology*]{}, edited by A. J. Liu and S. R. Nagel (Taylor $\&$ Francis, New York, 2001), pp. 20-38.
--- abstract: 'Many real-world optimisation problems can be stated in terms of submodular functions. A large number of evolutionary multi-objective algorithms have recently been analyzed and applied to submodular problems with different types of constraints. We present a first runtime analysis of evolutionary multi-objective algorithms for chance-constrained submodular functions. Here, the constraint involves stochastic components and may only be violated with a small probability of $\alpha$. We show that the GSEMO algorithm obtains the same worst case performance guarantees as recently analyzed greedy algorithms. Furthermore, we investigate the behavior of evolutionary multi-objective algorithms such as GSEMO and NSGA-II on different submodular chance constrained network problems. Our experimental results show that the use of evolutionary multi-objective algorithms leads to significant performance improvements compared to the greedy algorithm.' author: - Aneta Neumann - Frank Neumann bibliography: - 'references.bib' title: 'Optimising Monotone Chance-Constrained Submodular Functions Using Evolutionary Multi-Objective Algorithms' --- Introduction ============ Evolutionary algorithms have been widely applied to solve complex optimisation problems. They are well suited for broad classes of problems and often achieve good results within a reasonable amount of time. The theory of evolutionary computation aims to explain such good behaviours and also to point out the limitations of evolutionary computing techniques. A wide range of tools and techniques have been developed in the last 25 years and we point the reader to [@BookDoeNeu; @DBLP:books/daglib/0025643; @Auger11; @ncs/Jansen13] for comprehensive presentations. Stochastic components play a crucial role in many real-world applications and chance constraints make it possible to model constraints that may only be violated with a small probability. A chance constraint involves random components and it is required that the constraint is violated only with a small probability of at most $\alpha$. We consider chance constraints where the weight $W(S)$ of a possible solution $S$ may exceed a given constraint bound $C$ only with probability at most $\alpha$, i.e. $\Pr[W(S) > C] \leq \alpha$ holds. Evolutionary algorithms have only recently been considered for chance constrained problems and we are pushing forward this area of research by providing a first runtime analysis for submodular functions. In terms of the theoretical understanding and the applicability of evolutionary algorithms, it is desirable to be able to analyse them on a broad class of problems and design appropriate evolutionary techniques for such classes. Submodular functions model a wide range of problems where the benefit of adding solution components diminishes with the addition of elements. They have been studied extensively in the literature [@Nemhauser:1978; @DBLP:journals/mp/NemhauserWF78; @vondrak2010submodularity; @DBLP:books/cu/p/0001G14; @DBLP:conf/icml/BianB0T17; @DBLP:conf/kdd/LeskovecKGFVG07; @pmlr-v65-feldman17b; @pmlr-v97-harshaw19a] and allow a variety of real-world applications to be modelled [@DBLP:conf/aaai/KrauseG07; @DBLP:conf/stoc/LeeMNS09; @golovin2011adaptive; @DBLP:conf/aaai/MirzasoleimanJ018]. In recent years, the design and analysis of evolutionary algorithms for submodular optimisation problems has gained increasing attention. We refer to the recent book of Zhou et al. [@DBLP:books/sp/ZhouYQ19] for an overview.
Such studies usually examine evolutionary algorithms in terms of their runtime and approximation behaviour and evaluate the performance of the designed algorithms on classical submodular combinatorial optimisation problems. To our knowledge, there is so far no runtime analysis of evolutionary algorithms for submodular optimisation with chance constraints, and the runtime analysis of evolutionary algorithms for chance constrained problems has only started recently for very special cases of the chance-constrained knapsack problem [@DBLP:conf/foga/0001S19]. Chance constraints are in general hard to evaluate exactly, but well-known tail inequalities such as Chernoff bounds and Chebyshev’s inequality may be used to estimate the probability of a constraint violation. We provide a first runtime analysis by analysing [GSEMO]{} together with multi-objective formulations that use a second objective taking the chance constraint into account. These formulations based on tail inequalities are motivated by some recent experimental studies of evolutionary algorithms for the knapsack problem with chance constraints [@DBLP:conf/gecco/XieHAN019; @DBLP:journals/corr/abs-2004-03205; @DBLP:journals/corr/abs-2002-06766]. The [GSEMO]{} algorithm has already been widely studied in the area of runtime analysis in the field of evolutionary computation [@DBLP:journals/ec/FriedrichN15] and more broadly in the area of artificial intelligence where the focus has been on submodular functions and Pareto optimisation [@DBLP:conf/nips/QianYZ15; @DBLP:conf/nips/QianS0TZ17; @DBLP:conf/ijcai/QianSYT17; @DBLP:conf/aaai/RoostapourN0019]. We analyse this algorithm in the chance constrained submodular optimisation setting investigated in [@DBLP:journals/corr/abs-1911-11451] in the context of greedy algorithms. Our analyses show that [GSEMO]{} is able to achieve the same approximation guarantee in expected polynomial time for uniform IID weights and the same approximation quality in expected pseudo-polynomial time for independent uniform weights having the same dispersion. Furthermore, we study [GSEMO]{} experimentally on the influence maximization problem in social networks and the maximum coverage problem. Our results show that [GSEMO]{} significantly outperforms the greedy approach [@DBLP:journals/corr/abs-1911-11451] for the considered chance constrained submodular optimisation problems. In addition, we use the multi-objective problem formulation in a standard setting of NSGA-II. We observe that NSGA-II is outperformed by [GSEMO]{} in most of our experimental settings, but usually achieves better results than the greedy algorithm. The paper is structured as follows. In Section \[sec2\], we introduce the problem of optimising submodular functions with chance constraints, the [GSEMO]{} algorithm and tail inequalities for evaluating chance constraints. In Section \[sec3\], we provide a runtime analysis for submodular functions where the weights of the constraints are either identically uniformly distributed or are uniformly distributed and have the same dispersion. We carry out experimental investigations that compare the performance of greedy algorithms, [GSEMO]{}, and NSGA-II in Section \[sec4\] and finish with some concluding remarks. Preliminaries {#sec2} ============= Given a set $V = \{v_1, \ldots, v_n\}$, we consider the optimization of a monotone submodular function $f \colon 2^V \rightarrow {\mathbb{R}}_{\ge 0}$. We call a function monotone iff for every $S, T \subseteq V$ with $S \subseteq T$, $f(S) \leq f(T)$ holds.
We call a function $f$ submodular iff for every $S, T \subseteq V$ with $S \subseteq T$ and $x \not \in T$ we have $$f(S \cup \{x\}) - f(S) \geq f(T \cup \{x\}) - f(T).$$ Here, we consider the optimization of a monotone submodular function $f$ subject to a chance constraint where each element $s \in V$ takes on a random weight $W(s)$. Precisely, we examine constraints of the type $$\Pr[W(S) > C] \leq \alpha,$$ where $W(S)=\sum_{s\in S}{w(s)}$ is the sum of the random weights of the elements and $C$ is the given constraint bound. The parameter $\alpha$ specifies the probability of exceeding the bound $C$ that can be tolerated for a feasible solution $S$. The two settings we investigate in this paper assume that the weight of an element $s\in V$ is chosen uniformly at random from the interval $[a(s)-\delta, a(s)+\delta]$, where $\delta \leq \min_{s \in V} a(s)$. Here $a(s)$ denotes the expected weight of item $s$. For our investigations, we assume that each item has the same dispersion $\delta$. We call a feasible solution $S$ a $\gamma$-approximation, $0 \leq \gamma \leq 1$, iff $f(S) \geq \gamma \cdot f(OPT)$ where $OPT$ is an optimal solution for the given problem.

Chance Constraint Evaluation based on Tail Inequalities
-------------------------------------------------------

As the probability $\Pr(W(X)> C)$ used in the objective functions is usually computationally expensive to evaluate exactly, we use the approach taken in [@DBLP:conf/gecco/XieHAN019] and compute an upper bound on this probability using tail inequalities [@Motwani1995]. We assume that $w(s) \in [a(s)-\delta, a(s)+\delta]$ holds for each $s \in V$, which allows us to use Chebyshev’s inequality and Chernoff bounds. The approach based on (one-sided) Chebyshev’s inequality used in [@DBLP:conf/gecco/XieHAN019] upper bounds the probability of a constraint violation by $$\label{thm:CHB} \hat{\Pr}(W(X) > C)\leq\frac{\delta^2|X|}{\delta^2 |X| + 3(C-E_W(X))^2}$$ The approach based on Chernoff bounds used in [@DBLP:conf/gecco/XieHAN019] upper bounds the probability of a constraint violation by $$\label{thm:CHF} \hat{\Pr}[W(X)> C] \leq \left(\frac{e^{\frac{C-E_W(X)}{\delta |X|}}}{\left(\frac{\delta |X| + C-E_W(X)}{\delta |X|}\right)^{\frac{\delta |X| + C-E_W(X)}{\delta |X|}}}\right)^{\frac{1}{2}|X|}$$ We use $\hat{\Pr}(W(X) > C)$ instead of $\Pr(W(X) > C)$ for our investigations using multi-objective models of the problem.

Multi-Objective Formulation
---------------------------

Following the approach of Xie et al. [@DBLP:conf/gecco/XieHAN019] for the chance constrained knapsack problem, we evaluate a set $X$ by the multi-objective fitness function $g(X)=(g_1(X), g_2(X))$ where $g_1$ measures the tightness in terms of the constraint and $g_2$ measures the quality of $X$ in terms of the given submodular function $f$. We define $$g_1(X)=\left\{ \begin{array}{lccr} E_W (X)-C & \text{if} & (C-E_W (X))/(\delta \cdot |X|) \geq 1\\ \hat{Pr}(W(X)> C) &\text{if} & {(E_W (X)<C) \wedge ((C-E_W (X))/(\delta |X|) < 1)}\\ 1+(E_W (X)-C) &\text{if} & {E_W (X)\geq C} \label{g1x} \end{array} \right.$$ and $$g_2(X)=\left\{ \begin{array}{lccr} f(X) & \text{if}& { g_1 (X)\leq \alpha}\\ -1 & \text{if}& {\hat{Pr}(W(X)> C) >\alpha} \label{g2x} \end{array} \right.$$ where $E_W(X) = \sum_{s\in X} a(s)$ denotes the expected weight of the solution. The term $(C-E_W (X))/(\delta \cdot |X|) \geq 1$ in $g_1$ implies that a set $X$ of cardinality $|X|$ has probability $0$ of violating the chance constraint due to the upper bound on the intervals.
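To make this evaluation concrete, the following Python sketch implements the two tail bounds (Equations \[thm:CHB\] and \[thm:CHF\]) and the objectives $(g_1, g_2)$ as defined above. It is an illustrative implementation only, not the code used for the experiments below; the function names and the handling of the empty set are our own choices.

```python
import math


def surrogate_prob(ew, delta, k, C, chernoff=False):
    """Upper bound on Pr[W(X) > C] for a set of k elements with expected
    weight sum ew < C and common dispersion delta (Chebyshev or Chernoff
    version of the bounds quoted above)."""
    if k == 0:
        return 0.0
    t = (C - ew) / (delta * k)
    if chernoff:
        return (math.exp(t) / (1.0 + t) ** (1.0 + t)) ** (0.5 * k)
    return (delta ** 2 * k) / (delta ** 2 * k + 3.0 * (C - ew) ** 2)


def g(X, f, a, delta, C, alpha, chernoff=False):
    """Bi-objective fitness (g_1, g_2) for a set X of elements, value
    oracle f, expected weights a[s], and constraint parameters C, alpha."""
    k = len(X)
    ew = sum(a[s] for s in X)
    if k == 0 or (C - ew) / (delta * k) >= 1.0:
        g1 = ew - C                 # the chance constraint cannot be violated
    elif ew < C:
        g1 = surrogate_prob(ew, delta, k, C, chernoff)
    else:
        g1 = 1.0 + (ew - C)         # infeasible already in expectation
    g2 = f(X) if g1 <= alpha else -1.0
    return g1, g2
```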
We say a solution $Y$ dominates a solution $X$ (denoted by $Y \succcurlyeq X$) iff $g_1(Y)\leq g_1(X) \land g_2(Y) \geq g_2 (X)$. We say that $Y$ strongly dominates $X$ (denoted by $Y \succ X$) iff $Y \succcurlyeq X$ and $g(Y)\not =g(X)$. The dominance relation also translates to the corresponding search points used in [GSEMO]{}. Comparing two solutions, the objective function guarantees that a feasible solution strongly dominates every infeasible solution. The objective function ${g_1}$ ensures that the search process is guided towards feasible solutions and that trade-offs in terms of the probability of a constraint violation and the function value of the submodular function $f$ are computed for feasible solutions.

Global SEMO
-----------

Our multi-objective approach is based on a simple multi-objective evolutionary algorithm called Global Simple Evolutionary Multi-Objective Optimizer (GSEMO, see Algorithm \[alg:GSEMO\]) [@DBLP:conf/stacs/GielW03]. The algorithm encodes sets as bitstrings of length $n$ and the set $X$ corresponding to a search point $x$ is given as $X=\{v_i \mid x_i=1\}$. We use $x$ when referring to the search point in the algorithm and $X$ when referring to the set of selected elements, and we use the applicable fitness measures for both notations interchangeably. GSEMO starts with a random search point $x\in\{0,1\}^n$. In each iteration, an individual $x \in P$ is chosen uniformly at random from the current population $P$. In the mutation step, it flips each bit with a probability $1/n$ to produce an offspring $y$. $y$ is added to the population if it is not strongly dominated by any other search point in $P$. If $y$ is added to the population, all search points dominated by $y$ are removed from the population $P$. We analyze [GSEMO]{} in terms of its runtime behaviour to obtain a good approximation. The expected time of the algorithm required to achieve a given goal is measured in terms of the number of iterations of the repeat loop until a feasible solution with the desired approximation quality has been produced for the first time. (Algorithm \[alg:GSEMO\], GSEMO: choose $x \in \{0,1\}^n$ uniformly at random and set $P\leftarrow \{x\}$; then repeat the selection, mutation, and population-update steps described above.)

Runtime Analysis {#sec3}
================

In this section, we provide a runtime analysis of [GSEMO]{} which shows that the algorithm is able to obtain a good approximation for important settings where the weights of the constraint are chosen according to a uniform distribution with the same dispersion.

Uniform IID Weights
-------------------

We first investigate the case of uniform identically distributed (IID) weights. Here each weight is chosen uniformly at random in the interval $[a-\delta, a+\delta]$, $\delta \leq a$. The parameter $\delta$ is called the dispersion and models the uncertainty of the weight of the items.

\[thm:iid\] Let $k=\min\{n+1, \lfloor C/a \rfloor \}$ and assume $\lfloor C/a \rfloor=\omega(1)$. Then the expected time until [GSEMO]{} has computed a $(1-o(1))(1-1/e)$-approximation for a given monotone submodular function under a chance constraint with uniform IID weights is $O(nk(k+\log n))$.

Every item has expected weight $a$ and uncertainty $\delta$. This implies $g_1(X)=g_1(Y)$ iff $|X|=|Y|$ and $E_W(X)=E_W(Y)<C$. As [GSEMO]{} only stores a single solution for each fixed $g_1$-value, the number of solutions with expected weight less than $C$ is at most $k=\min \{n+1, \lfloor C/a \rfloor\}$. Furthermore, there is at most one individual $X$ in the population with $g_2(X)=-1$.
Hence, the maximum population size that [GSEMO]{} encounters during the run of the algorithm is at most $k+1$. We first consider the time until [GSEMO]{} has produced the bitstring $0^n$. This is the best individual with respect to $g_1$ and once included it will always stay in the population. The function $g_1$ is strictly monotone decreasing with the size of the solution. Hence, selecting the individual in the population with the smallest number of elements and removing one of them leads to a solution with fewer elements and therefore with a smaller $g_1$-value. Let $\ell=|x|_1$ be the number of elements of the solution $x$ with the smallest number of elements in $P$. Then flipping one of the $1$-bits corresponding to these elements reduces $\ell$ by one and happens with probability at least $\ell/(en)$ once $x$ is selected for mutation. The probability of selecting $x$ is at least $1/(k+1)$ as there are at most $k+1$ individuals in the population. Using the method of fitness-based partitions, the expected time to obtain the solution $0^n$ is at most $$\sum_{\ell=1}^n \left(\frac{\ell}{e(k+1)n}\right)^{-1} = O(nk \log n).$$ Let ${k_{opt}}=\lfloor C/a \rfloor$ be the maximal number of elements that can be included in the deterministic version of the problem. The function $g_1$ is strictly monotonically increasing with the number of elements and each solution with the same number of elements has the same $g_1$-value. We consider the solution $X$ with the largest $k$ for which $$f(X) \geq (1 - (1- 1/{k_{opt}})^k) \cdot f(OPT)$$ holds in the population and the mutation which adds an element with the largest marginal increase $g_2(X \cup \{x\}) - g_2(X)$ to $X$. The probability that such a step picks $X$ and carries out the mutation with the largest marginal gain is $\Omega(1/kn)$ and its waiting time is $O(kn)$. This leads to a solution $Y$ for which $$f(Y) \geq (1 - (1- 1/{k_{opt}})^{k+1}) \cdot f(OPT)$$ holds. The maximal number of times such a step is required after having included the search point $0^n$ into the population is $k$, which gives the runtime bound of $O(k^2n)$. For the statement on the approximation quality, we make use of the lower bound on the maximal number of elements that can be included using the Chernoff bound and Chebyshev’s inequality given in [@DBLP:journals/corr/abs-1911-11451]. Using Chebyshev’s inequality (Equation \[thm:CHB\]) at least $$k_1^*= \max \left\{k \mid k+ \frac{\sqrt{(1-\alpha)k\delta^2}}{\sqrt{3\alpha}a} \leq {k_{opt}}\right\}$$ elements can be included and when using the Chernoff bound (Equation \[thm:CHF\]), at least $$k_2^*= \max \left\{k \,\middle|\, k+ \frac{\sqrt{3\delta k \ln(1/\alpha)}}{a} \leq {k_{opt}}\right\} \label{kstar}$$ elements can be included. Including $k^*$ elements in this way leads to a solution $X^*$ with $$f(X^*)\geq (1 - (1- 1/{k_{opt}})^{k^*}) \cdot f(OPT).$$ As shown in [@DBLP:journals/corr/abs-1911-11451], both $k_1^*$ and $k_2^*$ yield $f(X^*) \geq (1-o(1))(1-1/e) \cdot f(OPT)$ if $\lfloor C/a \rfloor=\omega(1)$, which completes the proof. $\Box$

Uniform Weights with the Same Dispersion
----------------------------------------

We now assume that the expected weights do not have to be the same, but still require the same dispersion for all elements, i.e. $w(s) \in [a(s)-\delta, a(s)+\delta]$ holds for all $s \in V$. We consider the (to be minimized) objective function $\hat{g_1}(X)=E_W (X)$ (instead of $g_1$) together with the previously defined objective function $g_2$ and evaluate a set $X$ by $\hat{g}(X)=(\hat{g}_1(X), g_2(X))$.
We have $Y \succeq X$ iff $\hat{g_1}(Y) \leq \hat{g_1}(X)$ and $g_2(Y) \geq g_2(X)$. Let $a_{\max}=\max_{s \in V} a(s)$ and $a_{\min}=\min_{s \in V} a(s)$, and $\delta \leq a_{\min}$. The following theorem shows that [GSEMO]{} is able to obtain a $(1/2-o(1))(1-1/e)$-approximation if $\omega(1)$ elements can be included in a solution.

\[thm:samedisp\] If $C/a_{\max}=\omega(1)$ then [GSEMO]{} obtains a $(1/2-o(1))(1-1/e)$-approximation for a given monotone submodular function under a chance constraint with uniform weights having the same dispersion in expected time $O(P_{\max} \cdot n (C/a_{\min}+ \log n + \log (a_{\max}/a_{\min})))$.

We first consider the time until the search point $0^n$ is included in the population. We always consider the individual $x$ with the smallest $\hat{g_1}$-value. Flipping any single $1$-bit of $x \not = 0^n$ leads to an individual with a smaller $\hat{g_1}$-value and is therefore accepted. Furthermore, the total weight decrease of these $1$-bit flips is $\hat{g_1}(x)$, which also equals the total weight decrease of all single bit flip mutations when taking into account that $0$-bit flips decrease the $\hat{g_1}$-value by zero. A mutation carrying out a single bit flip happens each iteration with probability at least $1/e$. In expectation, the minimal $\hat{g_1}$-value therefore decreases by at least a $1/(P_{\max}en)$ fraction per iteration, and the expected minimal $\hat{g_1}$-value in the next generation is at most $$(1- 1/(P_{\max}\cdot en)) \cdot \hat{g_1}(x).$$ We use drift analysis to upper bound the expected time until the search point $0^n$ is included in the population. As $a_{\min}\leq \hat{g_1}(x) \leq n a_{\max}$ holds for any search point $x\not=0^n$, the search point $0^n$ is included in the population after an expected number of $O(P_{\max}n(\log n +\log (a_{\max}/a_{\min})))$ steps. After having included the search point $0^n$ in the population, we follow the analysis of POMC for subset selection with general deterministic cost constraints [@DBLP:conf/ijcai/QianSYT17] and always consider the individual $x$ with the largest $\hat{g_1}$-value for which $$g_2(x) \geq \bigg[1- \prod_{k=1}^{n} \left(1- \frac{a(k)x_k}{C}\right) \bigg] \cdot f(OPT).$$ Note that the search point $0^n$ satisfies this inequality. Furthermore, we denote by $\hat{g_1}^*$ the maximal $\hat{g_1}$-value for which $\hat{g_1}(x) \leq \hat{g_1}^*$ and $$g_2(x) \geq \bigg[1- \left(1- \frac{\hat{g_1}^*}{Cr}\right)^r \bigg] \cdot f(OPT)$$ for some $r$, $0 \leq r \leq n-1$, holds. We use $\hat{g_1}^*$ to track the progress of the algorithm and it has been shown in [@DBLP:conf/ijcai/QianSYT17] that $\hat{g_1}^*$ does not decrease during the optimisation process of [GSEMO]{}. Choosing $x$ for mutation and flipping the $0$-bit of $x$ corresponding to the largest marginal gain in terms of $g_2/\hat{g_1}$ gives a solution $y$ for which $$\begin{aligned} g_2(y) & \geq & \bigg[1- \left(1- \frac{a_{\min}}{C}\right) \cdot \left(1- \frac{\hat{g_1}^*}{Cr}\right)^r \bigg] \cdot f(OPT)\\ & \geq & \bigg[1- \left(1- \frac{\hat{g_1}^*+a_{\min}}{C(r+1)}\right)^{r+1} \bigg] \cdot f(OPT)\end{aligned}$$ holds and $\hat{g_1}^*$ increases by at least $a_{\min}$. The $\hat{g_1}^*$-value for the considered solution can increase at most $C/a_{\min}$ times and therefore, once having included the search point $0^n$, the expected time until such improvements have occurred is $O(P_{\max}nC/a_{\min})$.
Let $x^*$ be the feasible solution of maximal cost included in the population after having increased $\hat{g_1}^*$ at most $C/a_{\min}$ times as described above. Furthermore, let $v^*$ be the element with the largest $g_2$-value not included in $x^*$ and $\hat{x}$ be the solution containing the single element with the largest $g_2$-value. $\hat{x}$ is produced from the search point $0^n$ in expected time $O(P_{\max}n)$. Let $r$ be the number of elements in a given solution. According to [@DBLP:journals/corr/abs-1911-11451], the maximal $\hat{g_1}$-value deemed feasible is at least $$C_1^*=C- \sqrt{\frac{(1 - \alpha) r \delta^2}{3\alpha}}$$ when using Chebyshev’s inequality (Equation \[thm:CHB\]) and at least $$C_2^*=C - \sqrt{3\delta r \ln(1/\alpha)}$$ when using the Chernoff bound (Equation \[thm:CHF\]). For a fixed $C^*$-value, the process above therefore reaches a solution of $g_2$-value at least $$\bigg[1- \left(1- \frac{C^*}{C(r+1)}\right)^{r+1} \bigg] \cdot f(OPT).$$ We have $\hat{g_1}(x^*)+ a(v^*)>C_1^*$ when working with Chebyshev’s inequality and $\hat{g_1}(x^*)+ a(v^*)>C_2^*$ when using the Chernoff bound. In addition, $f(\hat{x}) \geq f(v^*)$ holds. According to [@DBLP:journals/corr/abs-1911-11451], $x^*$ or $\hat{x}$ is therefore a $(1/2-o(1))(1-1/e)$-approximation, which completes the proof. $\Box$

For the special case of uniform IID weights, we have $a=a_{\max}=a_{\min}$ and $P_{\max}\leq C/a +1$. Furthermore, the solution $x^*$ already gives a $(1-o(1))(1-1/e)$-approximation as the element with the largest $f$-value is included in the construction of $x^*$. This gives a bound on the expected runtime of $O(nk (k+\log n))$ to obtain a $(1-o(1))(1-1/e)$-approximation for the uniform IID case when working with the function $\hat{g_1}$ instead of $g_1$. Note that this matches the result given in Theorem \[thm:iid\].

Experimental Investigations {#sec4}
===========================

In this section, we investigate the [GSEMO]{} and the [NSGA-II]{} algorithms on important submodular optimisation problems with chance constraints and compare them to the greedy approach given in [@DBLP:journals/corr/abs-1911-11451].

Experimental Setup
------------------

We examine [GSEMO]{} and [NSGA-II]{} for constraints with expected weights $1$ and compare them to the greedy algorithm (GA) given in [@DBLP:journals/corr/abs-1911-11451]. Our goal is to study different chance constraint settings in terms of the constraint bound $C$, the dispersion $\delta$, and the probability bound $\alpha$. We consider different benchmarks for chance constrained versions of the influence maximization problem and the maximum coverage problem. For each benchmark set, we study the performance of the [GSEMO]{} and the [NSGA-II]{} algorithms for different budgets. We consider $C = 20, 50, 100$ for influence maximization and $C=10,15,20$ for maximum coverage. We consider all combinations of $\alpha = 0.1, 0.001$, and $\delta = 0.5, 1.0$ for the experimental investigations of the algorithms and problems. Chebyshev’s inequality leads to better results when $\alpha$ is relatively large and the Chernoff bound gives better results for small $\alpha$ (see [@DBLP:conf/gecco/XieHAN019; @DBLP:journals/corr/abs-1911-11451]). Therefore, we use Equation \[thm:CHB\] for $\alpha=0.1$ and Equation \[thm:CHF\] for $\alpha=0.001$ when computing the upper bound on the probability of a constraint violation. We allow $5\,000\,000$ fitness evaluations for each evolutionary algorithm run.
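As a concrete point of reference for the comparisons below, the following is a minimal Python sketch of the GSEMO loop of Algorithm \[alg:GSEMO\] operating on the bi-objective fitness sketched in Section \[sec2\]; the set-based representation and the fitness oracle interface are our own simplifications.

```python
import random


def weakly_dominates(u, v):
    """u = (g1, g2) weakly dominates v: g1 no larger and g2 no smaller."""
    return u[0] <= v[0] and u[1] >= v[1]


def gsemo(n, fitness, evaluations):
    """GSEMO on bitstrings of length n, represented as frozensets of the
    selected indices; `fitness` maps such a set to (g1, g2)."""
    x = frozenset(i for i in range(n) if random.random() < 0.5)
    pop = {x: fitness(x)}
    for _ in range(evaluations):
        parent = random.choice(list(pop))
        flips = {i for i in range(n) if random.random() < 1.0 / n}
        y = parent ^ flips          # flip each bit independently with prob. 1/n
        fy = fitness(y)
        # reject y if it is strongly dominated by a stored point
        if any(weakly_dominates(fz, fy) and fz != fy for fz in pop.values()):
            continue
        # otherwise remove points that y (weakly) dominates and insert y
        pop = {z: fz for z, fz in pop.items() if not weakly_dominates(fy, fz)}
        pop[y] = fy
    return pop
```

In our reading of the setup, a run for one of the settings above would wrap the earlier `g` helper with `functools.partial` to fix `f`, `a`, `delta`, `C`, and `alpha`, and report the largest $g_2$-value of a feasible point in the returned population.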
We run [NSGA-II]{} with parent population size $20$, offspring size $10$, crossover probability $0.90$, and standard bit mutation for $500\,000$ generations. For each tested instance, we carry out $30$ independent runs and report the minimum, maximum, and average results. To test the statistical significance of the results, we use the Kruskal-Wallis test with $95$% confidence. We apply the Bonferroni post-hoc statistical procedure, which is used for multiple comparisons of a control algorithm against two or more other algorithms [@Corder09]. $X^{(+)}$ is equivalent to the statement that the algorithm in the column outperformed algorithm $X$. $X^{(-)}$ is equivalent to the statement that $X$ outperformed the algorithm given in the column. If algorithm $X$ does not appear, then no significant difference was determined between the algorithms.

The Influence Maximization Problem
----------------------------------

The influence maximization problem (IM) (see [@DBLP:journals/toc/KempeKT15; @DBLP:conf/kdd/LeskovecKGFVG07; @DBLP:conf/ijcai/QianSYT17; @DBLP:conf/aaai/ZhangV16] for detailed descriptions) is a key problem in the area of social influence analysis. IM aims to find the set of the most influential users in a large-scale social network. The primary goal of IM is to maximize the spread of influence through a given social network, i.e., a graph of interactions and relationships within a group of users [@chen2009efficient; @DBLP:conf/kdd/KempeKT03]. So far, the problem of influence maximization has mainly been studied subject to a deterministic constraint which limits the cost of selection [@DBLP:conf/ijcai/QianSYT17]. The social network is modeled as a directed graph $G=(V,E)$ where each node represents a user, and each edge $(u,v) \in E$ has been assigned an edge probability $p_{u,v}$ that user $u$ influences user $v$. The aim of the IM problem is to find a subset $X \subseteq V$ such that the expected number of activated nodes $E[I(X)]$ of $X$ is maximized. Given a cost function $c \colon V\rightarrow {\mathbb{R}}^+$ and a budget $C\ge 0$, the corresponding submodular optimization problem under chance constraints is given as $$\operatorname*{arg\,max}_{X\subseteq V} E[I(X)] \text{ s.t. } \Pr[c(X)> C]\leq \alpha.$$ For influence maximization, we consider uniform cost constraints where each node has expected cost $1$. The expected cost of a solution is therefore $E_W(X)= |X|$.
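The objective $E[I(X)]$ has no closed form, so implementations typically estimate it by simulation. The sketch below assumes the independent cascade reading of the edge probabilities $p_{u,v}$ and a hypothetical adjacency structure `adj`; the number of Monte Carlo repetitions is arbitrary and is not taken from the experimental setup described in this paper.

```python
import random


def estimate_spread(adj, seeds, runs=100, rng=random):
    """Monte Carlo estimate of the expected number of activated nodes
    E[I(X)] under an independent cascade started from the seed set X.
    adj[u] is an iterable of (v, p_uv) pairs for the out-edges of u."""
    total = 0
    for _ in range(runs):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            newly_active = []
            for u in frontier:
                for v, p in adj.get(u, ()):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        newly_active.append(v)
            frontier = newly_active
        total += len(active)
    return total / runs
```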
------------------------------------------------------- ------- ----- -------- ------------ --------- --------- --------- ------------------- ----------- --------- --------- --------- ------------------- (l[2pt]{}r[2pt]{})[5-9]{} (l[2pt]{}r[2pt]{})[10-14]{} **mean** **min** **max** **std** **stat** **mean** **min** **max** **std** **stat** 0.1 0.5 51.51 **55.75** 54.44 56.85 0.5571 $1^{(+})$ 55.66 54.06 56.47 0.5661 $1^{(+)}$ 0.1 1.0 46.80 **50.65** 49.53 51.68 0.5704 $1^{(+)}$ 50.54 49.61 52.01 0.6494 $1^{(+)}$ 0.1 0.5 90.55 **94.54** 93.41 95.61 0.5390 $1^{(+)},3^{(+)}$ 92.90 90.75 94.82 1.0445 $1^{(+)},2^{(-)}$ 0.1 1.0 85.71 **88.63** 86.66 90.68 0.9010 $1^{(+)},3^{(+)}$ 86.89 85.79 88.83 0.8479 $1^{(+)},2^{(-)}$ 0.1 0.5 144.16 **147.28** 145.94 149.33 0.8830 $1^{(+)},3^{(+)}$ 144.17 142.37 146.18 0.9902 $2^{(-)}$ 0.1 1.0 135.61 **140.02** 138.65 142.52 0.7362 $1^{(+)},3^{(+)}$ 136.58 134.80 138.21 0.9813 $2^{(-)}$ 0.001 0.5 48.19 **50.64** 49.10 51.74 0.6765 $1^{(+)}$ 50.33 49.16 51.25 0.5762 $1^{(+)}$ 0.001 1.0 39.50 **44.53** 43.63 45.55 0.4687 $1^{(+)}$ 44.06 42.18 45.39 0.7846 $1^{(+)}$ 0.001 0.5 75.71 **80.65** 78.92 82.19 0.7731 $1^{(+)}$ 80.58 79.29 81.63 0.6167 $1^{(+)}$ 0.001 1.0 64.49 69.79 68.89 71.74 0.6063 $1^{(+)}$ **69.96** 68.90 71.05 0.6192 $1^{(+)}$ 0.001 0.5 116.05 **130.19** 128.59 131.51 0.7389 $1^{(+)},3^{(+)}$ 127.50 125.38 129.74 0.9257 $1^{(+)},2^{(-)}$ 0.001 1.0 96.18 **108.95** 107.26 109.93 0.6466 $1^{(+)},3^{(+)}$ 107.91 106.67 110.17 0.7928 $1^{(+)},2^{(-)}$ ------------------------------------------------------- ------- ----- -------- ------------ --------- --------- --------- ------------------- ----------- --------- --------- --------- ------------------- : Results for Influence Maximization with uniform chance constraints.[]{data-label="tb:400Cheb"} In order to evaluate the algorithms on the chance constrained influence maximization problem, we use a synthetic data set with $400$ nodes and $1\,594$ edges [@DBLP:conf/ijcai/QianSYT17]. Table \[tb:400Cheb\] shows the results obtained by [GA]{}, [GSEMO]{}, and [NSGA-II]{}for the combinations of $\alpha$ and $\delta$. The results show that [GSEMO]{}obtains the highest mean values compared to the results obtained by [GA]{}and [NSGA-II]{}. Furthermore, the statistical tests show that for most of the combinations of $\alpha$ and $\delta$ [GSEMO]{}and [NSGA-II]{}significantly outperform [GA]{}. The solutions obtained by [GSEMO]{}have significantly better performance than [NSGA-II]{}in the case of a high budget i.e. for $C = 100$. A possible explanation for this is that the relatively small population size of [NSGA-II]{}does not allow one to construct solutions in a greedy fashion, as is possible for [GA]{}and [GSEMO]{}. The Maximum Coverage Problem ---------------------------- The maximum coverage problem [@DBLP:journals/ipl/KhullerMN99; @DBLP:journals/jacm/Feige98] is an important NP-hard submodular optimisation problem. We consider the chance constrained version of the problem. Given a set $U$ of elements, a collection $V = \{S_1,S_2,\ldots,S_n\}$ of subsets of $U$, a cost function c: $2^V\rightarrow {\mathbb{R}}^+$, and a budget $C$, the goal is to find $$\operatorname*{arg\,max}_{X \subseteq V} \{f(X) = |\cup_{S_i \in X} S_i| \text{ s.t. } \Pr(c(X) > C) \leq \alpha\}.$$ We consider linear cost functions. For the uniform case each set $S_i$ has an expected cost of $1$ and we have $E_W(X) = |\{i \mid S_i \in X\}|$. For our experiments, we investigate maximum coverage instances based on graphs. 
The $U$ elements consist of the vertices of the graph and for each vertex, we generate a set which contains the vertex itself and its adjacent vertices. For the chance constrained maximum coverage problem, we use the graphs frb30-15-01 ($450$ nodes, $17\,827$ edges) and frb35-17-01 ($595$ nodes and $27\,856$ edges) from [@datasetsfrb]. The experimental results are shown in Table \[tb:MCP1\]. It can be observed that [GSEMO]{}obtains the highest mean value for each setting. Furthermore, [GSEMO]{}statistically outperforms [GA]{}for most of the settings. For the other settings, there is no statistically significant difference in terms of the results for [GSEMO]{}and [GA]{}. NSGA-II is outperforming [GA]{}for most of the examined settings and the majority of the results are statistically significant. However, [NSGA-II]{}performs worse than [GA]{}for frb35-17-01 when $C=20$ and $\alpha=0.1$. ------------------------------------------------------- ------- ----- -------- ------------ ------------ --------- --------- ------------------- ---------- --------- --------- --------- --------------------- (l[2pt]{}r[2pt]{})[5-9]{} (l[2pt]{}r[2pt]{})[10-14]{} **mean** **min** **max** **std** **stat** **mean** **min** **max** **std** **stat** 0.1 0.5 371.00 **377.23** 371.00 379.00 1.8323 $1^{(+)}$ 376.00 371.00 379.00 2.5596 $1^{(+)}$ 0.1 1.0 321.00 **321.80** 321.00 325.00 1.5625 $1^{(+)}$ 321.47 321.00 325.00 1.2521 0.1 0.5 431.00 **439.60** 435.00 442.00 1.7340 $1^{(+)},3^{(+)}$ 437.57 434.00 441.00 1.7555 $1^{(+)},2^{(-)}$ 0.1 1.0 403.00 **411.57** 408.00 414.00 1.7750 $1^{(+)}$ 410.67 404.00 414.00 2.5098 $1^{(+)}$ 0.1 0.5 446.00 **450.07** 448.00 451.00 0.8277 $1^{(+)},3^{(+)}$ 448.27 445.00 451.00 1.3113 $1^{(+)},2^{(-)}$ 0.1 1.0 437.00 **443.87** 441.00 446.00 1.2794 $1^{(+)},3^{(+)}$ 441.37 438.00 444.00 1.6914 $1^{(+)},2^{(-)}$ 0.001 0.5 348.00 **352.17** 348.00 355.00 2.4081 $1^{(+)}$ 350.80 348.00 355.00 2.8935 $1^{(+)}$ 0.001 1.0 321.00 **321.67** 321.00 325.00 1.5162 $1^{(+)}$ 321.33 321.00 325.00 1.0613 0.001 0.5 414.00 **423.90** 416.00 426.00 2.4824 $1^{(+)}$ 422.67 419.00 426.00 2.2489 $1^{(+)}$ 0.001 1.0 371.00 **376.77** 371.00 379.00 1.8134 $1^{(+)}$ 376.33 371.00 379.00 2.6824 $1^{(+)}$ 0.001 0.5 437.00 **443.53** 440.00 445.00 1.1958 $1^{(+)},3^{(+)}$ 440.23 437.00 443.00 1.6955 $1^{(+)},2^{(-)}$ 0.001 1.0 414.00 **424.00** 420.00 426.00 1.7221 $1^{(+)}$ 422.50 417.00 426.00 2.5291 $1^{(+)}$ 0.1 0.5 448.00 **458.80** 451.00 461.00 3.3156 $1^{(+)}$ 457.97 449.00 461.00 4.1480 $1^{(+)}$ 0.1 1.0 376.00 **383.33** 379.00 384.00 1.7555 $1^{(+)}$ 382.90 379.00 384.00 2.0060 $1^{(+)}$ 0.1 0.5 559.00 **559.33** 555.00 562.00 2.0057 $3^{(+)}$ 557.23 551.00 561.00 2.4309 $1^{(-)},2^{(-)}$ 0.1 1.0 503.00 **507.80** 503.00 509.00 1.1567 $1^{(+)}$ 507.23 502.00 509.00 1.8323 $1^{(+)}$ 0.1 0.5 587.00 **587.20** 585.00 589.00 1.2149 $3^{(+)}$ 583.90 580.00 588.00 1.9360 $1^{(-)},2^{(-)}$ 0.1 1.0 569.00 **569.13** 566.00 572.00 1.4559 $3^{(+)}$ 565.30 560.00 569.00 2.1520 $1^{(-)},2^{(-)}$ 0.001 0.5 413.00 **423.67** 418.00 425.00 1.8815 $1^{(+)}$ 422.27 416.00 425.00 2.6121 $1^{(+)}$ 0.001 1.0 376.00 **383.70** **3**79.00 384.00 1.1492 $1^{(+)}$ 381.73 377.00 384.00 2.6514 $1^{(+)}$ 0.001 0.5 526.00 **527.97** 525.00 532.00 2.1573 $1^{(+)}$ 527.30 520.00 532.00 2.7436 0.001 1.0 448.00 **458.87** 453.00 461.00 2.9564 $1^{(+)}$ 457.10 449.00 461.00 4.1469 $1^{(+)}$ 0.001 0.5 568.00 **568.87** 565.00 572.00 1.5025 $3^{(+)}$ 564.60 560.00 570.00 2.7618 $1^{(-)}$,$2^{(-)}$ 0.001 1.0 526.00 **528.03** 
525.00 530.00 1.8843 $1^{(+)}$ 527.07 522.00 530.00 2.2427 ------------------------------------------------------- ------- ----- -------- ------------ ------------ --------- --------- ------------------- ---------- --------- --------- --------- --------------------- : Results for Maximum Coverage with uniform chance constraints for graphs frb30-15-01 (rows 1-12) and frb35-17-01 dataset (rows 13-24).[]{data-label="tb:MCP1"} Conclusions =========== Chance constraints involve stochastic components and require a constraint only to be violated with a small probability. We carried out a first runtime analysis of evolutionary algorithms for the optimisation of submodular functions with chance constraints. Our results show that [GSEMO]{}using a multi-objective formulation of the problem based on tail inequalities is able to achieve the same approximation guarantee as recently studied greedy approaches. Furthermore, our experimental results show that [GSEMO]{}computes significantly better solutions than the greedy approach and often outperforms NSGA-II. For future work, it would be interesting to analyse other probability distributions for chance constrained submodular functions. A next step would be to examine uniform weights with a different dispersion and obtain results for uniform weights with the same dispersion when using the fitness function $g$ instead of $\hat{g}$. Acknowledgment {#acknowledgment .unnumbered} ============== This work has been supported by the Australian Research Council (ARC) through grant DP160102401 and by the South Australian Government through the Research Consortium “Unlocking Complex Resources through Lean Processing”.
--- abstract: 'Spin excitations are explored in the electron-doped spin-orbit Mott insulator (Sr$_{1-x}$La$_{x}$)$_3$Ir$_2$O$_7$. As this bilayer square lattice system is doped into the metallic regime, long-range antiferromagnetism vanishes, yet a spectrum of gapped spin excitation remains. Excitation lifetimes are strongly damped with increasing carrier concentration, and the energy integrated spectral weight becomes nearly momentum independent as static spin order is suppressed. Local magnetic moments, absent in the parent system, grow in metallic samples and approach values consistent with one $J=\frac{1}{2}$ impurity per electron doped. Our combined data suggest that the magnetic spectra of metallic (Sr$_{1-x}$La$_{x}$)$_3$Ir$_2$O$_7$ are best described by excitations out of a disordered dimer state.' author: - Tom Hogan - Rebecca Dally - Mary Upton - 'J. P. Clancy' - Kenneth Finkelstein - 'Young-June Kim' - 'M. J. Graf' - 'Stephen D. Wilson' bibliography: - 'BibTex.bib' title: 'Disordered dimer state in electron-doped Sr$_{3}$Ir$_{2}$O$_{7}$' --- Models of Heisenberg antiferromagnets on a bilayer square lattice have generated sustained theoretical and experimental interest due to their rich variety of ground states [@PhysRevLett.72.2777; @doi:10.1143/JPSJ.65.594; @PhysRevLett.78.3019; @PhysRevLett.80.5790; @PhysRevB.51.16483]. In zero field, an instability occurs above a critical ratio of interlayer to intralayer magnetic exchange that transitions spins from conventional antiferromagnetism into a dimer state comprised of spin singlets [@PhysRevLett.72.2777; @PhysRevB.61.3475; @PhysRevB.61.3475]. These singlets may interact and form the basis for numerous uncoventional ground states such as valence bond solids [@PhysRevB.69.224416; @PhysRevB.85.180411], quantum spin liquids [@PhysRevLett.80.5790], Bose-glass [@PhysRevLett.95.207206], and other quantum disordered states [@PhysRevB.85.180411]. Realizations of bilayer systems inherently near the critical ratio of interlayer to intralayer coupling however are rare, primarily due to orbital/exchange anisotropies strongly favoring either interplane or intraplane exchange pathways in accessible compounds [@PhysRevLett.87.217201; @PhysRevB.54.16172; @PhysRevB.55.8357]. $J_{\mathrm{eff}}=\frac{1}{2}$ moments are arranged onto a bilayer square lattice within the $n=2$ member of the Sr$_{n+1}$Ir$_{n}$O$_{3n+1}$ Ruddlesden-Popper series, Sr$_3$Ir$_2$O$_7$ (Sr-327) [@SUBRAMANIAN1994645]. The strong spin-orbit coupling inherent to the Ir$^{4+}$ cations in cubic ligand fields renders a largely three dimensional spin-orbit entangled wave function [@PhysRevLett.102.017205; @PhysRevLett.112.026403]. This combined with the extended nature of its $5d$ valence electrons presents Sr-327 as an interesting manifestation of the bilayer square lattice—one where appreciable interlayer coupling potentially coexists with strong intralayer exchange inherent to the single layer analogue Sr$_2$IrO$_4$ (Sr-214) [@PhysRevLett.108.177003]. While its ground state is antiferromagnetic (AF) [@0953-8984-24-31-312202; @PhysRevLett.109.037204; @PhysRevB.86.100401], measurements of magnetic excitations in Sr$_3$Ir$_2$O$_7$ observe anomalous spectra with large spin gaps ($\Delta E\approx 90$ meV) whose values exceed that of the single magnon bandwidth [@Kim; @Sala]. This has led to models shown in Figs. 
1 (a) and (b) that cast the underlying exchange into two extremes: A linear spin wave approach (LSW) with a large anisotropy gap and predominantly intraplane exchange [@Kim] versus a bond operator (BO) mean field approach [@PhysRevB.59.111] with a dominant interplane, dimer-like, exchange [@2012arXiv1210.1974M; @Sala]. Recently, an additional excitation attributed to a longitudinal mode associated with triplon excitations was observed supporting the latter approach[@PhysRevB.52.3521]. The comparable ability of both LSW and BO approaches to capture major features of the magnetic spectra of Sr-327 invites further study. In particular, considerable insight can be gained by probing the evolution of spin dynamics as static AF order is suppressed. Recent work has shown that, unlike Sr-214, Sr-327 can be driven into a homogenous metallic state with no static spin order via La-substitution [@Hogan; @Xiang]. Local moment behavior, notably absent in parent Sr-327 [@0953-8984-19-13-136214], appears in these electron-doped samples and hints at an unconventional metallic state [@Hogan]. Here we utilize resonant inelastic x-ray scattering (RIXS) to explore the spin dynamics of (Sr$_{1-x}$La$_{x}$)$_3$Ir$_2$O$_7$ as it transitions from an AF insulator into a paramagnetic metal. Beyond $x=0.04$ ($6\%$ electrons/Ir), AF order vanishes, yet robust magnetic excitations persist deep into the metallic regime. Excitations become overdamped as carriers are introduced, yet the large spin gap inherent to the AF parent state survives into the disordered regime. The spectral weight of magnons in the metallic state becomes nearly momentum independent and exhibits a dispersion best described using the BO representation appropriate for a dimer state [@Sala]. Supporting this, static spin susceptibility measurements resolve the emergence of local moments which grow with increasing La-content and are consistent with a picture where each electron doped breaks a dimer and creates an uncompensated moment. Our aggregate data are best understood in the framework of a disordered dimer state emergent upon electron substitution in La-doped Sr-327. ![Magnon dispersion relations for Sr-327 plotted for (a) LSW model of Kim *et al.*, with data reproduced at $L=26.5$ r.l.u. from Ref. [@Kim] and (b) the BO model of Moretti Sala *et al.*, with data reproduced at $L=28.5$ r.l.u. from Ref. [@Sala]. (c) Exchange terms within the Heisenberg models of the square lattice bilayer with dimer outlined by dashed red line (d) Illustration of momentum cuts in panels (a) and (b) across the AF zone](Fig1) Resonant inelastic x-ray scattering (RIXS) measurements were performed at 27-ID-B at the Advanced Photon Source at Argonne National Lab and resonant elastic x-ray scattering (REXS) measurements were performed at C1 at CHESS. Details regarding the experimental setups are in supplementary information [@Supplemental]. Inelastic spectra were collected near the Ir L$_3$ edge (11.215 keV) in a geometry with an energy resolution at the elastic line $\Delta E_{res}=32$ meV [@Supplemental]. RIXS spectra were collected at $T=40$ K for two La-doped Sr-327 concentrations: $x=0.02$, an AF insulator ($T_{AF}\approx240$ K) and $x=0.07$, a paramagnetic metal [@Hogan]. Momentum positions are denoted using the in-plane $(H,K)$ wave vectors in the approximate tetragonal unit cell ($a\approx b\approx 3.90 \AA$) and, unless stated otherwise, momentum scans were collected at $L=26.5$ \[r.l.u.\]. 
Representative spectra for both $x=0.02$ and $x=0.07$ samples are shown in Fig. 2. Zone center **Q**$=(\pi, \pi)$ and zone boundary **Q**$=(\pi, 0)$ cuts are shown with the elastic line ($E$), single magnon ($M$), proposed multimagnon ($M^*$), and $d-d$ excitations ($D$) shaded. Excitations were fit to a Lorenztian of the form $I_Q(E)=\frac{2A}{\pi}\frac{\Gamma_Q}{4(E-E_Q)^2+\Gamma_Q^2}$ multiplied by the Bose population factor $(1-e^{\frac{-E}{k_B T}})$. The inverse lifetime values $\Gamma_Q$ for all excitations were substantially greater than the instrumental resolution. ![Representative energy scans collected at $40$ K at fixed **Q** for samples with $x=0.02$ and $x=0.07$. Panels (a) and (b) show scans performed at the AF zone center ($\pi, \pi$) and zone boundary ($\pi$, 0) for AF insulating $x=0.02$ respectively while panels (c) and (d) show the same scans for paramagnetic, metallic $x=0.07$. Features labeled $E$, $M$, $M^{*}$, and $D$ denote scattering from the elastic line, single magnon, multimagnon, and $d-d$ excitations respectively.](Fig2) The dispersions of the $M$ and $M^*$ peaks along the high symmetry directions illustrated in Fig. 1 (d) are plotted for both samples in Fig. 3. Energies of $M$ (squares) and $M^*$ (circles) peaks are shown with the $\Gamma_Q$ associated with $M$ peaks illustrated via the larger shaded regions. Only one feature associated with a single magnon excitation could be identified, and no additional acoustic/optical branches associated with spin waves from a bilayer or longitudinal modes associated with triplon excitations were isolated. Similar to the parent material, the $M$ peaks were independent of $L$ within the zones explored [@Kim; @Supplemental]. Any weak additional modes are obscured due to the overdamping of the $M$ excitations as carriers are introduced [@Supplemental]—an effect which partially convolves the $M$ and $M^*$ features. Despite the absence of this second mode, tests can still be made within the RIXS spectra regarding the suitability of the LSW and BO approaches to the bilayer square lattice Heisenberg Hamiltonian with inplane exchange constants $J_1, J_2, J_3$, interplane exchange $J_c$, and an anisotropy term $\theta$ as illustrated in Fig. 1 (c) [@Sala; @Kim]. Specifically, the data show that the gap energies of $M$-peaks at the ($\pi, \pi$) and ($0, 0$) positions become increasingly inequivalent upon doping. For the $x=0.07$ sample, the AF zone center ($\pi, \pi$) gap value decreases to $E_{\pi , \pi}=73\pm4$ meV whereas the $\Gamma$-point ($0, 0$) gap remains nearly unchanged from the parent system at $E_{0,0}=89\pm4$ meV [@Supplemental]. The differing energies of the $M$ peaks at these two points suggest that a simple LSW model cannot account for the dispersion [@Kim]. In a naive LSW approach, the combined optical plus acoustic spectral weight should remain degenerate at the ($\pi, \pi$) and $(0, 0)$ positions, which for the $x=0.07$ sample would violate the assumption that both an acoustic and optical mode are convolved within the largely $L$-independent $M$ excitations [@Supplemental; @Kim]. The BO approach however allows for nondegenerate spectral weight at these positions through inequivalent transverse mode $E_{\pi, \pi}$ and $E_{0, 0}$ gap values whose ratio is governed by the anisotropy term $cot(\theta)=E_{0,0}/E_{\pi,\pi}$. Therefore, to parameterize the dispersion in electron-doped Sr-327 samples, the BO model was utilized [@2012arXiv1210.1974M; @Sala]. 
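For reference, the single-peak lineshape quoted earlier in this section (a Lorentzian multiplied by the stated thermal factor) can be fit as sketched below. The use of `scipy.optimize.curve_fit`, the initial guesses, and the omission of the elastic, multimagnon, and $d$-$d$ contributions are our simplifications and do not reproduce the actual analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

KB = 8.617e-2  # Boltzmann constant in meV/K


def lineshape(E, A, E_Q, gamma_Q, T=40.0):
    """Lorentzian of amplitude A, center E_Q, and inverse lifetime gamma_Q
    (all in meV), multiplied by the thermal factor quoted in the text."""
    lorentz = (2.0 * A / np.pi) * gamma_Q / (4.0 * (E - E_Q) ** 2 + gamma_Q ** 2)
    return lorentz * (1.0 - np.exp(-E / (KB * T)))


def fit_magnon(E, intensity, guess=(1.0, 90.0, 80.0)):
    """Least-squares estimate of (A, E_Q, Gamma_Q) for one momentum point,
    returning the best-fit values and their one-sigma uncertainties."""
    popt, pcov = curve_fit(lineshape, E, intensity, p0=guess)
    return popt, np.sqrt(np.diag(pcov))
```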
![Dispersion of $M$ and $M^{*}$ features for (a) $x=0.02$ and (b) $x=0.07$ samples. $M$ and $M^{*}$ peaks are plotted as squares and circles respectively, each with accompanying errors. The larger shaded regions about the $M$-dispersion are the excitations’ FWHM. Solid lines denote fits to the transverse modes using the BO model and dashed lines denote the predicted positions of longitudinal modes.](Fig3) Fits using the BO generated dispersion relations along the pathways illustrated in Fig. 1 (d) are shown as solid lines in Figs. 3 (a) and (b). Due to the suppressed spectral weight expected for the longitudinal mode [@Supplemental; @Sala] and the broadened $\Gamma$ values inherent to doped samples, the predicted longitudinal branches lie convolved either within the FWHM of the $M$ mode or $M^*$ feature. Fits were therefore performed only to the transverse modes’ dispersion, and the predictions for the accompanying longitudinal modes are plotted for reference. Using this parameterization, the coupling constants evolve from $J_1=37.7$ meV, $J_2=-14.0$ meV, $J_3=4.8$ meV, and $J_c=87.6$ meV for the $x=0.02$ sample to $J_1=29.1$ meV, $J_2=-17.0$ meV, $J_3=5.2$ meV, and $J_c=80.1$ meV for the $x=0.07$ sample. The anisotropy term decreased slightly from $\theta=41.2$ for $x=0.02$ to $\theta=37.2$ for $x=0.07$. The inverse lifetimes of the $M$-excitations are largely $Q$-independent and increase from an average value of $\Gamma_{avg}=75$ meV for $x=0.02$ to $\Gamma_{avg}=124$ meV for $x=0.07$ [@Supplemental]. ![(a) Energy integrated spectral weight of $M$ peaks across the AF zone. Data for $x=0.02$ (blue circles) show a maximum at the zone center consistent with its AF ordered ground state. Data for $x=0.07$ (red circles) show a nearly Q-independent response. (b) REXS data showing the absence of AF correlations in the $x=0.05$ sample. Black circles denote $H$-scans through the AF position (0.5, 0.5, 18) in the $\sigma -\pi$ channel. Red circles denote the structural reflection at (0.5, 0.5, 19) in the $\sigma -\sigma$ channel scaled by $1/50$ for clarity. Background has been removed from the data. (c) Curie-Weiss fits to high temperature susceptibility with a temperature independent $\chi_0$ term removed and collected under $H=5$ kOe. (d) Local moments $\mu_{\mathrm{eff}}$/La extracted from fits in panel (c).](Fig4) While electron-doping drives a subtle shift in the $M$ dispersion, the bandwidth is largely unaffected upon transitioning from the AF insulating regime ($x=0.02$) into the paramagnetic, metallic state ($x=0.07$). This is striking, in particular due to the reported absence of magnetic order in the $x=0.07$ sample [@Hogan]. The distribution of the spectral weights of $M$ peaks in both samples further reflect this fact and are plotted in Fig. 4 (a). In the AF $x=0.02$ sample, the energy integrated weight is maximal at the magnetic zone center ($\pi$, $\pi$) as expected [@PhysRevB.84.020403]; however this zone center enhancement vanishes with the loss of AF order in the $x=0.07$ sample. In order to further search for signatures of remnant short-range order in the metallic regime, REXS measurements were collected at $7$ K on an $x=0.05$ crystal. Data collected at the Ir $L_3$ edge are plotted in Fig. 4 (b) showing $H$-scans through the magnetic $(0.5, 0.5, 18)$ and structural $(0.5, 0.5, 19)$ reflections in the $\sigma-\pi$ and $\sigma-\sigma$ channels respectively. 
No peak is observed in the $\sigma-\pi$ channel at the expected magnetic wave vector; however the weak structural peak apparent in the $\sigma-\sigma$ channel at $(0.5 ,0.5, 19)$ gives a sense of the measurement sensitivity. By performing identical measurements on an $x=0.02$ sample with a known AF moment $m_{AF}\approx0.31$ $\mu_B$, this places a bound of $m_{AF}<0.06$ $\mu_B$ for the metallic x=0.05 concentration [@Hogan; @Supplemental]. Scans along other high symmetry directions also failed to detect static short-range correlations. The absence of static antiferromagnetism in samples with $x>0.04$ is consistent with earlier neutron diffraction measurements [@Hogan] and render it distinct from its single layer analogue, Sr-214. In electron-doped Sr-214, short-range AF order survives to the highest doping levels explored $\approx 12\%$ electrons/Ir [@Xiang; @2016arXiv160307547G] and can account for a magnon dispersion with slightly renormalized magnetic exchange [@2016arXiv160102172L]. In contrast, electron-doping Sr-327 reveals gapped spin excitations that persist beyond the disappearance of AF order. While a slight increase in $J_c/J_1$ from 2.32 to 2.75 accompanies the disappearance of AF order and is naively consistent with predictions for the formation of a dimer state beyond a critical ratio of $J_c/J_1\approx 2.5$ [@PhysRevLett.72.2777; @PhysRevB.61.3475], the extended in-plane exchange and anisotropy terms used in the BO approach of Ref. [@Sala] as well as the presence of doped carriers necessarily modify this critical threshold [@PhysRevB.52.7708]. Although doping complicates models of dimer excitations, it also provides a further test for a hidden dimer state in Sr-327. In the simplest picture, adding an electron to the IrO$_2$ planes creates a nonmagnetic Ir$^{3+}$ site within a sea of $J=\frac{1}{2}$ moments. For a ground state composed of uncorrelated dimers, this nonmagnetic site should break a dimer and leave an uncompensated $J=\frac{1}{2}$ moment behind. Hence, doping the dimer state with electrons should seed nonmagnetic Ir$^{3+}$ sites and create an increasing fraction of weakly coupled, uncompensated spins within the sample. An order by disorder transition should eventually follow among these unfrustrated local moments in the $T=0$ limit [@doi:10.1143/JPSJ.65.2385; @PhysRevLett.86.1086; @PhysRevLett.103.047201]. Intriguingly, previous magnetization measurements reported an unusual Curie-Weiss (CW) response in electron-doped Sr-327 [@Hogan]. This fact combined with the absence of CW behavior in the high temperature susceptibility of the parent system [@Supplemental] suggests a dopant induced local moment behavior. To explore this further, magnetization measurements were performed on a series of Sr-327 samples with varying levels of La-doping. The high temperature CW susceptibilities for each sample are plotted in Fig. 4 (c) and the local paramagnetic moments ($\mu_{\mathrm{eff}}$) are plotted as a function of La-concentration in Fig. 4 (d). The $\mu_{\mathrm{eff}}$ extracted from CW fits grows with increasing doping, and the $\mu_{\mathrm{eff}}$ induced per La-dopant approaches that of uncompensated $J=\frac{1}{2}$ local moments. The absence of static AF order combined with the growth of local moments in the presence of significant AF exchange supports the notion of an underlying disordered dimer state in metallic Sr-327. 
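The Curie-Weiss analysis underlying Fig. 4 (c) and (d) amounts to fitting $\chi(T) = \chi_0 + C_{\mathrm{CW}}/(T-\theta)$ at high temperature and converting the Curie constant to an effective moment. A schematic version is given below; the fit window, the molar cgs units, and the initial guesses are assumptions for illustration, not values taken from the measurement.

```python
import numpy as np
from scipy.optimize import curve_fit


def curie_weiss(T, chi0, c_cw, theta):
    """chi(T) = chi0 + C_CW / (T - theta), chi in emu/(mol Oe), T in K."""
    return chi0 + c_cw / (T - theta)


def effective_moment(T, chi, window=(150.0, 300.0)):
    """Fit the high-temperature susceptibility and return (mu_eff in Bohr
    magnetons per formula unit, chi0, theta)."""
    T, chi = np.asarray(T), np.asarray(chi)
    mask = (T >= window[0]) & (T <= window[1])
    (chi0, c_cw, theta), _ = curve_fit(curie_weiss, T[mask], chi[mask],
                                       p0=(1e-4, 1e-2, -10.0))
    return np.sqrt(8.0 * c_cw), chi0, theta  # mu_eff ~ sqrt(8 C_CW) mu_B
```

Dividing the resulting $\mu_{\mathrm{eff}}$ by the La content per formula unit then gives a per-dopant moment of the kind plotted in Fig. 4 (d).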
The nearly $Q$-independent energy-integrated spectral weight of the $M$-excitations in the metallic regime is also consistent with a dimer state where the intradimer coupling ($J_c$) approaches the excitation bandwidth. The small increase in the $J_c/J_1$ ratio as doping is increased from $x=0.02$ and $x=0.07$ samples is however not the likely driver for the dimer state’s stabilization, in particular given that $J_c/J_1\approx3.5$ reported for the AF ordered parent system [@Sala] exceeds the ratios for both of the doped compounds. Additionally, structural changes driven by electron doping in Sr-327 are relatively small, and the nearly cubic ligand field of Sr-327 ($\Delta_d = 1.10\times10^{-4}$ [@PhysRevB.93.134110]) does not change appreciably with electron doping [@Hogan]. Rather, a dimer state is likely stabilized by the critical threshold for dimer formation being driven downward via electron-doping similar to $t-J$ models of hole-doping in bilayer cuprates [@PhysRevB.52.7708; @PhysRevB.60.15201; @PhysRevLett.106.136402]. In summary, RIXS data reveal spin excitations in La-doped Sr-327 that persist across the AF insulator to paramagnetic metal transition. Across the insulator-metal transition, static AF correlations vanish and extended LSW models fail to describe the surviving spin spectra with nondegenerate excitations at the two-dimensional AF zone center and $\Gamma$ points. Rather a BO-based mean field approach, reflective of strong interplane dimer interactions, captures the observed dispersion and suggests a disordered dimer state in the metallic regime. The presence of a hidden, disordered dimer state is supported by bulk magnetization data which reveal the emergence of anomalous local moments in electron-doped Sr-327 and are consistent with dopant-induced creation of uncompensated spins from broken dimer pairs. Our results point toward an unconventional metallic state realized beyond the collapse of spin-orbit Mott state in Sr$_3$Ir$_2$O$_7$. S.D.W. thanks L. Balents for helpful discussions. This work was supported in part by NSF award DMR-1505549 (S.D.W.), as well as by the MRSEC Program of the National Science Foundation under Award No. DMR-1121053 (T.H.). This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357. CHESS is supported by the NSF and NIH/NIGMS via NSF award DMR-1332208. Work at the University of Toronto was supported by the Natural Sciences and Engineering Research Council of Canada through the Discovery Grant and the CREATE program. SQUID measurements were supported in part by NSF award DMR-1337567.
--- abstract: 'We consider the hard-core model in $\mathbb{R}^2$, in which a random set of non-intersecting unit disks is sampled with an intensity parameter $\lambda$. Given $\varepsilon>0$ we consider the graph in which two disks are adjacent if they are at distance $\leq \varepsilon$ from each other. We prove that this graph, $G$, is highly connected when $\lambda$ is greater than a certain threshold depending on $\varepsilon$. Namely, given a square annulus with inner radius $L_1$ and outer radius $L_2$, the probability that the annulus is crossed by $G$ is at least $1 - C \exp(-cL_1)$. As a corollary we prove that a Gibbs state admits an infinite component of $G$ if the intensity $\lambda$ is large enough, depending on $\varepsilon$.' author: - 'Alexander Magazinov[^1]' title: 'On percolation of two-dimensional hard disks' --- Introduction ============ How closely can one pack spheres in space? This basic question has fascinated many mathematicians over the years, motivated by its simplicity and various practical applications. Kepler conjectured in 1611 that the most efficient packing of cannonballs in 3 dimensions is given by the Face-Centered Cubic (FCC) lattice (though this arrangement is not unique). Lagrange proved in 1773 that the most efficient *lattice* packing in 2 dimensions is given by the triangular lattice and this was extended by Gauss in 1831 to show that the FCC lattice achieves the highest *lattice* packing density. The proof that the triangular lattice packing is optimal among all packings is generally attributed to Thue [@Thu]. A standard proof of the two-dimensional analogue of Kepler conjecture is now considered folklore, and can be found in [@FT]. Fejes Tóth suggested in 1953 a method for verifying the Kepler conjecture by checking a finite number of cases. This method was finally applied successfully by Hales in 1997 in his groundbreaking, computer-assisted, proof of the Kepler conjecture. More recently, Cohn and Elkies [@CE] explained how the existence of functions with certain Fourier-analytic properties can be used to give an upper bound on the density of packings. This bound was applied by Viazovska last year in a crowning achievement to prove that the $E_8$ lattice gives the densest packing in 8 dimensions; a work which was immediately extended to prove that the Leech lattice gives the densest packing in 24 dimensions, both long-standing conjectures (see [@Vi1; @Vi2]). The densest packing is, in a sense, perfect, lacking in holes or defects. How do near-optimal packings look like? One natural way to make sense of this question is to consider random packings with high density. A standard model for such random packings is the hard-sphere model described below. The model is parameterized by an intensity parameter $\lambda$ with the average density of the random packing increasing with $\lambda$ in a way that the packings with the maximal density arise formally in the limit $\lambda\to\infty$. It is natural to expect that very dense packings would preserve some of the structure of the densest packings. For instance, that in 2 dimensions they would resemble the triangular lattice and that in 3 dimensions they would resemble the FCC lattice or one of the other packings of maximal density. 
While this may be true locally, it was proved by Richthammer [@Ri] in a significant breakthrough that configurations sampled from the two-dimensional hard-disk model at high intensity are globally rather different from the triangular lattice in that they lose the translation rigidity of the latter over long distances. It remains a major challenge to understand in what ways typical configurations of the high-intensity hard-disk model are similar, or dissimilar, to the densest packings. For instance, is rotational rigidity preserved in typical configurations in two dimensions? In statistical physics terminology, one is interested in characterizing the translation-invariant Gibbs states of the hard-disk model and showing that multiple states exist, equivalently that symmetry breaking occurs, at high intensities. No proof of symmetry breaking is currently known in any dimension.

Lyons, Bowen, Radin and Winkler put forward the question of percolation in the hard-disk model. Namely, given a packing of unit disks, let us connect by an edge any two centers whenever the distance between them is at most $2 + \varepsilon$. Does the resulting graph contain an infinite connected component? Bowen, Lyons, Radin and Winkler explicitly conjectured [@BLRW Section 7] that percolation indeed occurs almost surely in the two-dimensional case for any model obtained as a weak-$*$ limit ($N \to \infty$) of $N\mathbb Z^2$-periodic packings of density $d_N$, where $\lim\limits_{N \to \infty} d_N = d(\varepsilon) < \frac{1}{2\sqrt{3}}$. Aristoff [@Ar] addressed a similar question to that of [@BLRW]: does percolation at distance $2 + \varepsilon$ occur almost surely for Gibbs states of the hard-disk model if the intensity parameter $\lambda$ satisfies $\lambda > \lambda(\varepsilon)$? Aristoff managed to provide a positive answer for $\varepsilon > 1$. Passing from intensity to density is indeed relevant; the former has control over the latter (see [@MMSWD Subsection 2.2 and Appendix A]).

In this paper we focus on generalizing Aristoff’s result by eliminating the restriction $\varepsilon > 1$. This result strongly supports the conjecture by Bowen, Lyons, Radin and Winkler, although the definition of a model just by density, as in [@BLRW], may, a priori, be less restrictive than the definition via the grand canonical ensemble. We also mention that there are numerous papers considering similar questions for other models of statistical mechanics; see, for example, [@BL; @Jan; @Stu].

Main result
===========

The main result of this paper is concerned with the Poisson hard-disk model in the Euclidean space $\mathbb{R}^d$. In order to give the definition of this model, we first recall the notion of a Poisson point process. Throughout the paper the following notation is used: $\# X$ denotes the cardinality of a finite set $X$; $|A|$ denotes the $d$-dimensional Lebesgue measure of a set $A \subseteq \mathbb R^d$ (whenever we use this notation the dimension is implicit from the context); $\| v \|$ denotes the Euclidean norm of a vector $v \in \mathbb R^d$; correspondingly, $\| x - y \|$ denotes the Euclidean distance between two points $x, y \in \mathbb R^d$.

(See [@Kin Section 2.1].) Let $\lambda > 0$. Assume that $\mu$ is a random measure on the $\sigma$-algebra $\mathcal B(\mathbb R^d)$ of all Borel sets in $\mathbb R^d$. Let the following properties be satisfied:

1.  For every set $A \in \mathcal B(\mathbb R^d)$ with $|A| < \infty$, the random variable $\mu(A)$ has the Poisson distribution with parameter $\lambda |A|$.
    Equivalently, $\Pr(\mu(A) = i) = \exp(-\lambda|A|) \frac{\lambda^i}{i!}$ for every non-negative integer $i$.

2.  For every two sets $A, B \in \mathcal B(\mathbb R^d)$ of finite Lebesgue measure that are disjoint (i.e., $A \cap B = \varnothing$), the random variables $\mu(A)$ and $\mu(B)$ are independent.

Then the measure $\mu$ is called a [*Poisson point process*]{} with intensity $\lambda$.

A random measure $\mu$ as above can be uniquely identified with a random discrete point set $\eta \subseteq \mathbb R^d$ so that $$\mu(A) = \# (\eta \cap A)$$ for every set $A \in \mathcal B(\mathbb R^d)$. Hence, from now on, the term [*Poisson point process*]{} will refer to a random point set. If $D \subset \mathbb R^d$ is a bounded open set, then the [*Poisson point process of intensity $\lambda$ in $D$*]{} is a random set $\eta \cap D$, where $\eta$ is a Poisson point process of intensity $\lambda$ in $\mathbb R^d$.

The point arrangements appearing in the hard-disk model are picked from a specific space of point arrangements on $\mathbb{R}^d$. This space is introduced in the following Definition \[def:config\].

\[def:config\] Let $$\Omega(\mathbb R^d) = \{ \xi \subset \mathbb R^d: \text{$\| x - y \| > 2$ for all $x, y \in \xi$, $x \neq y$} \}.$$ Then every element $\xi \in \Omega(\mathbb R^d)$ is called a [*configuration*]{}.

By Definition \[def:config\], each configuration $\xi$ can be identified with a packing of unit balls $\{ B_1(x) : x \in \xi \}$. Here and in what follows, $B_{\rho}(x)$ denotes a ball of radius $\rho$ centered at $x$. The condition $\| x - y \| > 2$ is often referred to as [*interaction between points via hard-core exclusion*]{}. We are ready to present the definition of the Poisson hard-disk model.

Let $D\subseteq\mathbb{R}^d$ be a bounded open set and let $\zeta \in \Omega(\mathbb R^d)$. Let $\lambda$ be a positive real number. Consider the Poisson point process $\eta$ in $D$ with intensity $\lambda$. Let $\bar{\eta}$ be the conditional distribution of $\eta$ restricted to the event $$\eta \cup (\zeta \setminus D) \in \Omega(\mathbb R^d).$$ Then the random point set $\eta^{[\lambda]}(D, \zeta)$ defined by $$\eta^{[\lambda]}(D, \zeta) = \bar{\eta} \cup (\zeta \setminus D)$$ is called the [*Poisson hard-disk model*]{} on $D$ with intensity $\lambda$ and boundary conditions $\zeta$.

Indeed, $\bar{\eta}$ is well-defined, since $$\Pr (\eta \cup (\zeta \setminus D) \in \Omega(\mathbb R^d)) \geq \Pr(\eta = \varnothing) > 0.$$ Also, by definition, $\eta^{[\lambda]}(D, \zeta) \in \Omega(\mathbb R^d)$ a.s.

Our main focus is on the existence of large connected components in the hard-disk model, when we connect two points if their distance is at most $2+\varepsilon$. For the associated ball packings this means that the centers of two balls are connected whenever the distance between the balls does not exceed $\varepsilon$. We formalize this concept in the next definitions.

Let $\xi \in \Omega(\mathbb R^d)$ and $\varepsilon > 0$. Consider the graph $G_{\varepsilon}(\xi) = (V_\varepsilon(\xi), E_\varepsilon(\xi))$, where $$V_\varepsilon(\xi) = \xi \quad\text{and}\quad E_\varepsilon(\xi)=\{\{x,y\} : \text{$x,y \in \xi$, $x\neq y$, $\|x-y\| \leq 2+\varepsilon$} \}.$$ We will call $G_{\varepsilon}(\xi)$ the [*connectivity graph*]{} of $\xi$ at distance $\varepsilon$.

Let $\varepsilon > 0$ and $0 < L_1 < L_2$ be fixed.
We say that a configuration $\xi \in \Omega(\mathbb R^2)$ [*is annulus-crossing*]{} if $\xi \in \operatorname{AC}(\varepsilon, L_1, L_2)$, where $$\begin{split} \operatorname{AC}(\varepsilon, L_1, L_2) = \{ & \zeta \in \Omega(\mathbb R^d) : \text{there exist $x, y \in V_{\varepsilon}(\zeta)$} \\ & \text{such that $x\in(-L_1, L_1)^d$, $y\notin (-L_2, L_2)^d$ and} \\ & \text{some connected component of $G_\varepsilon(\zeta)$ contains} \\ & \text{both $x$ and $y$} \}. \end{split}$$ For the sake of shortening the notation, we write $Q_L = (-L, L)^d$. The dimension is omitted, since it will always be clear from the context. We can now state the main result of the paper. For any $\varepsilon > 0$ there exist positive numbers $\lambda_0$, $c$, $C$ and $L_0$, depending only on $\varepsilon$, such that the following holds. If $\lambda > \lambda_0$, $0 < L_1 < L_2$ and $\eta = \eta^{[\lambda]}(Q_{L_2 + L_0}, \zeta)$ is a two-dimensional Poisson hard-disk model, then one necessarily has $$\Pr\bigl( \eta \in \operatorname{AC}(\varepsilon, L_1, L_2) \bigr) \geq 1 - C \exp(-cL_1).$$ We emphasize that the Main Theorem is restricted to the two-dimensional case. One of the steps of our proof essentially relies on the Jordan theorem implying the intersection of two one-dimensional curves (the so-called “large circuits”). The statement of the Main theorem involves a “buffering region” $Q_{L_2 + L_0} \setminus Q_{L_2}$ which we do not require to be crossed. Moreover, the width of this region, $L_0$, depends on $\varepsilon$. This generality allows us to avoid many technical difficulties, although it is likely that, say, $L_0 \equiv 5$ is still sufficient. The Main Theorem deals with configurations with fixed shape outside a finite region (and coincide with the respective boundary conditions). We will also address a model, the so-called Gibbs distribution, where two samples, say $\xi$ and $\xi'$, typically have unbounded symmetric difference $\xi \bigtriangleup \xi'$. For results concerning the Gibbs distributions, see Section \[sec:gibbs\]. The defect of a configuration ============================= Configurations of the hard-sphere model at high intensity $\lambda$ are locally tightly packed, approximating an optimal packing. It will be helpful in the sequel to quantify the local deviation from an optimal packing. Thus we introduce the notion of a [*defect*]{}. Definition of the two-dimensional defect function ------------------------------------------------- We call a configuration $\xi \in \Omega(\mathbb R^d)$ [*saturated in the $\rho$-neighborhood*]{} of a bounded open domain $D \subset \mathbb R^d$ if $$\sup\limits_{y : \operatorname{dist}(y, D) \leq \rho} \operatorname{dist}(y, \xi) \leq 2.$$ If, moreover, the inequality $$\sup\limits_{y \in \mathbb R^d} \operatorname{dist}(y, \xi) \leq 2$$ holds, then $\xi$ is called [*saturated (in the entire $\mathbb R^d$)*]{}. In other words, $\xi$ is saturated if it is a maximal element of $\Omega(\mathbb R^d)$ with respect to the inclusion. Let $\xi \in \Omega(\mathbb R^d)$. For every point $x \in \xi$ define a set $\mathcal V_{\xi}(x) \subseteq \mathbb R^d$ as follows: $$\mathcal V_{\xi}(x) = \{ y \in \mathbb R^d : \| y - x \| = \inf\limits_{x' \in \xi} \| y - x' \| \}.$$ $\mathcal V_{\xi}(x)$ is called the [*Voronoi cell*]{} of $x$ with respect to $\xi$. The tessellation of $\mathbb R^d$ into Voronoi cells for a given point set $\xi$ is called the [*Voronoi tessellation*]{} for $\xi$. 
The Voronoi cell $\mathcal V_{\xi}(x)$ is a convex $d$-dimensional polyhedron, possibly unbounded. Each of its facets is contained in the perpendicular bisector to a segment $[x, x']$ for some $x' \in \xi$. The next proposition gives a useful way to bound the cells of the Voronoi tesselation. \[prop:voronoi\_cell\_rad\] Let $\rho > 2$ and $\xi \in \Omega(\mathbb R^d)$. Suppose $\xi$ is saturated in the $\rho$-neighborhood of a bounded open domain $D$. Then each point $x \in \xi \cap D$ satisfies $\mathcal V_{\xi}(x) \subset B_2(x)$. Assume the converse: there is a point $z \in V_{\zeta}(x) \setminus B_2(x)$. Without loss of generality, one can additionally assume that $\| z - x \| \leq \rho$. Then $$\sup\limits_{y : \operatorname{dist}(y, D) \leq \rho} \operatorname{dist}(y, \xi) \geq \| z - x \| > 2.$$ This contradicts the assumption that $\xi$ is saturated in the $\rho$-neighborhood of $D$. We are ready to proceed with the two-dimensional case. The optimal packing of unit disks in the plane is unique with centers of the disks arranged as a regular triangular lattice (the edge of a generating triangle equals 2). The Voronoi tesselation of this lattice is the tiling of the plane into equal regular hexagons of area $2\sqrt{3}$ (see [@FT]). Moreover, in a Voronoi tesselation for any configuration, the volume of each cell must exceed $2\sqrt{3}$ (see Proposition \[prop:key\_defect\_prop\] below). This motivates the following definition. (Defect of a two-dimensional configuration) Let $D \subset \mathbb R^2$ be a bounded open domain and let $\xi, \xi' \in \Omega(\mathbb R^2)$ be two configurations. Suppose that $\xi \subseteq \xi'$ and that $\xi'$ is locally saturated in the $\rho$-neighborhood of $D$ for some $\rho > 2$. The [*defect*]{} $\Delta(\xi, \xi', D)$ of $\xi$ in the domain $D$ with respect to the locally saturated extension $\xi'$ is defined by $$\label{eq:defect_defn} \Delta(\xi, \xi', D) = \sum\limits_{x \in \xi' \cap D} \left( |\mathcal V_{\xi'}(x)| - 2\sqrt{3} \cdot \mathbbm 1_{\xi}(x) \right).$$ There is no equality case for the inequality $|\mathcal V_{\xi'}(x)| \geq 2\sqrt{3}$, since the optimal packing of balls does not correspond to an element of $\Omega(\mathbb R^2)$: the strict inequality $\| x - y \| > 2$ is violated. Properties of the defect function --------------------------------- In this section we give an abstract definition of a defect function by listing the properties that it should satisfy. This definition applies equally well in any dimension. It will be checked in the next section that the two-dimensional defect function introduced above is consistent with the abstract definition. We choose to give an abstract definition, since there is a chance to generalize the Main Theorem to higher dimensions within the same framework. Indeed, the key intermediate lemma, the Thin Box Lemma, relies exclusively on the properties included in Definition \[def:defect\_prop\]. On the other hand, there is a construction in the 3-dimensional space satisfying these properties. The construction is based on Hales’ analysis of the optimal packing there (see [@Ha Theorems 1.5, 1.7, 1.9]). However, the details are left beyond the scope of this paper as we focus on the two-dimensional case. The author is not aware of any constructions of defect functions in dimensions $d \geq 4$. 
The [*optimal packing density*]{} $\alpha(d)$ in $d$ dimensions is defined by $$\alpha(d) = \limsup\limits_{L \to \infty} \sup\limits_{\xi \in \Omega(\mathbb R^d)} \frac{\# (\xi \cap Q_L)}{|Q_L|}.$$ A [*defect-measuring triple*]{} is a tuple $(\xi, \xi', D)$ where $\xi' \in \Omega(\mathbb R^d)$, $\xi \subseteq \xi'$ and $D$ is a bounded open domain in $\mathbb R^d$. \[def:defect\_prop\] Let $\varepsilon>0$. Let $F$ be a partial function taking defect-measuring triples as arguments and returning real numbers. We say that $F$ [*satisfies the defect function properties at level $\varepsilon$*]{} if there exist real numbers $c, C_{cnt} > 0$ and $\rho \geq 100$ such that the following holds. 1. (Domain of definition.) If $(\xi, \xi', D)$ is a defect-measuring triple and $\xi'$ is saturated in the $\rho$-neighborhood of $D$, then $F(\xi, \xi', D)$ is necessarily defined. 2. (Localization.) If $(\xi_1, \xi'_1, D)$ and $(\xi_2, \xi'_2, D)$ are two defect-measuring triples such that $$\begin{aligned} \xi_1 \bigtriangleup \xi_2 = \varnothing \quad & \text{or} \quad \operatorname{dist}(\xi_1 \bigtriangleup \xi_2, D) > \rho \quad \text{and} \\ \xi'_1 \bigtriangleup \xi'_2 = \varnothing \quad & \text{or} \quad \operatorname{dist}(\xi'_1 \bigtriangleup \xi'_2, D) > \rho, \end{aligned}$$ then either $F$ is defined on both triples and $$F(\xi_1, \xi'_1, D) = F(\xi_2, \xi'_2, D),$$ or $F$ is undefined on both triples. 3. (Positivity.) If $F(\xi, \xi', D)$ is defined, then $F(\xi, \xi', D) \geq 0$. 4. (Monotonicity.) If $F(\xi, \xi', D)$ is defined and $D' \subseteq D$ is an open domain, then $$F(\xi, \xi', D') \leq F(\xi, \xi', D).$$ 5. (Additivity.) If $D_1, D_2 \subset \mathbb R^d$ are two bounded open domains, $D_1 \cap D_2 = \varnothing$ and $F(\xi, \xi', D_1 \cup D_2)$ is defined, then both $F(\xi, \xi', D_1)$ and $F(\xi, \xi', D_2)$ are defined, and $$F(\xi, \xi', D_1 \cup D_2) = F(\xi, \xi', D_1) + F(\xi, \xi', D_2).$$ 6. (Saturation.) If $(\xi, \xi', D)$ is a defect-measuring triple and the inequality $F(\xi, \xi', D) < c$ holds, then one necessarily has $$\xi \cap D = \xi' \cap D.$$ 7. (Connectivity.) If $D$ is convex and a defect-measuring triple $(\xi, \xi', D)$ satisfies $F(\xi, \xi', D) < c$, then there is a connected component $\mathfrak c \subseteq G_{\varepsilon}(\xi \cap D)$ such that $$\{y \in \xi : B_{\rho}(y) \subseteq D \} \subseteq \operatorname{vert}(\mathfrak c).$$ 8. (Distance-decreasing step.) If a defect-measuring triple $(\xi, \xi', D)$, a point $x \in \xi$ and a point $y \in \mathbb R^d$ satisfy the conditions $$\begin{aligned} & F(\xi, \xi', D) < c, \\ & B_{2 + \varepsilon / 2}(x) \subseteq D, \\ & \| x - y \| > \rho, \end{aligned}$$ then there exists a point $x' \in \xi$ such that $$\| x - x' \| < 2 + \frac{\varepsilon}{2} \quad \text{and} \quad \| x' - y \| < \| x - y \|.$$ 9. (Forbidden distances.) If the inequality $F(\xi, \xi', D) < c$ holds for a defect-measuring triple $(\xi, \xi', D)$, then every two points $x, y \in \xi \cap D$ satisfy $$\| x - y \| \notin [2 + 0.9\varepsilon, 2 + \varepsilon].$$ 10. (Point counting.) If $(\xi, \xi', Q_L)$ is a defect-measuring triple on which $F$ is defined then $$F(\xi, \xi', Q_L) \leq |Q_L| - \frac{1}{\alpha(d)} \cdot \# (\xi \cap Q_L) + C_{cnt} L^{d - 1}.$$ We proceed by formulating the key result of this section. There exists $\varepsilon_0 > 0$ such that the two-dimensional defect function $\Delta(\xi, \xi', D)$ defined by  satisfies the defect function properties at every level $\varepsilon \in (0, \varepsilon_0)$. 
Proof of the Defect Lemma ------------------------- The proof is based on the following Proposition \[prop:key\_defect\_prop\]. \[prop:key\_defect\_prop\] The following assertions are true. 1. Let $\xi \in \Omega(\mathbb R^2)$ and $x \in \xi$. Assume that the Voronoi cell $\mathcal V_{\xi}(x)$ is bounded. Then $|\mathcal V_{\xi}(x)| \geq 2 \sqrt{3}$. 2. There exists $c > 0$ such that the following holds. If a configuration $\xi \in \Omega(\mathbb R^2)$ and a point $x \in \xi$ satisfy $|\mathcal V_{\xi}(x)| < 2 \sqrt{3} + c$, then $\mathcal V_{\xi}(x)$ is necessarily a hexagon. 3. For every $\delta > 0$ there exists $c(\delta) > 0$ such that the following holds. If a configuration $\xi \in \Omega(\mathbb R^2)$ and a point $x \in \xi$ satisfy $|\mathcal V_{\xi}(x)| < 2 \sqrt{3} + c(\delta)$, then there is a regular hexagon $H$ of area $2 \sqrt{3}$ centered at $x$ such that $d_{Haus}(\mathcal V_{\xi}(x), H) < \delta$. (The notation $d_{Haus}(\cdot, \cdot)$ denotes the Hausdorff distance between planar convex bodies.) See [@FT Chapter 3] or [@Rog]. We are ready to prove the Defect Lemma. Let us address each defect function property. [**1. Domain of definition.**]{} We show that any choice of $\rho \geq 100$ is sufficient to fulfill this condition. Indeed, for every $x \in \xi' \cap D$ the Voronoi cell $\mathcal V_{\xi'}(x)$ is bounded by Proposition \[prop:voronoi\_cell\_rad\], consequently, the corresponding summand $|\mathcal V_{\xi'}(x)| - 2\sqrt{3} \cdot \mathbbm 1_{\xi}(x)$ is defined. Further, the set $\xi' \cap D$ is finite because $\xi' \in \Omega(\mathbb R^2)$ and $D$ is bounded. Hence the sum in the right-hand side of  is finite and therefore well-defined. [**2. Localization.**]{} Since $\xi'_1 \cap D = \xi'_2 \cap D$, we have $$\Delta(\xi_i, \xi'_i, D) = \sum\limits_{x \in \xi'_1 \cap D} f_i(x), \quad (i = 1, 2),$$ where $$f_i(x) = |\mathcal V_{\xi'_i}(x)| - 2\sqrt{3} \cdot \mathbbm 1_{\xi_i}(x).$$ We claim that $f_1(x) = f_2(x)$ for each $x \in \xi'_1 \cap D$. Indeed, $\mathbbm 1_{\xi_1}(x) = \mathbbm 1_{\xi_2}(x)$ because $\xi_1 \cap D = \xi_2 \cap D$. Next, let us show that $\mathcal V_{\xi'_1 \cap B_4(x)}(x) = \mathcal V_{\xi'_2 \cap B_4(x)}(x)$. Indeed, assume the converse. Then $(\xi'_1 \bigtriangleup \xi'_2) \cap B_4(x) \neq \varnothing$. This contradicts the assumption $\operatorname{dist}(\xi'_1 \bigtriangleup \xi'_2, D) > \rho$. Therefore $$\mathcal V_{\xi'_1}(x) = \mathcal V_{\xi'_1 \cap B_4(x)}(x) = \mathcal V_{\xi'_2 \cap B_4(x)}(x) = \mathcal V_{\xi'_2}(x),$$ where the first and the third identities hold because $\rho > 2$ and the configurations $\xi'_i$ are saturated in the $\rho$-neighborhood of $D$. Hence the expressions for $\Delta(\xi_1, \xi'_1, D)$ and $\Delta(\xi_2, \xi'_2, D)$ are tautologically identical. [**3. Positivity.**]{} Follows immediately from Proposition \[prop:key\_defect\_prop\], assertion 1. [**4. Monotonicity.**]{} Follows immediately from Proposition \[prop:key\_defect\_prop\], assertion 1. [**5. Additivity.**]{} Follows immediately from the definition of $\Delta(\xi, \xi', D)$. [**6. Saturation.**]{} We show that any choice of $c < 2 \sqrt{3}$ is sufficient. Indeed, if there exists $x \in (\xi' \cap D) \setminus (\xi \cap D)$, then $$\Delta(\xi, \xi', D) \geq |\mathcal V_{\xi'}(x)| - 2\sqrt{3} \cdot \mathbbm 1_{\xi}(x) = |\mathcal V_{\xi'}(x)| \geq 2 \sqrt{3}.$$ (Here both inequalities follow from Proposition \[prop:key\_defect\_prop\], assertion 1.) [**7. 
Connectivity.**]{} We claim that, with any choice of $\rho \geq 100$, there exists $c_0 \in (0, 2 \sqrt{3})$ such that any choice $c \in (0, c_0)$ fulfills the Connectivity property. Denote $D' = \{ y \in \mathbb R^2 : B_{\rho}(y) \subseteq D \}$. Clearly, $D'$ is an open bounded convex set. Let $x, x' \in \xi \cap D'$. Consider the segment $[x, x']$. With a small perturbation of this segment we can obtain a curve segment $\gamma \subset D'$, connecting $x$ and $x'$, such that $\gamma$ avoids all vertices of the Voronoi tessellation for $\xi'$. Consider the sequence of points $x_0 = x, x_1, \ldots, x_k = x'$ ($x_i \in \xi'$) such that $\gamma$ intersects $\mathcal V_{\xi'}(x_0)$, $\mathcal V_{\xi'}(x_1)$, $\ldots$, $\mathcal V_{\xi'}(x_k)$ in that order. (Consequently, the cells $\mathcal V_{\xi'}(x_i)$ and $\mathcal V_{\xi'}(x_{i + 1})$ share a common edge.) Applying Proposition \[prop:voronoi\_cell\_rad\] yields $\rho > 2 \geq \operatorname{dist}(x_i, \gamma) \geq \operatorname{dist}(x_i, D')$. Therefore $x_i \in \xi' \cap D$. Moreover, $x_i \in \xi \cap D$, as implied by an assumption $c < c_0 < 2\sqrt{3}$ and the already proved Saturation property. Let us prove that an appropriate choice of $c_0$ implies $\| x_i - x_{i + 1} \| < 2 + \varepsilon$. Indeed, we have $| \mathcal V_{\xi'}(x_j) | < 2 \sqrt{3} + c_0$ ($j = i, i + 1$), hence by choosing $c_0$ sufficiently small we can ensure that both $\mathcal V_{\xi'}(x_i)$ and $\mathcal V_{\xi'}(x_{i + 1})$ are hexagons. (Here we use Proposition \[prop:key\_defect\_prop\], assertion 2.) Consider the common edge $[v, w]$ of the cells $\mathcal V_{\xi'}(x_i)$ and $\mathcal V_{\xi'}(x_{i + 1})$. Then $$\label{eq:triangle} \| x_i - x_{i + 1} \| \leq \left\| x_i - \frac{v + w}{2} \right\| + \left\| x_{i + 1} - \frac{v + w}{2} \right\|.$$ By Proposition \[prop:key\_defect\_prop\], assertion 3, the summands on the right-hand side of  are sufficiently close to 1 if $c_0$ is small enough. Therefore $(x_i, x_{i + 1})$ is an edge of $G_{\varepsilon}(\xi \cap D)$. Consequently, $x x_1 x_2 \ldots x_{k - 1} x'$ is a path in $G_{\varepsilon}(\xi \cap D)$. Hence the Connectivity property follows. [**8. Distance-Decreasing Step.**]{} Let us prove this property under the assumption $\varepsilon < \varepsilon_0 \leq 0.1$. We claim that, with any choice of $\rho > 100$, there exists $c_0 \in (0, 2 \sqrt{3})$ such that any choice $c \in (0, c_0)$ fulfills the Distance-Decreasing Step property. Similarly to the previous argument, we can guarantee that $\mathcal V_{\xi'}(x)$ is a hexagon. Denote by $x_1, x_2, \ldots, x_6$ those points in $\xi'$ with the respective Voronoi cells $\mathcal V_{\xi'}(x)$ sharing common edges with $\mathcal V_{\xi'}(x)$. Then $$\label{eq:neighbors} \mathcal V_{\xi'}(x) = \mathcal V_{\{x, x_1, x_2, \ldots, x_6 \}}(x).$$ By Proposition \[prop:key\_defect\_prop\], assertion 3, an appropriate choice of $c_0$ guarantees $$d_{Haus}(\operatorname{conv}\{x_1, \ldots, x_6 \}, H) < \frac{\varepsilon}{10},$$ where $H$ is some regular hexagon with circumradius 2 centered at $x$. Consequently, $2 < \| x - x_i \| < 2 + \frac{\varepsilon}{2}$, which, in turn, implies $x_i \in D$ and $x_i \in \xi$. Since $\varepsilon < 0.1$, the rays from $x$ to all $x_i$ split the plane into six angles with each of the angles not exceeding $70^\circ$. Thus, with no loss of generality, we can assume that the angle $\angle yxx_1$ (i.e., the angle between the vectors $y - x$ and $x_1 - x$) does not exceed $35^\circ$. 
Therefore $$\|y - x_1 \|^2 \leq \| y - x \|^2 + \|x - x'\|^2 - 2\| y - x \| \| x - x' \| \cos 35^\circ < \| y - x \|^2,$$ because, indeed, $2 \| x - x' \| \cos 35^\circ < \| y - x \|$. Hence $\|y - x_1 \| < \| y - x \|$ and Distance-Decreasing Step property is proved. [**9. Forbidden Distances.**]{} Again, we prove this property under the assumption $\varepsilon < \varepsilon_0 \leq 0.1$. Similarly to the previous argument, an appropriate choice of $c_0$ guarantees the existence of 6 points $x_1, \ldots, x_6 \in \xi'$ such that  is satisfied. Then one can check the inclusion $$B_{2.1}(x) \subseteq B_2(x) \cup B_2(x_1) \cup \ldots \cup B_2(x_6).$$ Therefore every point $x' \in \xi' \setminus \{x, x_1, \ldots, x_6 \}$ satisfies the inequality $\| x' - x \| > 2.1 > 2 + \varepsilon$. On the other hand, $x' \in \{ x, x_1, \ldots, x_6 \}$ implies $\| x' - x \| < 2 + \frac{\varepsilon}{2} < 2 + 0.9\varepsilon$. Hence the Forbidden Distances property follows. [**10. Point counting.**]{} We have $$\bigcup \limits_{x \in (\xi' \cap D)} \mathcal V_{\xi'}(x) \supseteq Q_{L - 2}.$$ Indeed, otherwise there is a point $y$ that belongs to the right-hand side and does not belong to the left-hand side. Then $\xi' \cup \{y\} \in \Omega(\mathbb R^2)$, which is impossible because $\xi'$ is saturated in the $\rho$-neighborhood of $Q_L$. Therefore $$\begin{gathered} F(\xi, \xi', Q_L) = \sum\limits_{x \in \xi' \cap D} |\mathcal V_{\xi'}(x)| - 2\sqrt{3} \# ( \xi \cap Q_L ) \geq \\ |Q_{L - 2}| - \frac{1}{\alpha(2)} \# ( \xi \cap Q_L ) = \geq |Q_L| - \frac{1}{\alpha(2)} \# ( \xi \cap Q_L ) - 16L.\end{gathered}$$ Hence the Counting property follows. We now conclude the proof of the lemma. By the argument above, it is sufficient to set $\varepsilon_0 = 0.1$, $\rho(\varepsilon) \equiv 100$, $C_{cnt}(\varepsilon) \equiv 16$. Finally, in order to define $c(\varepsilon)$ it is sufficient to fulfill the restrictions of the Connectivity, Distance-Decreasing Step and Forbidden Distances properties. The Thin Box Lemma ================== The uniform model ----------------- A Poisson hard-disk model can be represented as a mixture of the so-called [*uniform hard-disk models*]{}, defined below. In [@Ar Section 2] this representation is used as an equivalent definition of a Poisson model. Therefore most of our auxiliary results will concern the uniform models and then passed to the Poisson model by means of Lemma \[lem:pois\_to\_unif\] in Section \[sec:main\_proof\]. Let us introduce some notation. Given [*boundary conditions*]{} $\zeta \in \Omega(\mathbb R^d)$ and a bounded open domain $D \subseteq \mathbb R^d$, we write $$\Omega(D, \zeta) = \{ \xi \in \Omega(\mathbb R^d) : \xi \setminus D = \zeta \setminus D \}.$$ One can thus notice that $\Omega(D, \zeta)$ is exactly the domain of values for the Poisson hard-disk model in $D$ with boundary conditions $\zeta$ and an arbitrary intensity. Next, given additionally an integer $s \geq 0$, denote $$\Omega^{(s)}(D, \zeta) = \{ \xi \in \Omega(D, \zeta) : \# (\xi \cap D) = s \}.$$ Let $D \subseteq \mathbb R^d$ be a bounded open domain and $\zeta \in \Omega(\mathbb R^d)$ be a configuration. Assume an integer $s \geq 0$ satisfies $\Omega^{(s)}(D, \zeta) \neq \varnothing$. Consider the random set $\eta$ of $s$ points sampled independently from the uniform distribution on $D$. Denote by $\bar{\eta}$ the conditional distribution of $\eta$ restricted to the event $\eta \cup (\zeta \setminus D) \in \Omega^{(s)}(D, \zeta)$. 
Then the random point set $\eta^{(s)}(D, \zeta)$ defined by $$\eta^{(s)}(D, \zeta) = \bar{\eta} \cup (\zeta \setminus D)$$ is called the $s$-point [*uniform hard-disk model*]{} on $D$ with boundary conditions $\zeta$. One can see that $\bar{\eta}$ is well-defined. Indeed, the inequality $$\Pr (\eta \cup (\xi \setminus D) \in \Omega^{(s)}(D, \zeta)) > 0$$ follows from the assumption $\Omega^{(s)}(D, \zeta) \neq \varnothing$. Statement of the Thin Box Lemma ------------------------------- In the previous subsection we defined the uniform hard-disk model. Before that, we gave a definition of an (abstract $d$-dimensional) defect function. Let us bring this notions together in the next two definitions. For the rest of this section we assume that the dimension $d \geq 2$, a constant $\varepsilon > 0$ are given, the function $\Delta$ satisfies the defect function properties at level $\varepsilon$. We keep the notation $\rho, c, C_{cnt}$ for the respective constants from Definition \[def:defect\_prop\]. Let $D_1, D_2 \subseteq \mathbb R^d$ be two bounded open domains, and $D_1 \subseteq D_2$. Assume a real number $\rho > 100$, boundary conditions $\zeta \in \Omega(\mathbb R^d)$ and an integer $s \geq 0$ are given. A measurable map $\phi : \Omega^{(s)}(D, \zeta) \to \Omega(\mathbb R^d)$ is called a [*$(D_2, \rho)$-saturator*]{} over $\Omega^{(s)}(D_1, \zeta)$ if the following holds for every $\xi \in \Omega^{(s)}(D, \zeta)$: 1. $\phi(\xi) \supseteq \xi$. 2. $\phi(\xi)$ is saturated in the $\rho$-neighborhood of $D_2$. \[def:bd\_defect\] Let two bounded open domains $D_1 \subseteq D_2 \subset \mathbb R^d$, the boundary conditions $\zeta \in \Omega(\mathbb R^2)$ and an integer $s \geq 0$ be given. We say that the defect of the model $\eta^{(s)}(D_1, \zeta)$ with respect to the outer domain $D_2$ [*is bounded by a constant*]{} $\Delta_0 > 0$ if there exists a $(D_2, \rho)$-saturator $\phi$ over $\Omega^{(s)}(D_1, \zeta)$ such that the inequality $$\Delta(\xi, \phi(\xi), D_2) \leq \Delta_0$$ holds for every $\xi \in \Omega^{(s)}(D_1, \zeta)$. In the notation of Definition \[def:bd\_defect\] let $$\begin{gathered} \operatorname{Dfc}_{\Delta_0}(D_1, D_2) = \{(\zeta, s) : \text{$\zeta \in \Omega(\mathbb R^2)$, $s \in \mathbb Z$, $s \geq 0$} \\ \text{and the defect of $\eta^{(s)}(D_1, \zeta)$ with respect to $D_2$ is bounded by $\Delta_0$} \}.\end{gathered}$$ Now we turn to specific domains in $\mathbb R^d$. For $K > 100 \rho$, $n \in \mathbb N$, $i \in \mathbb Z$ we will write $$R(K, n) = (-K, K)^{d - 1} \times (-20nK, 20nK),$$ $$R'(K, n) = (-5K, 5K)^{d - 1} \times (-20nK, 20nK),$$ $$P_i(K) = (-5K, 5K)^{d - 1} \times ((10i - 5)K, (10i + 5)K).$$ We will consider two important classes of configurations defined below. \[def:cross\] We say that a configuration $\xi$ [*$(\varepsilon, \nu)$-crosses*]{} the box $R'(K, n)$ if there exists a set of indices $I \subseteq \{-2n + 1, -2n + 2, \ldots, 2n - 1 \}$ such that 1. $\# I \geq 4n - 1 - \nu$. 2. If $i \in I$ and $\xi_i = \{ x \in \xi : B_{\rho}(x) \subseteq P_i(K) \}$ then $\xi_i \neq \varnothing$ and every two points of $\xi_i$ are connected by a path in $G_{\varepsilon}(\xi \cap P_i(K))$. 3. If $i_1, i_2 \in I$ and two points $x_1, x_2 \in \xi$ satisfy $B_{\rho}(x_k) \subseteq P_{i_k}(K)$ ($k = 1, 2$) then $x_1$ and $x_2$ are connected by a path in $G_{\varepsilon}(\xi \cap R'(K, n))$. \[def:empty\] Let $D \subseteq \mathbb R^d$ be a bounded open domain. 
We say that a configuration $\xi$ [*admits an empty $\varepsilon$-space in $D$*]{} if there exists a point $w \in \mathbb R^d$ such that $$B_{\varepsilon}(w) \subset D \quad \text{and} \quad \operatorname{dist}(w, \xi) > 2 + \varepsilon.$$ Introduce the following notation: $$p_{cross}[\varepsilon, \nu](K, n, \zeta, s) = \Pr(\text{the box $R'(K, n)$ is $(\varepsilon, \nu)$-crossed by $\eta^{(s)}(R(K, n), \zeta)$} ),$$ $$p_{empty}[\varepsilon](K, n, \zeta, s) = \Pr(\text{$\eta^{(s)}(R(K, n), \zeta)$ admits an empty $\varepsilon$-space in $R(K, n)$}).$$ The main result of this section is as follows. Let $d \geq 2$ be a fixed dimension. Assume that a function $\Delta$ satisfies the defect function properties at some fixed level $\varepsilon \in (0, 1)$, and $\rho, c$ are the respective constants from Definition \[def:defect\_prop\]. Then there exists $K > 100 \rho$ such that the inequality $$\begin{gathered} \inf\limits_{\substack{(n, \zeta, s) \\ \text{\rm appropriate}}} \left( p_{cross}\left[ \varepsilon, \frac{6 \Delta_0}{c} \right](K, n, \zeta, s) + p_{empty}[\varepsilon](K, n, \zeta, s) \right) > 0, \\ \text{where} \qquad \text{$(n, \zeta, s)$ is appropriate} \Longleftrightarrow (\zeta, s) \in \operatorname{Dfc}_{\Delta_0}(R(K, n), R'(K, n)),\end{gathered}$$ holds for every $\Delta_0 > 0$. Definition \[def:defect\_prop\] involves also the constant $C_{cnt}$ (needed for the Point Counting property). However, this constant is not relevant for the Thin Box Lemma. We will need the Point Counting property and the constant $C_{cnt}$ for the further steps. Reduction to existence of a repair algorithm -------------------------------------------- First of all, we wish to emphasize the set of input parameters for the Thin Box Lemma. This set consists of the constants $d, \varepsilon, \rho, c$ and the function $\Delta$. From this point and until the end of the section we assume all these parameters to be fixed. Let $\zeta \in \Omega(\mathbb R^d)$, $K > 100 \rho$, and let $n$ be a positive integer. Assume $\phi : \Omega^{(s)}(R(K, n), \zeta) \to \Omega(\mathbb R^d)$ is a $(R'(K, n), \rho)$-saturator. In addition, assume that a constant $\Delta_0 > 0$ satisfies $$\Delta_0 > \sup\limits_{\xi \in \Omega^{(s)}(R(K, n), \zeta)} \Delta(\xi, \phi(\xi), R'(K, n)).$$ We aim to construct an algorithm as in the following Definition \[defn:rep\_alg\]. \[defn:rep\_alg\] Let $\mathcal A$ be an algorithm with the following structure: $\zeta, \tilde{K}, n, \phi, \Delta_0$ as above; $\xi \in \Omega^{(s)}(R(\tilde{K}, n), \zeta)$. A sequence $(\xi_1, \xi_2, \ldots, \xi_k)$, where $\xi_1 = \xi$, $\xi_i \in \Omega^{(s)}(R(\tilde{K}, n), \zeta)$, and the length $k$ depends on the input. For each fixed $\mathcal I = (\zeta, \tilde{K}, n, \phi, \Delta_0)$ denote by $\Xi(\mathcal I)$ the set of all possible output sequences of $\mathcal A$. Further, denote $$Z_i(\mathcal I) = \{ \xi \in \Omega^{(s)}(R(\tilde{K}, n), \zeta) : \text{$\exists (\xi_1, \xi_2, \ldots, \xi_k) \in \Xi(\mathcal I)$ such that $k \geq i$ and $\xi_i = \xi$} \},$$ $$Z^{term}_i(\mathcal I) = \{ \xi \in \Omega^{(s)}(R(\tilde{K}, n), \zeta) : \text{$\exists (\xi_1, \xi_2, \ldots, \xi_i) \in \Xi(\mathcal I)$ such that $\xi_i = \xi$} \}.$$ We say that $\mathcal A$ [*is a repair algorithm*]{} if there exists a positive constant $K > 100 \rho$, and positive-valued functions $k_0 = k_0(\Delta_0)$ and $c_0 = c_0(\Delta_0)$ such that in the case $\mathcal I = (\zeta, K, n, \phi, \Delta_0)$ one necessarily has 1. 
Every sequence $(\xi_1, \xi_2, \ldots, \xi_k) \in \Xi(\mathcal I)$ (deterministically) satisfies $k \leq k_0$. 2. If $\xi \in Z^{term}_i(\mathcal I)$ then at least one of the following holds: - $R'(K, n)$ is $\left( \varepsilon, \frac{6 \Delta_0}{c} \right)$-crossed by $\xi$, - $\xi$ admits an empty $\varepsilon$-space in $R(K, n)$. 3. There exists a positive constant $c_0$ such that the inequality $$\Pr (\eta \in Z_{i + 1}(\mathcal I)) \geq c_0 \Pr (\eta \in Z_i(\mathcal I) \setminus Z^{term}_i(\mathcal I))$$ holds with $\eta = \eta^{(s)}(R(K, n), \zeta)$ Now we make the key reduction of the Thin Box Lemma. If there exists a repair algorithm then the Thin Box Lemma holds. With no loss of generality, let $c_0 \in (0, 1)$. Further, we assume that $\mathcal I = (\zeta, K, n, \phi, \Delta_0)$ is fixed, therefore we write simply $Z_i$ and $Z^{term}_i$ omitting the argument $\mathcal I$. Finally, we write $\eta = \eta^{(s)}(R(K, n), \zeta)$ as in the definition of a repair algorithm. We will prove that $$\label{eq:inf} p_{cross}\left[ \varepsilon, \frac{6 \Delta_0}{c} \right](K, n, \zeta, s) + p_{empty}[\varepsilon](K, n, \zeta, s) \geq \frac{1}{2}\left(\frac{c_0}{2} \right)^{k_0}.$$ We argue by contradiction: assume that  is false. Let us show, by induction, that $$\Pr(\eta \in Z_i) \geq \left(\frac{c_0}{2} \right)^i.$$ For $i = 1$ this is immediate, since $Z_0 = \Omega^{(s)}(R(K, n), \zeta)$. Assume that the inequality is proved up to some $i$. One can observe that $$\begin{gathered} \frac{1}{2}\left(\frac{c_0}{2} \right)^i \geq \frac{1}{2}\left(\frac{c_0}{2} \right)^{k_0} \geq \\ p_{cross}\left[ \varepsilon, \frac{6 \Delta_0}{c} \right](K, n, \zeta, s) + p_{empty}[\varepsilon](K, n, \zeta, s) \geq \Pr(\eta \in Z^{term}_i).\end{gathered}$$ Indeed, the second inequality is exactly the contrary to , while the third inequality follows from the Key Property 2. Consequently, $$\Pr(\eta \in Z_i \setminus Z^{term}_i) \geq \frac{1}{2}\left(\frac{c_0}{2} \right)^i.$$ By Key Property 3, one concludes $$\Pr(\eta \in Z_{i + 1}) \geq \left(\frac{c_0}{2} \right)^{i + 1}.$$ The induction step is verified. By Key Property 1, $Z_{k_0} = Z^{term}_{k_0}$. Hence $$\begin{gathered} p_{cross}\left[ \varepsilon, \frac{6 \Delta_0}{c} \right](K, n, \zeta, s) + p_{empty}[\varepsilon](K, n, \zeta, s) \geq \\ \Pr(\eta \in Z^{term}_{k_0}) = \Pr(\eta \in Z_{k_0}) \geq \left(\frac{c_0}{2} \right)^{k_0}.\end{gathered}$$ This contradicts our assumption that  is false. Elementary moves ---------------- Consider the line $$\ell = \{ 0 \}^{d - 1} \times \mathbb R.$$ Let $K > 0$. Define two real-valued functions $$h^K_-, h^K_+ : \{ x : \operatorname{dist}(x, \ell) < K \} \to \mathbb R$$ as follows: $$B_K(x) \cap \ell = \{ 0 \}^{d - 1} \times [h^K_-, h^K_+] + t.$$ Accordingly, let us write $$x^K_{\pm} = \{ 0 \}^{d - 1} \times \{ h^K_{\pm} \} + t.$$ Thus $x^K_{\pm}$ are two points on the line $\ell$ at distance $K$ from $x$. In the construction below we refer to $K$ as to the [*radius of the elementary move*]{}. Let $\xi \in \Omega(\mathbb R^d)$, $x \in \xi$ and $dist(x, \ell) < K$. Let $x' \in [x, x^K_-]$ be a point satisfying $\| x - x' \| = m\frac{\varepsilon}{10}$, where $m \in \left\{1, \ldots, \left\lceil \frac{100}{\varepsilon} \right\rceil \right\}$. Then 1. If $\{y \in \xi : \text{$y \neq x$ and $\| y - x' \| \leq 2$} \} = \varnothing$, we will say that set $(\xi \setminus \{ x \}) \cup \{ x' \}$ is [*obtained from $\xi$ by an elementary move of $x$ with magnitude $m$*]{}. 2. 
Otherwise we say that an elementary move of $x$ to $x'$ [*is forbidden*]{} by any point $y \in (\xi \setminus \{ x \}) \cap B_2(x')$. We proceed by proving two crucial properties of elementary moves. \[lem:move\] There exists a positive constant $K_1$ such that for every $K > K_1$ the following holds. If $\xi \in \Omega(\mathbb R^d)$, $x \in \xi$ and a point $y \in \xi$ forbids some elementary move of $x$ with radius $K$, then $\| y - x^K_- \| < \rho$. Let $x'$ be the point such that the move of $x$ to $x'$ be forbidden by $y$. Then $\| x - x' \| = m\frac{\varepsilon}{10}$, where $m$ an integer, $1 \leq m \leq \left\lceil \frac{100}{\varepsilon} \right\rceil$. Also, with no loss of generality assume that $x^K_-$ coincides with the origin $\mathbf 0$. Since $x, y \in \xi$ and $\zeta \in \Omega(\mathbb R^d)$, we have $\| x - y \| > 2 \geq \| x' - y \|$. Thus the perpendicular bisector to the segment $[x, x']$ separates $x$ from $x'$ and $y$. Consequently, $$\langle x - x', x \rangle + \langle x - x', x' \rangle > 2 \langle x, y \rangle.$$ Then $$\langle x, x \rangle + \langle x, x' \rangle > 2 \langle x, y \rangle,$$ since $x = \frac{10K}{m \varepsilon} (x - x')$. After subtracting $2\langle x, x \rangle$ from each side and reversing the sign, one obtains $$\langle x, x - x' \rangle < 2 \langle x, x - y \rangle.$$ But $m \geq 1$, therefore $\langle x, x - x' \rangle \geq \frac{\varepsilon}{10} K$. Hence $$\langle x, x - y \rangle \geq \rho \frac{\varepsilon}{20}.$$ The above implies $$\begin{gathered} \langle y, y \rangle = \langle x - (x - y), x - (x - y) \rangle = \langle x, x \rangle - 2 \langle x, x - y \rangle + \langle x - y, x - y \rangle \leq \\ K^2 - K \frac{\varepsilon}{10} + \langle x - y, x - y \rangle.\end{gathered}$$ But $$\langle x - y, x - y \rangle = \| x - y \|^2 \leq (\| x - x' \| + \| x' - y \|)^2 \leq (20 + 2)^2 < 500.$$ Hence with $K_1 = \frac{5000}{\varepsilon}$ one concludes $$\| y - x^K_- \|^2 = \langle y, y \rangle < K^2.$$ This finishes the proof. \[lem:dist\] There exists a positive constant $K_2$ such that for every $K > K_2$ the following holds. Assume that for $x, y \in \mathbb R^d$ the conditions below are satisfied: 1. $\operatorname{dist}(x, \ell) < K$, $\operatorname{dist}(y, \ell) < K$. 2. $\| x - y \| < 10$. 3. $x' \in [x, x^{\rho}_-]$, $y' \in [y, y^{\rho}_-]$. 4. $\| x - x' \| = \| y - y' \| < 20$. Then $-\frac{\varepsilon}{10} < \| x - y \| - \| x' - y' \| < \frac{\varepsilon}{10}$. Let $b = \| x - x' \|$. Then $$x' = \frac{b}{K} x^K_- + \frac{K - b}{K} x, \quad y' = \frac{b}{K} y^K_- + \frac{K - b}{K} y.$$ Therefore $$x' - y' = (x - y) - \frac{b}{K}(x - y) + \frac{b}{K} (x^K_- - y^K_-).$$ Hence the conclusion of lemma will hold if the following inequalities are satisfied: $$\begin{aligned} \| x - y \| < \frac{K}{20} \cdot \frac{\varepsilon}{20}, \label{eq:moves:4}\\ \| x^K_- - y^K_- \| < \frac{K}{20} \cdot \frac{\varepsilon}{20}. \label{eq:moves:5}\end{aligned}$$ Our aim is to show that $K_2 = \left( \frac{2000}{\varepsilon} \right)^2$ is sufficient to satisfy both  and . In particular, the restriction on $K_2$ implies $K_2 \geq \frac{8000}{\varepsilon}$, so  is immediate. Let us turn to the inequality . Without loss of generality, assume that $h^K_-(x) \geq h^K_-(y)$. 
Then the angle between the vectors $x - x^K_-$ and $y^K_- - x^K_-$ is right or obtuse, therefore $$\| x^K_- - y^K_- \|^2 \leq \| x - y^K_- \|^2 - \| x - x^K_- \|^2 \leq (K + 10)^2 - K^2 = 20K + 100.$$ From the assumption on $K_2$, we have, in particular, $K > 20$, and, consequently, $20K + 100 < 25K$. Thus $$\| x^K_- - y^K_- \| < 5\sqrt{K} = \frac{K}{20} \cdot \frac{100}{\sqrt{K}} < \frac{K}{20} \cdot \frac{\varepsilon}{20},$$ and  is proved as well, completing the proof of the lemma. Implementation of the repair algorithm -------------------------------------- We are ready to implement an algorithm which will satisfy the definition of a repair algorithm. Let $\mathcal I = (\zeta, K, n, \phi, \Delta_0)$ and $\xi \in \Omega^{(s)}(R(K, n), \zeta)$ be the input. For simplicity, we will write $P_i$, $R$ and $R'$ instead of $P_i(K)$, $R(K, n)$ and $R'(K, n)$, respectively. We proceed as follows. [**1. Classification of the cubes $P_i$.**]{} We attribute each $P_i$ ($-2n + 1 \leq i \leq 2n - 1$) to one of the five types according to the rule below. - If $\Delta(\xi, \phi(\xi), P_i) \geq c/2$, consider the sets $$J_-(i) = \{ j \in \mathbb Z : \text{$-2n + 1 \leq j < i$ and $\Delta(\xi, \phi(\xi), P_j) < c/2$} \},$$ $$J_+(i) = \{ j \in \mathbb Z : \text{$-2n + 1 \leq j < i$ and $\Delta(\xi, \phi(\xi), P_j) < c/2$} \}.$$ $P_i$ is attributed to [**Type A**]{} if neither of the sets $J_-(i)$ and $J_+(i)$ is empty. Otherwise $P_i$ is attributed to [**Type B**]{}. - $P_i$ is attributed to [**Type C**]{} if $-2n + 2 \leq i \leq 2n - 2$ and $\Delta(\xi, \phi(\xi), P_j) < c/2$ for each $j \in \{ i - 1, i, i + 1 \}$. - $P_i$ is attributed to [**Type D**]{} if $\Delta(\xi, \phi(\xi), P_j) < c/2$ and there exists $j \in \{ i - 1, i + 1\}$ such that $P_j$ is attributed to Type A. - $P_i$ is attributed to [**Type E**]{} if neither of the above applies. [**2. Auxiliary routine: processing a maximal sequence of neighboring Type A cubes.**]{} Let the indices $j_1, j_2$ ($-2n + 1 \leq j_1 < j_2 \leq 2n - 1$) satisfy the property: $P_{j_1}$ and $P_{j_2}$ are two consecutive cubes of Type D (i.e., no $P_i$ with $j_1 < i < j_2$ is of Type D). Then the set $\{ P_{j_1 + 1}, P_{j_1 + 2}, \ldots, P_{j_2 - 1} \}$ consists only of Type A cubes. Additionally assume $j_1 + 1 < j_2$ so that the sequence of Type A cubes separating $P_{j_1}$ from $P_{j_2}$ is non-empty. Let the point $o_i$ denote the center of the cube $P_i$. Define $$w = \operatorname*{argmin}\limits_{x \in \xi \cap B_K(o_{j_2})} h^K_- (x).$$ Let $$\{ x : x \in \xi, \operatorname{dist}(x, \ell) < K, 10j_1K \leq h^K_-(x) \leq h^K_-(w) \} = \{ x_1, x_2, \ldots, x_q \},$$ where the points $x_i$ are sorted in increasing order with respect to the function $h^K_-$ (i.e., the value $h^K_-(x_i)$ increases with $i$). The routine runs consecutively over $i = 1, 2, \ldots, q$ and finds a new position $x'_i$ for each $x_i$ as follows. We say that an elementary move [*connects*]{} $x$ and $y$ if $\| x' - y \| < 2 + \varepsilon$, where $x'$ is the new position of $x$. - Check if $\operatorname{dist}(x_i, (\xi \cap B_K(o_{j_1})) \cup \{x'_1, x'_2, \ldots, x'_{i - 1} \}) \leq 2 + \varepsilon$. If yes, set $x'_i = x_i$, replace $i$ by $i + 1$ and start over. If no, proceed to the next step. - Check if there exists a (non-forbidden) elementary move of $x_i$ with radius $K$ and connecting $x_i$ to some point $y \in (\xi \cap B_K(o_{j_1})) \cup \{x'_1, x'_2, \ldots, x'_{i - 1} \}$. If yes, choose the smallest possible magnitude for such a move. 
Define $x'_i$ to be the result of the elementary move chosen. Replace $i$ by $i + 1$ and start over. If no, proceed to the next step. - Retain all $x_j$ ($j \geq i$) in place. Report this event and terminate the routine. By our construction, it is clear that $$(\xi \setminus \{ x_1, x_2, \ldots, x_i \} ) \cup \{ x'_1, x'_2, \ldots, x'_i \} \in \Omega(\mathbb R^d).$$ [**3. Using the routine of Step 2.**]{} We apply the routine of Step 2 to each maximal set of consequent Type A cubes. We will show that the processes do not interact (see Lemma \[lem:routine\], Assertion 1), so we can perform the routines consecutively from the lowermost sequence of neighboring Type A cubes to the uppermost one. If some instance of the routine reports a termination, we terminate the entire algorithm. For the rest of the section we denote the above defined algorithm by $\mathcal A$. The sequence $(\xi_1, \xi_2, \ldots, \xi_k)$ is, by definition, the sequence of configurations obtained from $\xi = \xi_1$ after each elementary move of $\mathcal A$. Auxiliary lemmas on the algorithm --------------------------------- Let us prove some immediate properties of the above defined algorithm. Denote $$I_A = \{ i : \text{the cube $P_i$ is of Type A} \}.$$ $I_B$, $I_C$, $I_D$ and $I_E$ are defined similarly. \[lem:card\_i\] $\# (I_A \cup I_B \cup I_D \cup I_E) \leq \frac{6 \Delta_0}{c}$. By Monotonicity and Additivity properties of the defect function, one has $$\Delta_0 \geq \Delta(\xi, \phi(\xi), R') \geq \sum\limits_{i \in I_A \cup I_B} \Delta(\xi, \phi(\xi), P_i) \geq \# (I_A \cup I_B) \cdot \frac{c}{2}.$$ Hence $\# (I_A \cup I_B) \leq \frac{2\Delta_0}{c}$. Further, $\# (I_C \cup I_E) \leq 2 \# (I_A \cup I_B)$, because each Type C or Type E cube is adjacent to some Type A or Type B cube. Consequently, $$\# (I_A \cup I_B \cup I_C \cup I_E) \leq 3 \# (I_A \cup I_B) \leq \frac{6\Delta_0}{c}.$$ \[lem:routine\] Let $K = \max(K_1, K_2, 100\rho, 1000 / \varepsilon) + 1$. Assume that a sequence of neighboring Type A cubes $\{ P_{j_1 + 1}, P_{j_1 + 2}, \ldots, P_{j_2 - 1} \}$ is subject to the above defined auxiliary routine. Then the following assertions are true: 1. If a point $x \in \xi$ is affected by the routine then $$\begin{aligned} \operatorname{dist}(x, [o_{j_1}, o_{j_2}]) \leq K, \\ \|x - o_{j_1}\| \geq K, \\ \|x - o_{j_2}\| \geq K. \end{aligned}$$ 2. If the routine terminates while attempting to move $x_i$, and $$\tilde{\xi} = \xi \setminus \{x_1, x_2, \ldots, x_{i - 1} \} \cup \{x'_1, x'_2, \ldots, x'_{i - 1} \}$$ then $\tilde{\xi}$ has an $\varepsilon$-empty space in $R$. Assertion 1 is clear from the construction. Let us prove Assertion 2. Given $i$ as in the assertion, denote $$\begin{gathered} J = \biggl\{ m \in \mathbb Z, \text{$0 \leq m \leq \left\lceil \frac{100}{\varepsilon} \right\rceil$ and} \\ \operatorname{dist}\left(\frac{m \frac{\varepsilon}{10}}{K} \cdot (x_i)^K_- + \frac{K - m \frac{\varepsilon}{10}}{K} \cdot x_i, (\xi \cap B_K(o_{j_1})) \cup \{x'_1, x'_2, \ldots, x'_{i - 1} \} \right) \leq 2 + \varepsilon \biggr\}.\end{gathered}$$ Consider three cases. [**Case 1.**]{} $0 \in J$. Then $\operatorname{dist}(x_i, (\xi \cap B_K(o_{j_1})) \cup \{x'_1, x'_2, \ldots, x'_{i - 1} \}) \leq 2 + \varepsilon$. Therefore the routine sets $x'_i = x_i$ and does not terminate. This contradicts our initial assumption on $i$, so this case is impossible. [**Case 2.**]{} $J \neq \varnothing$ and $0 \notin J$. Let $m = \min J$. 
We claim that the routine sets $x'_i = z$, where $$z = \frac{m \frac{\varepsilon}{10}}{K} \cdot (x_i)^K_- + \frac{K - m \frac{\varepsilon}{10}}{K} \cdot x_i$$ and does not terminate. In order to prove the claim, we will check that the elementary move of $x_i$ to $z$ as above is not forbidden. Assume the converse: there is a point $y \in (\xi \setminus \{ x_1, x_2, \ldots, x_{i - 1} \} ) \cup \{ x'_1, x'_2, \ldots, x'_{i - 1} \} $ satisfying $\| y - z \| < 2$. There are five subcases. 1. $y = x'_j$, where $j < i$. But $$\left\| z - \left( \frac{(m - 1) \frac{\varepsilon}{10}}{K} \cdot (x_i)^K_- + \frac{K - (m - 1) \frac{\varepsilon}{10}}{K} \cdot x_i \right) \right \| = \frac{\varepsilon}{10},$$ thus $m - 1 \in J$. This contradicts the assumption $m = \min J$, hence the subcase is impossible. 2. $y = x_j$, where $j > i$. Lemma \[lem:move\] immediately implies $h^K_-(x_j) = h^K_-(y) < h^K_-(x_i)$. But this is impossible, since $x_1, x_2, \ldots, x_q$ are arranged in increasing order with respect to the function $h^K_-$. This subcase is also impossible. 3. $\operatorname{dist}(y, \ell) \geq K$. But, Lemma \[lem:move\] implies $\operatorname{dist}(y, \ell) \leq \| y - (x_i)^K_- \| < K$. Hence this subcase is impossible. 4. $y \in \xi \setminus \{ x_1, x_2, \ldots, x_q \}$ and $h^K_-(y) > 10j_1K$. On the other hand, $h^K_-(y) < h^K_-(x_i)$ by Lemma \[lem:move\]. Then $h^K_-(y) < h^K_-(x_i) < h^K_-(w)$. Consequently, $y = x_j$, $j < i$. This contradicts the definition of the subcase, therefore this subcase is impossible. 5. $y \in \xi \setminus \{ x_1, x_2, \ldots, x_q \}$ and $h^K_-(y) \leq 10j_1K$.$h^K_-(y) \leq 10j_1K$. By Lemma \[lem:move\], we also have $h^K_+(y) > h^K_-(x_i) \geq 10j_1K$. Consequently, $o_{j_1} \in [y^K_-, y^K_+]$ and thus $y \in \xi \cap B_K(o_{j_1})$. Similarly to Subcase 1 we conclude $m - 1 \in J$. This again contradicts the assumption $m = \min J$, hence the subcase is impossible. All the subcases are impossible, consequently, the routine does not terminate as claimed. [**Case 3.**]{} $J = \varnothing$. Choose a point $z \in \mathbb R^d$ as follows: $$y \in [x_i, (x_i)^K_-], \qquad \| z - x_i \| = \left\lceil \frac{50}{\varepsilon} \right\rceil \frac{\varepsilon}{10}.$$ Then $\left\lceil \frac{50}{\varepsilon} \right\rceil \notin J$ implies $B_{2 + \varepsilon}(z) \cap \tilde{\xi} = \varnothing$. In addition, $$B_{2 + \varepsilon}(z) \subset B_3(z) \subset B_K((x_i)^K_-) \subset R.$$ Hence $\tilde{\xi}$ indeed has an $\varepsilon$-empty space. $\mathcal A$ is a repair algorithm ---------------------------------- We proceed by verifying the properties 1–3 of Definition \[defn:rep\_alg\] for the algorithm constructed above. We need to check these properties for some particular value of $K$, therefore we fix $K = \max(K_1, K_2, 100\rho, 1000 / \varepsilon) + 1$. Each property will be verified in a separate lemma. The algorithm $\mathcal A$ satisfies property 1. Equivalently, the sequence $(\xi_1, \xi_2, \ldots, \xi_k)$ produced by the algorithm satisfies $k \leq k_0(\Delta_0)$. Each point of the set $\xi$ is moved at most once. Therefore the number of points affected by $\mathcal A$ equals $k - 1$. By Lemma \[lem:routine\], Assertion 1, whenever a point $x \in \xi \cap P_i$ is affected by $\mathcal A$, one necessarily has $i \in I_A \cup I_D$. On the other hand, the inequality $\# (\zeta \cap P_i) \leq \mathrm{const}$, since the edges of the cube $P_i$ have fixed length $k$. 
Hence, using Lemma \[lem:card\_i\], one obtains $$k \leq 1 + \# (I_A + I_D) \cdot \mathrm{const} = 1 + \mathrm{const} \cdot \frac{6\Delta_0}{c}. \qedhere$$ The algorithm $\mathcal A$ satisfies property 2. Equivalently, if $(\xi_1, \xi_2, \ldots, \xi_k)$ is an output of $\mathcal A$ and $\xi_k$ does not admit an empty $\varepsilon$-space in $R$, then or $R'$ is $\left(\varepsilon, \frac{6\Delta_0}{c} \right)$-crossed by $\xi_k$. Assume $\xi_k$ indeed does not admit an empty $\varepsilon$-space in $R$. Then by Lemma \[lem:routine\], Assertion 2, none of the auxiliary routines performed by $\mathcal A$ reports termination. Our proof that $R'$ is $\left(\varepsilon, \frac{6\Delta_0}{c} \right)$-crossed by $\xi_k$ will require verification of the three properties (see Definition \[def:cross\]). These properties, which we call [*Crossing Properties*]{}, involve the set of indices $I$, which we take equal to $I_C$. By Lemma \[lem:card\_i\] we have $\# I \geq 4n - 1 - \frac{6\Delta_0}{c}$. Therefore Crossing Property 1 holds. Let an index $i \in I$ be given. Assume the points $x, x' \in \zeta_k \cap P_i$ satisfy $B_{\rho}(x), B_{\rho}(x') \subset P_i$. The definition of $I = I_C$ implies $\Delta(\xi, \phi(\xi), P_i) < c$. By the Connectivity property of $\Delta$ one concludes that $x$ and $x'$ belong to the same connected component of $G_{\varepsilon}(\xi \cap P_i)$. But Lemma \[lem:routine\], Assertion 1, implies $G_{\varepsilon}(\xi \cap P_i) = G_{\varepsilon}(\xi_k \cap P_i)$, hence Crossing Property 2 holds. Before verifying Crossing Property 3, we establish several auxiliary facts. [**A.**]{} Let $i, i + 1 \in I_C \cup I_D$, $$z_1 \in \xi_k \cap B_2(o_i), \quad z_2 \in \xi_k \cap B_2(o_{i + 1}).$$ Then $z_1$ and $z_2$ belong to the same connected component of $G_{\varepsilon}(\xi_k \cap R')$. Lemma \[lem:routine\], Assertion 1 yields $z_1, z_2 \in \xi$. Now denote $D = \operatorname{conv}(B_K(o_i) \cup B_K(o_{i + 1}))$. Then $$\Delta(\xi, \phi(\xi), D) \leq \Delta(\xi, \phi(\xi), P_i) + \Delta(\xi, \phi(\xi), P_{i + 1}) < c.$$ By the Connectivity property of $\Delta$, the points $z_1$ and $z_2$ belong to the same connected component of $G_{\varepsilon}(\xi \cap D)$. But Lemma \[lem:routine\], Assertion 1 implies $\xi_k \cap D \supseteq \xi \cap D$, hence [**A**]{} follows. [**B.**]{} Let $j_1, j_2 \in I_C$, $j_1 < j_2 - 1$ and $i \in I_A$ for all $i$ satisfying $j_1 < i < j_2$. Let $$z_1 \in \xi_k \cap B_2(o_{j_1}), \quad z_2 \in \xi_k \cap B_2(o_{j_2}).$$ Then $z_1$ and $z_2$ belong to the same connected component of $G_{\varepsilon}(\xi_k \cap R')$. By the conditions of [**B**]{}, a single auxiliary routine of $\mathcal A$ affects the set $$\xi \cap \bigcup_{i = j_1}^{j_2} P_i.$$ Let $x_1, x_2, \ldots, x_q$ and $w$ be as in the definition of the routine. Since the routine does not report termination, $x'_i \in \zeta_k$ is defined for every $i = 1, 2, \ldots, q$. We continue the proof of [**B**]{} in several steps. [**B.1)**]{} For each $i = 1, 2, \ldots, q$ there is $u_i \in \xi \cap B_K(o_{j_1})$ such that $x'_i$ and $u_i$ belong to the same connected component of $G_{\varepsilon}(\xi_k \cap R')$. We argue by induction over $i$. For $i = 1$ the statement is clear from the construction of $x'_1$. If $i > 1$ and $\| x'_i - x'_j \| < 2 + \varepsilon$ for some $j < i$ then we apply the induction hypothesis for $x'_j$. Otherwise $\operatorname{dist}(x'_i, \xi \cap B_K(o_{j_1})) \leq 2 + \varepsilon$ and [**B.1)**]{} follows immediately. 
[**B.2)**]{} There exists an index $i_0 \in \{ 1, 2, \ldots, q \}$ such that $\| w - x_{i_0} \| \leq 2 + \varepsilon$ and $x'_{i_0} = x_{i_0}$. Define $x_{i_0}$ to be the point of the set $\xi$ with the following properties: $$\| x_{i_0} - w \| < 2 + \frac{\varepsilon}{2}, \quad h^K_-(x_{i_0}) < h^K_-(w).$$ There exists at least one such point because of the Distance Decrement property of $\Delta$ applied to the domain $B_K(w^K_-)$. We proceed by defining a sequence of indices $i_0, i_1, \ldots, i_{\lceil 100 / \varepsilon \rceil}$ recursively by fulfilling the condition $$\| x_{i_{j + 1}} - x_{i_j} \| < 2 + \frac{\varepsilon}{2}, \quad h^K_-(x_{i_{j + 1}}) < h^K_-(x_{i_j}).$$ (Since $K > 1000 / \varepsilon$, one has $B_K((x_{i_j})^K_-) \subset P_{j_2}$, therefore the Distance Decrement property is applicable.) Let $m_j$ be the magnitude of the elementary move applied to $x_{i_j}$ (in the case $x'_{i_j} = x_{i_j}$ we set $m_j = 0$ by definition). We prove that $m_j \leq j$. The proof is inductive, from $j = \lceil 100 / \varepsilon \rceil$ to $j = 0$. Indeed, the inequality $m_{\lceil 100 / \varepsilon \rceil} \leq \lceil 100 / \varepsilon \rceil$ clearly holds. Now let $j < \lceil 100 / \varepsilon \rceil$. If $m_{j + 1} = 0$, then $m_j = 0$ and there is nothing to prove. Otherwise denote $$y = \frac{(m_{j + 1} - 1) \frac{\varepsilon}{10}}{K} \cdot (x_{i_j})^K_- + \frac{K - (m_{j + 1} - 1) \frac{\varepsilon}{10}}{K} \cdot x_{i_j}.$$ Using Lemma \[lem:dist\], one concludes $$\| y - x'_{i_{j + 1}} \| < \| x_{i_j} - x_{i_{j + 1}} \| + 2 \cdot \varepsilon / 10 \leq 2 + 0.7 \varepsilon.$$ Therefore $m_j \leq m_{j + 1} - 1$, hence the induction step follows. [**B.3)**]{} There is a point $w' \in \xi \cap B_{K - \rho}(o_{j_2})$ such that $w$ and $w'$ belong to the same connected component of $G_{\varepsilon}(\xi_k \cap R')$. The path $y_0 = w, y_1, y_2, \ldots, y_l = w'$ can be constructed recursively: $y_{i + 1}$ is obtained by applying the Distance Decrement property of $\Delta$ to the ball $B_{\| y - o_{j_2} \|}(o_{j_2})$. [**B.4)**]{} Let $i_0$ be as in [**B.2)**]{} and $u_{i_0}$ be as in [**B.1)**]{}. Then there is a point $u' \in \xi \cap B_{K - \rho}(o_{j_1})$ such that $u_{i_0}$ and $u'$ belong to the same connected component of $G_{\varepsilon}(\xi_k \cap R')$. If $\| u_{i_0} - o_{j_1} \| \leq K - \rho$, there is nothing to prove. Otherwise one argues as in [**B.3)**]{}. [**B.5)**]{} The points $z_1$ and $z_2$ belong to the same connected component of $G_{\varepsilon}(\xi_k \cap R')$. Let $\mathcal C$ be the connected component of $G_{\varepsilon}(\xi_k \cap R')$ containing $z_2$ and let $V$ be the vertex set of $\mathcal C$. By the Connectivity property of $\Delta$ applied to $B_K(o_{j_2})$, $v \in V$. By [**B.3)**]{}, $w \in V$. By [**B.2)**]{}, $x_{i_0} \in V$. By [**B.1)**]{}, $u_{i_0} \in V$. By [**B.4)**]{}, $u' \in V$. Finally, by the Connectivity property of $\Delta$ applied to $B_K(o_{j_1})$, $z_1 \in V$. Thus [**B**]{} is verified. [**C.**]{} Let $i \in I_C$, $$z \in \xi_k \cap B_2(o_i), \quad z' \in \xi_k, \quad B_{\rho}(z') \subset P_i.$$ Then $z$ and $z'$ belong the same connected component of $G_{\varepsilon}(\xi_k \cap R')$. This is a direct consequence of the Connectivity Property of $\Delta$ applied to $P_i$. [**D.**]{} Let $i \in I_C \cup I_D$. Then $\xi_k \cap B_2(o_i) \neq \varnothing$. This is a direct consequence of Lemma \[lem:routine\], Assertion 1, and the Saturation property of $\Delta$ applied to $P_i$. Let us return to Crossing Property 3. 
It is sufficient to show that the two points $z_1, z_2 \in \xi_k$ belong to the same connected component of $G_{\varepsilon}(\zeta_k \cap R')$ once the following conditions are satisfied: $$\begin{aligned} z_1 \in P_{i_1}, & \quad z_2 \in P_{i_2}, & \qquad \text{where $i_1 \neq i_2$} \\ B_{\rho}(z_1) \subset P_{i_1}, & \quad B_{\rho}(z_1) \subset P_{i_1} &\end{aligned}$$ But this is an immediate consequence of the facts [**A**]{} – [**D**]{}. The algorithm $\mathcal A$ satisfies property 2. Equivalently, there exists a positive constant $c_0 = c_0(\Delta_0)$ such that the following inequality holds for the random configuration $\eta = \eta^{(s)}(R, \zeta)$: $$\label{eq:shrink} \Pr (\eta \in Z_{i + 1}) \geq c_0 \Pr (\eta \in Z_i \setminus Z^{term}_i).$$ For each $\zeta_{i + 1} \in Z_{i + 1}$ consider all its possible predecessors $\zeta_i \in Z_i$. This is a multi-valued map $\mathfrak f$ from $Z^{i + 1}$ to $Z_i$. The set $\mathfrak(Z_{i + 1})$ of all values attained by $\mathfrak f$ satisfies $Z_i \setminus Z^{term}_i \subset \mathfrak(Z_{i + 1})$. We will verify the following inequalities. 1. For any sheet $f$ of the multi-map $\mathfrak f$ and any $\xi \in Z_{i + 1}$ such that $f$ is defined in a neighborhood of $\xi$ one has $$|Jac\, f(\xi)| < \left( \frac{K}{K - 20} \right)^{d - 1}.$$ 2. For a.e. $\xi \in Z_{i + 1}$ one has $$\# \mathfrak f(\xi) < \left\lceil \frac{100}{\varepsilon} \right\rceil \cdot \frac{6 \Delta_0}{c} \cdot \frac{(10K + 2)^d}{|B_1(\mathbf 0)|}.$$ [**Proof of a.**]{} Consider the map $g_m: \bigcup\limits_{y \in \ell} B_K(y) \to \mathbb R^d$ defined by $$g_m(x) = \frac{m \frac{\varepsilon}{10}}{K} \cdot x^K_- + \frac{K - m \frac{\varepsilon}{10}}{K} \cdot x.$$ One can check that $$|Jac\, g_m(x)| = \left( \frac{K - m\varepsilon / 10}{K} \right)^{d - 1}.$$ If $f$ and $\xi$ are as above then there exists an integer $m \in \{1, 2, \ldots, \lceil 100 / \varepsilon \rceil \}$ and a point $x_0 \in \xi$ such that the identity $$\label{eq:sheet} f(\xi') = (\xi' \setminus \{ x \}) \cup g_m^{-1}(x), \quad \text{where $\{ x \} = \xi' \cap B_1(x_0)$,}$$ holds for any $\xi'$ in a sufficiently small neighborhood of $\xi$. In other words, $f$ affects a single point of $\xi'$ by an inverse of an elementary move. Thus $$|Jac\, f(\xi)| = |Jac\, g_m(x_0)|^{-1} < \left( \frac{K}{K - 20} \right)^{d - 1}.$$ [**Proof of b.**]{} Let $\xi \in Z_{i + 1}$. Call a pair $(x_0, m)$, where $x_0 \in \xi$ and $m \in \{1, 2, \ldots, \lceil 100 / \varepsilon \rceil \}$, [*valid*]{} if the map $f(\xi')$ defined by  is a sheet of $\mathfrak f$ in some neighborhood of $\xi$. For a.e. $\xi \in Z_{i + 1}$ the multiplicity $\# \mathfrak f(\xi)$ coincides with the number of valid pairs. Assume a pair $(x_0, m)$ is valid. By construction of $\mathcal A$, there exists a point $y \in \xi$ such that $$\label{eq:0.9} 2 + 0.9 \varepsilon \leq \| y - x_0 \| \leq 2 + \varepsilon.$$ But this is impossible for $x_0 \in \bigcup\limits_{i \in I_C} P_i$ because of the Forbidden Distances property of $\Delta$ and Lemma \[lem:routine\], Assertion 1. Therefore $x_0 \in P_i$, where $i \in I_A \cup I_B \cup I_D \cup I_E$. On the other hand, a straightforward volume estimate gives $(\xi \cap P_i) \leq \frac{(10K + 2)^d}{|B_1(\mathbf 0)|}$. Applying Lemma \[lem:card\_i\] finishes the proof of [**b**]{}. 
Now, as [**a**]{} and [**b**]{} are verified, let $$c_0 = \left( \left( \frac{K}{K - 20} \right)^{d - 1} \cdot \left\lceil \frac{100}{\varepsilon} \right\rceil \cdot \frac{6 \Delta_0}{c} \cdot \frac{(10K + 2)^d}{|B_1(\mathbf 0)|} \right)^{-1}.$$ Then  holds, because $$\Pr (\eta \in Z_i \setminus Z^{term}_i) \leq \Pr(\eta \in \mathfrak f(Z_{i + 1})) \leq c_0^{-1} \Pr(\eta \in Z_{i + 1}),$$ where the second inequality is implied by [**a**]{} and [**b**]{}. Large Circuit Lemma =================== Statement of the lemma and reduction to two cases ------------------------------------------------- At this point and for this entire section we fix $\varepsilon > 0$ so that the function $\Delta$ defined by  satisfies the defect function properties at level $\varepsilon$. We proceed by introducing the notion of a [*large-circuit configuration*]{} for a two-dimensional square box $Q_L$. \[def:large\_circuit\] We say that a configuration $\xi \in \Omega(\mathbb R^2)$ is a [*large-circuit configuration*]{} for in the square box $Q_L$ (or [*has a large circuit*]{} in $Q_L$) if the graph $G_{\varepsilon}(\xi \cap (Q_L \setminus Q_{0.9L}))$ has a cycle $x_1 x_2 \ldots x_m x_1$ such that the polygonal circuit $x_1 x_2 \ldots x_m x_1$ is contractible to $\partial Q_L$ in the square annulus $Q_L \setminus Q_{0.9L}$. The following Large Circuit Lemma is the main result of this section. Let $p > 0$. Then there exist positive real numbers $q$ and $L$ such that uniform hard-disk model $\eta = \eta^{(s)}(Q_L, \zeta)$ in the square $Q_L \subset \mathbb R^2$ satisfies at least one of the following two assertions. 1. $\Pr \bigl( \exists y \in Q_L : B_{2 + \varepsilon}(y) \subset Q_L \; \text{and} \; B_{2 + \varepsilon}(y) \cap \eta = \varnothing \bigr) > q$. 2. $\Pr \bigl( \text{$\eta$ has a large circuit in $Q_L$} \bigr) > 1 - p$. We prove the Large Circuit Lemma by reduction to two cases depending on the magnitude of $s$. Each of the two lemmas below corresponds to one of the cases. \[lem:dense\] Let $p > 0$. Assume a constant $C > 0$ is given. Then there exist positive real numbers $q_0$ and $L_0$ such that the following holds. If a two-dimensional uniform hard-disk model $\eta = \eta^{(s)}(Q_L, \zeta)$ satisfies $$L \geq L_0, \quad s > \frac{(2L)^2}{2\sqrt{3}} - CL$$ then either (LC1) or (LC2) holds. \[lem:sparse\] There exist positive real numbers $C$, $L'_0$ and a positive-valued function $q'_0 : (L'_0, +\infty) \to \mathbb R$ such that the following holds. If a two-dimensional uniform hard-disk model $\eta = \eta^{(s)}(Q_L, \zeta)$ satisfies $$L \geq L_0, \quad s \leq \frac{(2L)^2}{2\sqrt{3}} - CL,$$ then one has $$\Pr \bigl( \exists y \in Q_L : B_{2 + \varepsilon}(y) \subset Q_L \; \text{and} \; B_{2 + \varepsilon}(y) \cap \xi = \varnothing \bigr) > q'_0(L).$$ The Large Circuit Lemma indeed follows from Lemma \[lem:dense\] and Lemma \[lem:sparse\] as below. Let $C, L'_0$ be as provided by Lemma \[lem:sparse\]. Inserting this value of $C$ into Lemma \[lem:dense\] yields some $q_0$ and $L_0$. Let $$L = \max(L_0, L'_0), \quad q = \min(q_0, q'_0(L)).$$ We show that the Large Circuit Lemma holds with this choice of $q$ and $L$. Indeed, if $s \leq \frac{(2L)^2}{2\sqrt{3}} - CL$ then (LC1) holds by Lemma \[lem:sparse\]. If $s > \frac{(2L)^2}{2\sqrt{3}} - CL$ then either (LC1) or (LC2) holds by Lemma \[lem:dense\]. Preliminaries ------------- Before we proceed with proofs of Lemmas \[lem:dense\] and \[lem:sparse\], we provide two statements that are important for the further argument. 
\[lem:finite\_gibbs\] Let $\rho > 2$ and let $D_0 \subseteq \mathbb R^d$ be an open bounded domain. Assume that $m$ pairs $(D_1, D'_1), (D_2, D'_2) \ldots, (D_m, D'_m)$ of open domains are given such that $D_i \subseteq D'_i \subseteq D_0$ and $\operatorname{dist}(D'_i, D'_j) > \rho$ for every $1 \leq i < j \leq m$. Let, finally, $E_1, E_2, \ldots, E_m \subset \Omega(\mathbb R^d)$ be measurable sets of configurations such that each indicator function $\mathbbm 1_{E_i}(\xi)$ depends only on $\xi \cap (D'_i)_{\rho}$. For every $\xi \in \Omega(\mathbb R^d)$ write $$\begin{aligned} \operatorname{P}_i (\xi) & = & \Pr(\eta^{(s_i)}(D_i, \xi) \in E_i), \quad \text{where $s_i = \# (\xi \cap D_i)$,} \\ \operatorname{P}_i^{[\lambda]} (\xi) & = & \Pr(\eta^{[\lambda]}(D_i, \xi) \in E_i).\end{aligned}$$ Then: 1. (Uniform case.) Every uniform hard-disk model $\eta^{(s)}(D_0, \zeta)$ satisfies $$\Pr \left( \eta^{(s)}(D_0, \zeta) \in \bigcap\limits_{i = 1}^{m} E_i \right) = \operatorname{E}\left[ \prod\limits_{i = 1}^{m} \operatorname{P}_i (\eta^{(s)}(D_0, \zeta))\right].$$ 2. (Poisson case.) Every Poisson hard-disk model $\eta^{[\lambda]}(D_0, \zeta)$ satisfies $$\Pr \left( \eta^{[\lambda]}(D_0, \zeta) \in \bigcap\limits_{i = 1}^{m} E_i \right) = \operatorname{E} \left[\prod\limits_{i = 1}^{m} \operatorname{P}_i^{[\lambda]} (\eta^{[\lambda]}(D_0, \zeta))\right].$$ Denote $D = D_1 \cup D_2 \cup \ldots \cup D_m$ and $E = E_1 \cap E_2 \cap \ldots \cap E_m$. Consider the uniform case. Write $\eta = \eta^{(s)}(D_0, \zeta)$. Define a map $T : \Omega(\mathbb R^d) \to \Omega(\mathbb R^d) \times \mathbb Z^m$ as follows: $$T(\xi) = (\xi \setminus D, \#(\xi \cap D_1), \# (\xi \cap D_2), \ldots, \# (\xi \cap D_m)).$$ For every $( \xi_0, s_1, s_2, \ldots, s_m) \in T(\Omega(\mathbb R^d))$ denote $$\operatorname{P}(\xi_0, s_1, s_2, \ldots, s_m) = \Pr\left( \eta \in E \mid T(\eta) = ( \xi_0, s_1, s_2, \ldots, s_m) \right),$$ where the expression on the right-hand side is the regular conditional probability induced by the map $T$. Then the following holds: $$\begin{aligned} \Pr \left( \eta \in E \right) & = & \operatorname{E} \left[ \operatorname{P}(T(\eta)) \right] \label{eq:markov_1} \\ & = & \operatorname{E}\left[ \prod\limits_{i = 1}^{m} \operatorname{P}_i (\eta)\right]. \label{eq:markov_2}\end{aligned}$$ Indeed, \[eq:markov\_1\] is the law of total expectation with $\Pr(\eta \in E)$ treated as $\operatorname{E} \left[ \mathbbm 1_E(\eta) \right]$. Further, the regular conditional probability distribution of $\eta$, conditioned on $T(\eta) = T(\xi)$, coincides with the distribution of the random configuration $$\theta(\xi) = (\xi \setminus D) \cup (\eta_1(\xi) \cap D_1) \cup (\eta_2(\xi) \cap D_2) \cup \ldots \cup (\eta_m(\xi) \cap D_m),$$ where all $\eta_i(\xi)$ are independent and each $\eta_i(\xi)$ is distributed as $\eta^{\left( \# (\xi \cap D_i) \right)}(D_i, \xi)$. But $$\Pr(\theta(\xi) \in E) = \Pr(\eta_i(\xi) \in E_i \; \forall i) = \prod\limits_{i = 1}^{m} \operatorname{P}_i (\xi),$$ hence \[eq:markov\_2\] holds. The proof of the Poisson case is essentially the same; thus the details are omitted. The lemma above is an instance of the so-called spatial Markov property. For instance, an analogous framework for the Ising model can be found in \[Friedli–Velenik, Exercise 3.11\] and the remark following the cited exercise. \[lem:index\_elimination\] Let $I$ be a finite set of indices, $J \subseteq I$. Assume that each index $i \in I$ is supplied with a number $a_i \in [0, 1]$. 
Finally, let $\mathcal F \subseteq 2^I$ be a family of subsets of $I$ such that the incidence $X \in \mathcal F$ implies that all supersets of $X$ are in $\mathcal F$, too. Let $\mathcal F_J = \mathcal F \cap 2^J$. Then the following inequality holds. $$\sum\limits_{X \in \mathcal F_J} \left( \prod\limits_{i \in X} a_i \prod\limits_{i \in J \setminus X} (1 - a_i) \right) \leq \sum\limits_{X \in \mathcal F} \left( \prod\limits_{i \in X} a_i \prod\limits_{i \in I \setminus X} (1 - a_i) \right).$$ Clearly, we have $$\sum\limits_{X \in \mathcal F} \left( \prod\limits_{i \in X} a_i \prod\limits_{i \in I \setminus X} (1 - a_i) \right) \geq \sum\limits_{X \in \mathcal F_J}\sum\limits_{Y \in 2^{I \setminus J}} \left( \prod\limits_{i \in X \cup Y} a_i \prod\limits_{i \in I \setminus (X \cup Y)} (1 - a_i) \right).$$ On the other hand, for every $X \in \mathcal F_J$ one has $$\begin{gathered} \sum\limits_{Y \in 2^{I \setminus J}} \left( \prod\limits_{i \in X \cup Y} a_i \prod\limits_{i \in I \setminus (X \cup Y)} (1 - a_i) \right) = \\ \prod\limits_{i \in X} a_i \prod\limits_{i \in I \setminus X} (1 - a_i) \cdot \sum\limits_{Y \in 2^{I \setminus J}} \left( \prod\limits_{i \in Y} a_i \prod\limits_{i \in (I \setminus J) \setminus Y} (1 - a_i) \right) = \\ \prod\limits_{i \in X} a_i \prod\limits_{i \in I \setminus X} (1 - a_i) \prod \limits_{i \in I \setminus J} (a_i + (1 - a_i)) = \prod\limits_{i \in X} a_i \prod\limits_{i \in I \setminus X} (1 - a_i).\end{gathered}$$ Hence the lemma follows. Lemma \[lem:index\_elimination\] may be considered as a special, simpler, case of the Harris inequality [@Har]. But this case allows for a short proof, which we provide for the convenience of a reader. The case of large $s$ --------------------- This is the harder of the two cases considered. Before we proceed with the proof of Lemma \[lem:dense\], let us recall the assumptions we have already accepted. Namely, we assume that a given function $\Delta(\xi, \xi', D)$ satisfies the defect function properties at a fixed level $\varepsilon \in (0, 1)$ (see Definition \[def:defect\_prop\]). The variables $\rho, c, C$ have the same meaning as in Definition \[def:defect\_prop\]. In addition, we will use the parameter $K$ defined by the Thin Box Lemma. The argument is arranged as follows: for every $L > 10^4 K$ we determine the range of pairs $(p, q)$ sufficient to ensure either (LC1) or (LC2) as in the statement of the Large Circuit Lemma. We argue as if the boundary conditions $\zeta$ are fixed, however, none of the estimates below depends on $\zeta$. In the above notation, let a positive integer $n$ satisfy $$20nK < L \leq 20(n + 1)K.$$ (By assumption on $L$, we have $n \geq 500$.) Consider the two families, $R_{i*}$ and $R'_{i*}$ of vertical rectangular boxes and the two families, $R_{*i}$ and $R'_{*i}$ of horizontal rectangular boxes defined as follows: $$\label{eq:boxes} \begin{array}{lll} R'_{i*} & = & [(20i - 5)K, (20i + 15)K] \times [-20nK, 20nK], \\ R_{i*} & = & [(20i - 1)K, (20i + 1)K] \times [-20nK, 20nK], \\ R'_{*i} & = & [-20nK, 20nK] \times [(20i - 5)K, (20i + 5)K],\\ R_{*i} & = & [-20nK, 20nK] \times [(20i - 1)K, (20i + 1)K], \end{array}$$ where $i$ runs through the set $\{ -n + 1, -n + 2, \ldots, n - 1 \}$. Clearly, $R_{i*} \subset R'_{i*}$. Further, a direct analogue of the Thin Box Lemma (obtained from the original Thin Box Lemma by an appropriate translation or rotation) holds for every pair $(R_{i*}, R'_{i*})$ of vertical boxes as well as for every pair $(R_{*i}, R'_{*i})$ of horizontal boxes. 
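
Before turning to the proof of Lemma \[lem:dense\], we note that the inequality of Lemma \[lem:index\_elimination\] above is straightforward to sanity-check numerically by brute force. The short Python sketch below does this for a small random instance; the ground set, the generators of the upward-closed family and the numbers $a_i$ are arbitrary illustrative choices, not taken from the text.

```python
import itertools
import random

# Brute-force numerical check of the index-elimination inequality on a small
# random instance.  The upward-closed family F is generated by random 2-element
# sets, so membership is monotone under taking supersets by construction.
random.seed(1)
I = list(range(6))
J = [0, 1, 2, 3]                                   # a subset J of I
a = {i: random.random() for i in I}                # numbers a_i in [0, 1]

generators = [frozenset(random.sample(I, 2)) for _ in range(3)]
F = {frozenset(X) for r in range(len(I) + 1)
     for X in itertools.combinations(I, r)
     if any(g <= set(X) for g in generators)}      # upward-closed family
F_J = {X for X in F if X <= set(J)}                # F intersected with 2^J

def weight(X, ground):
    """prod_{i in X} a_i * prod_{i in ground \\ X} (1 - a_i)."""
    w = 1.0
    for i in ground:
        w *= a[i] if i in X else (1.0 - a[i])
    return w

lhs = sum(weight(X, J) for X in F_J)
rhs = sum(weight(X, I) for X in F)
print(lhs, "<=", rhs, ":", lhs <= rhs + 1e-12)
```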
Given a configuration $\xi \in \Omega(\mathbb R^2)$, we will write $s_i(\xi) = \#(\xi \cap R_{i*})$. Let us turn to the proof of Lemma \[lem:dense\]. We present the argument as a sequence of steps. [**Step 1.**]{} Construction of saturators. This step is devoted to construction of saturated configurations that will be passed to the function $\Delta(\cdot, \cdot, \cdot)$ as the second argument. Since our argument relies on the defect function properties, this step is crucial. Propositions \[prop:saturator\_1\] and \[prop:saturator\_2\] below are the building blocks of our construction, while Proposition \[prop:saturator\_3\] shows how these building blocks are combined to yield a saturated configuration. \[prop:saturator\_1\] There exists a map $$\theta_* : \Omega(\mathbb R^2) \to \Omega(\mathbb R^2)$$ such that for every $\xi \in \Omega(\mathbb R^2)$ the following properties hold: 1. $\xi \cap \theta_*(\xi) = \varnothing$. 2. $\xi \cup \theta_*(\xi) \in \Omega(\mathbb R^2)$. 3. For each $i \in \{ -n + 1, -n + 2, \ldots, n - 1 \}$ one has $\operatorname{dist}(\theta_*(\xi), R_{i*}) \geq \rho$. 4. If $y \in \mathbb R^2$, then one has either $\operatorname{dist}(y, \theta_*(\xi) \cup \xi) \leq 2$, or $\operatorname{dist}(y, R_{i*}) < \rho$ for some $i \in \{ -n + 1, -n + 2, \ldots, n - 1 \}$. Assume that $\Hat{\theta}$ is a set satisfying the properties 1–3. If the property 4 does not hold for $\Hat{\theta}$ and some point $y \in \mathbb R^2$, then one can replace $\Hat{\theta}$ by $\Hat{\theta} \cup \{ y \}$, and the properties 1–3 will still hold. We call such replacement an [*elementary expansion*]{} of $\Hat{\theta}$ by $y$. Let us proceed by constructing $\theta_*$ using a greedy algorithm. Set $\Hat{\theta} = \varnothing$; then the properties 1–3 hold trivially. Next, perform as many consecutive elementary expansions of $\Hat{\theta}$ by points $y \in B_1(\mathbf 0)$ as possible. After that, perform as many consecutive elementary expansions of $\Hat{\theta}$ by points $y \in B_2(\mathbf 0)$ as possible. Repeating this consecutively for each of the concentric balls $B_3(\mathbf 0), B_4(\mathbf 0), \ldots$ yields a limiting set $\theta_*(\xi)$ for $\Hat{\theta}$. One can see that $\theta_*$ is as required. The question whether $\theta_*$ is measurable is not addressed in Proposition \[prop:saturator\_1\], because it is not needed for the further argument. However, the construction in the proof shows that $\theta_*$ can be constructed as a measurable map. The issue of measurability is, however, important for the next Proposition \[prop:saturator\_2\]. \[prop:saturator\_2\] Let $\xi \in \Omega(\mathbb R^2)$, $i \in \{ -n + 1, -n + 2, \ldots, n - 1 \}$ and $s_i(\xi) = \#(\xi \cap R_{i*})$. Then there exists a measurable map $$\theta_i[\xi] : \Omega^{(s_i)}(R_{i*}, \xi) \to \Omega(\mathbb R^2)$$ such that for every $\eta \in \Omega^{(s_i)}(R_{i*}, \xi)$ the following holds: 1. $\eta \cap \theta_i[\xi](\eta) = \varnothing$. 2. $\eta \cup \theta_*(\xi) \cup \theta_i[\xi](\eta) \in \Omega(\mathbb R^2)$. 3. For each $x \in \theta_i[\xi](\eta)$ one has $\operatorname{dist}(x, R_{i*}) < \rho$. 4. If $y \in \mathbb R^2$, then one has either $\operatorname{dist}(y, \eta \cup \theta_*(\xi) \cup \theta_i[\xi](\eta)) \leq 2$, or $\operatorname{dist}(y, R_{i*}) \geq \rho$. The proof is similar to the one of Proposition \[prop:saturator\_1\]. The measurability throughout the algorithm is maintained in a standard way. 
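
The greedy construction in the proof of Proposition \[prop:saturator\_1\] can be mimicked numerically on a discretized toy example. The Python sketch below sweeps candidate points in order of increasing distance from the centre and adds a candidate whenever it respects the hard-core constraint and keeps distance at least $\rho$ from a "protected" vertical strip that plays the role of a box $R_{i*}$. The grid spacing, box size, strip width and $\rho$ are illustrative assumptions only, and the grid discretization is of course only an approximation of the continuum construction.

```python
import numpy as np

# Discretized toy version of the greedy saturator construction: add points in
# order of increasing distance from the centre, respecting the hard-core
# constraint (mutual distances >= 2) and an exclusion zone around a strip.
rng = np.random.default_rng(2)
size, rho, half_width = 30.0, 3.0, 1.0
centre = size / 2.0
xi = rng.uniform(0.0, size, (20, 2))                 # initial configuration xi

def dist_to_strip(p):
    """Distance from p to the vertical strip |x - centre| <= half_width."""
    return max(0.0, abs(p[0] - centre) - half_width)

cand = np.array([(x, y) for x in np.arange(0.0, size, 0.5)
                        for y in np.arange(0.0, size, 0.5)])
cand = cand[np.argsort(np.linalg.norm(cand - centre, axis=1))]   # mimic concentric balls

pts = list(xi)
theta = []                                           # the saturator theta_*(xi)
for p in cand:
    if dist_to_strip(p) < rho:
        continue                                     # property 3: stay away from the strip
    if np.min(np.linalg.norm(np.asarray(pts) - p, axis=1)) >= 2.0:
        theta.append(p); pts.append(p)               # elementary expansion

# property 4 (on the grid): every candidate far from the strip is now within 2
uncovered = sum(1 for p in cand
                if dist_to_strip(p) >= rho
                and np.min(np.linalg.norm(np.asarray(pts) - p, axis=1)) > 2.0)
print(len(theta), "points added;", uncovered, "grid points far from the strip left uncovered")
```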
\[prop:saturator\_3\] Let $\xi \in \Omega^{(s)}(Q_L, \zeta)$, $i \in \{ -n + 1, -n + 2, \ldots, n - 1 \}$, $s_i = \#(\xi \cap R_{i*})$. Define $$\psi_i[\xi](\eta) = \eta \cup \theta_*(\xi) \cup \theta_i[\xi](\eta),$$ where $\eta$ runs through $\Omega^{(s_i)}(R_{i*}, \xi)$. Then $\psi_i[\xi]$ is an $(R'_{i*}, \rho)$-saturator over $\Omega^{(s_i)}(R_{i*}, \xi)$. Let $y \in \mathbb R^2$ satisfy $\operatorname{dist}(y, \psi_i[\xi](\eta)) > 2$. Then there exists an index $j \neq i$ such that $\operatorname{dist}(y, R_{j*}) \leq \rho$. (This is an immediate consequence of the definitions of $\theta_*(\xi)$ and $\theta_i[\xi](\eta)$.) Therefore $$\operatorname{dist}(y, R'_{i*}) \geq 10K - \rho > 2 \rho,$$ which finishes the proof. [**Step 2.**]{} With $\xi$ fixed, the defects of $\psi_i[\xi]$ are uniformly bounded for most $i$. The aim of this step is to prove certain consequences of the condition $s > \frac{(2L)^2}{2\sqrt{3}} - CL$ of Lemma \[lem:dense\]. \[prop:counting\] Let $C > 0$. Then there exists $C' > 0$ such that the following holds. If $(\xi, \xi', Q_L)$ is a defect-measuring triple in the domain of arguments of $\Delta$ and if the inequality $\# (\xi \cap Q_L) > \frac{|Q_L|}{2\sqrt{3}} - CL$ holds then one has $$\Delta(\xi, \xi', Q_L) < C'L.$$ Denote $s = \# (\xi \cap Q_L)$. Then, by the Counting property of $\Delta$, $$\Delta(\xi, \xi', Q_L) \leq |Q_L| - s \cdot 2\sqrt{3} + C_{cnt}L \leq (2C\sqrt{3} + C_{cnt})L.$$ Hence it is sufficient to choose $C' = 2C\sqrt{3} + C_{cnt}$. Proposition \[prop:counting\] can be applied to the saturators $\psi_i[\xi]$ as follows. \[prop:most\_defects\_bounded\] Let $C > 0$. Then there exists $C'' > 0$ such that the following holds. If a configuration $\xi \in \Omega(\mathbb R^2)$ satisfies the inequality $\# (\xi \cap Q_L) > \frac{|Q_L|}{2\sqrt{3}} - CL$ and if $s_i = \#(\xi \cap R_{i*})$ then one has $$\# \{ i : \text{$i \in \{ -n + 1, -n + 2, \ldots, n - 1 \}$ and $(\xi, s_i) \in \operatorname{Dfc}_{C''}(R_{i*}, R'_{i*})$} \} \geq 2n - 1 - \frac{n}{20}.$$ Let $I \subseteq \{ -n + 1, -n + 2, \ldots, n - 1 \}$ be the set of indices $i$ satisfying $$(\xi, s_i) \notin \operatorname{Dfc}_{C''}(R_{i*}, R'_{i*}).$$ Assume $i \in I$. By definition of $\operatorname{Dfc}_{C''}(R_{i*}, R'_{i*})$, this means that the defect of $\eta^{(s_i)}(R_{i*}, \xi)$ with respect to $R_{i*}$ cannot be bounded by $C''$. Thus, according to Definition \[def:bd\_defect\] with $\phi = \psi_i[\xi]$, one can find a configuration $\eta_i \in \Omega^{(s_i)}(R_{i*}, \xi)$ such that $$\label{eq:large_defect} \Delta(\eta_i, \psi_i[\xi](\eta_i), R'_{i*}) > C''.$$ For $i \in I$ the configuration $\eta_i$ is thus fixed by \[eq:large\_defect\]; for $i \in \{ -n + 1, -n + 2, \ldots, n - 1 \} \setminus I$ let, by definition, $\eta_i = \xi$. Consider the configurations $$\eta = \left( \xi \setminus \bigcup\limits_{i = -n + 1}^{n - 1} R_{i*} \right) \cup \bigcup\limits_{i = -n + 1}^{n - 1} (\eta_i \cap R_{i*}),$$ $$\eta' = \eta \cup \theta_*(\xi) \cup \bigcup\limits_{i = -n + 1}^{n - 1} \theta_i[\xi](\eta_i).$$ By construction, one concludes that $\eta \subseteq \eta'$ and that $\eta'$ is (globally) saturated. Hence each $(\eta, \eta', R'_{i*})$ ($i = -n + 1, -n + 2, \ldots, n - 1$) is a defect-measuring triple and belongs to the domain of arguments of $\Delta$. Further, for each $i \in I$ one has $$\Delta(\eta, \eta', R'_{i*}) = \Delta(\eta_i, \psi_i[\xi](\eta_i), R'_{i*}) > C'',$$ as the first identity follows from the Localization property of $\Delta$. 
Finally, we apply Proposition \[prop:counting\] to the configuration $\eta$. It is clear that $$\# (\eta \cap Q_L) = \# (\xi \cap Q_L) > \frac{|Q_L|}{2\sqrt{3}} - CL.$$ Therefore $$C' L \geq \Delta(\eta, \eta', Q_L) \geq \sum\limits_{i \in I} \Delta(\eta, \eta', R'_{i*}) \geq C'' \cdot \# I.$$ Consequently, $$\# I \leq \frac{C'}{C''}L \leq \frac{20(n + 1)KC'}{C''}.$$ Hence it is sufficient to choose $C'' = 10^4KC'$ to ensure $\# I \leq \frac{n}{20}$, as required. Note that the Counting property of $\Delta$ was crucial for this step. [**Step 3.**]{} Small probability of an empty space event implies high probability of multiple crossings. For the rest of this section we assume that the value of $C$ is inherited from the condition of Lemma \[lem:dense\]. Thus, by Step 2, the values assigned to $C'$ and $C''$ become fixed as well. This step essentially relies on the notion of the $(\varepsilon, \nu)$-crossing. Therefore the reader might find it helpful to recall the notion from Definition \[def:cross\]. In order to state and prove the main result of Step 3, Proposition \[prop:alternative\], we will define a constant $p_1$ and a function $p_2 : \mathbb N \to \mathbb R$. The following Proposition \[prop:p1\_def\] serves as the definition of $p_1$. \[prop:p1\_def\] Let $\xi \in \Omega^{(s)}(Q_L, \zeta)$, where $s > \frac{|Q_L|}{2\sqrt{3}} - CL$. Assume that an index $i \in \{ -n + 1, -n + 2, \ldots, n - 1 \}$ satisfies $$(\xi, s_i) \in \operatorname{Dfc}_{C''}(R_{i*}, R'_{i*}),$$ where $s_i = s_i(\xi)$. Define $\nu = \left\lceil \frac{6C''}{c(\varepsilon)} \right\rceil$, where $c(\varepsilon)$ is inherited from the definition of a defect function. Then there exists a number $p_1 > 0$, independent of $L$, such that at least one of the following holds: 1. $\Pr(\text{$R'_{i*}$ is $(\varepsilon, \nu)$-crossed by $\eta^{(s_i)}(R_{i*}, \xi)$}) > p_1$. 2. $\Pr \left( \text{$R_{i*} \setminus \bigcup\limits_{x \in \eta^{(s_i)}(R_{i*}, \xi)} B_2(x)$ contains an $\varepsilon$-ball} \right) > p_1$. This is a direct consequence of the Thin Box Lemma. Let us define the function $p_2$. Let $n \in \mathbb N$ and $E_1, E_2, \ldots, E_n$ be independent events of probability $p_1$ each. We set, by definition, $$p_2(n) = 1 - \Pr\left( \sum\limits_{i = 1}^{\left\lfloor \frac{n}{20} - 5 \right\rfloor} \mathbbm 1(E_i) > 8\nu \right).$$ \[prop:alternative\] Let $p, q > 0$ satisfy the inequality $$\label{eq:ineq_pq} \frac{q}{p_1} + \frac{1 - \frac{p}{4}}{1 - p_2(n)} < 1.$$ Then for all boundary conditions $\zeta \in \Omega(\mathbb R^2)$ and every integer $s > \frac{|Q_L|}{2\sqrt{3}} - CL$ at least one of the following holds: 1. $\Pr \left( \sum\limits_{i = -n + 1}^{-n + \left\lfloor \frac{n}{10} - 4 \right\rfloor} \mathbbm 1(\text{$R'_{i*}$ is $(\varepsilon, \nu)$-crossed by $\eta^{(s)}(Q_L, \zeta)$}) > 8\nu \right) > 1 - \frac{p}{4}$. 2. $\Pr \left( \text{$Q_L \setminus \bigcup\limits_{x \in \eta^{(s)}(Q_L, \zeta)} B_2(x)$ contains an $\varepsilon$-ball} \right) > q$. For each $i \in \{ -n + 1, -n + 2, \ldots, n - 1 \}$ define $$F^{cross}_i = \{ \eta \in \Omega(\mathbb R^2) : \text{$R'_{i*}$ is $(\varepsilon, \nu)$-crossed by $\eta$} \},$$ $$F^{empty}_i = \left\{ \eta \in \Omega(\mathbb R^2) : \text{$R_{i*} \setminus \bigcup\limits_{x \in \eta} B_2(x)$ contains an $\varepsilon$-ball} \right\}.$$ Denote $$\begin{gathered} G = \{\xi \in \Omega^{(s)}(Q_L, \zeta) : \\ \text{$\Pr( \eta^{(s_i(\xi))}(R_{i*}, \xi) \in F^{empty}_i ) > p_1$ holds for some index $i$} \}.\end{gathered}$$ Consider the two cases. 
[**Case 1.**]{} $\Pr(\eta^{(s)}(Q_L, \zeta) \in G) > \frac{q}{p_1}$. Let us apply Lemma \[lem:finite\_gibbs\] with $D_0 = Q_L$, $m = 2n - 1$, $D_i = R_{(-n + i)*}$, $D'_i = R'_{(-n + i)*}$ and $E_i = \Omega(\mathbb R^2) \setminus F^{empty}_{-n + i}$. For every $\xi \in G$ we have $$\prod\limits_{i = 1}^{m} \operatorname{P}_i(\xi) \leq 1 - p_1,$$ because at least one multiplier does not exceed $1 - p_1$ and the others do not exceed 1. Consequently, $$\begin{gathered} \Pr \left( \text{$Q_L \setminus \bigcup\limits_{x \in \eta^{(s)}(Q_L, \zeta)} B_2(x)$ contains an $\varepsilon$-ball} \right) \geq \\ 1 - \Pr\left( \eta^{(s)}(Q_L, \zeta) \in \bigcap\limits_{i = 1}^{m} E_i \right) = \\ 1 - \operatorname{E} \left[ \prod\limits_{i = 1}^{m} \operatorname{P}_i (\eta^{(s)}(Q_L, \zeta)) \right] \geq \\ 1 - \Pr(\eta^{(s)}(Q_L, \zeta) \in G) \cdot (1 - p_1) - (1 - \Pr(\eta^{(s)}(Q_L, \zeta) \in G)) = \\ p_1 \cdot \Pr(\eta^{(s)}(Q_L, \zeta) \in G) > q,\end{gathered}$$ which is exactly the second assertion of the alternative. [**Case 2.**]{} $\Pr(\eta^{(s)}(Q_L, \zeta) \notin G) > \frac{1 - \frac{p}{4}}{1 - p_2(n)}.$ Denote $$I = \left\{ 1, 2, \ldots, \left\lfloor \frac{n}{10} - 4 \right\rfloor \right\}, \quad \mathcal F = \{ X \subseteq I : \# X > 8 \nu \},$$ $$s_i(\xi) = \# (\xi \cap R_{(-n + i)*}), \quad a_i(\xi) = \Pr(\eta^{(s_i)}(R_{(-n + i)*}, \xi) \in F^{cross}_{-n + i}), \quad \text{where $s_i = s_i(\xi)$.}$$ Consider an arbitrary set of integers $X \in \mathcal F$ and let us apply Lemma \[lem:finite\_gibbs\] with $D_0 = Q_L$, $m = \left\lfloor \frac{n}{10} - 4 \right\rfloor$, $D_i = R_{(-n + i)*}$, $D'_i = R'_{(-n + i)*}$ and $$E_i = \left\{ \begin{array}{ll} F^{cross}_{-n + i} & \text{if $i \in X$} \\ \Omega(\mathbb R^2) \setminus F^{cross}_{-n + i} & \text{if $i \notin X$.} \end{array} \right.$$ If $$X(\xi) = \{i \in I : \text{$R'_{(-n + i)*}$ is $(\varepsilon, \nu)$-crossed by $\xi$}\}$$ and $\eta = \eta^{(s)}(Q_L, \zeta)$, one has $$\Pr \left( X(\eta) = X \right) = \operatorname{E} \left[ \prod\limits_{i \in X} a_i(\eta) \prod\limits_{i \in I \setminus X} (1 - a_i(\eta)) \right].$$ Therefore $$\begin{gathered} \Pr \left( \sum\limits_{i = -n + 1}^{-n + \left\lfloor \frac{n}{10} - 4 \right\rfloor} \mathbbm 1(\text{$R'_{i*}$ is $(\varepsilon, \nu)$-crossed by $\eta$}) > 8\nu \right) = \\ \operatorname{E} \sum\limits_{X \in \mathcal F} \left( \prod\limits_{i \in X} a_i(\eta) \prod\limits_{i \in I \setminus X} (1 - a_i(\eta)) \right).\end{gathered}$$ Further, denote $$Y(\xi) = \{ i \in I : (\xi, s_i(\xi)) \in \operatorname{Dfc}_{C''}(R_{(-n + i)*}, R'_{(-n + i)*}) \}.$$ By Proposition \[prop:most\_defects\_bounded\], every configuration $\xi \in \Omega^{(s)}(Q_L, \zeta)$ satisfies $$\# Y(\xi) \geq \left\lfloor \frac{n}{10} - 4 \right\rfloor - \frac{n}{20} \geq \left\lfloor \frac{n}{20} - 5 \right\rfloor.$$ Let, in addition, $\xi \notin G$. Then, by Proposition \[prop:p1\_def\], the inequality $a_i(\xi) > p_1$ is satisfied for every $i \in Y(\xi)$. Therefore $$\begin{gathered} \sum\limits_{X \in \mathcal F} \left( \prod\limits_{i \in X} a_i(\xi) \prod\limits_{i \in I \setminus X} (1 - a_i(\xi)) \right) \geq \\ \sum\limits_{X \in \mathcal F \cap 2^{Y(\xi)}} \left( \prod\limits_{i \in X} a_i(\xi) \prod\limits_{i \in Y(\xi) \setminus X} (1 - a_i(\xi)) \right) \geq 1 - p_2(n),\end{gathered}$$ where the first inequality follows from Lemma \[lem:index\_elimination\] and the second one follows from the definition of $p_2$. 
Hence $$\begin{gathered} \Pr \left( \sum\limits_{i = -n + 1}^{-n + \left\lfloor \frac{n}{10} - 4 \right\rfloor} \mathbbm 1(\text{$R'_{i*}$ is $(\varepsilon, \nu)$-crossed by $\eta$}) > 8\nu \right) \geq \\ (1 - p_2(n)) \cdot \Pr(\eta \notin G) > 1 - \frac{p}{4},\end{gathered}$$ which is exactly the first assertion of the alternative. But $$\Pr(\eta^{(s)}(Q_L, \zeta) \in G) + \Pr(\eta^{(s)}(Q_L, \zeta) \notin G) = 1 > \frac{q}{p_1} + \frac{1 - \frac{p}{4}}{1 - p_2(n)},$$ so at least one of the two inequalities defining the cases must hold. Therefore Case 1 and Case 2 exhaust all possibilities. [**Step 4.**]{} Multiple vertical and horizontal crossings guarantee a large circuit. At this point the reader might find it helpful to recall the notation $R_{i*}$, $R'_{i*}$ (for vertical thin boxes) and $R_{*i}$, $R'_{*i}$ (for horizontal thin boxes), introduced in \[eq:boxes\]. We use the notation in the following Proposition \[prop:circuit\], which is the main result of this step. \[prop:circuit\] Let $\xi \in \Omega(\mathbb R^2)$. Assume that there are four sets of indices $$\begin{gathered} I_{left}, I_{lower} \subseteq \left\{ -n + 1, -n + 2, \ldots, -n + \left\lfloor \frac{n}{10} - 4 \right\rfloor \right\}, \\ I_{right}, I_{upper} \subseteq \left\{ n - 1, n - 2, \ldots, n - \left\lfloor \frac{n}{10} - 4 \right\rfloor \right\},\end{gathered}$$ such that $$\min (\# I_{left}, \# I_{right}, \# I_{lower}, \# I_{upper}) > 8\nu,$$ all the boxes $R'_{i*}$ are $(\varepsilon, \nu)$-crossed by $\xi$ for $i \in I_{left} \cup I_{right}$, and all the boxes $R'_{*i}$ are $(\varepsilon, \nu)$-crossed by $\xi$ for $i \in I_{lower} \cup I_{upper}$. Then $\xi$ has a large circuit in $Q_L$. With no loss of generality, assume that $$\# I_{left} = \# I_{right} = \# I_{lower} = \# I_{upper} = 8\nu + 1.$$ Define $P_{ij} = R'_{i*} \cap R_{*j}$. If $i \in I_{left} \cup I_{right}$, $j \in I_{lower} \cup I_{upper}$ and the square $P_{ij}$ is exceptional either for $R'_{i*}$ or for $R'_{*j}$, then the pair $(i, j)$ will be called exceptional, too. It is clear that the number $m_{exc}$ of exceptional pairs satisfies the inequality $$m_{exc} \leq 4\nu(8\nu + 1).$$ Now choose $i_1 \in I_{left}$, $i_2 \in I_{right}$, $j_1 \in I_{lower}$ and $j_2 \in I_{upper}$ at random and independently. One can see that the expected number of exceptional pairs $(i_k, j_l)$, where $(k, l)$ runs through $\{1, 2\}^2$, equals $\frac{m_{exc}}{(8\nu + 1)^2} < 1$. Hence there exist $i_1, i_2, j_1, j_2$ as above such that none of the pairs $(i_k, j_l)$ is exceptional. Consequently, there exists a large circuit for $Q_L$ enclosed in the set $R'_{i_1*} \cup R'_{*j_1} \cup R'_{i_2*} \cup R'_{*j_2}$ (by definitions of crossing and exceptionality). [**Step 5.**]{} Assembling the proof of Lemma \[lem:dense\]. It is clear that $\lim\limits_{n \to \infty} p_2(n) = 0$. Hence, given $p > 0$, one can choose $q > 0$ and $n_0 \in \mathbb N$ such that the inequality \[eq:ineq\_pq\] holds for every $n > n_0$. We will show that the statement of Lemma \[lem:dense\] holds with $q_0 = q$ and $L_0 = 20n_0K$. Let a hard-disk model $\eta = \eta^{(s)}(Q_L, \zeta)$ satisfy the conditions of Lemma \[lem:dense\]. If assertion (LC1) of the Large Circuit Lemma holds, there is nothing to prove. Therefore we assume that (LC1) is false. 
For an arbitrary configuration $\xi \in \Omega(\mathbb R^2)$ define $$\begin{gathered} I_{left}(\xi) = \biggl\{ i \in \left\{ -n + 1, -n + 2, \ldots, -n + \left\lfloor \frac{n}{10} - 4 \right\rfloor \right\} : \\ \text{$R'_{i*}$ is $(\varepsilon, \nu)$-crossed by $\xi$} \biggr\}.\end{gathered}$$ By Proposition \[prop:alternative\], we have $$\label{eq:left_crossings} \Pr(\# I_{left}(\eta^{(s)}(Q_L, \zeta)) > 8\nu) > 1 - \frac{p}{4}.$$ Let us define $I_{right}(\xi), I_{lower}(\xi)$ and $I_{upper}(\xi)$ in a similar way to $I_{left}(\xi)$. Then inequalities similar to \[eq:left\_crossings\] apply. The union bound for these inequalities yields $$\Pr\bigl(\min (\# I_{left}(\eta), \# I_{right}(\eta), \# I_{lower}(\eta), \# I_{upper}(\eta)) > 8\nu\bigr) > 1 - p.$$ By Proposition \[prop:circuit\], assertion (LC2) of the Large Circuit Lemma follows. The case of small $s$ --------------------- The goal of this subsection is to prove Lemma \[lem:sparse\]. We start with a simple observation. \[prop:pckdensity\_finite\] Let $L > 0$. Then there exists a configuration $\xi \in \Omega(\mathbb R^2)$ such that $$\# (\xi \cap Q_L) \geq \frac{|Q_L|}{2\sqrt{3}}.$$ Assume the contrary. Then, since $\# (\xi \cap Q_L)$ is an integer, one necessarily has $$\# (\xi \cap Q_L) \leq |Q_L|\left( \frac{1}{2\sqrt{3}} - \delta \right)$$ for every $\xi \in \Omega(\mathbb R^2)$ and some constant $\delta > 0$. Therefore the inequality $$\# (\xi \cap (Q_L + t)) \leq |Q_L|\left( \frac{1}{2\sqrt{3}} - \delta \right)$$ holds for a fixed configuration $\xi$ and an arbitrary translation $t$. This immediately implies $$\alpha(2) = \limsup\limits_{M \to \infty} \sup\limits_{\xi \in \Omega(\mathbb R^2)} \frac{\# (\xi \cap Q_M)}{|Q_M|} \leq \frac{1}{2\sqrt{3}} - \delta.$$ But it is well-known (see [@FT]) that $\alpha(2) = \frac{1}{2\sqrt{3}}$. A contradiction finishes the proof. We proceed to the proof of Lemma \[lem:sparse\]. It is clear that there exist constants $C > 0$ and $L'_0 > 10$ such that the inequality $$\frac{|Q_{L - 7}|}{2\sqrt{3}} \geq \frac{|Q_L|}{2\sqrt{3}} - CL$$ holds for every $L > L'_0$. Hence, by Proposition \[prop:pckdensity\_finite\], there is a set $\xi \in \Omega(\mathbb R^2)$ such that $$\# (\xi \cap Q_{L - 7}) \geq \frac{|Q_L|}{2\sqrt{3}} - CL.$$ With no loss of generality, one can assume that $\xi \subset Q_{L - 7}$. Let $$\xi = \{ x_1, x_2, \ldots, x_k \},$$ where $k \geq \frac{|Q_L|}{2\sqrt{3}} - CL$. Let $0 \leq s \leq k$ be an integer and let $\zeta \in \Omega(\mathbb R^2)$ be arbitrary boundary conditions. Consider a subset $E \subseteq \Omega^{(s)}(Q_L, \zeta)$ defined as follows: $$E = \biggl\{(\zeta \setminus Q_L) \cup \{y_1, y_2, \ldots, y_s\} : \left \| y_i - \frac{L - 6}{L - 7}x_i \right\| < \frac{1}{2(L - 7)} \biggr\}.$$ Of course, $\inf\limits_{\zeta \in \Omega(\mathbb R^2)}\Pr( \eta^{(s)}(Q_L, \zeta) \in E ) > 0$. In addition, there is a ball $B_3(z) \subset (Q_L \setminus Q_{L - 6})$. Since $\varepsilon < \varepsilon_0 \leq 1$, one concludes that $$B_{2 + \varepsilon}(z) \subset Q_L \; \text{and} \; B_{2 + \varepsilon}(z) \cap \xi' = \varnothing \quad \text{whenever $\xi' \in E$.}$$ This finishes the proof of the lemma. Proof of the Main Theorem {#sec:main_proof} ========================= We proceed in three steps. First we transfer our results concerning the uniform model to the Poisson model. 
Then we show that, with high probability, there exists a chain of points $t_1, t_2, \ldots, t_m \in L \cdot \mathbb Z^2$, such that $\| t_{i + 1} - t_i \| = L$, $t_1 \in Q_M$, $t_m \notin Q_{M'}$ and the configuration $\eta^{[\lambda]}(Q_{M' + 2L}, \zeta)$ has a large circuit in each of the translated boxes $Q_L + t_i$. Finally, we prove that the annulus crossing indeed occurs whenever such a chain exists. [**Step 1.**]{} “Poissonization” of the Large Circuit Lemma. The key statement of this step is Lemma \[lem:pois\_to\_unif\] below. In view of possible generalizations it is stated for arbitrary dimension $d$. The reader might find it useful to recall the notion of a configuration admitting an empty $\varepsilon$-space in a bounded open domain $D$ (Definition \[def:empty\]). Namely, $\xi$ admits an empty $\varepsilon$-space in $D$ if there exists an $\varepsilon$-ball $B_{\varepsilon}(w) \subseteq D$ such that $\operatorname{dist}(\xi, B_{\varepsilon}(w)) \geq 2$. \[lem:pois\_to\_unif\] Let the dimension $d \geq 1$ be fixed and the numbers $p, q, \varepsilon, L > 0$ be given. Then there exists $\lambda_0 > 0$ such that the following holds. If $E \subseteq \Omega(\mathbb R^d)$ is measurable and a Poisson hard-disk model $\eta^{[\lambda]}(Q_L, \zeta)$ satisfies the conditions $$\label{eq:pois_to_unif:pois} \begin{aligned} & \lambda > \lambda_0, \\ & \Pr(\eta^{[\lambda]}(Q_L, \zeta) \in E) \leq 1 - p, \end{aligned}$$ then there exists a uniform hard-disk model $\eta^{(s)}(Q_L, \zeta)$ (with the same boundary conditions $\zeta$) such that the two inequalities below hold simultaneously: $$\begin{aligned} & \Pr(\eta^{(s)}(Q_L, \zeta) \in E) \leq 1 - \frac{p}{2} \label{eq:pois_to_unif:p} \\ & \Pr \left( \text{$\eta^{(s)}(Q_L, \zeta)$ admits an empty $\varepsilon$-space in $Q_L$} \right) \leq q. \label{eq:pois_to_unif:q}\end{aligned}$$ It is clear that there exists a non-negative integer $s_0 = s_0(\zeta)$ such that the uniform hard-disk model $\eta^{(s)}(Q_L, \zeta)$ is well-defined for $s \in \{ 0, 1, \ldots, s_0 \}$ and undefined for $s > s_0$. If $\xi \in \Omega^{(s)}(Q_L, \zeta)$, then $$\{ B_1(x) : x \in \xi \cap Q_L \}$$ is a packing of balls in $Q_{L + 1}$, and therefore $$s = \# (\xi \cap Q_L) \leq \frac{|Q_{L + 1}|}{\beta},$$ where $\beta = |B_1(\mathbf 0)|$. Thus $s_0 \leq \frac{|Q_{L + 1}|}{\beta}$. We will show that the choice $$\label{eq:lambda_0} \lambda_0 = \frac{2 |Q_{L + 1}|}{\beta^2 \varepsilon^d pq}$$ is sufficient. Let $$S_1(\zeta) = \{ s \in \{ 0, 1, \ldots, s_0(\zeta) \} : \text{\eqref{eq:pois_to_unif:p} is false} \},$$ $$S_2(\zeta) = \{ s \in \{ 0, 1, \ldots, s_0(\zeta) \} : \text{\eqref{eq:pois_to_unif:q} is false} \}.$$ Assume, for a contradiction, that a Poisson hard-disk model $\eta^{[\lambda]}(Q_L, \zeta)$ satisfies \[eq:pois\_to\_unif:pois\] and $$S_1(\zeta) \cup S_2(\zeta) = \{ 0, 1, \ldots, s_0(\zeta) \}.$$ With no loss of generality suppose $\zeta \cap Q_L = \varnothing$, since the replacement $\zeta \mapsto \zeta \setminus Q_L$ has no effect on any of the relevant models. For the rest of the proof we will use the shortened notation $\eta$ for the Poisson model $\eta^{[\lambda]}(Q_L, \zeta)$. A Poisson hard-disk model is known to be a weighted mixture of uniform hard-disk models. 
The weight assigned to a uniform model $\eta^{(s)}(Q_L, \zeta)$ ($s \in \{ 0, 1, \ldots, s_0(\zeta) \}$) equals $\Pr(\# (\eta \cap Q_L) = s)$ and satisfies the following expression: $$\Pr(\# (\eta \cap Q_L) = s) = \frac{\frac{\lambda^s}{s!} A_s(\zeta)}{\sum\limits_{i = 0}^{s_0(\zeta)}\frac{\lambda^i}{i!} A_i(\zeta)},$$ where $$A_s(\zeta) = \int\limits_{(Q_L)^s} \mathbbm{1}(\{x_1, x_2, \ldots, x_s \} \cup (\zeta \setminus Q_L) \in \Omega(\mathbb R^d))\, dx_1 dx_2 \ldots dx_s.$$ (See, for instance, [@Ar Section 2].) Consider an arbitrary integer $s \in S_2(\zeta)$. By definition of $S_2(\zeta)$, the space $\Omega^{(s)}(Q_L, \zeta)$ contains configurations admitting an empty $\varepsilon$-space in $Q_L$; each such configuration can be extended to a larger configuration by adding one point from $Q_L$. Therefore $s + 1 \leq s_0$. Moreover, if $$F^{empty}(\zeta) = \{ \xi \in \Omega(\mathbb R^d) : \text{$\xi \setminus Q_L = \zeta$ and $\xi$ admits an empty $\varepsilon$-space in $Q_L$} \},$$ then $$\begin{gathered} A_{s + 1}(\zeta) = \int\limits_{(Q_L)^{s + 1}} \mathbbm{1}(\{x_1, x_2, \ldots, x_{s + 1} \} \cup \zeta \in \Omega(\mathbb R^d))\, dx_1 dx_2 \ldots dx_{s + 1} = \\ \int\limits_{(Q_L)^s} dx_1 dx_2 \ldots dx_s \Biggl[ \mathbbm{1}(\{x_1, x_2, \ldots, x_s \} \cup \zeta \in \Omega(\mathbb R^d)) \times \\ \int\limits_{Q_L} \mathbbm{1}(\operatorname{dist}(x_{s + 1}, \{x_1, x_2, \ldots, x_s \} \cup \zeta) \geq 2)\, dx_{s + 1} \Biggr] \geq \\ \int\limits_{(Q_L)^s} dx_1 dx_2 \ldots dx_s \Biggl[ \mathbbm{1}(\{x_1, x_2, \ldots, x_s \} \cup \zeta \in F^{empty}(\zeta)) \times \\ \int\limits_{Q_L} \mathbbm{1}(\operatorname{dist}(x_{s + 1}, \{x_1, x_2, \ldots, x_s \} \cup \zeta) \geq 2)\, dx_{s + 1} \Biggr] \geq \\ \beta \varepsilon^d \int\limits_{(Q_L)^s} \mathbbm{1}(\{x_1, x_2, \ldots, x_s \} \cup \zeta \in F^{empty}(\zeta)) \, dx_1 dx_2 \ldots dx_s \geq \beta \varepsilon^d q A_s(\zeta).\end{gathered}$$ Therefore $$\Pr(\# (\eta \cap Q_L) = s + 1 ) \geq \frac{\lambda \beta \varepsilon^d q}{s + 1} \Pr(\# (\eta \cap Q_L) = s) \geq \frac{\lambda \beta \varepsilon^d q}{s_0} \Pr(\# (\eta \cap Q_L) = s).$$ Thus \[eq:lambda\_0\] and the assumption $\lambda > \lambda_0$ imply $$\Pr(\# (\eta \cap Q_L) \in S_2(\zeta) ) \leq \frac{s_0}{\lambda \beta \varepsilon^d q} \leq \frac{1}{\lambda} \cdot \frac{|Q_{L + 1}|}{\beta^2 \varepsilon^d q} \leq \frac{p}{2}.$$ From the assumption $S_1(\zeta) \cup S_2(\zeta) = \{ 0, 1, \ldots, s_0(\zeta) \}$ we conclude $$\Pr(\# (\eta \cap Q_L) \in S_1(\zeta) ) \geq 1 - \frac{p}{2}.$$ Consequently, $$\begin{aligned} & \Pr(\eta \in E) \geq \\ & \sum\limits_{s \in S_1(\zeta)} \Pr \left( \eta \in E \bigm\vert \# (\eta \cap Q_L) = s \right) \cdot \Pr(\# (\eta \cap Q_L) = s) > \\ & \sum\limits_{s \in S_1(\zeta)} \left( 1 - \frac{p}{2} \right) \cdot \Pr(\# (\eta \cap Q_L) = s) = \\ & \left( 1 - \frac{p}{2} \right) \cdot \Pr(\# (\eta \cap Q_L) \in S_1(\zeta)) \geq \\ & \left( 1 - \frac{p}{2} \right)^2 > 1 - p. \end{aligned}$$ A contradiction to the second inequality of \[eq:pois\_to\_unif:pois\] finishes the proof. We apply Lemma \[lem:pois\_to\_unif\] to obtain the following Proposition \[prop:large\_circuit\_pois\] on the probability of seeing a large circuit. For the explicit definition of a large circuit the reader may refer to Definition \[def:large\_circuit\]. \[prop:large\_circuit\_pois\] Let $\varepsilon, p, \rho > 0$. Then there exist $L > 2\rho$ and $\lambda_0 > 0$ such that the inequality $$\Pr \left( \text{$\eta^{[\lambda]}(Q_L, \zeta)$ has a large circuit in $Q_L$} \right) > 1 - p$$ holds for all boundary conditions $\zeta \in \Omega(\mathbb R^2)$ and all $\lambda > \lambda_0$. 
Follows immediately from the Large Circuit Lemma and Lemma \[lem:pois\_to\_unif\]. [**Step 2.**]{} Large circuits of fixed size percolate in the sense of a certain discrete Peierls-type lemma. Again, the key statement of this step, Lemma \[lem:peierls\], is stated for an arbitrary dimension $d$, since it could be useful in possible generalizations. We start with several definitions. A point set $\{ u_1, u_2, \ldots, u_m \} \subset \mathbb Z^d$ is called [*neighbor-free*]{} if $\| u_i - u_j \|_{L_{\infty}} > 1$ for every $1 \leq i < j \leq m$. The definition above uses the $L_{\infty}$ distance. By the $L_{\infty}$ norm of a vector we mean, as usual, the largest absolute value of its coordinates. Let a dimension $d \geq 2$, a real number $p \in (0, 1)$ and a positive integer $N$ be given. Let $\tau \subseteq \mathbb Z^d$ be a random point set. Assume that for every neighbor-free set $\{ u_1, u_2, \ldots, u_m \} \subset \mathbb Z^d \cap Q_N$ the inequality $$\Pr( \{ u_1, u_2, \ldots, u_m \} \subseteq \mathbb Z^d \setminus \tau ) \leq p^m$$ holds. Then the random set $\tau$ will be called [*$p$-dense*]{} in $Q_N$. \[def:discrete\_crossing\] A point set $\tau \subseteq \mathbb Z^d$ will be called $(M, N)$-crossing, where $M, N \in \mathbb Z$ and $0 < M \leq N$, if there exists a sequence of points $t_1, t_2, \ldots, t_k \in \tau$ such that $\| t_{i + 1} - t_i \| = 1$, $t_1 \in \partial Q_M$, $t_k \in \partial Q_N$. \[lem:peierls\] Let $d \geq 2$ be a fixed dimension. Then there exist positive real numbers $c_1, C_1, C_2 > 0$ such that the following holds. If $N$ is a positive integer, $p < \min (1, 1 / C_2)$ is a positive real number and $\tau \subseteq \mathbb Z^d$ is a random point set satisfying the $p$-density property in $Q_N$, then the inequality $$\Pr( \text{$\tau$ is $(M, N)$-crossing} ) \geq 1 - C_1(C_2p)^{c_1M^{d - 1}}$$ holds for every $M \in \{1, 2, \ldots, N \}$. Denote $$E_{M, N} = \{ \upsilon \subseteq \mathbb Z^d : \text{$\upsilon$ is not $(M, N)$-crossing} \}.$$ Let us construct a map $\sigma_{M, N} : E_{M, N} \to 2^{\mathbb Z^d}$ as follows: $$\begin{gathered} \sigma_{M, N}(\upsilon) = (\mathbb Z^d \setminus Q_{N}) \cup \sigma'_{M, N}(\upsilon), \quad \text{where} \\ \sigma'_{M, N}(\upsilon) = \{ t \in \mathbb Z^d \cap Q_{N} : \text{$\exists t_1, t_2, \ldots, t_k \in \upsilon$ such that} \\ \text{$t_1 = t$, $t_k \in \partial Q_{N}$ and $\| t_{i + 1} - t_i \| = 1$} \}.\end{gathered}$$ Take an arbitrary set $\sigma \in \sigma_{M, N}(E_{M, N})$. Let us surround each point of $\sigma$ with a closed unit cube. We will consider the boundary $\mathcal C(\sigma)$ of the union of all such cubes, i.e., $$\mathcal C(\sigma) = \partial \operatorname{cl}\left( \bigcup \limits_{x \in \sigma} (Q_{0.5} + x) \right).$$ In the framework of [@Ru Section 5.3] the surface $\mathcal C(\sigma)$ can be decomposed into the so-called [*Peierls contours*]{}. Each contour is a union of [*plaquettes*]{}, where a plaquette is a facet of some unit cube $Q_{0.5} + y$, $y \in \mathbb Z^d$. A contour possesses the structure of adjacency of plaquettes. Each plaquette is declared adjacent to $2(d - 1)$ other plaquettes of the same contour, one adjacency for each $(d - 2)$-face of a plaquette. Two plaquettes adjacent by a $(d - 2)$-face share that face; the converse is not guaranteed: two plaquettes of the same contour sharing a $(d - 2)$-face may be adjacent by that face, but also may be non-adjacent. 
A point $x \in \mathbb R^d$ is said to be [*inside*]{} a contour $\mathcal C' \subseteq \mathcal C(\sigma)$ if every sufficiently generic ray from the point $x$ intersects an odd number of plaquettes of $\mathcal C'$. Then there is a unique contour $\mathcal C_0(\sigma) \subseteq \mathcal C(\sigma)$ such that the origin $\mathbf 0$ is inside $\mathcal C_0(\sigma)$. Next, choose an arbitrary contour $\mathcal C_0 \in \mathcal C_0 \circ \sigma_{M, N}(E_{M, N})$. Let $\# \mathcal C_0$ denote the [*size*]{} of the contour $\mathcal C_0$, i.e., the number of its plaquettes. Since no plaquette of $\mathcal C_0$ lies inside $Q_M$, the entire cube $Q_M$ lies in the interior of $\mathcal C_0$. Therefore the following inequality holds: $$\# \mathcal C_0 \geq c_2 M^{d - 1},$$ where $c_2$ is a positive number, depending only on $d$. We will say that an integer point $y \in \mathbb Z^d$ [*approaches $\mathcal C_0$ from inside*]{} if the cube $Q_{0.5} + y$ is inside $\mathcal C_0$ and its boundary, $\partial(Q_{0.5} + y)$, has a common plaquette with $\mathcal C_0$. Let $\chi(\mathcal C_0)$ denote the set of all integer points approaching $\mathcal C_0$ from inside. Then there exists a positive number $c_3$, depending only on $d$, such that $$\# \chi(\mathcal C_0) \geq c_3 \# \mathcal C_0.$$ Moreover, it is possible to choose a neighbor-free set $\chi'(\mathcal C_0) \subseteq \chi(\mathcal C_0)$ such that $$\# \chi'(\mathcal C_0) \geq c_4 \# \mathcal C_0,$$ where $c_4$ is a positive number, depending only on $d$. Let $\upsilon \in E_{M, N}$ and $\mathcal C_0 \circ \sigma_{M, N}(\upsilon) = \mathcal C_0$. Then, clearly, $\chi(\mathcal C_0) \subseteq \mathbb Z^d \setminus \upsilon$, and thus $\chi'(\mathcal C_0) \subseteq \mathbb Z^d \setminus \upsilon$. Therefore, by the $p$-density of a random set $\tau$ one has $$\Pr \left( \text{$\tau \in E_{M, N}$ and $\mathcal C_0 \circ \sigma_{M, N}(\tau) = \mathcal C_0$} \right) \leq \Pr(\chi'(\mathcal C_0) \subseteq \mathbb Z^d \setminus \tau) \leq p^{c_4 \# \mathcal C_0}.$$ Taking the sum over all $\mathcal C_0 \in \mathcal C_0 \circ \sigma_{M, N}(E_{M, N})$, one obtains $$\Pr \left( \tau \in E_{M, N} \right) \leq \sum\limits_{i \geq c_2 M^{d - 1}} \left( \# \{\mathcal C_0 \in \mathcal C_0 \circ \sigma_{M, N}(E_{M, N}) : \# \mathcal C_0 = i \} \cdot p^{c_4 i} \right),$$ assuming that the right-hand side converges. But $$\# \{\mathcal C_0 \in \mathcal C_0 \circ \sigma_{M, N}(E_{M, N}) : \# \mathcal C_0 = i \} \leq C_3^i$$ (see [@Ru; @LM; @BB]). Consequently, $$\Pr \left( \tau \in E_{M, N} \right) \leq \sum\limits_{i \geq c_2 M^{d - 1}} C_3^i \cdot p^{c_4i} \leq C_1(C_2p)^{c_1M^{d - 1}},$$ which finishes the proof. Now we turn to the corollary for the Poisson hard-disk model. \[prop:discrete\_crossing\] Let $\varepsilon, p, \rho > 0$. There exist $L > 2\rho$ and $\lambda_0 > 0$ such that the following holds. For every 2-dimensional Poisson hard-disk model $\eta = \eta^{[\lambda]}(Q_{L'}, \zeta)$, where $L' > 10L$ and $\lambda > \lambda_0$, and every integer $M < \frac{L'}{L} - 2$ the random integer-point set $$\label{eq:tau} \tau_{\varepsilon, L}(\eta) = \{ t \in \mathbb Z^2 : \text{$\eta - Lt$ has a large circuit in $Q_L$} \}$$ satisfies the property $$\label{eq:discrete_crossing} \Pr\left( \text{$\tau_{\varepsilon, L}(\eta)$ is $\left(M, \left\lfloor L'/L \right\rfloor - 1 \right)$-crossing} \right) \geq 1 - C_1(C_2p)^{c_1M},$$ where $c_1, C_1, C_2$ are positive absolute constants. 
Choose $L$ and $\lambda_0$ as in Proposition \[prop:large\_circuit\_pois\]. By Lemma \[lem:peierls\], it is sufficient to prove the $p$-density of the set $\tau_{\varepsilon, L}(\eta)$. But this is an immediate corollary of Lemma \[lem:finite\_gibbs\] for $D_i = Q_L + L u_i$ if $\{ u_1, u_2, \ldots, u_m \} \subseteq \mathbb Z^2$ is the relevant neighbor-free set. [**Step 3.**]{} The event on the left-hand side of \[eq:discrete\_crossing\] implies the annulus crossing. \[prop:dc\_to\_ac\] Let a real number $L > 0$ and two positive integers $M < N$ be given. Assume that a configuration $\xi \in \Omega(\mathbb R^2)$ is such that the set $\tau_{\varepsilon, L}(\xi)$, defined by \[eq:tau\], is $(M, N)$-crossing. Then $$\xi \in \operatorname{AC}(\varepsilon, ML, NL),$$ i.e., there is a connected component of the graph $G_{\varepsilon}(\xi)$ with a vertex in $Q_{ML}$ and another vertex in $\mathbb R^2 \setminus Q_{NL}$. Let $t_1, t_2, \ldots, t_k \in \tau_{\varepsilon, L}(\xi)$ be the sequence of integer points as in Definition \[def:discrete\_crossing\]. The definition of a large circuit can be naturally extended from the one for the box $Q_L$ to every translate $Q_L + t$ of this box. Namely, $\xi$ has a large circuit in $Q_L + t$ if $\xi - t$ has a large circuit in $Q_L$. If $x_1 - t, x_2 - t, \ldots, x_m - t, x_1 - t$ ($x_i \in \xi$) is a large circuit for $\xi - t$ in the box $Q_L$ then we call $x_1, x_2, \ldots, x_m, x_1$ a large circuit for $\xi$ in the box $Q_L + t$. Correspondingly, by the definition \[eq:tau\] of $\tau_{\varepsilon, L}(\xi)$, for every $i = 1, 2, \ldots, k$ the configuration $\xi$ has a large circuit $\mathfrak c_i \subset G_{\varepsilon}(\xi)$ in the square box $Q_L + Lt_i$. Since $\|t_i - t_{i + 1} \| = 1$, the circuits $\mathfrak c_i$ and $\mathfrak c_{i + 1}$ intersect in the following sense: one can choose an edge $(x, y)$ of $\mathfrak c_i$ and an edge $(z, w)$ of $\mathfrak c_{i + 1}$ such that the segments $[x, y]$ and $[z, w]$ have a common point. Thus $$\| x - z \| + \| y - w \| \leq \| x - y \| + \| z - w \| \leq 2(2 + \varepsilon).$$ Therefore $\min ( \| x - z\| , \| y - w \| ) \leq 2 + \varepsilon$, and, consequently, $\mathfrak c_i$ and $\mathfrak c_{i + 1}$ belong to the same connected component of $G_{\varepsilon}(\xi)$. As a conclusion, $\mathfrak c_1$ and $\mathfrak c_k$ are in the same connected component of $G_{\varepsilon}(\xi)$. But $\mathfrak c_1$ has a vertex in $Q_{ML}$, and $\mathfrak c_k$ has a vertex in $\mathbb R^2 \setminus Q_{NL}$. Hence the proposition follows. We are ready to finish the proof of the Main Theorem. Choose $L$ and $\lambda_0$ as in Proposition \[prop:large\_circuit\_pois\]. Then, by Propositions \[prop:discrete\_crossing\] and \[prop:dc\_to\_ac\], one has $$\Pr\left( \eta^{[\lambda]}(Q_{L'}, \zeta) \in \operatorname{AC}(\varepsilon, ML, NL) \right) \geq 1 - C_1(C_2p)^{c_1M},$$ where $N = \lfloor L' / L \rfloor - 1$, $0 < M < N$. Since $NL > L' - 2L$, one concludes $$\Pr\left( \eta^{[\lambda]}(Q_{L'}, \zeta) \in \operatorname{AC}(\varepsilon, L_1, L' - 2L) \right) \geq 1 - C_1(C_2p)^{c_1\lfloor L_1 / L \rfloor}.$$ Hence the Main Theorem follows. Corollaries for Gibbs distributions {#sec:gibbs} =================================== In this section we prove some corollaries of the Main Theorem related to the notion of a Gibbs distribution. These results are direct counterparts of [@Ar Theorems 2, 3]. We start with a standard definition of a Gibbs distribution. Let $\lambda > 0$. 
A random configuration $\eta \in \Omega(\mathbb R^2)$ is said to comply with a [*Gibbs distribution*]{} with intensity $\lambda$ if every measurable function $f : \Omega(\mathbb R^2) \to [0, \infty)$ and every $L > 0$ satisfy $$\label{eq:def:1} \operatorname{E} \bigl[ f(\eta) \bigr] = \operatorname{E} \biggl[ \operatorname{E} \bigl[ f(\eta^{[\lambda]}(Q_L, \eta)) \bigr] \biggr].$$ The identity \[eq:def:1\] is a close analogue of the one in Lemma \[lem:finite\_gibbs\], Poisson case. For this reason, a Gibbs distribution can be considered as a generalization of a hard-disk model. Gibbs measures are known to exist for every intensity $\lambda > 0$, while their uniqueness remains an open problem. We are interested in a standard percolation question: if a random configuration $\eta$ is sampled from a Gibbs distribution with intensity $\lambda$, is it true that with high probability the graph $G_{\varepsilon}(\eta)$ has an infinite connected component? The following proposition provides a convenient notation for dealing with this question. Let $$\label{eq:nested} A(\varepsilon, M) = \bigcap\limits_{M' > M} \operatorname{AC}(\varepsilon, M, M').$$ Then for every $\xi \in A(\varepsilon, M)$ there exists a point $x \in \xi \cap Q_M$, which is a vertex of an infinite connected component of the graph $G_{\varepsilon}(\xi)$. We argue by contradiction. Assume $\xi \in A(\varepsilon, M)$ is a counterexample to the proposition. Then for every $x \in \xi \cap Q_M$ there exists $M'(x)$ such that all vertices of the connected component of $x$ in $G_{\varepsilon}(\xi)$ belong to $Q_{M'(x)}$. The set $\xi \cap Q_M$ is finite, therefore the value $$\Hat{M}' = \sup\limits_{x \in \xi \cap Q_M} M'(x)$$ is finite. But then $\xi \notin \operatorname{AC}(\varepsilon, M, \Hat{M}')$, and hence $\xi \notin A(\varepsilon, M)$. A contradiction finishes the proof. Now we turn to the main results of this section, Theorems \[thm:2\] and \[thm:3\]. \[thm:2\] Let $\varepsilon > 0$. Then there exist positive numbers $\lambda_0$, $c$ and $C$ such that the inequality $$\Pr (\eta \in A(\varepsilon, M)) \geq 1 - C \exp(-cM)$$ holds for every $M > 0$, every $\lambda > \lambda_0$ and a random configuration $\eta$ sampled from any Gibbs distribution on $\Omega(\mathbb R^2)$ with intensity $\lambda$. Since the formula \[eq:nested\] is a representation of $A(\varepsilon, M)$ as the intersection of nested families of configurations (i.e., $\operatorname{AC}(\varepsilon, M, M') \subseteq \operatorname{AC}(\varepsilon, M, M'')$ whenever $M < M'' < M'$), it is sufficient to prove that $$\Pr (\eta \in \operatorname{AC}(\varepsilon, M, M')) \geq 1 - C \exp(-cM)$$ for every $M' > M$. Consider the indicator function $f(\xi) = \mathbbm{1}_{\operatorname{AC}(\varepsilon, M, M')}(\xi)$. Applying \[eq:def:1\] to the function $f$ yields $$\begin{gathered} \Pr (\eta \in \operatorname{AC}(\varepsilon, M, M')) = \operatorname{E} \bigl[ \mathbbm{1}_{\operatorname{AC}(\varepsilon, M, M')}(\eta) \bigr] = \\ \operatorname{E} \biggl[ \operatorname{E} \bigl[ \mathbbm{1}_{\operatorname{AC}(\varepsilon, M, M')}(\eta^{[\lambda]}(Q_{M' + L_1}, \eta)) \bigr] \biggr] = \\ \operatorname{E} \biggl[ \Pr \bigl[ \eta^{[\lambda]}(Q_{M' + L_1}, \eta) \in \operatorname{AC}(\varepsilon, M, M') \bigr] \biggr] \geq 1 - C \exp(-cM),\end{gathered}$$ where the last inequality immediately follows from the Main Theorem. On the other hand, the inclusion $\operatorname{AC}(\varepsilon, M, M') \subseteq \operatorname{AC}(\varepsilon, M, M'')$ holds whenever $M < M'' < M'$. 
Therefore $$\Pr (\eta \in A(\varepsilon, M)) = \inf\limits_{M' > M} \Pr (\eta \in \operatorname{AC}(\varepsilon, M, M')) \geq 1 - C \exp(-cM).$$ \[thm:3\] Let $\varepsilon > 0$. Then there exists $\lambda_0 > 0$ such that the identity $$\Pr(\text{$G_{\varepsilon}(\eta)$ has an infinite connected component}) = 1$$ holds for every $\lambda > \lambda_0$ and a random configuration $\eta$ sampled from any Gibbs distribution on $\Omega(\mathbb R^2)$ with intensity $\lambda$. Denote $$A(\varepsilon) = \{ \xi \in \Omega(\mathbb R^2) : \text{$G_{\varepsilon}(\xi)$ has an infinite connected component} \}.$$ Then $$A(\varepsilon) = \bigcup\limits_{M > 0} A(\varepsilon, M).$$ Since $A(\varepsilon, M) \subseteq A(\varepsilon, M')$ whenever $0 < M < M'$, we have $$\Pr (\eta \in A(\varepsilon)) = \sup\limits_{M > 0} \Pr (\eta \in A(\varepsilon, M)) = 1,$$ where the last identity follows immediately from Theorem \[thm:2\]. Acknowledgements {#acknowledgements .unnumbered} ================ The author is thankful to A. Sodin, R. Peled and N. Chandgotia for discussions that helped improve earlier versions of the paper. [99]{} D. Aristoff, Percolation of hard disks. J. Appl. Probab. 51:1 (2014), 235–246. P.N. Balister, B. Bollobás, Counting regions with bounded surface area. Communications in Mathematical Physics, 273:2 (2007), 305–315. X. Blanc, M. Lewin, The crystallization conjecture: a review. Preprint (2015), arXiv:1504.01153. L. Bowen, R. Lyons, C. Radin, and P. Winkler, A Solidification Phenomenon in Random Packings. SIAM J. Math. Anal., 38:4 (2006), 1075–1089. H. Cohn, N. Elkies, New upper bounds on sphere packings I. Annals of Math. 157 (2003), 689–714. L. Fejes Tóth, Lagerungen in der Ebene, auf der Kugel und im Raum. Springer, New York, 1953. T.C. Hales, A proof of the Kepler conjecture. Annals of Mathematics, 162 (2005), 1065–1185. T.E. Harris, A lower bound for the critical probability in a certain percolation process. Mathematical Proceedings of the Cambridge Philosophical Society, 56:1 (1960), 13–20. S. Jansen, Continuum percolation for Gibbsian point processes with attractive interactions. Electronic Journal of Probability 21 (2016). J.F.C. Kingman, Poisson processes. John Wiley & Sons, Ltd, 1993. J.L. Lebowitz, A.E. Mazel, Improved Peierls argument for high-dimensional Ising models. Journal of Statistical Physics 90:3 (1998), 1051–1059. S. Mase, J. Møller, D. Stoyan, R.P. Waagepetersen, G. Döge, Packing Densities and Simulated Tempering for Hard Core Gibbs Point Processes. Annals of the Institute of Statistical Mathematics, 53:4 (2001), 661–680. T. Richthammer, Translation-invariance of two-dimensional Gibbsian point processes. Commun. Math. Phys., 274 (2007), 81–122. C.A. Rogers, The packing of equal spheres. Proceedings of the London Mathematical Society 3:4 (1958), 609–620. D. Ruelle, Statistical mechanics: Rigorous results. World Scientific, 1999. K. Stucki, Continuum percolation for Gibbs point processes. Electronic Communications in Probability 18 (2013). A. Thue, Om nogle geometrisk-taltheoretiske Theoremer. Forhdl. skandinaviske naturforskeres (1892), 352–353. M. Viazovska, The sphere packing problem in dimension 8. Preprint (2016), arXiv:1603.04246. H. Cohn, A. Kumar, S.D. Miller, D. Radchenko, M. Viazovska, The sphere packing problem in dimension 24. Preprint (2016), arXiv:1603.06518. [^1]: Supported by ERC Starting Grant 639305 and ERC Starting Grant 678520.
--- abstract: 'We formulate self-consistency equations for the distribution of links in spin models and of plaquettes in gauge theories. This improves upon known mean-field, mean-link, and mean-plaquette approximations in that we self-consistently determine all moments of the considered variable instead of just the first. We give examples in both Abelian and non-Abelian cases.' author: - Oscar Akerlund - Philippe de Forcrand bibliography: - '\\jobname.bib' title: 'Mean distribution approach to spin and gauge theories' --- Introduction ============ It is always of interest to think about methods that allow easy extraction of approximate results, even though the computer power available for exact simulations is growing at an ever increasing pace. Mean-field methods are often qualitatively reliable in their self-consistent determination of the long-distance physics, and have a wide range of applications, with spin models as typical examples. For a gauge theory, formulated in terms of the gauge links, however, it is questionable what a *mean link* would mean, because of the local nature of the symmetry. This can be addressed by fixing the gauge, but the mean-field solution will then in general depend on the gauge-fixing parameter. Nevertheless, Drouffe and Zuber developed techniques for a mean field treatment of general Lattice Gauge Theories in [@Drouffe:1983fv] and showed that for fixed $\beta d$, where $\beta$ is the inverse gauge coupling and $d$ the dimension, the mean-field approximation can be considered the first term in a $1/d$ expansion. They established that the mean field approximation can be thought of as a resummation of the weak coupling expansion in a particular gauge and that there is a first order transition to a strong coupling phase at a critical value of $\beta$. Since it becomes exact in the $d\to\infty$ limit, this mean field approximation can be used with some confidence in high-dimensional models [@Irges:2009bi]. The crucial problem of gauge invariance was tackled and solved by Batrouni in a series of papers [@Batrouni:1982dx; @Batrouni:1982bg], where he first changed variables from gauge-variant links to gauge-invariant plaquettes. The associated Jacobian is a product of lattice Bianchi identities, which enforce that the product of the plaquette variables around an elementary cube is the identity element. In the Abelian case this is easily understood, since each link occurs twice (in opposite directions) and cancels in this product, leaving the identity element. In the non-Abelian case the plaquettes in each cube have to be parallel transported to a common reference point in order for the cancellation to work. It is worth noting that in two dimensions there are no cubes so the Jacobian of the transformation is trivial and the new degrees of freedom completely decouple (up to global constraints). This kind of change of variables can be performed for any gauge or spin model whose variables are elements of some group. Apart from gauge theories, examples include ${\ensuremath{\mathbb{Z}}}_N$-spin models, $O(2)$- and $O(4)$-spin models and matrix-valued spin models. In spin models, the change of variables is from spins to links and the Bianchi constraint dictates that the product of the links around an elementary plaquette is the identity element. A visualization of the transformation and the Bianchi constraint for a $2d$ spin model is given in Fig. \[fig:bianchi\]. 
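
As a quick numerical illustration of this constraint (a minimal sketch, not taken from the paper; the $8\times 8$ lattice size and the random seed are arbitrary assumptions), the following Python snippet builds $U(1)$ links from random spins and confirms that the product of the four oriented links around every elementary plaquette equals the identity, exactly the situation of Fig. \[fig:bianchi\].

```python
import numpy as np

# Bianchi constraint for a 2d U(1) spin model: links built from spins
# automatically satisfy l_12 l_23 l_34 l_41 = 1 around every plaquette.
rng = np.random.default_rng(0)
Lx = Ly = 8
theta = rng.uniform(0.0, 2.0 * np.pi, size=(Lx, Ly))   # spin angles
s = np.exp(1j * theta)                                  # spins s_i in U(1)

def link(x, y, mu):
    """Oriented link l_{i, i+mu} = s_i * conj(s_{i+mu}) with periodic b.c."""
    xp, yp = (x + (mu == 0)) % Lx, (y + (mu == 1)) % Ly
    return s[x, y] * np.conj(s[xp, yp])

max_violation = 0.0
for x in range(Lx):
    for y in range(Ly):
        # product of the four oriented links around the plaquette at (x, y)
        p = (link(x, y, 0)
             * link((x + 1) % Lx, y, 1)
             * np.conj(link(x, (y + 1) % Ly, 0))
             * np.conj(link(x, y, 1)))
        max_violation = max(max_violation, abs(p - 1.0))

print("largest |plaquette product - 1| =", max_violation)   # ~1e-15
```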
![The change of variables from spins $s_i$ (*left panel*) to links $l_{ij}$ (*right panel*) that leads to the Bianchi identity $l_{12}l_{23}l_{34}l_{41} (=s_1s_2^\dagger s_2s_3^\dagger s_3s_4^\dagger s_4s_1^\dagger)=1$.[]{data-label="fig:bianchi"}]({{../figures/bianchi_c}}){width="0.6\linewidth"} Let us review the change of variables for a gauge theory [@Batrouni:1982bg]. The original variables are links. The new ones are plaquettes. Under the action of the original symmetry of the model, the new variables transform within equivalence classes and it is possible to employ a mean field analysis to determine the “mean equivalence class”. As usual we first choose a set of *live* variables, which keep their original dynamics and interact with an external bath of mean-valued fields. Interactions are generated through the Jacobian, which is a product of Bianchi identities represented by $\delta$-functions $$\label{eq:bianchi_id} \delta\left(\prod_{P\in\partial C}U_P-1\right),$$ where $P$ denotes a plaquette and $\partial C$ denotes the oriented boundary of the elementary cube $C$. The $\delta$-functions can be represented by a character expansion in which we can replace the characters at the external sites by their expectation, or mean, values. Upon truncating the number of representations, this yields a closed set of equations in the expectation values which can be solved numerically. The method can be systematically improved by increasing the number of representations used and the size of the live domain. While this method works surprisingly well, even at low truncation, it determines the expectation value of the plaquette in only a few representations. Here, we propose a method that self-consistently determines the complete distribution of the plaquettes (or links) and thus the expectation value in all representations. This is due to an exact treatment of the lattice Bianchi identities which does not rely on a character expansion. The only approximation then lies in the size of the live domain which can be systematically enlarged, as in any mean field method. It is worth noting that our method works best for small $\beta$ and low dimensions: it does not become exact in the infinite dimension limit. In this way it can be seen as complementary to the mean field approach of [@Drouffe:1983fv]. We will also see that the mean distribution approach proposed here actually works rather well for both small and large $\beta$. The paper is organized as follows. In section \[sec:method\] we describe the method in general terms and compare it to the mean field, mean link and mean plaquette methods before describing more detailed treatments of spin models and gauge theories in sections \[sec:spin\] and  \[sec:gauge\] respectively. Finally, we draw conclusions in section \[sec:conclusions\]. Method {#sec:method} ====== Mean Field Theory ----------------- Let us for completeness give a very brief reminder of standard mean field theory. Consider for definiteness a lattice model with a single type of variables $s$ which live on the lattice sites. The lattice action is assumed to be translation invariant and of the form $$\label{eq:mf_action_orig} S = -\frac{1}{2}\sum_{i,j}J_{{\lverti-j\rvert}}s^\dagger_is_j + \sum_iV(s_i),$$ where $i,j$ labels the lattice sites and $V(s)$ is some local potential. Let us now split the original lattice into a live domain $D$ and an external bath $D^c$. 
The variables $\{s_i\mid i\in D^c\}$ all take a constant “mean” value $\bar{s}$. The mean field action then becomes (up to a constant) $$\label{eq:mf_action_mf} S_\text{MF} = -\frac{1}{2}\sum_{i,j\in D}J_{\lvert i-j\rvert}s^\dagger_is_j + \sum_{i\in D}\left(V(s_i)-\sum_{j\in D^c}J_{\lvert i-j\rvert}s_i^\dagger\bar{s}\right),$$ where $\bar{s}$ is determined by the self-consistency condition that the average value of $s$ in the domain $D$ is equal to the average value in the external bath, $$\label{eq:sc_mf} \left\langle s\right\rangle = \frac{\displaystyle\int\prod_{i\in D}{\ensuremath{\mathrm{d}}}{}s_i\, s_i e^{-S_\text{MF}}} {\displaystyle\int\prod_{i\in D}{\ensuremath{\mathrm{d}}}{}s_i\, e^{-S_\text{MF}}} \overset{!}{=} \bar{s}.$$ Once $\bar{s}$ has been determined, the mean field action can be used to measure other observables local to the domain $D$.

Mean Distribution Theory
------------------------

To generalize the mean field approach we relax the condition that the fields at the live sites interact only with the mean value of the external bath. Instead, the fields in the external bath are allowed to vary and take different values distributed according to a *mean distribution*. The self-consistency condition is thus that the distribution of the variables in the live domain equals the distribution in the bath. Consider a real scalar theory for illustration purposes. Starting from the action $$\label{eq:act_scalar} S = -2\kappa\sum_{\left\langle i,j\right\rangle}\phi_i\phi_j + \sum_i V\left(\phi_i\right),$$ with nearest neighbor coupling $\kappa$ and a general on-site potential $V$, we expand the field $\phi\equiv\delta\phi+\bar{\phi}$ around its mean value $\bar{\phi}$ and integrate out all the fields except the field at the origin $\phi_0=\bar{\phi}+\delta\phi_0$ and its nearest neighbors, denoted $\phi_i$, $i=1,\ldots,z$, where $z$ is the coordination number of the lattice. The partition function can then be written $$\label{eq:Z_scalar} Z = \int{\ensuremath{\mathrm{d}}}\phi_0\,e^{-V\left(\phi_0\right)+2z\kappa\bar{\phi}\delta\phi_0}\int\prod_{i=1}^z{\ensuremath{\mathrm{d}}}\delta\phi_i\,p_J(\delta\phi_1,\ldots,\delta\phi_z) e^{2\kappa\delta\phi_0\sum_{i=1}^z\delta\phi_i},$$ where $p_J(\delta\phi_1,\ldots,\delta\phi_z)$ is a joint distribution function for the fields around the origin and absorbs everything not explicitly depending on $\delta\phi_0$ into its normalization. So far everything is exact and, given a way to compute $p_J$, we could obtain all local observables, for example ${\left\langle\phi_0^n\right\rangle}$. Now, $p_J$ is in general not known, so we will have to make some ansatz and determine the best distribution compatible with this ansatz.
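As a concrete reference point before generalizing, consider the simplest instance of eq. \[eq:sc_mf\]: a single live Ising spin ($s=\pm1$, $V=0$) with nearest-neighbour coupling $J_{\lvert i-j\rvert}=\beta$ between neighbours, for which the condition reduces to the familiar $\bar{s}=\tanh(2d\beta\,\bar{s})$. The following minimal sketch (illustrative only, not part of the original paper) solves it by fixed-point iteration:

```python
import numpy as np

def mean_field_ising(beta, d, tol=1e-12, max_iter=10_000):
    """Solve sbar = tanh(2*d*beta*sbar) by fixed-point iteration (single live Ising spin)."""
    sbar = 1.0                      # start on the ordered side to pick up the nontrivial root
    for _ in range(max_iter):
        new = np.tanh(2 * d * beta * sbar)
        if abs(new - sbar) < tol:
            break
        sbar = new
    return sbar

# The magnetisation switches on at the mean-field critical coupling beta_c = 1/(2d):
for beta in (0.2, 0.3, 0.5):
    print(f"d=2, beta={beta:.2f}:  sbar = {mean_field_ising(beta, d=2):.4f}")
```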
In standard mean field theory the ansatz is $p_J(\delta\phi_1,\ldots,\delta\phi_z)=\prod_{i=1}^z\delta(\delta\phi_i)$ and only $\bar{\phi}$ is left to be determined as explained above. In the mean distribution approach we will assume that the distribution is a *product* distribution $p_J(\delta\phi_1,\ldots,\delta\phi_z)=\prod_{i=1}^zp(\delta\phi_i)$ and determine $p$ self-consistently to be equal to the distribution of $\delta\phi_0$, i.e. $$\label{eq:dist_scalar} p(\delta\phi_0) = \frac{1}{Z}e^{-V\left(\delta\phi_0+\bar{\phi}\right)+2z\kappa\bar{\phi}\delta\phi_0}\left(\left\langle e^{2\kappa\delta\phi_0\delta\phi_i}\right\rangle_{p(\delta\phi_i)}\right)^z,$$ where $\left\langle f(\phi)\right\rangle_{p(\phi)}=\int{\ensuremath{\mathrm{d}}}\phi\,p(\phi)f(\phi)$. The mean value $\bar{\phi}$ has to be adjusted such that the distribution $p$ has zero mean. After $p$ and $\bar{\phi}$ have been determined, any observable, even observables extending outside the live domain, can be extracted under the assumption that every variable is distributed according to $p$. Local observables are given by simple expectation values with respect to the distribution $p$.

This strategy can also be applied to spin and gauge models, taking as variables the links and plaquettes respectively, as discussed in the introduction. For a gauge theory, the starting point is the partition function in the plaquette formulation $$\label{eq:z_plaq} Z = \int\prod_P{\ensuremath{\mathrm{d}}}U_P\,\prod_C\delta\left(\;\,\prod_{\mathclap{P\in\partial C}}U_P-1\right)e^{-S[U_P]},$$ where $S[U_P]$ is any action which is a sum over the individual plaquettes, for example the Wilson action $S[U_P] = \beta\sum_P(1-\mathrm{ReTr}U_P)$, or a topological action [@Bietenholz:2010xg; @Akerlund:2015zha] where the action is constant but the traces of the plaquette variables are limited to a compact region around the identity. The difference from the mean plaquette method is that it is not assumed that the external plaquettes take some average value, but rather that they are distributed according to a mean distribution. More specifically, we assume that there exists a mean distribution for the real part of the trace of the plaquettes and that the other degrees of freedom are uniformly distributed with respect to the Haar measure. Such a distribution must exist and it can be measured for example by Monte Carlo simulations. For definiteness let us consider compact $U(1)$ gauge theory with a single plaquette $P_0$ as the live domain. The plaquette variables $U_P = e^{i\theta_P}\in U(1)$ can be represented with a single real parameter $\theta_P\in[0,2\pi]$ and the real part of the trace is $\cos\theta_P$.
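Returning briefly to the scalar-field illustration, eq. \[eq:dist_scalar\] can be solved directly on a grid. The following minimal sketch uses an illustrative symmetric potential (so $\bar{\phi}$ stays at zero) and a simple relaxation update for $\bar{\phi}$; parameters and tolerances are arbitrary choices, not those of the paper:

```python
import numpy as np

# Mean-distribution self-consistency for a real scalar field on a grid.
kappa, z = 0.1, 4                                      # z = 2d is the coordination number
V = lambda phi: 0.5 * phi**2 + 0.25 * phi**4           # illustrative on-site potential

x = np.linspace(-4.0, 4.0, 401)                        # grid of delta-phi values
dx = x[1] - x[0]
p = np.exp(-V(x)); p /= p.sum() * dx                   # initial guess for p(delta-phi)
phibar = 0.0
kernel = np.exp(2.0 * kappa * np.outer(x, x))          # exp(2*kappa*dphi0*dphi_i)

for it in range(200):
    m = kernel @ p * dx                                # <exp(2 kappa dphi0 dphi)>_p
    p_new = np.exp(-V(x + phibar) + 2 * z * kappa * phibar * x) * m**z
    p_new /= p_new.sum() * dx
    phibar += (p_new * x).sum() * dx                   # relax phibar so that <dphi> -> 0
    if np.max(np.abs(p_new - p)) < 1e-12:
        break
    p = p_new

print(f"stopped after {it} iterations: phibar = {phibar:.4f}, "
      f"<dphi^2> = {(p * x**2).sum() * dx:.4f}")
```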
Our goal is to obtain an approximation to the distribution $p\left(\cos\theta_{P_0}\right)$, or equivalently $p\left(\theta_{P_0}\right) = Z\left(\theta_{P_0}\right)/Z$, where $$\begin{aligned} Z(\theta_{P_0}) &= e^{-S[U_{P_0}]}\int\!\!\prod_{P\neq P_0}\!\!{\ensuremath{\mathrm{d}}}U_P\,e^{-S[U_P]}\prod_C \delta\left(\;\;\prod_{\mathclap{P'\in\partial C}}U_P'-1\right),\label{eq:Z_th}\\ Z &= \int\!\!{\ensuremath{\mathrm{d}}}U_{P_0}\,Z(\theta_{P_0}).\end{aligned}$$ To obtain a finite number of integrals we now make the approximation that all plaquettes which do not share a cube with $P_0$ are independently distributed according to some distribution $p(\theta)$. Clearly this neglects some correlations among the plaquettes but this can be improved by taking a larger live domain. Again, let $C$ denote an elementary cube with boundary $\partial C$ and $P$ denote a plaquette. We define $$\begin{aligned} U_C &\equiv \prod_{P\in\partial C}U_P,\\ \mathcal{C}_0 &\equiv \{C\mid P_0\in\partial C\},\\ \mathcal{P}_\mathcal{C} &\equiv \{P\mid \exists C \in \mathcal{C}_0:\,P\in\partial C,\,P\neq P_0\},\end{aligned}$$ i.e. $\mathcal{C}_0$ is the set of all cubes containing $P_0$, and $\mathcal{P}_\mathcal{C}$ is the set of plaquettes, excluding $P_0$, making up $\mathcal{C}_0$. The sought distribution is then determined by the self-consistency equation $$\label{eq:prob_th} p\left(\theta_{P_0}\right) = \frac{e^{-S\left[U_{P_0}\right]}\displaystyle\int\displaystyle\prod_{\mathclap{P\in \mathcal{P}_\mathcal{C}}}{\ensuremath{\mathrm{d}}}U_P\,p\left(\theta_P\right)\displaystyle\prod_{\mathclap{C\in \mathcal{C}_0}}\delta\left(U_C-1\right)} {\displaystyle\int\!\!{\ensuremath{\mathrm{d}}}U_{P_0}\,e^{-S\left[U_{P_0}\right]}\displaystyle\int\displaystyle\prod_{\mathclap{P\in \mathcal{P}_\mathcal{C}}}{\ensuremath{\mathrm{d}}}U_P\,p\left(\theta_P\right)\displaystyle\prod_{\mathclap{C\in\mathcal{C}_0}}\delta\left(U_C-1\right)}.$$ This self-consistency equation is solved by iterative substitution: given an initial guess for the distribution $p^{(0)}\left(\theta_{P_0}\right)$, it is a straightforward task to integrate out the external plaquettes and obtain the next iterate $p^{(1)}\left(\theta_{P_0}\right)$ from eq. , and to iterate the procedure until a fixed point is reached, i.e. $p^{(n+1)}\left(\theta_{P_0}\right)=p^{(n)}\left(\theta_{P_0}\right)$. This is a functional equation, which is solved numerically by replacing the distribution $p$ by a set of values on a fine grid in $\theta_P$ or by a truncated expansion in a functional basis. In this paper we have chosen to discretize the distribution on a grid. As mentioned above, this can be done in a completely analogous way also for spin models and for different types of actions. In Fig. \[fig:4d\_u1\_rest\_dist\] we compare the distributions of plaquettes in the $4d$ $U(1)$ lattice gauge theory with the Wilson action close to the critical coupling (*left panel*) and with the topological action at the critical restriction $\delta_c$ (*right panel*), obtained by Monte Carlo on an $8^4$ lattice and by the mean distribution approach with the normalized action $e^{\beta\cos\theta_P}$. Below we give more details for a selection of models along with numerical results. 
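In code, the iterative substitution can be organised as a small driver that discretises $p$ on a uniform grid in $\theta$ and iterates until the change falls below a tolerance; the model-specific right-hand side of eq. \[eq:prob\_th\] (or its spin-model analogue) would be supplied as `update`. The function and parameter names below are illustrative, not the authors' implementation; concrete update rules for the spin and gauge cases are sketched in the following sections.

```python
import numpy as np

def solve_distribution(update, n=512, tol=1e-10, max_iter=1000):
    """Iterative substitution p_{k+1} = update(p_k, theta) on a uniform angle grid,
    stopping when the sup-norm of the change falls below tol."""
    theta = 2.0 * np.pi * np.arange(n) / n              # grid on [0, 2*pi)
    dtheta = 2.0 * np.pi / n
    p = np.full(n, 1.0 / (2.0 * np.pi))                 # initial guess: the Haar (uniform) measure
    for k in range(max_iter):
        p_new = update(p, theta)
        p_new /= p_new.sum() * dtheta                   # keep p normalised
        if np.max(np.abs(p_new - p)) < tol:             # fixed point reached
            return p_new, theta, k
        p = p_new
    return p, theta, max_iter
```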
![The distribution of plaquettes angles $p(\theta_P)$ in the $4d$ $U(1)$ lattice gauge theory with the Wilson action close to the critical coupling (*left panel*) and with the topological action at the critical restriction $\delta_c$ (*right panel*) obtained by Monte Carlo on an $8^4$ lattice and by the mean distribution approach, together with the Haar measure.[]{data-label="fig:4d_u1_rest_dist"}]({{../figures/4d_u1_beta_dist}} "fig:"){width="0.45\linewidth"} ![The distribution of plaquettes angles $p(\theta_P)$ in the $4d$ $U(1)$ lattice gauge theory with the Wilson action close to the critical coupling (*left panel*) and with the topological action at the critical restriction $\delta_c$ (*right panel*) obtained by Monte Carlo on an $8^4$ lattice and by the mean distribution approach, together with the Haar measure.[]{data-label="fig:4d_u1_rest_dist"}]({{../figures/4d_u1_rest_dist}} "fig:"){width="0.45\linewidth"} Spin models {#sec:spin} =========== We will start by applying the method to a few spin models, namely ${\ensuremath{\mathbb{Z}}}_2$, ${\ensuremath{\mathbb{Z}}}_4$ and the U(1) symmetric $XY$-model and we will explain the procedure as we go along. Afterwards, only minor adjustments are needed in order to treat gauge theories. We will derive the self-consistency equations in an unspecified number of dimensions although graphical illustrations will be given in two dimensions for obvious reasons. Let us start with an Abelian spin model with a global ${\ensuremath{\mathbb{Z}}}_N$ symmetry. The partition function is given by $$\label{eq:pf_zn} Z = \sum_{\{s\}}\exp\left(\beta\sum_{{\left\langlei,j\right\rangle}}{{\rm Re}}\, s_is_j^\dagger\right),$$ where $s_i = e^{i\frac{2\pi}{N}n_i},\,n_i\in\{1,\cdots,N\} (\in{\ensuremath{\mathbb{Z}}}_N)$. In the usual mean field approach we would self-consistently determine the mean value of $s_i$ by letting one or more live sites fluctuate in an external bath of mean valued spins. However, Batrouni [@Batrouni:1982dx; @Batrouni:1985dp] noticed that by self-consistently determining the mean value of the *links*, or internal energy, $U_{ij}\equiv s_is_j^\dagger$, much better estimates of for example the critical temperature could be obtained for a given live domain. Thus, we first change variables from spins to links. The Jacobian of this change of variables is a product of lattice Bianchi identities, $\delta\left(U_P-1\right)$, one for each plaquette [^1]. This can be verified by introducing the link variables $U_{ij}$ via $\int{\ensuremath{\mathrm{d}}}U_{ij}\,\delta\left(U_{ij}s_js_i^\dagger-1\right)$ and integrating out the spins in a pedestrian manner. Since the Boltzmann weight factorizes over the link variables, all link interactions are induced by the Bianchi identities and hence the transformation trivially solves the one dimensional spin chain where there are no plaquettes [^2]. As mentioned above, each $\delta$-function can be represented by a sum over the characters of all the irreducible representations of the group. For ${\ensuremath{\mathbb{Z}}}_N$ this is merely a geometric series, $\delta\left(U_P-1\right) = \frac{1}{N}\sum_{n=0}^{N-1}U_P^n$. Since only the real part enters in the action it is convenient to reshuffle the sum so that we sum only over real combinations of the variables, $$\label{eq:delt_ZN} \delta\left(U_P-1\right) \propto 1+U_P^{N/2}\delta^N_\text{even}+\sum_{n=1}^{\mathclap{\left\lfloor\frac{N-1}{2}\right\rfloor}}\left(U_P^n+U_P^{-n}\right),$$ where $\delta^N_\text{even}$ is $1$ if $N$ is even and $0$ otherwise. 
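A quick numerical check of this character-expansion representation of the $\delta$-function (illustrative, for ${\ensuremath{\mathbb{Z}}}_6$; not part of the original text) confirms that the geometric series and its reshuffled real form agree and vanish unless $U_P=1$:

```python
import numpy as np

N = 6
omega = np.exp(2j * np.pi / N)

for k in range(N):                                     # U_P = omega**k runs over Z_N
    U = omega**k
    full = sum(U**n for n in range(N)) / N             # (1/N) sum of characters
    # "reshuffled" form, summing only real combinations of the characters:
    resh = 1.0 + (U**(N // 2) if N % 2 == 0 else 0.0) \
         + sum(U**n + U**(-n) for n in range(1, (N - 1) // 2 + 1))
    assert np.isclose(full.real, 1.0 if k == 0 else 0.0, atol=1e-12)
    assert np.isclose(resh, N * full)                  # same delta, up to the 1/N normalisation
print("delta(U_P - 1) representations agree for Z_%d" % N)
```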
The next step is to choose a domain of live links. In this step, imagination is the limiting factor; for a given number of live links there can be many different choices and it is not known to us if there is a way to decide which is the optimal one. The simplest choice is of course to keep only one link alive but in our $2d$ examples we will make use also of a nine-link domain [@Batrouni:1985dp] to see how the results improve with larger domains. These two domains are shown in the left (one link) and right (nine links) panels of Fig. \[fig:live\_domains\]. In the case of a single live link, there are $2(d-1)$ plaquettes and thus there are $2(d-1)$ $\delta$-functions of the type in eq. . ![Two choices of domains of live links for $2d$ spin models. The live links are denoted by the solid lines, whereas the dashed lines denote links which are assumed to take mean values or to be distributed according to the mean distribution. The left panel shows the unique domain with one live link and the right panel shows one of many domains with nine live links.[]{data-label="fig:live_domains"}]({{../figures/live_domains_c}}){width="0.6\linewidth"} Mean link approach ------------------ Let us for simplicity consider the case of one live link, denoted $U_0$. The external links, denoted $U_k$ by some enumeration $ij\to k$, are fixed to the mean value by demanding that $U_k^n=U_k^{-n}={\left\langleU\right\rangle}^n,\, \forall k\neq0$. Each plaquette containing the live link also contains three external links, and the $\delta$-function eq.  becomes $$\delta\left(U_P-1\right) \propto 1+{\left\langleU\right\rangle}^{3N/2}(-1)^{n_0}\delta^N_\text{even} +2\sum_{n=1}^{\mathclap{\left\lfloor\frac{N-1}{2}\right\rfloor}}{\left\langleU\right\rangle}^{3n}\cos\frac{2\pi n_0 n}{N}. \label{eq:delt_ZN_av}$$ For large $N$ it is best to perform the sum analytically to obtain (for $N=2M$) $$\label{eq:delt_ZN_rs} \delta\left(U_P-1\right) \propto \frac{1-(-1)^{n_0}{\left\langleU\right\rangle}^{3M}}{1+{\left\langleU\right\rangle}^6-2{\left\langleU\right\rangle}^3\cos\frac{\pi n_0}{M}}.$$ For U(1) we define $\frac{\pi n_0}{M} = \theta_0$ as $M\to\infty$ and since ${\left\langleU\right\rangle}<1$ we get $$\label{eq:delt_ZN_u1} \delta\left(U_P-1\right) \propto \left(1+{\left\langleU\right\rangle}^6-2{\left\langleU\right\rangle}^3\cos\theta_0\right)^{-1},$$ which can efficiently be dealt with by numerical integration. The partition functions for the single live link for ${\ensuremath{\mathbb{Z}}}_2$, ${\ensuremath{\mathbb{Z}}}_4$ and $U(1)$ [^3] spin models then become $$\begin{aligned} Z_{{\ensuremath{\mathbb{Z}}}_2}&\propto\sum_{U_0=\pm1}e^{\beta U_0}\left(1+{\left\langleU\right\rangle}^3U_0\right)^{2(d-1)},\\ Z_{{\ensuremath{\mathbb{Z}}}_4}&\propto\sum_{n_0=0}^3e^{\beta\cos\frac{\pi n_0}{2}}\left(1+{\left\langleU\right\rangle}^6(-1)^{n_0}+2{\left\langleU\right\rangle}^3\cos\frac{\pi n_0}{2}\right)^{2(d-1)},\\ Z_{U(1)}&\propto\displaystyle\int\limits_{\mathclap{-\delta}}^\delta {\ensuremath{\mathrm{d}}}\theta\,e^{\beta \cos\theta}\left(1+{\left\langleU\right\rangle}^6-2{\left\langleU\right\rangle}^3\cos\theta\right)^{-2(d-1)}.\label{eq:u1_spin}\end{aligned}$$ In the $U(1)$ case, eq.  applies both to the standard action $(\beta\geq0,\delta=\pi)$ and to the topological action $(\beta=0,\delta\leq\pi)$. 
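The mean link itself is fixed by demanding that the expectation of the live link computed from these partition functions equals the assumed bath value $\left\langle U\right\rangle$. A minimal fixed-point sketch for the ${\ensuremath{\mathbb{Z}}}_2$ case in $d=2$ (illustrative only; couplings and tolerances are arbitrary):

```python
import numpy as np

def mean_link_ising(beta, d=2, tol=1e-12, max_iter=10_000):
    """Single-live-link self-consistency for the Z_2 spin model."""
    m = 0.0                                            # mean link <U>, start on the disordered side
    for _ in range(max_iter):
        w_plus  = np.exp(+beta) * (1 + m**3)**(2 * (d - 1))   # weight of U_0 = +1
        w_minus = np.exp(-beta) * (1 - m**3)**(2 * (d - 1))   # weight of U_0 = -1
        m_new = (w_plus - w_minus) / (w_plus + w_minus)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

for beta in (0.2, 0.4, 0.6):
    print(f"2d Ising, beta={beta:.1f}:  <U> = {mean_link_ising(beta):.4f}")
```

The $U(1)$ case, eq. \[eq:u1\_spin\], follows the same pattern, with the sum over $U_0=\pm1$ replaced by a numerical integration over $\theta$.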
Mean distribution approach -------------------------- In the mean distribution approach we sum over the external links assuming they each obey a mean distribution $p(U)$, for which a one-to-one mapping to the set of moments $\{{\left\langleU^n\right\rangle}\}$ exists. The difference between the two methods becomes apparent when expressed in terms of the moments, which are obtained by integrating the distributions of the external links against the $\delta$-function given by the Bianchi constraint in eq.  $$\label{eq:dela_dist} \sum_{\mathclap{\{U_1,U_2,U_3\}}}p(U_1)p(U_2)p(U_3)\delta(U_P-1)= 1+{\left\langleU^{N/2}\right\rangle}^3U_0^{N/2}\delta^N_\text{even} + 2\sum_{n=1}^{\mathclap{\left\lfloor\frac{N-1}{2}\right\rfloor}}{\left\langleU^n\right\rangle}^3\cos\frac{2\pi n_0n}{N}.$$ Comparing to eq , we see that for $N\leq3$ there is only one moment and the two methods are thus equivalent, but for larger $N$ the mean link approach makes the approximation ${\left\langleU^n\right\rangle}={\left\langleU\right\rangle}^n$ whereas the mean distribution approach treats all moments correctly. Thus, for small $N$ we do not expect much difference between the two approaches, and this is indeed confirmed by explicit calculations. For $U(1)$, however, there are infinitely many moments which are treated incorrectly by the mean link approach and this renders the mean distribution approach conceptually more appealing. By using the Bianchi identities, one link per plaquette can be integrated out, giving $$Z_{U(1)}=\int\limits_{\mathclap{-\delta}}^\delta {\ensuremath{\mathrm{d}}}\theta\,e^{\beta \cos\theta} \left(\int\limits_{-\delta}^{\delta}{\ensuremath{\mathrm{d}}}\theta_1{\ensuremath{\mathrm{d}}}\theta_2\,p(\theta_1)p(\theta_2)\sum_{n=-2}^2p(2\pi{}n-\theta-\theta_1-\theta_2)\right)^{\mathrlap{2(d-1)}}.$$ It is often convenient not to work solely with distributions of single links, but also of multiple links, which are defined in the obvious way, $$\label{eq:dist_multi} p_N(\Theta)\equiv\int\prod_{i=1}^N{\ensuremath{\mathrm{d}}}\theta_i\,p(\theta_i)\delta\left(\sum_{i=1}^N\theta_i-\Theta\right),$$ and can efficiently be calculated recursively. The above partition function then simplifies slightly to $$Z_{\text{U(1)}}=\int\limits_{\mathclap{-\delta}}^\delta {\ensuremath{\mathrm{d}}}\theta\,e^{\beta \cos\theta}\left(\int\limits_{-2\delta}^{2\delta}{\ensuremath{\mathrm{d}}}\Theta\,p_2(\Theta) \sum_{n=-2}^2p(2\pi{}n-\theta-\Theta)\right)^{2(d-1)}.$$ In Figs. \[fig:2d\_ising\_z4\] and \[fig:2d\_u1\] we show results for $2d$ ${\ensuremath{\mathbb{Z}}}_2$, ${\ensuremath{\mathbb{Z}}}_4$ and $U(1)$ spin models, the latter for the Wilson action $S=\beta\sum_{{\left\langleij\right\rangle}}{{\rm Re}}\, s_is_j^\dagger$ and the topological action $e^S=\prod_{{\left\langleij\right\rangle}}\Theta\left(\delta-{\lvert\theta_i-\theta_j\rvert}\right)$. Note the remarkable accuracy of the mean distribution approach in the latter case, even when there is only one live link. ![(*left*) Mean-field and mean-link approximation in the $2d$ Ising model for two choices of live domains. (*Right*) Mean-link and mean-distribution in the $2d$ ${\ensuremath{\mathbb{Z}}}_4$ model. In the Ising case, mean-link and mean-distribution are equivalent.[]{data-label="fig:2d_ising_z4"}]({{../figures/2d_ising}} "fig:"){width="0.45\linewidth"} ![(*left*) Mean-field and mean-link approximation in the $2d$ Ising model for two choices of live domains. (*Right*) Mean-link and mean-distribution in the $2d$ ${\ensuremath{\mathbb{Z}}}_4$ model. 
In the Ising case, mean-link and mean-distribution are equivalent.[]{data-label="fig:2d_ising_z4"}]({{../figures/2d_z4}} "fig:"){width="0.45\linewidth"} ![The mean link in the $2d$ $XY$ spin model as a function of the Wilson coupling $\beta$ (*left panel*) and of the restriction $\delta$ (*right panel*) from Monte Carlo, from the mean link and from the mean distribution methods.[]{data-label="fig:2d_u1"}]({{../figures/2d_xy_beta}} "fig:"){width="0.45\linewidth"} ![The mean link in the $2d$ $XY$ spin model as a function of the Wilson coupling $\beta$ (*left panel*) and of the restriction $\delta$ (*right panel*) from Monte Carlo, from the mean link and from the mean distribution methods.[]{data-label="fig:2d_u1"}]({{../figures/2d_xy_rest}} "fig:"){width="0.45\linewidth"} Gauge theories {#sec:gauge} ============== To extend the formalism from spin models to gauge theories, we merely have to change from links and plaquettes to plaquettes and cubes. The partition function for a $U(1)$ gauge theory analogous to eq. becomes $$\label{eq:u1_gauge} Z_{\text{U(1)}}=\int\limits_{\mathclap{-\delta}}^\delta {\ensuremath{\mathrm{d}}}\theta\,e^{\beta \cos\theta}\left(1+{\left\langleU\right\rangle}^{10}-2{\left\langleU\right\rangle}^5\cos\theta\right)^{-2(d-2)}$$ in the mean plaquette approach and $$Z_{\text{U(1)}}=\int\limits_{\mathclap{-\delta}}^\delta {\ensuremath{\mathrm{d}}}\theta\,e^{\beta \cos\theta}\left(\int\limits_{-4\delta}^{4\delta}{\ensuremath{\mathrm{d}}}\Theta\,p_4(\Theta)\sum_{n=-3}^3p(2\pi{}n-\theta-\Theta)\right)^{2(d-2)}$$ in the mean distribution approach. Results for $d=4$ are shown in Fig. \[fig:4d\_u1\] for the Wilson action (*left panel*) and for the topological action (*right panel*). ![The mean plaquette in the $4d$ $U(1)$ gauge theory as a function of the Wilson coupling $\beta$ (*left panel*) and the restriction $\delta$ (*right panel*) from Monte Carlo, and from the mean plaquette and the mean distribution methods.[]{data-label="fig:4d_u1"}]({{../figures/4d_u1_beta}} "fig:"){width="0.45\linewidth"} ![The mean plaquette in the $4d$ $U(1)$ gauge theory as a function of the Wilson coupling $\beta$ (*left panel*) and the restriction $\delta$ (*right panel*) from Monte Carlo, and from the mean plaquette and the mean distribution methods.[]{data-label="fig:4d_u1"}]({{../figures/4d_u1_rest}} "fig:"){width="0.45\linewidth"} Another nice feature of the mean distribution approach is that other observables become available, like for instance the monopole density in the $U(1)$ gauge theory, under the assumption that each plaquette is distributed according to the mean distribution $p$. A cube is said to contain $q$ monopoles if the sum of its outward oriented plaquette angles sums up to $2\pi{}q$. Given the distribution $p(\theta)$ of plaquette angles the (unnormalized) probability $p_q$ of finding $q$ monopoles in a cube is given by $$\label{eq:monopole_prob} p_q = \int\prod_{i=1}^6{\ensuremath{\mathrm{d}}}\theta_i\, p(\theta_i) \delta\left(\sum_{i=1}^6\theta_i - 2q\pi\right),\, q\in\{-2,-1,0,1,2\}$$ and the monopole density $n_{\rm monop}$ is given by $$\label{eq:nmonop} n_{\rm monop} = \frac{2p_1 + 4p_2}{p_0+2p_1+2p_2}.$$ In Fig. \[fig:4d\_u1\_monop\] we show the monopole densities for $4d$ $U(1)$ gauge theory as obtained by Monte Carlo simulations and by the mean distribution approach. Note that the monopole extends outside of the domain of a single live plaquette, which was used to determine the mean distribution $p$. 
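For the Wilson action ($\delta=\pi$), the inner integral of the mean-distribution partition function is simply the five-fold periodic convolution of $p$ with itself evaluated at $-\theta$ (the sum over $n$ provides the wrap-around), so the single-live-plaquette self-consistency and the monopole density can be iterated on a grid. The sketch below is illustrative (grid size, $\beta$ and tolerances are arbitrary) and is not the implementation used for the figures:

```python
import numpy as np

n = 1024
theta = 2.0 * np.pi * np.arange(n) / n                 # plaquette-angle grid on [0, 2*pi)
dtheta = 2.0 * np.pi / n

def circ_conv(p, q):
    """Periodic convolution of two angle densities (the n-sum makes it wrap around)."""
    return np.real(np.fft.ifft(np.fft.fft(p) * np.fft.fft(q))) * dtheta

def gauge_mean_distribution(beta, d=4, n_iter=500, tol=1e-12):
    """Single-live-plaquette self-consistency for compact U(1), Wilson action."""
    p = np.full(n, 1.0 / (2.0 * np.pi))                # start from the Haar measure
    for _ in range(n_iter):
        p5 = p
        for _ in range(4):                             # five external plaquettes per cube
            p5 = circ_conv(p5, p)
        p5_minus = np.roll(p5[::-1], 1)                # p5 evaluated at -theta (mod 2*pi)
        p_new = np.exp(beta * np.cos(theta)) * p5_minus**(2 * (d - 2))
        p_new /= p_new.sum() * dtheta
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

beta = 0.8
p = gauge_mean_distribution(beta)
print("<cos theta_P> =", round(float((p * np.cos(theta)).sum() * dtheta), 4))

# Monopole density: density of the *unwrapped* sum of the six reduced face
# angles (re-indexed to [-pi, pi)) evaluated at 2*pi*q, then combined as in the text.
p_red = np.roll(p, n // 2)                             # same density on the [-pi, pi) grid
s = p_red.copy()
for _ in range(5):
    s = np.convolve(s, p_red) * dtheta                 # linear, non-periodic convolution
p_q = {q: s[(q + 3) * n] for q in (-2, -1, 0, 1, 2)}
n_monop = (2 * p_q[1] + 4 * p_q[2]) / (p_q[0] + 2 * p_q[1] + 2 * p_q[2])
print("monopole density estimate:", round(float(n_monop), 5))
```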
The left panel shows results for the Wilson action and in the right panel the topological action is used. ![The monopole density in the $4d$ $U(1)$ gauge theory as a function of the Wilson coupling $\beta$ (*left panel*) and the restriction $\delta$ (*right panel*) from Monte Carlo and the mean distribution method.[]{data-label="fig:4d_u1_monop"}]({{../figures/4d_u1_beta_monop}} "fig:"){width="0.45\linewidth"} ![The monopole density in the $4d$ $U(1)$ gauge theory as a function of the Wilson coupling $\beta$ (*left panel*) and the restriction $\delta$ (*right panel*) from Monte Carlo and the mean distribution method.[]{data-label="fig:4d_u1_monop"}]({{../figures/4d_u1_rest_monop}} "fig:"){width="0.45\linewidth"} We can also treat $SU(2)$ Yang-Mills theory without much difficulty. For the mean plaquette approach we need the character expansion of the $\delta$-function $$\label{eq:delt_su2} \delta\left(U_C-1\right) \propto \sum_{n=0}^\infty(n+1)\frac{\sin(n+1)\theta_C}{\sin\theta_C},$$ where $\theta_C$ is related to the trace of the cube matrix $U_C$ through $\text{Tr}U_C = 2\cos\theta_C$. In the mean plaquette approach we again make the substitution $U_C\to U_0{\left\langleU\right\rangle}^5$ in the case of a single live plaquette. The above delta function then becomes $$\begin{aligned} \label{eq:delt_su2_mp} \delta\left(U_0{\left\langleU\right\rangle}^5-1\right) &\propto \sum_{n=0}^\infty{\left\langleU\right\rangle}^{5n}(n+1)\frac{\sin(n+1)\theta_0}{\sin\theta_0}\nonumber\\ &\propto \left(1+{\left\langleU\right\rangle}^{10}-2\cos\theta_0{\left\langleU\right\rangle}^5\right)^{-2}.\end{aligned}$$ For $SU(2)$, the analogue of a restriction $\delta$ on the plaquette angle is a restriction on the trace of the plaquette matrix to the domain $[2\alpha,2]$, where $-1\leq\alpha<1$. If we define $a_0 \equiv \frac{1}{2}\operatorname{Tr}U_0=\cos\theta_0$ the approximate $SU(2)$ partition function can be written [^4] in a way very similar to the $U(1)$ partition function  $$Z_{\text{SU(2)}}=\int\limits_{\alpha}^1 {\ensuremath{\mathrm{d}}}a_0\,\sqrt{1-a_0^2}\,e^{\beta a_0}\left(1+{\left\langleU\right\rangle}^{10}-2{\left\langleU\right\rangle}^5a_0\right)^{-4(d-2)},$$ from which ${\left\langleU\right\rangle}$ can be easily obtained as a function of $\alpha$ and $\beta$. The mean distribution approach works in a completely analogous way as for $U(1)$, but let us go through the details anyway, since there are now extra angular variables to be integrated out. The starting point is again an elementary cube on the lattice. Five of the cubes faces have their trace distributed according to the distribution $p(a_0)$ and we want to calculate the distribution of the sixth face compatible with the Bianchi identity $U_C=1$. In other words, taking $U_6$ as the live plaquette, we want to evaluate $$\tilde{p}(a_{0,6})\propto \left.\int{\ensuremath{\mathrm{d}}}\Omega_6 \int\prod_{i=1}^5\left\{{\ensuremath{\mathrm{d}}}U_i\frac{p(a_{0,i})}{\sqrt{1-a_{0,i}^2}}\right\} \delta\left(\prod_{i=1}^6U_i-1\right)\right\rvert_{\text{Tr}U_6=2a_{0,6}},$$ where we have decomposed $U_6=\Omega_6 \hat{U}_6\Omega_6^\dagger$ with $\hat{U}_6$ a diagonal $SU(2)$ matrix with trace $2a_{0,6}$, i.e. $\Omega_6$ is the angular part of $U_6$. The choice to include the measure factor $\sqrt{1-a_0^2}$ in the distribution is arbitrary but convenient. 
To facilitate the calculation we recursively combine the product of four of the plaquette matrices into one matrix, $U_1U_2U_3U_4\to\tilde U$, by pairwise convolution of distributions (with $p_1(a_0)\equiv{}p(a_0)$) $$\begin{aligned} p_{2i}(\tilde{a}_0) &\propto \left.\int{\ensuremath{\mathrm{d}}}\tilde{\Omega}{\ensuremath{\mathrm{d}}}U_1 {\ensuremath{\mathrm{d}}}U_2 \frac{p_i(a_{0,1})}{\sqrt{1-a_{0,1}^2}}\frac{p_i(a_{0,2})}{\sqrt{1-a_{0,2}^2}} \delta\left(U_1U_2\tilde{U}^\dagger-1\right)\right\rvert_{\text{Tr}\tilde{U}=2\tilde{a}_0}\nonumber\\ &\propto \int\limits_{\alpha_i}^1{\ensuremath{\mathrm{d}}}a_{0,1}{\ensuremath{\mathrm{d}}}a_{0,2}\,p_i(a_{0,1})p_i(a_{0,2})\int\limits_{-1}^1{\ensuremath{\mathrm{d}}}\cos\theta_{12}\, \delta\left(\tilde{a}_0-a_{0,1}a_{0,2}-\sqrt{1-a_{0,1}^2}\sqrt{1-a_{0,2}^2}\cos\theta_{12}\right)\\ &=\int\limits_{\alpha_i}^1{\ensuremath{\mathrm{d}}}a_{0,1}{\ensuremath{\mathrm{d}}}a_{0,2}\frac{p_i(a_{0,1})p_i(a_{0,2})}{\sqrt{1-a_{0,1}^2}\sqrt{1-a_{0,2}^2}} \chi_{{\lvert\tilde{a}_0-a_{0,1}a_{0,2}\rvert}\leq\sqrt{1-a_{0,1}^2}\sqrt{1-a_{0,2}^2}},\nonumber\end{aligned}$$ where $\alpha_1 \equiv \alpha,\,\alpha_{2i}=\max(2\alpha_i-1,-1)$ and $\chi_A$ is the characteristic function on the domain $A$. The domain of integration in the $(a_{0,1},a_{0,2})$-plane is simply connected with parametrizable boundaries and comes from the condition that the argument of the delta function has a zero for some $\cos\theta_{12}\in[-1,1]$. We then obtain for the sought distribution $$\tilde{p}(a_{0,6})\propto \left.\int{\ensuremath{\mathrm{d}}}\Omega_6 \int{\ensuremath{\mathrm{d}}}U_5\frac{p(a_{0,5})}{\sqrt{1-a_{0,5}^2}}\int{\ensuremath{\mathrm{d}}}\tilde{U}\frac{p_4(\tilde{a}_0)}{\sqrt{1-\tilde{a}_0^2}}\delta\left(\tilde{U}U_5U_6-1\right)\right\rvert_{\text{Tr}U_6=2a_{0,6}},$$ where it is now easy to integrate out $\tilde{U}=U_6^\dagger U_5^\dagger$. If we denote by $\theta_{56}$ the angle between $U_5$ and $U_6$, the angular integral over $\Omega_6$ contributes just a multiplicative constant and we obtain $$\tilde{p}(a_{0,6})\propto \int{\ensuremath{\mathrm{d}}}a_{0,5}{\ensuremath{\mathrm{d}}}\cos\theta_{56}\,p(a_{0,5})\frac{p_4\left(a_{0,5}a_{0,6}-\sqrt{1-a_{0,5}^2}\sqrt{1-a_{0,6}^2}\cos\theta_{56}\right)} {\sqrt{a_{0,5}a_{0,6}-\sqrt{1-a_{0,5}^2}\sqrt{1-a_{0,6}^2}\cos\theta_{56}}},$$ which can be evaluated numerically in a straightforward manner. In the end, since there are $2(d-2)$ cubes sharing the plaquette $P_0$, and since the a priori probability for $P_0$ to have trace $2a_0$ is $\sqrt{1-a_0^2}e^{\beta a_0}$, with respect to the uniform measure, we obtain for one live plaquette $$\begin{aligned} Z_{SU(2)}&=\int\limits_{\alpha}^1{\ensuremath{\mathrm{d}}}a_{0}\,p(a_0)=\int\limits_{\alpha}^1{\ensuremath{\mathrm{d}}}a_{0}\sqrt{1-a_0^2}e^{\beta a_0}\tilde{p}(a_0)^{2(d-2)}\nonumber\\ &= \int\limits_{\alpha}^1{\ensuremath{\mathrm{d}}}a_{0}\sqrt{1-a_0^2}e^{\beta a_0}\left(\int\limits_{\alpha}^1{\ensuremath{\mathrm{d}}}x\, p(x){\ensuremath{\mathrm{d}}}\cos\theta\frac{p_4\left(a_0x-\sqrt{1-a_0^2}\sqrt{1-x^2} \cos\theta\right)}{\sqrt{a_0x-\sqrt{1-a_0^2}\sqrt{1-x^2}\cos\theta}}\right)^{2(d-2)},\end{aligned}$$ which also defines the functional self-consistency equation for $p(a_0)$. Results for the Wilson and topological actions can be seen in Fig. \[fig:su2\] in the left and right panels, respectively [^5]. 
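The pairwise combination step can be carried out numerically on a grid in $a_0$ using the characteristic-function form above. The following sketch is illustrative only: the input distribution (the single-plaquette weight $\sqrt{1-a_0^2}\,e^{\beta a_0}$) and the grid size are arbitrary choices, not the ones used for the figures.

```python
import numpy as np

m = 200
a = np.linspace(-1.0, 1.0, m + 2)[1:-1]                # open grid in (-1, 1), avoids the endpoints
da = a[1] - a[0]
s_meas = np.sqrt(1.0 - a**2)                           # sqrt(1 - a0^2), finite on the open grid

def combine(pa, pb):
    """Distribution of a0(U_a U_b) when a0(U_a) ~ pa, a0(U_b) ~ pb and the
    angular parts are Haar-uniform (the pairwise combination p_{2i} above)."""
    weight = np.outer(pa / s_meas, pb / s_meas) * da * da
    prod = np.outer(a, a)                              # a_{0,a} * a_{0,b}
    width = np.outer(s_meas, s_meas)                   # sqrt(1-a_a^2) * sqrt(1-a_b^2)
    out = np.array([np.sum(weight * (np.abs(atil - prod) <= width)) for atil in a])
    return out / (out.sum() * da)

beta = 2.0
p1 = s_meas * np.exp(beta * a)                         # illustrative input distribution
p1 /= p1.sum() * da
p2 = combine(p1, p1)                                   # distribution for U_1 U_2
p4 = combine(p2, p2)                                   # distribution for U_1 U_2 U_3 U_4
print("<a0>:", [round(float((q * a).sum() * da), 3) for q in (p1, p2, p4)])
```

As a quick sanity check, the mean of each combined distribution should equal the product of the means of its factors, up to discretisation error, because the relative angle $\cos\theta_{12}$ averages to zero.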
![The average plaquette for the $SU(2)$ gauge theory as a function of the Wilson coupling $\beta$ (*left panel*) and the restriction $\alpha$ (*right panel*) from Monte Carlo simulation, the mean plaquette method and the mean distribution method. For comparison the mean link result obtained with the formalism in [@Drouffe:1983fv] is also shown in the left panel.[]{data-label="fig:su2"}]({{../figures/su2_plaq_beta}} "fig:"){width="0.45\linewidth"} ![The average plaquette for the $SU(2)$ gauge theory as a function of the Wilson coupling $\beta$ (*left panel*) and the restriction $\alpha$ (*right panel*) from Monte Carlo simulation, the mean plaquette method and the mean distribution method. For comparison the mean link result obtained with the formalism in [@Drouffe:1983fv] is also shown in the left panel.[]{data-label="fig:su2"}]({{../figures/su2_plaq_rest}} "fig:"){width="0.45\linewidth"} For $SU(3)$ one can proceed in an analogous manner, only the angular integrals are now more involved and the trace of the plaquette depends on two diagonal generators so the resulting distribution function needs to be two dimensional. Conclusions {#sec:conclusions} =========== It has been shown before [@Batrouni:1985dp] that determining a self-consistent mean-link gives a much better approximation than the traditional mean-field. Furthermore, the symmetry-invariant mean link can be generalized to a mean plaquette in gauge theories [@Batrouni:1982dx]. Here, we have shown that the approximation can be further improved by determining the self-consistent mean distribution of links or plaquettes. The extension from a self-consistent determination of the symmetry invariant mean link or plaquette to a self-consistent determination of the entire distribution of links and plaquettes is shown to improve upon the results obtained by Batrouni in his seminal work [@Batrouni:1982dx; @Batrouni:1982bg]. Especially appealing is the fact that the mean distribution approach yields a non-trivial result for the whole range of couplings and not just in the strong coupling regime, which is sometimes the case for the mean link/plaquette approach, or just in the weak coupling regime which is accessible to the mean field treatment of [@Drouffe:1983fv]. Indeed, the mean distribution approach gives a nearly correct answer when the correlation length is not too large, and by enlarging the live domain the exact result is approached systematically for any value of the coupling. As the domain of live variables is enlarged, the mean link/plaquette and the mean distribution results tend to approach each other but since determining the full mean distribution does not require much additional computer time it should always be desirable to do so. Furthermore, another appealing feature of the mean distribution approach is that once the distribution has been self-consistently determined, other local observables, like the vortex or monopole densities become readily available. Finally, the whole approach applies to non-Abelian models as well. [^1]: On a periodic lattice there are also global Bianchi identities but they play no role here. [^2]: Up to a global constraint in the case of periodic boundary conditions [^3]: The $U(1)$ Wilson action is defined by $\delta=\pi,\beta\neq0$ and the topological action by $\delta<\pi,\beta=0$. [^4]: The $SU(2)$ Wilson action is defined by $\alpha=-1,\beta\neq0$ and the topological action by $\alpha>-1,\beta=0$. 
[^5]: Our results for the mean plaquette approach differ a little from those of [@Batrouni:1982dx], because we imposed the Bianchi constraint exactly rather than truncating its character expansion. Surprisingly, truncation gives better results.
--- abstract: 'Digital Rock Imaging is constrained by detector hardware, and a trade-off between the image field of view (FOV) and the image resolution must be made. This can be compensated for with super resolution (SR) techniques that take a wide FOV, low resolution (LR) image, and super resolve a high resolution (HR), high FOV image. A Super Resolution Convolutional Neural Network (SRCNN) that excels in capturing edge details is coupled with a Generative Adversarial Network (GAN) to recover high frequency texture details. The Enhanced Deep Super Resolution Generative Adversarial Network (EDSRGAN) is trained on the Deep Learning Digital Rock Super Resolution Dataset (DeepRock-SR), a diverse compilation of raw and processed $\mu$CT (micro-Computed Tomography) images. The dataset consists of 12000 sandstone, carbonate, and coal samples of image size 500x500 as a representation of fractured and granular media. The SRCNN network shows comparable performance of +3-5dB in pixel accuracy (50% to 70% reduction in relative error) over bicubic interpolation. GAN performance in recovering texture shows superior visual similarity compared to normal SRCNN and other interpolation methods. Difference maps indicate that the SRCNN section of the SRGAN network recovers large scale edge (grain boundaries) features while the GAN network regenerates perceptually indistinguishable high frequency texture. Extrapolation of the learned network with external samples remains accurate and network performance is generalised with augmentation, showing high adaptability to noise and blur. HR images are fed into the network, generating HR-SR images to extrapolate network performance to sub-resolution features present in the HR images themselves. Results show that under-resolution features such as dissolved minerals and thin fractures are regenerated despite the network operating outside of trained specifications. Comparison with Scanning Electron Microscope (SEM) images shows details are consistent with the underlying geometry of the sample. Recovery of textures benefits the characterisation of digital rocks with a high proportion of under-resolution micro-porous features, such as carbonate and coal samples. Images that are normally constrained by the mineralogy of the rock (coal), by fast transient imaging (waterflooding), or by the energy of the source (microporosity), can be super resolved accurately for further analysis downstream. The neural network architecture and training methodology presented in this study (and SRGANs in general) offer the potential to generate HR $\mu$CT images that exceed the typical imaging limits.' author: - | Ying Da Wang\ School of Minerals and Energy Resources Engineering\ University of New South Wales\ Sydney, NSW, 2052\ `[email protected]`\ Ryan T. Armstrong\ School of Minerals and Energy Resources Engineering\ University of New South Wales\ Sydney, NSW, 2052\ `[email protected]`\ Peyman Mostaghimi\ School of Minerals and Energy Resources Engineering\ University of New South Wales\ Sydney, NSW, 2052\ `[email protected]`\ title: 'Boosting Resolution and Recovering Texture of micro-CT Images with Deep Learning' --- Plain Language Summary ====================== When capturing an image of the insides of a rock sample (or any opaque object), hardware limitations on the image quality and size exist. This limitation can be overcome with the use of machine learning algorithms that “super-resolve” a lower resolution image. 
Once trained, the machine algorithm can sharpen otherwise blurry features, and regenerate the underlying texture of the imaged object. We train such an algorithm on a large and wide array of digital rock images, and test its flexibility on some images that it had never seen before, as well as on some very high quality images that it was not trained to super-resolve. The results of training and testing the algorithm shows a promising degree of accuracy and flexibility in handling a wide array of images of different quality, and allows for higher quality images to be generated for use in other image-based analysis techniques. Introduction ============ X-ray micro-computed tomography ($\mu$CT) techniques can generate 3D images that detail the internal micro-structure of porous rock samples. It allows samples to be used for other analyses, as it does not physically interact with the sample [@lindquist; @Hazlett; @Wildenschild; @RN7]. It assists in the determination of the petrophysical and flow properties of rocks [@DGDD; @pfvs; @wang2019multi; @peymanK; @Krakowska; @ferrand1992effect], as they can resolve features down to a few micrometres over a field of view (FOV) spanning a few millimetres, sufficient enough to characterise the micro-structure of conventional rocks [@Flannery; @Coenen]. For use in flow simulation [@jiang2013representation; @jiang2013representation; @schluter2014image] and other numerical methods [@yi2017pore], an imaged rock sample must be of sufficiently high resolution to resolve the pore space features and connectivity while also spanning a wide enough FOV to represent bulk properties of the rocks in a way that captures the heterogeneity of the sample [@Li_and_Teng]. This is challenging for certain cases of more unconventional rocks such as carbonate rocks that can have up to 50% of the pore space unresolvable under the resolution of the imaging device [@BULTREYS201536] or coal samples that are limited to 15-25 $\mu$m resolutions due to sample fragility resulting in physical sample size constraints causing under-resolution of fracture features [@coalFragility]. The capabilities of the scanning device and the physical characteristics of rocks themselves are limiting factors in capturing high resolution $\mu$CT digital rock images, with trade-offs between resolution and field of view. These problems can be addressed by applying super resolution methods on low resolution data, resulting in a high resolution image with a wide FOV that captures more of the pore space geometry in higher detail [@wang2019super]. Super resolution methods aim to generate a super resolution (SR) image from only a single low resolution (LR) image (as opposed to using multiple LR images) in a way that most closely resembles its true high resolution (HR) counterpart [@sr2003overview]. It is an ill-posed, indeterminate inverse problem with an infinite solution space [@SRCNNDong]. Simple, typical methods apply interpolation (bicubic, linear, etc.) on the LR data to generate an higher resolution image that struggles to recover features and is mostly a blurry image with more pixels. More complex methods apply a reconstruction based method that utilises prior knowledge of the domain characteristics to reduce the size of the solution space while generating sharp details [@softCuts; @gradProfPrior; @gradProfSharp]. The scale factor is limited in these methods and the computational time required during image generation is high. 
Example based methods, also known as learning based methods, utilise machine learning algorithms to improve the speed and performance of SR methods. The statistical relationship between LR and HR features are deduced from a large example dataset, and have been applied with the Markov Random Field [@MRFExmaple], Neighbour Embedding [@neighbourEmbedding], sparse coding methods [@sparseRep], and Random Forest methods [@randomForest]. These learning based methods have also been combined with reconstruction based methods to further improve SR performance [@unifiedSR]. The sparse coding example based SR technique, while inflexible, has been shown to outperform bicubic interpolation and has since been generalised [@SRCNNDong] into the structurally analogous but highly flexible Super Resolution Convolutional Neural Network (SRCNN) for super resolving individual photographic images [@SRCNNDong; @WangEEDS; @EDSR; @WDSR; @VDSR; @SRGANledig], medical images [@umeharaSRCNNmedical; @circleGAN; @geWangRecon], and digital rock images [@wang2019super; @wang3DSRCNN]. Results from SRCNN have shown to consistently outperform previously utilised learning based and reconstruction based methods. Specifically for digital rock images, SRCNN methods have been applied to $\mu$CT images of rocks using a variety of different architectures in 2D and 3D, producing compelling results that can be applied to Digital Rock workflows. Using Convolutional Neural Networks to super resolve an image was originally done with a network that applied 3-5 activated convolutional layers [@SRCNNDong] to a bicubically upscaled LR image to recover the HR details on a direct mapping. This initial architecture coined the acronym “SRCNN”, and was iteratively improved upon with deeper, more complex networks with improved LR to HR mappings. Integrated LR to HR SRCNNs that do not require bicubic upsampling as a processing step are favoured as the need for a fully sized input image increases the computational cost. Deconvolutional layers at the end of the network [@FSRCNN] have been replaced by subpixel convolution to reduce checkerboard artifacting [@subpixelConv; @deconvCheckerboard]. Deeper networks have shown improved results by using the skip connection that adds outputs from shallow layers to deeper layers to preserve important shallow feature sets and improve gradient scaling [@VDSR]. Batch normalisation is featured in many deep networks [@SRGANledig], but has since been empirically observed to reduce the accuracy of SRCNN methods that rely on mini batches of cropped images due to the highly variant batch characteristics. As such they were removed in later networks [@EDSR], and are no longer present in more recent formulations [@WDSR]. The typical SRCNN algorithm attempts to generate a SR image that has the smallest L2 (Mean Squared Error) or L1 [^1] (Mean Absolute Error) loss across the image [@cycleGAN] which has been found to be more robust against outlying features and improve convergence rates. The generated SR images tend to have high PSNR values but a low perceptual quality, with high frequency texture features lost as smearing occurs in order to achieve a high “average” accuracy that represents multiple possible HR realisations [@perceptLoss], a manifestation of the local minima problem in neural network training. While larger scale features that have high contrast edges over multiple pixels are recovered by SRCNN, the overall image fidelity is unsatisfying due to the loss of texture. 
Efforts to address this problem have resulted in the use of hybrid loss functions that combine a pixelwise L1 or L2 loss with a featurewise loss that is calculated as the L2 loss of features extracted from a convolutional layer in a pretrained model [@perceptLoss2]. This has shown superior perceptual results compared to standard loss functions. This problem of texture loss has been most successfully addressed by the introduction of the SRGAN network. Generative Adversarial Networks (GANs) [@ganGoodfellow] are comprised of a pair of neural networks, a generator and a discriminator, which are participants in a zero-sum game. The discriminator learns to classify inputs as either real or fake, while the generator attempts to generate fake inputs to the discriminator that are good enough for it to classify as real. In its simplest form, random noise is fed into a generator that is trained to transform the input noise into data that closely resembles the real data from the training set. The discriminator is trained alongside the generator to distinguish between the real training set data and the generated data. As the neural networks are trained, the discriminator becomes better at identifying fake data, while the generator becomes better at generating data that fools the discriminator. By combining an SRCNN network (the generator) with an image classification network (the discriminator), an SRGAN is formed. SRGANs have been applied to the generation of photo-realistic and highly textured images that score highly on human surveys of image quality [@SRGANledig; @perceptLoss]. Cycle Consistent GANs (cycleGANs) [@cycleGAN] have been used as a semi-supervised method of generating SR images, by training on unpaired SR and LR datasets [@circleGAN]. While SRGAN results in features that look realistic to the human eye when surveyed, the resulting generated SR images are lower in pixelwise accuracy compared to SRCNN due to pixelwise mismatch [@circleGAN]. Since $\mu$CT images contain significant amounts of image noise as well as texture as high frequency features, this noise is also inadvertently recovered. As image segmentation is common in most Digital Rock workflows [@iassonov2009segmentation], SRCNN networks tend to suffice as they possess some form of intrinsic noise suppression while maximising edge recovery [@wang2019super]. While sufficient for conventional samples that are resolvable at the 3-5 $\mu$m resolution scale, the blurring of intraphase texture is detrimental to images such as carbonate and coal, with significant under-resolution features. In the context of digital rock imaging, to the authors’ best knowledge, SRGAN networks have not yet been applied to the generation of SR $\mu$CT images.

In the following sections of this paper, we introduce the DeepRock-SR dataset used to train and validate the results from the Super Resolution Generative Adversarial Network, in which the generator (acting as the SRCNN) is a modified EDSR [@EDSR] network (seen in Figure \[fig:edsr\]) and the discriminator is a deep convolutional classifier (seen in Figure \[fig:discrimNet\]). The generator shows comparable performance to other SRCNN networks, with performance on the DeepRock-SR dataset in line with expected improvements in PSNR of +3-5 dB, which translates to a reduction in relative pixelwise error of 50% to 70% respectively according to Eqn \[eqn:PSNR\].
Difference maps of the validation sample images confirm SRCNN recovery of bulk features, while the SRGAN network is able to regenerate the high resolution texture. Extrapolation of network performance with unseen external images incurs minimal performance loss, while application of image augmentation to generalise the model capabilities indicates good adaptation to image noise and blur. Super resolution images of the high resolution sandstone, coal, and carbonate samples in the DeepRock-SR dataset are also generated. This extrapolates the network performance past the image resolutions of the training (3-5 $\mu$m) to under 1 $\mu$m to observe the improvement in sub-pixel resolution past the physical capabilities of $\mu$CT scanners, showing excellent visual regeneration of under-resolution features across a wide range of image resolutions, which are also confirmed with a direct comparison with high fidelity SEM images. Materials and Methods {#sec:Materials and Methods} ===================== Datasets and Digital Rocks {#sec:Datasets and Digital Rocks} -------------------------- The SRGAN network in this study is trained on the DeepRock-SR-2D dataset [@DRSRD3], which comprises of 12000 500x500 high resolution unsegmented slices of various digital rocks of sandstone, carbonate, and coal, with image resolution ranging from 2.7 to 25 $\mu$m, outlined in Table \[tab:DRSRD3Table\]. Each rock type is represented by 4000 images for an even distribution of rock geometries. The images are shuffled and split 8:1:1 into training, validation, and testing sets. From a pore morphology and image characteristics perspective, the rock types differ greatly from each other, seen in Figure \[fig:sampleImages\]. The samples in the dataset are diverse, and represent both simple resolved sandstone images (sandstone), and complex under-resolved images of carbonate microporosity and coal fracture networks. There is also a Gildehauser sandstone sample that is highly processed by Non-Local Means Filtering [@nonLocal], resulting in a very smooth intraphase image. The carbonate images are under-resolved due to much of the pore space measuring under 1 $\mu$m in resolution [@BULTREYS201536] (requiring an image resolution approaching 100 nm to adequately resolve such features). The coal samples are inadequately resolved due to the fragility of the rock itself, restricting the size to a larger than usual sample, limiting the resolution of the resulting image [@coalFragility]. The images are downsampled by a factor of 2x and 4x using the matlab imresize function, used commonly for SR dataset benchmarking [@wang2019super], with one set being downsampled exclusively by bicubic interpolation, while another set is downsampled with random kernels of either box, triangle, lanczos2, and lanczos3. In this paper, to generalise the results, the set with random (unknown) downsampling functions is used for training at a scale factor of x4. All training and testing was performed on a GTX1080ti Nvidia GPU with TensorFlow 1.12. While the training and testing dataset used here is of a fixed, relatively small size per image, the resulting trained SRCNN and SRGAN networks are “fully convolutional networks” that can be applied to any image of any size, without cropping or windowing. 
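The downsampling step described above can be approximated in Python as a rough analogue of the MATLAB `imresize` pipeline; the sketch below is only an illustration (PIL has no lanczos2 filter, and the placeholder image, helper names and kernel set are assumptions, not the DeepRock-SR preparation code):

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(42)
KERNELS = [Image.BOX, Image.BILINEAR, Image.LANCZOS]   # approximate box / triangle / lanczos3

def make_pair(hr_array, scale=4):
    """Return an (LR, HR) pair from a single-channel HR slice, using a random kernel."""
    hr = Image.fromarray(hr_array)
    lr_size = (hr.width // scale, hr.height // scale)
    kernel = KERNELS[int(rng.integers(len(KERNELS)))]
    lr = hr.resize(lr_size, resample=kernel)
    return np.asarray(lr), hr_array

hr_slice = rng.integers(0, 256, size=(500, 500), dtype=np.uint8)   # placeholder 500x500 slice
lr, hr = make_pair(hr_slice, scale=4)
print(lr.shape, hr.shape)                               # (125, 125) (500, 500)
```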
![Sample slices of images from DeepRock-SR[]{data-label="fig:sampleImages"}](sampleImages.png){width="\textwidth"} [lllll]{}\ (r)[1-2]{} Name & Source & Resolution ($\mu$m) & Processing & Reference\ Bentheimer Sandstone 1 & UNSW Tyree X-ray & 3.8 & None & [@DRSRD1]\ Bentheimer Sandstone 2 & ANU CTLab & 4.9 & None & [@herringSands]\ Berea Sandstone & ANU CTLab & 4.6 & None & [@herringSands]\ Leopard Sandstone & ANU CTLab & 3.5 & None & [@herringSands]\ Gildehauser Sandstone & TOMCAT beamline & 4.4 & Non-Local Means Filtered & [@glideSand]\ Wilcox Tight Sandstone & UT Austin & 2.7 & None & [@wilcox]\ Estaillades Carbonate & UGCT HECTOR & 3.1 & None & [@estCarb]\ Savonnieres Carbonate & UGCT HECTOR & 3.8 & None & [@savCarb]\ Massangis Carbonate & UGCT HECTOR & 4.5 & None & [@massCarb]\ Sheared Coal & Ultratom RXSolutions & 25 & None & [@shearedCoal]\ Naturally Fractured Coal & Ultratom RXSolutions & 25 & None & [@fracCoal]\ While datasets are generated synthetically to remain consistent with other SR datasets, real LR $\mu$CT images are not mapped as consistently to HR equivalents, and registration error of even a few pixels can considerably reduce training results if the network requires images to be perfectly matched. Pixel binning of greyscale values [@Guan2019] has been used to model LR digital rock images, but in a practical case, with a lower energy detector or a shorter exposure time, the resulting LR $\mu$CT image can have different noise and blur compared to a synthetic LR images downsampled from a HR registered equivalent. Aside from the images contained within the DeepRock-SR dataset used for training and validation, external samples are also used in this study to extrapolate network performance. These are outlined in Table \[tab:externalSamples\]. [lllll]{}\ (r)[1-2]{} Name & Source & Resolution ($\mu$m) & Processing & Reference\ Bentheimer Sandstone & Equinor & 7.0 & None & [@bentRam]\ Ketton Carbonate & Imperial College & 5.0 & None & [@kettonCarb]\ Super Resolution Convolutional Neural Network {#sec:SRCNN} --------------------------------------------- SRCNN networks generate an SR image from only an LR image through a feed-forward CNN, the generator $G$. Here, $G$ is composed of multiple layers $L$ of weights $w_L$ and biases $b_L$ such that $SR=G_{w_L, b_L}(LR)$, that are calculated by optimising an objective loss function $f_{Loss}(SR, HR)$. The LR image input can be described as a tensor of shape $(N_x, N_y, N_c)$, where $N_x$ and $N_y$ are the dimensions of the image, and $N_c$ are the number of colour channels present. At a scale factor of 4 used in this study, the resulting SR image dimensions are $(4N_x, 4N_y, N_c)$. The architecture used in this study is based on the Enhanced Deep Super Resolution (EDSR) [@EDSR] and the SR-Resnet [@SRGANledig] networks. In particular, the overall EDSR architecture is retained, with the Rectified Linear Unit (ReLU) layers replaced by Parametric Rectified Linear Unit (PReLU) layers from SR-Resnet. These steps are taken as the use of batch normalisation layers in SRCNN networks have been shown to be detrimental to training [@WDSR], while the PReLU layers provide a minor improvement in performance for minimal computational impact [@prelu]. In this paper, the EDSR network utilises convolutional layers of kernel size 3 with 64 filters over 16 residual layers and skip connections throughout the layers prior to upscaling, as seen in Figure \[fig:edsr\]. 
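A sketch of this modified EDSR generator in modern `tf.keras` is given below (the study itself used TensorFlow 1.12). The residual-block count, filter width, PReLU activations and subpixel upscaling follow the description above, while the head/tail convolutions and output layer are assumptions made only to keep the sketch self-contained:

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_block(x, filters=64):
    """EDSR-style residual block: conv-PReLU-conv plus identity skip, no batch norm."""
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([x, y])

def upsample(x, filters=64):
    """Subpixel (pixel-shuffle) x2 upsampling followed by a PReLU activation."""
    x = layers.Conv2D(filters * 4, 3, padding="same")(x)
    x = layers.Lambda(lambda t: tf.nn.depth_to_space(t, 2))(x)
    return layers.PReLU(shared_axes=[1, 2])(x)

def build_generator(scale=4, filters=64, n_res=16, channels=1):
    lr = layers.Input(shape=(None, None, channels))     # fully convolutional: any image size
    x = head = layers.Conv2D(filters, 3, padding="same")(lr)
    for _ in range(n_res):
        x = res_block(x, filters)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.Add()([x, head])                         # long skip connection
    for _ in range(scale // 2):                         # two x2 stages for a x4 scale factor
        x = upsample(x, filters)
    sr = layers.Conv2D(channels, 3, padding="same")(x)
    return tf.keras.Model(lr, sr, name="edsr_prelu_generator")

gen = build_generator()
print(gen.count_params())
```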
Typically, EDSR and similar SRCNN networks use the L2 loss as the objective minimisation function, defined as the pixelwise mean of the sum of the squares of the differences between the generated SR and ground truth HR images: $$\label{eqn:L2} L_{2_{Loss}}=\frac{\sum_i(SR_i-HR_i)^2}{\sum_ii}$$ This choice of loss function in SRCNN has been superseded by the L1 loss defined as: $$\label{eqn:L1} L_{1_{Loss}}=\frac{\sum_i|SR_i-HR_i|}{\sum_ii}$$ as it has been shown to converge SRCNN networks faster, with better results [@cycleGAN]. This is suspected to be due to a better representation of the high frequency noise and texture of an image, resulting in a lower local minimum during training. ![Architecture of the modified EDSR network. All ReLU activation layers are replaced with PReLU layers, and the subpixel convolutions are activated by PReLU for additional non-linearity during the critical upscaling layers[]{data-label="fig:edsr"}](EDSRPRELU.png){width="\textwidth"} The modified EDSR is trained for 100 epochs at 1000 iterations per epoch with mini-batches of 16 cropped 192x192 images using an L1 loss as in Eqn \[eqn:L1\] with a learning rate of 1e-4 using the Adam optimiser [@AdamKingma]. The PSNR metric is tracked during the training and validation of the DeepRock-SR dataset, and is calculated assuming that the HR and SR image pixel values lie between \[-1,1\] such that $I=2$: $$\label{eqn:PSNR} PSNR=10log_{10}\frac{I^2}{L_{2_{Loss}}}$$ Further training may result in a higher final SRCNN-only PSNR score, however previous SRCNN tests on the DRSRD1 dataset [@DRSRD1] has indicated that 100 epochs of 1000 iterations of 16x192x192 batches are typically enough to saturate training for digital rocks, with the PSNR plateauing at around 50 epochs using a learning rate of 1e-4. After the training of this SRCNN-only model is done, the SRGAN is initialised and trained. Super Resolution Generative Adversarial Network {#sec:SRGAN} ----------------------------------------------- The GAN network attaches onto the generative SRCNN network in the form of an image classification type network, a discriminator that identifies and labels input images as real and fake. The discriminator is trained on the SR and HR images, gradually becoming better at identifying SR and HR images. As the discriminator improves, its classification feedback is passed back to the generator to allow it to generate progressively more realistic SR images that attempt to fool a progressively improving classifier. The discriminator network is built with 8 discriminator blocks, with each block containing a convolutional layer of kernel size 3, stride 2 and filter numbers increasing from 64 to 512, LReLU activation ($\alpha=0.2$), and batch normalisation as can be seen in Figure \[fig:discrimNet\]. The network ends with 2 dense layers followed by sigmoid activation to obtain probabilistic values. ![Architecture of the Discriminator network[]{data-label="fig:discrimNet"}](discrimNet.png){width="\textwidth"} The discriminator ultimately outputs a single value between \[0,1\] for each image it is classifying, that acts as a logit, or a “probability”, that the image is either real (1) or fake (0). These probabilities are discretised appropriately when classifying images. 
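A corresponding sketch of the discriminator is shown below (again in modern `tf.keras`). The eight stride-2 blocks, LReLU slope, batch normalisation and final sigmoid follow the description above; the exact filter schedule between 64 and 512, the dense-layer width and the fixed 192x192 input crop are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_discriminator(channels=1):
    """Eight conv blocks (kernel 3, stride 2), LReLU(0.2) and batch norm,
    followed by two dense layers and a sigmoid output."""
    hr = layers.Input(shape=(192, 192, channels))       # assumed HR/SR crop size
    x = hr
    filters = [64, 64, 128, 128, 256, 256, 512, 512]    # assumed progression from 64 to 512
    for f in filters:
        x = layers.Conv2D(f, 3, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024)(x)                           # assumed dense width
    x = layers.LeakyReLU(0.2)(x)
    p = layers.Dense(1, activation="sigmoid")(x)        # "probability" that the input is real
    return tf.keras.Model(hr, p, name="discriminator")

print(build_discriminator().count_params())
```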
The SR and HR images are labelled as $y_{HR}=1$ and $y_{SR}=0$, and the binary cross entropy (BXE) losses of the discriminator output $p(\cdot)$ over a batch of $N$ images are given by: $$\label{eqn:BXE}
\begin{aligned}
{BXE}_{HR}&=-\frac{1}{N}\sum_{i=1}^N\log(p(HR_i))\\
{BXE}_{SR}&=-\frac{1}{N}\sum_{i=1}^N\log(1-p(SR_i))
\end{aligned}$$ Since the output of the discriminator has passed through a sigmoid function, the Binary Cross Entropy calculated here is also known as the Sigmoid Cross Entropy. The discriminator network is trained with the Adam optimiser and a learning rate of 1e-4 to minimise the total classification loss between real and fake images: $$\label{eqn:dLoss}
D_{Loss}={BXE}_{HR}+{BXE}_{SR}$$ The GAN begins training after the generator $G$ has reached its prescribed number of iterations (see Section \[sec:SRCNN\]), and both GAN and SRCNN are trained for 150,000 further iterations. To encourage the generation of texture and features that a typical SRCNN generator would omit, the pixelwise L1 loss is supplemented with a perceptual loss parameter [@SRGANledig] calculated as the mean squared error between the SR and HR feature maps obtained as the fully convolutional output from the 16th layer of the 19 layer VGG network [@perceptLoss2]. This term, the $VGG_{19_{Loss}}$, corresponds to the 4th convolutional output prior to the 5th max-pooling layer in the VGG-19 network. The feature maps obtained by passing an image through the network, $\phi$, are used to compute this loss as: $$\label{eqn:vgg}
VGG_{19_{Loss}}=\frac{1}{N}\sum_i(\phi_{SR_i}-\phi_{HR_i})^2$$ The generator and discriminator are coupled together by passing the discriminator SR outputs $p(SR)$ into the generator as part of the generative loss function. The adversarial loss term (ADV) takes the form of a BXE against labels of 1: $$\label{eqn:adv}
{adv}_{Loss}=-\frac{1}{N}\sum_{i=1}^N\log(p(SR_i))$$ This term is minimised when the discriminator classifies the SR images as real images (1 label), despite being trained to classify SR images as fake (0 label) based on Eqn \[eqn:BXE\]. The generator loss function is now a hybrid function defined as: $$\label{eqn:gLoss}
{G}_{Loss}=L_{1_{Loss}}+\alpha VGG_{19_{Loss}}+\beta {adv}_{Loss}$$ where $\alpha$ and $\beta$ are scaling terms. In this paper, we take them to equal 1e-5 and 5e-3 respectively to scale near the magnitude of the L1 loss. The choice of the scaling terms will affect the accuracy of the SRGAN network, as the generation of perceptually accurate features driven by the discriminator works adversarially to the generation of pixelwise accurate features driven by the L1 loss. As a whole ensemble, the network of generator and discriminator is depicted in Figure \[fig:SRGANnet\].

![Overall architecture of the network, showing the coupling of Generator and Discriminator Networks[]{data-label="fig:SRGANnet"}](SRGANNet.png){width="\textwidth"}

Results and Discussion {#sec:Results and Discussion}
======================

Training and Validation Metrics {#sec:Training and Validation Metrics}
-------------------------------

The network is trained using the first 9,600 images of the shuffled subset of the DeepRock-SR dataset, which comprises a selection of $\mu$CT images listed in Table \[tab:DRSRD3Table\]. The parameters used are outlined in Sections \[sec:SRCNN\] and \[sec:SRGAN\]. Losses and metrics are tracked over the 250 training epochs (250,000 iterations) as a moving average with a window of 1000 iterations.
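For reference, the loss terms defined in Eqns \[eqn:L1\], \[eqn:PSNR\], \[eqn:BXE\], \[eqn:adv\] and \[eqn:gLoss\] can be collected into a short NumPy sketch. This is an illustrative summary only; the feature maps $\phi_{SR}$ and $\phi_{HR}$ are assumed to come from a pretrained VGG-19 network and are treated here as given arrays.

```python
# Minimal NumPy sketch of the tracked loss terms; phi_sr and phi_hr are assumed
# to be VGG-19 feature maps supplied by a pretrained network.
import numpy as np

def l1_loss(sr, hr):                                # Eqn (L1): pixelwise mean absolute error
    return np.mean(np.abs(sr - hr))

def l2_loss(sr, hr):                                # Eqn (L2): pixelwise mean squared error
    return np.mean((sr - hr) ** 2)

def psnr(sr, hr, dynamic_range=2.0):                # Eqn (PSNR), images scaled to [-1, 1]
    return 10.0 * np.log10(dynamic_range ** 2 / l2_loss(sr, hr))

def discriminator_loss(p_hr, p_sr, eps=1e-12):      # Eqns (BXE) and (dLoss)
    bxe_hr = -np.mean(np.log(p_hr + eps))           # real images, label 1
    bxe_sr = -np.mean(np.log(1.0 - p_sr + eps))     # fake (SR) images, label 0
    return bxe_hr + bxe_sr

def adversarial_loss(p_sr, eps=1e-12):              # Eqn (adv): generator wants p(SR) -> 1
    return -np.mean(np.log(p_sr + eps))

def generator_loss(sr, hr, phi_sr, phi_hr, p_sr, alpha=1e-5, beta=5e-3):
    vgg = np.mean((phi_sr - phi_hr) ** 2)           # Eqn (vgg): feature-space MSE
    return l1_loss(sr, hr) + alpha * vgg + beta * adversarial_loss(p_sr)   # Eqn (gLoss)
```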
The training schedule consists of 100 epochs of SRCNN generator training and a further 150 epochs of SRGAN training with the discriminator active. Figure \[fig:genLosses\] shows the generator losses as described in Eqn \[eqn:gLoss\], with the L1 pixelwise loss, the VGG featurewise loss, and the ADV adversarial classification loss. The L1 loss falls to a plateau during the first 100 epochs and results show a 0.053 L1 error between SR and HR images in the training dataset. This translates to a training PSNR of 32, shown in Figure \[fig:trainValPSNR\], and an equivalent L2 error of 0.0025 (calculated from Eqn \[eqn:PSNR\]). These are consistent with typical expected results for SRCNN training on digital rocks [@wang2019super], and will be elaborated upon in Section \[sec:Validation and Testing Analysis\] below. The VGG and ADV losses are activated after 100 epochs, and fall to a plateau over 150 further epochs. The VGG loss falls from 0.045 to 0.041, while the ADV loss drops from 0.025 to 0.0125. During the SRGAN training, the L1 loss rises to 0.071 with an associated PSNR of 29.7. Similarly during SRGAN training, the discriminator metrics are tracked (see Figure \[fig:discLosses\]) and show a stable training of the discriminator, plateauing to an overall $D_{loss}$ value of 0.8. The separate classification probabilities $p(HR)$ and $p(SR)$ (see Eqn \[eqn:BXE\]) are also plotted, and show that the final equilibrium between generator and discriminator results in a classification accuracy of 75%.

![Generative losses during training of SRGAN. SRCNN is trained with only a pixelwise L1 loss for the first 100 epochs, followed by the activation of SRGAN, with VGG feature losses and adversarial losses falling over epochs 101-250 with an increase in L1 losses.[]{data-label="fig:genLosses"}](lossTrainFig.png){width="\textwidth"}

![Discriminative losses during training of SRGAN. From epochs 101-250, the discriminator losses first fall as it is trained to distinguish SR and HR images, then rise as the generator catches up and begins to fool the discriminator. A perfect generator would result in a $D_{loss}$ of 1, and classification losses of 0.5.[]{data-label="fig:discLosses"}](lossDTrainFig.png){width="\textwidth"}

As the training is run, at the end of every epoch (1000 mini-batch iterations), 1200 full-size 500x500 images from the DeepRock-SR-2D shuffled dataset are used to validate the generator. The results are plotted with the training PSNR calculated on the 1000 16x192x192 mini-batches in each training epoch, shown in Figure \[fig:trainValPSNR\]. PSNR values obtained during validation track closely to the training PSNR, with a reduction of 0.2 to 0.3, settling to approximately 29.5 compared to 29.7. The variation in the validation PSNR increases significantly when the SRGAN is active, reflected in the VGG and ADV losses in Figure \[fig:genLosses\] due to the adversarial interplay between generator and discriminator. From both Figures \[fig:trainValPSNR\] and \[fig:genLosses\], it can be seen that there are potentially further improvements in the SRCNN performance past 100 epochs, as the initial generator training has not completely plateaued when the discriminator is activated.

![PSNR metrics over the training epochs. Like the L1 loss, the PSNR value falls once the discriminator is activated as texture is regenerated.
Training and validation PSNR values track closely, indicating that the mini-batch window size of 192x192 is sufficient to capture details of the images.[]{data-label="fig:trainValPSNR"}](trainValPSNR.png){width="\textwidth"}

During SRGAN training, the rise in the L1 loss in order to obtain lower VGG and ADV losses is indicative of limitations in the SRGAN network architecture. In an ideal optimisation, the increase in higher frequency texture features should also be reflected in the L1 loss. However, the Adam optimisation [@AdamKingma] (or likely any optimiser) utilised in SRGAN to obtain these texture features is in competition with the L1 loss, which tends to favour smoother, averaged pixels. The presence of the VGG loss tends to act as a bridging parameter between the smoothing effect of the L1 loss and the texture generation of the ADV loss [@SRGANledig].

Validation and Testing Analysis {#sec:Validation and Testing Analysis}
-------------------------------

The validation performed during training of the network (see Figure \[fig:trainValPSNR\]) uses a bulk shuffled collection of 1,200 images from the DeepRock-SR dataset. This is expanded upon in this section to include the 1,200 images in the testing section of the dataset, split apart into sandstone, carbonate, and coal subsets. An initial visualisation of the resulting validation images is shown in Figure \[fig:trainValImgs\], and difference maps with respect to the HR images are shown in Figure \[fig:trainValDiffs\]. The LR image (top row) and HR image (2nd row) are used as comparison points to the bicubically (BC) generated image, SR image, and SRGAN image that are generated only from the LR image. Inspection of these sample validation images indicates that there is an increasing trend of visual sharpness and texture from BC to SR to SRGAN images. While present in all 3 types of rock images, this is especially apparent in the carbonate images that contain highly heterogeneous features such as oolitic vugs and high frequency texture associated with microporosity.

![Sample images from the validation of the network, highlighted with a Jet colourmap for visual clarity. From the LR image, a bicubic image is generated, an SR image is generated from the epoch prior to activation of the GAN, and the SRGAN image is generated at the end of the training. Visually, the results show a gradual improvement in feature recovery until the SRGAN images that look perceptually identical to the HR images.[]{data-label="fig:trainValImgs"}](bigImgFigHR.png){width="\textwidth"}

The difference maps of these sample images, shown in Figure \[fig:trainValDiffs\], provide a better indication of the ability of each method to recover important features from a single LR image. The BC images struggle to regenerate large scale features such as the edges of coal fractures and grain contacts in sandstone and carbonates. The SR images lose intragranular detail but recover larger features well, which is useful for cases where the details within each phase are irrelevant. This is typically the case for well resolved grains and pores of sandstones. SRGAN images still tend to struggle to achieve a pixelwise match to the HR image. The perceptual texture and sharpness seen in the images in Figure \[fig:trainValImgs\] are shown by the difference maps in Figure \[fig:trainValDiffs\] not to contribute any significant improvement in accuracy.
This is most noticeable in the first sandstone image (from the left), where there exists a region of dissolved microporous mineral that is below the resolution of both the HR and LR images. The SR image results in a mostly blurry and featureless characterisation of this under-resolution area, while the SRGAN recovers very convincingly the mineralised features of this region. The difference maps of this area, however, show that the overall pixelwise accuracy has not improved. Aside from the texture regeneration, it can be seen in the coal sample images that SRGAN also improves upon the edge recovery of SRCNN, resulting in a better match with the well-defined fracture features of the coal images.

![Difference maps of the sample images shown in Figure \[fig:trainValImgs\]. This more quantitative analysis of the images reveals the benefits and limitations of the SRGAN results. No distinct improvement in pixelwise accuracy in the sandstone and carbonate examples can be seen despite the considerable textural and perceptual improvement. Coal SRGAN images further improve the edge recovery from the SR images, likely because the SRCNN method struggles to cope with the high contrast and thin fracture features.[]{data-label="fig:trainValDiffs"}](bigDiffBoxFig.png){width="\textwidth"}

Comparative validation of the SRCNN and SRGAN results is performed on separated subsets of sandstone, carbonate, and coal images, each comprising 800 500x500 images. The PSNR metrics for each subset are computed and shown in Figure \[fig:boxplotFig\], and show that SR images outperform bicubic interpolation, while SRGAN images are less accurate on a pixelwise basis. The overall PSNR boosts are given in Table \[tab:valTestPSNRTable\], and similarly show that the SRCNN network results in a noticeable boost in the pixelwise accuracy, while the SRGAN performs poorly in comparison. The variance of the dataset PSNR changes when applying SRCNN and SRGAN methods compared to bicubic methods. For the sandstone dataset, the highly filtered and smooth Gildehauser sandstone sample results in high SR PSNR values, as the smoothness of the image can be inferred by the network whereas bicubic interpolation cannot infer it. Similarly, the significant reduction in PSNR variance in the coal samples when using SR methods can be attributed to the presence of thin fracture features that bicubic methods struggle to regenerate, resulting in low PSNR values.

![Boxplots of the PSNR values computed by comparison of original HR images to BC, SR, and SRGAN images for sandstone, carbonate, and coal. Results show that SRCNN outperforms bicubic interpolation, while SRGAN performs worse.
The texture recovery capabilities of the SRGAN network are not quantifiable by the PSNR metric.[]{data-label="fig:boxplotFig"}](boxplotFig.png){width="\textwidth"}

| Method | Sandstone Mean | Sandstone Var | Carbonate Mean | Carbonate Var | Coal Mean | Coal Var |
|---|---|---|---|---|---|---|
| Bicubic | 25.8822 | 0.6802 | 22.9768 | 1.9896 | 39.8738 | 10.3738 |
| SRCNN | 28.5986 | 6.1992 | 24.3879 | 1.7475 | 42.6653 | 3.7016 |
| SRGAN | 26.2118 | 8.3945 | 21.8533 | 1.3415 | 40.6061 | 4.4339 |

: PSNR results for rock types in the validation and testing DeepRock-SR dataset[]{data-label="tab:valTestPSNRTable"}

While it is visually clear from Figures \[fig:trainValImgs\] and \[fig:trainValDiffs\] that SRGAN images possess improvements in perceptual, high-frequency textural features while maintaining accuracy of the bulk features that are recovered by SRCNN, this cannot be quantified appropriately using averaging metrics such as the PSNR, since the generated texture is not a pixelwise match.

External Sample Testing {#sec:External Sample Testing}
-----------------------

Testing of the SRCNN generator on unseen images is expected to result in less error compared to previous SRCNN tests on external samples, since the training and validation sets are derived from the larger and more diverse DeepRock-SR dataset. However, the samples used in this section are completely unlike the validation/testing sets, which are unseen subsections of training images. Therefore, the images used may lie further from the manifold of features that the network is trained on.

![Samples of the external rock images of Bentheimer (left) and Ketton Carbonate (right). Similar characteristics can be observed in terms of the relative performance of bicubic interpolation, SRCNN, and SRGAN. Images show that SRCNN recovers bulk features with high accuracy, while SRGAN regenerates a visually similar textured image to the HR counterpart.[]{data-label="fig:extDiffBoxFig"}](extDiffBoxFig.png){width="\textwidth"}

![Boxplots of the external rock images of Bentheimer (left) and Ketton Carbonate (right). PSNR pixelwise error shows the typical boost over bicubic interpolation that SRCNN achieves, while SRGAN is less accurate. Overall, these are again in line with expected results obtained during validation and testing in Section \[sec:Validation and Testing Analysis\].[]{data-label="fig:extboxplotFig"}](extboxplotFig.png){width="\textwidth"}

Testing the network on these completely unseen images shows that SRCNN performance is impacted to a slight degree in terms of the PSNR, seen in Figure \[fig:extboxplotFig\], but retains its superiority to BC images in terms of edge feature recovery and overall pixelwise matching, seen in Figure \[fig:extDiffBoxFig\]. Similarly, SRGAN generated images of the external samples remain visually similar to their HR counterparts. PSNR performance is as expected, with SRCNN improving over bicubic methods, and SRGAN resulting in a decline in pixelwise accuracy. The more globular features of Bentheimer Sandstone and Ketton Carbonate are less sensitive to the edge-loss incurred with bicubic interpolation. While the SRCNN and SRGAN generated images for these external samples are significant improvements on interpolation methods in terms of their features and textures, deviation from the ground truth is unavoidable to an extent, as deep learning is highly reliant on large amounts of representative data. Further deviations can be expected with images of low contrast (not included in the DeepRock-SR dataset) or with other characteristics absent from the training data.
Testing network performance on completely unseen digital rocks inevitably results in a performance penalty that can be rectified by increasingly diverse training data.

LR Augmentation for Generalisation of Blur and Noise {#sec:Augmentation}
----------------------------------------------------

Training SRCNN and SRGAN networks using synthetic data generated with downsampling kernels ultimately limits the capabilities of the resulting generator when dealing with images that contain characteristics that are not present in synthetic images. In particular, the image noise may be different and there may be some blur associated with the LR image. While maintaining synthetic sample training, it has been shown that augmenting the LR training images with noise and blur will significantly impact SRCNN performance [@wang2019super]. Blur is applied as a Gaussian smoothing kernel with a standard deviation randomly sampled from 0 to 1, and noise is applied as Gaussian white noise with mean of 0 and variance randomly sampled between 0 and 0.005. The resulting LR training images are depicted in Figure \[fig:noisyTrainingComparisonFig\]. Previously published results indicate that application of these augmentation parameters results in SR images that are highly denoised with very sharp and accurate edges. This is due to the increase in the "unlearnable" randomness of the LR to HR mapping. This extreme loss of intraphase detail is beneficial for segmentation of well resolved features such as sandstone grains, but is detrimental to characterising the microporosity and under-resolution features present in coal and carbonate images. Training on these augmented images for the full schedule of 250 epochs of 1000 iterations, with GAN training starting from epoch 101, results in validation PSNR values that are given in Figure \[fig:SynAugTrainValPSNR\], showing a clear reduction in SR performance from a pixelwise perspective. The SR images generated from the network trained on synthetic images, and the ones generated from the network trained on augmented images, will hereinafter be referred to simply as "synthetic" and "augmented" SR images.

![Sample images of the resulting augmented LR image compared to the synthetic LR image, and the original HR image.[]{data-label="fig:noisyTrainingComparisonFig"}](noisyTrainingComparisonFig.png){width="\textwidth"}

![Validation PSNR achieved over training epochs for the augmented dataset training, compared to the synthetic data from Figure \[fig:trainValPSNR\]. There is a significant drop in network performance, as the augmented LR images provide generalisation to the SR results by adding unlearnable randomness to the LR-HR pairing.[]{data-label="fig:SynAugTrainValPSNR"}](SynAugTrainValPSNR.png){width="\textwidth"}

Example SR images obtained from the synthetic and augmented networks, shown in Figure \[fig:synAugBoxFig\], show that synthetic images outperform augmented images in feature recovery, though both retain larger-scale edge features [@wang2019super]. The GAN trained results again show little perceptual difference. There are losses in the detail in coal SR images during augmentation, likely due to an excessive augmentation of noise and blur, which affects the lower-contrast coal images more significantly.
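A minimal NumPy/SciPy sketch of the blur and noise augmentation described above is given below; greyscale LR images scaled to \[0,1\] and per-image random sampling of the augmentation parameters are assumptions.

```python
# Sketch of the LR augmentation: Gaussian blur with std drawn from [0, 1] and
# additive Gaussian white noise with variance drawn from [0, 0.005].
import numpy as np
from scipy.ndimage import gaussian_filter


def augment_lr(lr_image, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    sigma_blur = rng.uniform(0.0, 1.0)              # blur strength
    noise_var = rng.uniform(0.0, 0.005)             # noise variance
    blurred = gaussian_filter(lr_image, sigma=sigma_blur)
    noisy = blurred + rng.normal(0.0, np.sqrt(noise_var), size=lr_image.shape)
    return np.clip(noisy, 0.0, 1.0)                 # clipping to [0, 1] is an assumption
```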
Boxplots of the PSNR over the 800 validation and testing images of sandstone, carbonate, and coal in Figure \[fig:AugboxplotFig\] show that the PSNR is affected adversely, with the loss of intraphase features resulting in lower PSNR values than even bicubic methods. The overall PSNR is lower in augmented images, but the final result is compensated for by the discriminator, which regenerates the texture in a perceptually convincing manner.

![Sample images generated from training with synthetic and augmented datasets. Compared to the synthetic SR images (also shown in Figure \[fig:trainValImgs\]), augmented SR images provide generalisation to the SR results by adding unlearnable randomness to the LR-HR pairing. SRGAN images, on the other hand, remain perceptually convincing regenerations of the HR image, as any losses in the pixelwise recovery of intraphase detail are compensated for by the GAN. The lower contrast coal images are significantly affected by the augmentation, which is likely too excessive for the image characteristics.[]{data-label="fig:synAugBoxFig"}](synAugBoxFig.png){width="\textwidth"}

![Box plots of the PSNR achieved using synthetic and augmented training datasets. Augmented training results in a reduction in PSNR performance, and in the case of SRGAN images and coal images, lower performance than even bicubic methods. However, a visual inspection of the sample images in Figure \[fig:synAugBoxFig\] reveals that the losses in the pixelwise accuracy do not occur at the edges of important features, but instead occur in the intraphase regions of the image.[]{data-label="fig:AugboxplotFig"}](AugboxplotFig.png){width="\textwidth"}

Despite the reduced PSNR achieved by the augmented training, the seemingly high fidelity texture regeneration shows that sandstone and carbonate Aug-SRGAN images result in a similar visual texture match with the Syn-SRGAN images, indicating that the GAN texture that is regenerated is spatially similar to the HR images. The lower performance of coal images is due to the presence of thin fracture features that are lost or poorly estimated when the LR images are augmented with blur and noise, seen in a sample image in Figure \[fig:coalAugFracFig\], and present throughout the generated augmented SR images. With a less aggressive augmentation, it can be expected that the network will appropriately recover the features and textures. A full repository of the results can be found in conjunction with the DeepRock-SR dataset [@DRSRD3].

![Sample image of how the blur and noise augmentation causes an overestimation of the fracture aperture in coal images, as the fracture features are too attenuated by the augmentation process.[]{data-label="fig:coalAugFracFig"}](coalAugFracFig.png){width="\textwidth"}

To illustrate and quantify the limitations and different results that can be expected between synthetic and augmented training, we use a Bentheimer sample with a resolution of 7 $\mu$m as an LR input and generate SR images. This sample contains some natural blur and noise that is outside the manifold on which synthetic LR images exist, and thus outside the parameters learned from training on synthetically downsampled images. From the original 7 $\mu$m, noisy and blurry image, a few SR images can be generated from the trained networks, shown in Figure \[fig:bentRamFig\]. The synthetically trained networks generate images that contain higher resolution features and textures, but remain blurry and noisy.
The augmented networks, in contrast, are capable of sharpening the input image, as LR image blur is a recognised input feature, and their results are generally perceptually superior.

![Sample images of the 7 $\mu$m, noisy and blurry Bentheimer sample, together with SR images generated at a scaling factor of 4x (resulting in a 1.75 $\mu$m resolution) by the networks trained with synthetic and augmented images. It can be seen that the synthetic images do not account for the blur present in the original images, and generate perceptually unsatisfying results. The augmented networks are capable of accounting for image blur and thus generate higher fidelity results.[]{data-label="fig:bentRamFig"}](bentRamFig.png){width="\textwidth"}

Super Resolution of High Resolution Images {#sec:HRSR}
------------------------------------------

Since the network is trained on downsampled LR counterparts (synthetic and augmented) of HR ground truth images, the HR images themselves are unseen by the trained network. Feeding the HR images as input LR images will generate HR-SR images with an image resolution higher than the range that the training is conducted with. The Bentheimer sample 1 [@DRSRD1], Estaillades Carbonate [@estCarb], and Sheared Coal [@shearedCoal] samples are used to represent the different rock types of interest, shown in Table \[tab:HRSRtable\]. The DeepRock-SR dataset contains sandstone and carbonate images with an HR to LR feature mapping of around 3-5 $\mu$m to 12-20 $\mu$m, and a coal mapping of 25 $\mu$m to 100 $\mu$m. Feeding the trained network an input sandstone image of, for example, 4 $\mu$m and expecting an accurate SRGAN image with a resolution of 1 $\mu$m is an extrapolation of performance. The SRCNN and SRGAN synthetic generators are applied to the HR images of each sample, and samples of the HR-SR images are outlined in Table \[tab:HRSRtable\] and shown in Figure \[fig:HRSRFig\].

| Name | HR Resolution ($\mu$m) | HR-SR Resolution ($\mu$m) |
|---|---|---|
| Bentheimer Sandstone | 3.8 | 0.95 |
| Estaillades Carbonate | 3.1 | 0.7525 |
| Sheared Coal | 16 | 4 |

: Samples used for HR-SR generation[]{data-label="tab:HRSRtable"}

![Sample images of HR images in the DeepRock-SR dataset, used as inputs for generating HR-SR images. Features that are near the limit of the image resolution become resolved and textured.[]{data-label="fig:HRSRFig"}](HRSRFig.png){width="\textwidth"}

In this case, there are no ground truth images to compare against quantitatively, so PSNR metrics of pixelwise accuracy are unavailable to analyse. Of particular interest is the generation of sub-pixel features of micro-porous or poorly resolved regions, such as the partially dissolved mineralisation found in the Bentheimer image, the micro-porous texture of the Estaillades Carbonate, and any under-resolved fractures present in the coal images. These features typically require SEM imaging or nano-CT to resolve properly and will form part of our future work in this area of research. In the generated SR and SRGAN images, the under-resolved features that were present in the original HR images become resolved and sharpened, with texture regenerated by the SRGAN. In the cases of sandstone and carbonate features, SRGAN results are visually sharp, with textures appearing qualitatively as expected in a typical greyscale $\mu$CT image. The coal image, on the other hand, appears to remain perceptually blurry and undertextured. However, under-resolved fracture features are clearly resolved in the HR-SR image, shown in Figure \[fig:HRSRFig\] by the green bounding boxes.
It is important to emphasise that by applying the trained network to images with a voxel size mapping (1-5 $\mu$m) that is outside the range of training (5-20 $\mu$m), the primary assumption is that the type of texture exists at the smaller length scales of the domain. While an exercise in extrapolation, this suggests that if the network (or similar SRGAN architecture) is trained on a wider range of image resolutions (obtained from nano-CT, SEM, or otherwise), performance would remain accurate.

Direct Visual Comparison with SEM Images {#sec:HRSRSEM}
----------------------------------------

A final test is performed using the trained network by taking an SEM scan of a slice from a Mt Simon sandstone and comparing its detail and features with the equivalent $\mu$CT image. The SEM image measures 7047x6226 pixels, at a resolution of 0.5 $\mu$m, while the equivalent CT image was imaged at 1.7 $\mu$m and resampled to the suitable 4x image size of 1762x1557, resulting in a voxel size of 2 $\mu$m. These testing images can be seen in Figure \[fig:SEMandLRCT\]. The lower resolution $\mu$CT image is super resolved to boost the resolution and recover the texture of the image. Once again, this particular use case involves images with voxel sizes outside the range of the DeepRock-SR training dataset. The SEM image characteristics are also very different from the $\mu$CT, as the SEM possesses high image sharpness, showing each pixel discretely, unlike the $\mu$CT images, which are more diffuse with information spanning multiple pixels. Since the network was trained on such diffuse images, it is not possible under the current training scheme to reach image detail levels in the range of the SEM image. A selection of small subsets of the images is shown in Figure \[fig:HRSRSEMFig\], and shows a clear visual improvement in the regeneration of features from a low resolution source image compared to a bicubic upsampling. There are some convolutional artefacts present in the SRGAN $\mu$CT generated images due to the extrapolative nature of this test case, but the results show that the images generated by EDSRGAN improve connectivity and feature detail in a manner that is visually consistent with the high fidelity SEM images. Aside from convolutional artefacts from network extrapolation, the main sources of comparative error come from (a) the inherently more diffuse quality of the $\mu$CT images that the network is trained on, (b) the diffuse information that $\mu$CT image pixels contain from the previous and subsequent slices, and (c) registration error and changes in grain locations between imaging.

![$\mu$CT image (left) and registered SEM image (right) of Mt Simon slice[]{data-label="fig:SEMandLRCT"}](LRCT.png){width="\textwidth"}

![$\mu$CT image (left) and registered SEM image (right) of Mt Simon slice[]{data-label="fig:SEMandLRCT"}](HRSEM.png){width="\textwidth"}

![Sample subsections of the Mt Simon sandstone, with comparison between the original low resolution, the bicubic upsample, the SRGAN upsample, and the registered SEM image. There is a clear improvement in image quality achieved by the use of the EDSRGAN network.[]{data-label="fig:HRSRSEMFig"}](HRSRSEMFig.png){width="\textwidth"}

Conclusions
===========

By training the network with the DeepRock-SR dataset, the inherent hardware limitations with FOV and resolution of Digital Rock Images can be compensated for. A low resolution, noisy, blurry image of sandstone, carbonate or coal can be transformed into a high fidelity, accurately textured SR image.
This is achieved by combining an SRCNN, which the difference maps have shown to excel at capturing edge details, with a GAN that recovers high frequency texture details. The SRCNN network shows a 3-5 dB boost in pixel accuracy (50% to 70% reduction in relative error) compared to bicubic interpolation, while SRGAN images show superior texture similarity compared to plain SRCNN and interpolation methods. Extrapolation of training with external, morphologically distinct samples remains similarly accurate based on pixel error and visual texture analysis. Generalising the LR to HR mapping with augmentation results in a high adaptability to noise and blur. HR-SR images generated by feeding HR images into the network to extrapolate performance to sub-resolution features in the HR images themselves show that under-resolution features are recovered in high detail. Dissolved minerals and thin fractures are regenerated despite the network operating outside of trained specifications. Direct comparison to SEM images of a Mt Simon sandstone shows that, even in the extrapolative tests, the generated details are consistent with the underlying geometry of the sample, which bicubic interpolations are unable to achieve. The neural network architecture used in this study can be made more efficient with wide activation network trimming [@WDSR], or more flexible with unpaired, unsupervised cyclical networks [@circleGAN]. The choice of loss function weights will affect the interplay between generator and discriminator (features and textures), and the availability of training data and augmentation further adds avenues of algorithmic improvement. In its current state, the network is operational and adequately effective in its accuracy and flexibility for the purposes of illustrating and quantifying the general effectiveness of SRGAN methods. SRGAN images are accurate on both a feature and texture basis when applied to unseen validation and testing LR images within the boundaries of the dataset used for training, as seen in Figure \[fig:trainValDiffs\]. However, when there does not exist a ground truth image to calculate comparative metrics with, there is understandable scepticism and suspicion towards the texture generated within the intraphase regions by SRGAN methods, which is especially true in the case of extrapolative uses of trained networks. The use of 2D super resolution models on 3D data ignores the depth dimension, which limits this study as a proof of concept for the recovery and generation of micro-CT images that can be segmented for digital rock workflows. Application of specialised networks to 3D super resolution of micro-CT images and coupled Super Resolution with Segmentation is a natural progression from this work. The recovery of both features and texture from LR images is beneficial for characterising digital rocks that are dominated by under-resolution micro-porous features, such as in carbonate and coal samples. Overall, images can be constrained by the brittle mineralogy of the rock (coal), by lower quality fast transient imaging (waterflooding), or by the energy of the source (microporosity). These limitations can be overcome, and the images super resolved accurately, for further analysis downstream. The neural network architecture and training methodology used in this study provide the tools necessary to generate HR $\mu$CT images that exceed typical imaging limits.
Acknowledgements ================ We would like to acknowledge both the Tyree X-ray facility (<http://www.tyreexrayct.unsw.edu.au/>), and the Digital Rocks Portal (<https://www.digitalrocksportal.org/projects/>) for providing images that were analysed in this paper. Images used and generated in this paper can be found in the DeepRock-SR repository on the Digital Rocks Portal. [10]{} W Brent Lindquist, Sang‐Moon Lee, David A Coker, Keith W Jones, and Per Spanne. Medial axis analysis of void structure in three‐dimensional tomographic images of porous media. , 101(B4):8297–8310, 1996. RD Hazlett. , pages 21–35. Springer, 1995. Dorthe Wildenschild and Adrian P Sheppard. X-ray imaging and analysis techniques for quantifying pore-scale structure and processes in subsurface porous medium systems. , 51:217–246, 2013. Steffen Schlüter, Adrian Sheppard, Kendra Brown, and Dorthe Wildenschild. Image processing of multiphase images obtained via x-ray microtomography: A review. , 50(4):3615–3639, 2014. Ying Da Wang, Traiwit Chung, Ryan T. Armstrong, James E. McClure, and Peyman Mostaghimi. Computations of permeability of large rock images by dual grid domain decomposition. , 126:1–14, 2019. T. Chung, Y.D. Wang, Mostaghimi P., and Armstrong RT. Approximating permeability of micro-ct images using elliptic flow equations. , 2018. Ying Da Wang, Traiwit Chung, Ryan T. Armstrong, James McClure, Thomas Ramstad, and Peyman Mostaghimi. Accelerated computation of relative permeability by coupled morphological-direct multiphase flow simulation, 2019. Peyman Mostaghimi, Martin J Blunt, and Branko Bijeljic. Computations of absolute permeability on micro-ct images. , 45(1):103–125, 2013. Paulina Krakowska, Marek Dohnalik, Jadwiga Jarzyna, and Kamila Wawrzyniak-Guz. Computed x-ray microtomography as the useful tool in petrophysics: A case study of tight carbonates modryn formation from poland. , 31:67–75, 2016. Lin A Ferrand and Michael A Celia. The effect of heterogeneity on the drainage capillary pressure-saturation relation. , 28(3):859–870, 1992. Brian P Flannery, Harry W Deckman, Wayne G Roberge, and Kevin L d’Amico. Three-dimensional x-ray microtomography. , 237(4821):1439–1444, 1987. J Coenen, E Tchouparova, and X Jing. Measurement parameters and resolution aspects of micro x-ray tomography for advanced core analysis. In [*proceedings of International Symposium of the Society of Core Analysts*]{}. Zeyun Jiang, MIJ Van Dijke, Kenneth Stuart Sorbie, and Gary Douglas Couples. Representation of multiscale heterogeneity via multiscale pore networks. , 49(9):5437–5449, 2013. Steffen Schl[ü]{}ter, Adrian Sheppard, Kendra Brown, and Dorthe Wildenschild. Image processing of multiphase images obtained via x-ray microtomography: a review. , 50(4):3615–3639, 2014. Zhixing Yi, Mian Lin, Wenbin Jiang, Zhaobin Zhang, Haishan Li, and Jian Gao. Pore network extraction from pore space images of various porous media systems. , 53(4):3424–3445, 2017. Zhengji Li, Qizhi Teng, Xiaohai He, Guihua Yue, and Zhengyong Wang. Sparse representation-based volumetric super-resolution algorithm for 3d ct images of reservoir rocks. , 144:69–77, 2017. Tom Bultreys, Luc Van Hoorebeke, and Veerle Cnudde. Multi-scale, micro-computed tomography-based pore network models to simulate drainage in heterogeneous rocks. , 78:36 – 49, 2015. Wen Hao Chen, Y Yang, Tiqiao Xiao, Sheridan Mayo, Yu Dan Wang, and Haipeng Wang. 
A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure. , 21:586–93, 05 2014. Ying Da Wang, Ryan Armstrong, and Peyman Mostaghimi. Super resolution convolutional neural network models for enhancing resolution of rock micro-ct images, 2019. S. C. Park, M. K. Park, and M. G. Kang. Super-resolution image reconstruction: a technical overview. , 20(3):21–36, May 2003. Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. , volume 38. 2014. S. [Dai]{}, M. [Han]{}, W. [Xu]{}, Y. [Wu]{}, Y. [Gong]{}, and A. K. [Katsaggelos]{}. Softcuts: A soft edge smoothness prior for color image super-resolution. , 18(5):969–981, May 2009. , [Zongben Xu]{}, and [Heung-Yeung Shum]{}. Image super-resolution using gradient profile prior. In [*2008 IEEE Conference on Computer Vision and Pattern Recognition*]{}, pages 1–8, June 2008. Q. [Yan]{}, Y. [Xu]{}, X. [Yang]{}, and T. Q. [Nguyen]{}. Single image superresolution based on gradient profile sharpness. , 24(10):3187–3202, Oct 2015. W. T. [Freeman]{}, T. R. [Jones]{}, and E. C. [Pasztor]{}. Example-based super-resolution. , 22(2):56–65, March 2002. , [Dit-Yan Yeung]{}, and [Yimin Xiong]{}. Super-resolution through neighbor embedding. In [*Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.*]{}, volume 1, pages I–I, June 2004. Jianchao Yang, John Wright, Thomas S. Huang, and Lei Yu. Image super-resolution via sparse representation. , 19:2861 – 2873, 12 2010. S. [Schulter]{}, C. [Leistner]{}, and H. [Bischof]{}. Fast and accurate image upscaling with super-resolution forests. In [*2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*]{}, pages 3791–3799, June 2015. J. [Yu]{}, X. [Gao]{}, D. [Tao]{}, X. [Li]{}, and K. [Zhang]{}. A unified learning framework for single image super-resolution. , 25(4):780–792, April 2014. Yifan Wang, Lijun Wang, Hongyu Wang, and Peihua Li. End-to-end image super-resolution via deep and shallow convolutional networks. , abs/1607.07680, 2016. Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. . 2017. Jiahui Yu, Yuchen Fan, Jianchao Yang, Ning Xu, Zhaowen Wang, Xinchao Wang, and Thomas Huang. Wide activation for efficient and accurate image super-resolution. , abs/1808.08718, 2018. Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. . 2015. Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. . 2017. Kensuke Umehara, Junko Ota, and Takayuki Ishida. , volume 31. 2018. Chenyu [You]{}, Guang [Li]{}, Yi [Zhang]{}, Xiaoliu [Zhang]{}, Hongming [Shan]{}, Shenghong [Ju]{}, Zhen [Zhao]{}, Zhuiyang [Zhang]{}, Wenxiang [Cong]{}, Michael W. [Vannier]{}, Punam K. [Saha]{}, and Ge [Wang]{}. . , page arXiv:1808.04256, Aug 2018. Yukai Wang, Qizhi Teng, Xiaohai He, Junxi Feng, and Tingrong Zhang. Ct-image super resolution using 3d convolutional neural network. , abs/1806.09074, 2018. Chao Dong, Chen Change Loy, and Xiaoou Tang. Accelerating the super-resolution convolutional neural network. , abs/1608.00367, 2016. Wenzhe Shi, Jose Caballero, Ferenc Husz[á]{}r, Johannes Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, and Zehan Wang. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. , abs/1609.05158, 2016. Augustus Odena, Vincent Dumoulin, and Chris Olah. 
Deconvolution and checkerboard artifacts. , 2016. Jun[-]{}Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. , abs/1703.10593, 2017. Alexey Dosovitskiy and Thomas Brox. Generating images with perceptual similarity metrics based on deep networks. , abs/1602.02644, 2016. Justin Johnson, Alexandre Alahi, and Fei[-]{}Fei Li. Perceptual losses for real-time style transfer and super-resolution. , abs/1603.08155, 2016. Ian J. [Goodfellow]{}, Jean [Pouget-Abadie]{}, Mehdi [Mirza]{}, Bing [Xu]{}, David [Warde-Farley]{}, Sherjil [Ozair]{}, Aaron [Courville]{}, and Yoshua [Bengio]{}. . , page arXiv:1406.2661, Jun 2014. Pavel Iassonov, Thomas Gebrenegus, and Markus Tuller. Segmentation of x-ray computed tomography images of porous materials: A crucial step for characterization and quantitative analysis of pore structures. , 45(9), 2009. Ying Da Wang, Peyman Mostaghimi, and Ryan Armstrong. A diverse super resolution dataset of sandstone, carbonate, and coal (deeprock-sr), 2019. A. [Buades]{}, B. [Coll]{}, and J. . [Morel]{}. A non-local algorithm for image denoising. In [*2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05)*]{}, volume 2, pages 60–65 vol. 2, June 2005. Ying Da Wang, Peyman Mostaghimi, and Ryan Armstrong. A super resolution dataset of digital rocks (drsrd1): Sandstone and carbonate, 2019. Anna Herring, Adrian Sheppard, Michael Turner, and Levi Beeching. Multiphase flows in sandstones, 2018. Steffen Berg, Ryan Armstrong, and Andreas Wiegmann. Gildehauser sandstone, 2018. Ayaz Mehmani, Masa Prodanovic, and Kitty L. Milliken. Wilcox tight gas sandstone, 2015. Tom Bultreys. Estaillades carbonate \#2, 2016. Tom Bultreys. Savonnières carbonate, 2016. Tom Bultreys. Massangis jaune carbonate, 2016. D Nicolas Espinoza. Sheared coal sample, 2015. D Nicolas Espinoza. Naturally fractured coal sample, 2015. Kelly M. Guan, Marfa Nazarova, Bo Guo, Hamdi Tchelepi, Anthony R. Kovscek, and Patrice Creux. Effects of image resolution on sandstone porosity and permeability as obtained from x-ray microscopy. , 127(1):233–245, Mar 2019. Thomas Ramstad. Bentheimer micro-ct with waterflood, 2018. Alessio Scanziani, Kamaljit Singh, and Martin Blunt. Water-wet three-phase flow micro-ct tomograms, 2018. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. , abs/1502.01852, 2015. Diederik Kingma and Jimmy Ba. . 2014. [^1]: Lebesgue spaces, named after Henri Lebesgue, L1 and L2 are specific cases of $L^p$ spaces
--- author: - 'Anton Evseev[^1]' bibliography: - 'porc.bib' - 'parabolic.bib' date: | with an appendix by\ Anton Evseev and George Wellen[^2] title: Conjugacy classes in parabolic subgroups of general linear groups --- Introduction {#parintro} ============ Let $q$ be a prime power and ${\mathbb{F}}_q$ be the finite field with $q$ elements. If $k$ and $m$ are nonnegative integers, let ${\operatorname{M}}_{m,k}(q)$ be the set of all $m\times k$ matrices over ${\mathbb{F}}_q$. Let ${\mathbf}{l}=(l_1,\ldots,l_s)$ be a sequence of nonnegative integers. Writing ${\operatorname{M}}_k (q) = {\operatorname{M}}_{k,k}(q)$, let $M^{{\mathbf}l}(q)$ be the set of all matrices of the form $$\begin{pmatrix} {\operatorname{M}}_{l_1}(q) & {\operatorname{M}}_{l_1,l_2}(q) & \ldots & {\operatorname{M}}_{l_1,l_s}(q) \\ 0 & {\operatorname{M}}_{l_2} (q) & \ldots & {\operatorname{M}}_{l_2,l_s}(q) \\ \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & {\operatorname{M}}_{l_s}(q) \end{pmatrix}.$$ Let $P^{{\mathbf}{l}}(q)$ be the group of all invertible matrices in ${\operatorname{M}}^{{\mathbf}l}(q)$. The group $P^{{\mathbf}{l}} (q)$ is called a *parabolic* subgroup of the group ${\operatorname{GL}}_m (q)$, where $m=l_1+\cdots+l_s$. Alternatively, a parabolic subgroup may be described as the stabiliser of a flag in the vector space ${\mathbb{F}}_q^m$. Let $N^{{\mathbf}l}(q)$ be the set of all nilpotent matrices in $M^{{\mathbf}l}(q)$. The group $P^{{\mathbf}l}(q)$ acts on the set $M^{{\mathbf}l}(q)$ by conjugation: $^{g}x=gxg^{-1}$ ($g\in P^{{\mathbf}{l}}(q)$, $x\in M^{{\mathbf}{l}}(q)$). We investigate the number of orbits of this action and also the numbers of orbits of the actions of $P^{{\mathbf}l}(q)$ on certain subsets of $M^{{\mathbf}l} (q)$. Throughout the paper, we denote by $\gamma(G,X)$ the number of orbits of an action of a group $G$ on a finite set $X$, where the action is understood. Let $$\rho_{{\mathbf}l}(q)=\gamma(P^{{\mathbf}l}(q), N^{{\mathbf}l}(q)).$$ Although the groups $P^{{\mathbf}{l}}(q)$ have received a lot of attention, not much is known about the numbers $\gamma(P^{{\mathbf}l}(q),P^{{\mathbf}l}(q))$ and $\rho_{{\mathbf}l}(q)$. In particular, it is not known whether the following is true. \[gencon\] For every tuple ${\mathbf}l=(l_1,\ldots,l_s)$ of nonnegative integers, $\rho_{{\mathbf}l}(q)$ is polynomial in $q$. (Here, and in what follows, we call a rational-valued function $f(x)$ defined on a set $D$ of integers *polynomial* if there exists a polynomial $g$ with rational coefficients such that $f(x)=g(x)$ for all $x\in D$.) Write $(1^m)=\underbrace{(1,1,\ldots,1)}_{m}$. The following two special cases of Conjecture \[gencon\] have received considerable attention and will be of particular interest to us. \[unicon\] For every positive integer $m$, $\rho_{(1^m)} (q)$ is a polynomial in $q$ with rational coefficients. \[maxcon\] For any positive integers $k$ and $m$, $\rho_{(k,m)}(q)$ is a polynomial in $q$ with rational coefficients. It is not difficult to show that Conjecture \[unicon\] is equivalent to the conjecture that the number of conjugacy classes of the group of upper unitriangular $n\times n$ matrices is a polynomial in $q$ with rational coefficients (see [@thesis Section 4.2]). This last conjecture has been proved for $n\le 13$ by J.M. Arregi and A. Vera-López [@pol1992; @pol1995; @pol2003]. Conjecture \[maxcon\] has been proved for $k\le 5$ (or, alternatively, for $m \le 5$); indeed, S.H. Murray [@murray2000] proved that in those cases $\rho_{(k,m)}(q)$ does not depend on $q$. 
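For very small cases, $\rho_{{\mathbf}l}(q)$ can be computed directly from the definitions, which may help fix ideas. The following Python sketch (an illustration only, and not the method used in this paper) enumerates the nilpotent matrices in $M^{{\mathbf}l}(q)$ and the group $P^{{\mathbf}l}(q)$ and counts conjugation orbits; it is feasible only when $q$ and $l_1+\cdots+l_s$ are tiny.

```python
# Brute-force orbit count of P^l(q) acting on the nilpotent matrices N^l(q);
# practical only for very small prime q and very small sum(l).
import itertools
import numpy as np


def rho(l, q):
    m = sum(l)
    cuts = np.cumsum((0,) + tuple(l))
    mask = np.zeros((m, m), dtype=bool)
    for i in range(len(l)):
        mask[cuts[i]:cuts[i + 1], cuts[i]:] = True   # block upper-triangular pattern
    free = list(zip(*np.nonzero(mask)))              # positions allowed to be nonzero

    def elements():                                  # all matrices in M^l(q)
        for vals in itertools.product(range(q), repeat=len(free)):
            A = np.zeros((m, m), dtype=int)
            for (r, c), v in zip(free, vals):
                A[r, c] = v
            yield A

    # determinant is exact for these small integer matrices
    group = [g for g in elements() if round(np.linalg.det(g)) % q != 0]
    nilpotent = [A for A in elements() if not (np.linalg.matrix_power(A, m) % q).any()]

    def conjugate(A, B):                             # is B = g A g^{-1} for some g in P^l(q)?
        return any(((g @ A - B @ g) % q == 0).all() for g in group)

    reps = []                                        # orbit representatives
    for A in nilpotent:
        if not any(conjugate(A, R) for R in reps):
            reps.append(A)
    return len(reps)


if __name__ == "__main__":
    print(rho((1, 1), 2), rho((1, 1), 3))            # both equal 2
```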
A *partition* is a (possibly, empty) non-increasing sequence of positive integers. If $\lambda=(\lambda_1,\dots,\lambda_r)$ is a partition, let $|\lambda|=\lambda_1+\cdots+\lambda_r$, and write $l({\lambda})=r$. If $k,m\in {\mathbb{N}}$, let ${\mathcal}{P}_k$ be the set of all partitions $\lambda$ with $|{\lambda}|=k$, let ${\mathcal}{P}^m$ be the set of partitions $\lambda$ such that $\lambda_i\le m$ for all $i$, and let ${\mathcal}{P}_k^m= {\mathcal}P_k \cap {\mathcal}P^m$. Let $p(k)=|{\mathcal}{P}_k|$. Let $m\in {\mathbb{N}}$, and let ${\lambda}=({\lambda}_1,\ldots,{\lambda}_r)$ be a partition such that ${\lambda}_1\le m$. Define $$\delta(m,\lambda)=(\lambda_r,\lambda_{r-1}-\lambda_r, \lambda_{r-2}-\lambda_{r-1},\dots,\lambda_1-\lambda_2,m-{\lambda}_1).$$ Let $\nu^m_{{\lambda}}(q)=\rho_{\delta(m,{\lambda})}(q)$. Thus, $\nu^m_{{\lambda}}(q)$ is the number of $P^{{\mathbf}l}(q)$-orbits of $N^{{\mathbf}l}(q)$ where $P^{{\mathbf}l}(q)$ is the stabiliser of a flag $${\mathbb{F}}_q^{{\lambda}_r} \le {\mathbb{F}}_q^{{\lambda}_{r-1}} \le \cdots \le {\mathbb{F}}_q^{{\lambda}_1} \le {\mathbb{F}}_q^m.$$ We shall generalise the definition of $\nu_{{\lambda}}^m (q)$ as follows. Let $\overline{{\mathbb{F}}}_q$ be the algebraic closure of the field ${\mathbb{F}}_q$, and let $F$ be a subset of ${\overline}{{\mathbb{F}}}_q$. Let ${\mathcal}Y={\mathcal}Y_F$ be the family of all linear endomorphisms $T:U{\rightarrow}U$ (where $U$ is an arbitrary finite dimensional vector space over ${\mathbb{F}}_q$) such that all eigenvalues of $T$ over $\overline{{\mathbb{F}}}_q$ are in $F$. We shall say that ${\mathcal}Y$ is the *class* of endomorphisms associated with $F$. We shall refer to elements of this class as *${\mathcal}Y$-endomorphisms*. Obviously, ${\mathcal}Y$ is preserved by conjugation. Note that the family of all nilpotent endomorphisms and the family of all invertible endomorphisms are both classes. If ${\mathcal}Y$ is a class and $m\in{\mathbb{Z}}_{\ge 0}$, let $c(m, {\mathcal}Y)$ be the number of ${\operatorname{GL}}_m ({\mathbb{F}}_q)$-orbits on ${\mathcal}Y\cap {\operatorname{M}}_m ({\mathbb{F}}_q)$. We shall denote by ${\mathcal}N$ the class of all nilpotent endomorphisms. Note that $$c(m,{\mathcal}N)=p(m),$$ the number of partitions of $m$. This follows from the fact that nilpotent matrices in Jordan canonical form form a complete set of representatives of ${\operatorname{GL}}_m (q)$-orbits on ${\operatorname{N}}_m(q)$, the set of nilpotent $m\times m$ matrices. Let ${\mathcal}Y$ be a class (of endomorphisms), and let $m\in {\mathbb{N}}$. Let ${\lambda}=({\lambda}_1,\ldots,{\lambda}_r) \in {\mathcal}P^m$ (that is, ${\lambda}_1\le m$). Define $$\kappa_{{\lambda}}^m ({\mathcal}Y)= \gamma(P^{\delta (m,{\lambda})}(q),M^{\delta(m,{\lambda})}(q)\cap {\mathcal}Y).$$ One of the main results of this paper is as follows. \[pargen\] Let $k$ and $m$ be positive integers. Let $q$ be a prime power and ${\mathcal}Y$ be a class of endomorphisms over ${\mathbb{F}}_q$. Then $$\kappa_{(k)}^{k+m}({\mathcal}Y)= \sum_{j=0}^{k} c(k-j, {\mathcal}Y) \sum_{{\lambda}\in {\mathcal}P_j^m} \kappa_{{\lambda}}^m ({\mathcal}Y).$$ In particular, $$\rho_{(k,m)}(q)= \sum_{j=0}^{k} p(k-j)\sum_{\lambda\in {\mathcal}P_j^m} \nu_{{\lambda}}^m (q).$$ S.H. Murray [@murray2006] has proved a similar result stated in terms of irreducible representations of parabolic subgroups. This result implies Theorem \[pargen\] in the case when ${\mathcal}Y$ is the class of all invertible matrices. 
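As a small illustration of these definitions: for $m=5$ and ${\lambda}=(3,1)$ we have $r=2$ and $$\delta(5,(3,1))=({\lambda}_2,\,{\lambda}_1-{\lambda}_2,\,m-{\lambda}_1)=(1,2,2),$$ so $\nu^5_{(3,1)}(q)=\rho_{(1,2,2)}(q)$ counts the orbits of the stabiliser of a flag ${\mathbb{F}}_q^{1}\le {\mathbb{F}}_q^{3}\le {\mathbb{F}}_q^{5}$ acting on the corresponding nilpotent block upper triangular matrices. The second formula of Theorem \[pargen\] can also be checked in the smallest case $k=m=1$ (interpreting $\delta(1,\varnothing)=(1)$ for the empty partition): since $\nu^1_{\varnothing}(q)=\rho_{(1)}(q)=1$ and $\nu^1_{(1)}(q)=\rho_{(1,0)}(q)=1$, the right-hand side equals $p(1)\cdot 1+p(0)\cdot 1=2$, which agrees with the direct count $$\rho_{(1,1)}(q)=2,$$ the two orbits being the zero matrix and the nonzero matrices $\begin{pmatrix} 0 & a \\ 0 & 0 \end{pmatrix}$ with $a\ne 0$.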
The proof in [@murray2006] is different from the one given here. Unlike the proof in this paper, the proof in [@murray2006] establishes not just a numerical equality, but also an explicit correspondence between representations. It is an interesting question whether the method of [@murray2006] can be extended to prove an analogue of the more general Theorem \[gensq\]. We will deduce the following two results from Theorem \[pargen\]. \[impl\] Let $m\in {\mathbb{N}}$, and let $n=m(m-1)/2$. There exist integers $a_0,a_1,\dots,a_n$ such that, for all prime powers $q$, $$\rho_{(1^m)}(q)=\sum_{j=0}^n a_j \rho_{(j,m)}(q).$$ Hence, Conjecture \[maxcon\] implies Conjecture \[unicon\]. If ${\mathbf}l=(l_1,\ldots,l_s)$, write $\rho_{k,{\mathbf}l}(q)$ for $\rho_{(k,l_1,\ldots,l_s)}(q)$. \[rec\] Let $k, m\in {\mathbb{N}}$, and let $n=m(m-1)/2$. There exist integers $a_{k0},\ldots,a_{kn}$ such that, for all tuples ${\mathbf}l=(l_1,\ldots,l_s)$ of nonnegative integers with $l_1+\cdots+l_s=m$, $$\rho_{k,{\mathbf}l} (q) = \sum_{j=0}^n a_{kj} \rho_{j,{\mathbf}l}(q).$$ Using the methods developed in the proof of Theorem \[pargen\], one may compute $\rho_{{\mathbf}l}(q)$ for all tuples ${\mathbf}l=(l_1,\ldots,l_s)$ with $l_1+\cdots+l_s\le 6$. In particular, the following holds. \[rhosmall\] Let ${\mathbf}l=(l_1,\ldots,l_s)$ be a tuple of nonnegative integers with $l_1+\cdots+l_s\le 6$. Then $\rho_{{\mathbf}l}(q)$ is a polynomial in $q$ with positive integer coefficients. Hence (by Theorem \[pargen\]) $\rho_{(k,m)}(q)$ is polynomial in $q$ whenever $m\le 6$ and $k\in {\mathbb{N}}$. This paper is organised as follows. In Section \[quivers\] we describe a general framework of quiver representations and their endomorphisms, which is used to state and prove the results. In Section \[rednil\] we show, in particular, how the numbers $\gamma(P^{{\mathbf}l}(q),P^{{\mathbf}l}(q))$ and $\gamma(P^{{\mathbf}l}(q),M^{{\mathbf}l}(q))$ may be expressed in terms of $\rho_{{\mathbf}l'}(q^d)$ where $d\in {\mathbb{N}}$ and ${\mathbf}l'$ is another tuple of nonnegative integers. This justifies our focus on the numbers $\rho_{{\mathbf}l}(q)$. Section \[parprem\] contains a few standard results used later. In Section \[dualact\] we prove results that serve as the main tools allowing us to reduce problems such as that of counting $\rho_{(k,m)}(q)$ to other problems in a smaller dimension. In Section \[indarg\] we use those tools to prove a generalisation of Theorem \[pargen\] stated in terms of quiver representations (Theorem \[gensq\]). In Section \[inv\] we invert the formulae in Theorems \[pargen\] and \[gensq\] and deduce Theorems \[impl\] and \[rec\]. This relies on combinatorial results proved jointly with G. Wellen in the Appendix. I am very grateful to George Wellen for his part in this work. In Section \[presets\] we describe a method for computing $\rho_{{\mathbf}l}(q)$ when $m=l_1+\cdots+l_s$ is small. For this purpose, we generalise our problem to that of counting conjugacy classes of groups associated with preordered sets. Finally, Section \[dualrep\] investigates the symmetry afforded by considering a dual quiver representation. In particular, we show that $$\rho_{(l_1,\ldots,l_s)}(q)=\rho_{(l_s,\ldots,l_1)}(q).$$ **Acknowledgments.** Most of this paper is a part of my D.Phil. thesis. I am very grateful to my supervisor, Marcus du Sautoy, and to George Wellen for his part in this work. I would also like to thank my thesis examiners, Dan Segal and Gerhard Röhrle, for spotting a number of errors and for helpful comments. 
*Notation and definitions* - $\gamma(G,X)$ is the number of $G$-orbits on a finite set $X$ where the action of a group $G$ on $X$ is understood; - $[k,n]=\{k,k+1,k+2,\ldots,n\}$ where $k \le n$ are integers; - $|X|$ is the cardinality of a set $X$; - ${\mathcal}A$ is a *partition of a set* $X$ if ${\mathcal}A$ is a family of disjoint sets whose union is $X$; two elements $x$ and $y$ of $X$ are said to be ${\mathcal}A$-*equivalent* if there exists $A\in {\mathcal}A$ such that $x,y\in A$; - $\delta_{ij}=0$ if $i\ne j$, and $\delta_{ii}=1$; - $A\sqcup B$ is the disjoint union of sets $A$ and $B$ (formally defined as $A\times \{0\} \cup B\times \{1\}$); - $I_V=I$ is the identity element of ${\operatorname{GL}}(V)$; - $I_k$ is the identity $k\times k$ matrix over an appropriate field. - Suppose $U$ and $V$ are vector spaces, $X\in {\operatorname{End}}(U)$ and $Y\in {\operatorname{End}}(V)$; then ${\mathcal}I(X,Y)$ denotes the vector space of all linear maps $T:U{\rightarrow}V$ such that $TX=YT$; - ${\operatorname{N}}_n (K)$ is the set of all nilpotent matrices in ${\operatorname{M}}_{n,n}(K)$; - ${\operatorname{End}}(V;U_1,\ldots,U_k)=\{ f\in {\operatorname{End}}(V): f(U_i)\subseteq U_i \; \forall i \}$ where $U_i$ are subspaces of $V$; - ${\mathscr}P(V;U_1,\ldots,U_k):={\operatorname{End}}(V;U_1,\ldots,U_k) \cap {\operatorname{GL}}(V)$; - By convention, the set of $0\times k$ matrices contains just one element (which is nilpotent if $k=0$), and the group ${\operatorname{GL}}_{0}(K)$ is trivial; - If $I$ and $J$ are finite sets, then ${\operatorname{M}}_{I,J}(K)$ is the set of all matrices over a field $K$ whose rows are indexed by the elements of $I$ and whose columns are indexed by elements of $J$; we refer to these as *$I\times J$ matrices*; note that, if $A\in {\operatorname{M}}_{I,J}(K)$ and $B\in {\operatorname{M}}_{J,J'}(K)$, then the product $AB\in {\operatorname{M}}_{I,J'}(K)$ is well defined; - $A^{t}$ is the transpose of a matrix $A$; - ${\operatorname{Tr}}(A)$ is the trace of a square matrix $A$; - If a group $G$ acts on a set $X$, two elements of $X$ are said to be $G$-*conjugate* if they are in the same $G$-orbit; - All *rings* are understood to have an identity element; - If $R$ is a ring, then $R^{{\operatorname{op}}}$ is the ring with the same underlying abelian group and with the multiplication $(r,s)\mapsto sr$; Quiver representations and automorphisms {#quivers} ======================================== We recall the standard definitions related to quivers (see [@ARS], for example). A *quiver* is a pair $(E_0, E_1)$ of finite sets together with maps $\sigma:E_1{\rightarrow}E_0$ and $\tau: E_1 {\rightarrow}E_0$. Elements of $E_0$ may be thought of as nodes; then each element $e\in E_1$ may be represented as an arrow from $\sigma(e)$ to $\tau(e)$. Let $K$ be a field. A *representation* of a quiver $(E_0,E_1)$ over $K$ is a pair $({\mathbf}U, {\boldsymbol}\alpha)$ such that (i) ${\mathbf}U=(U_{a})_{a\in E_0}$ is a tuple of vector spaces over $K$; (ii) ${\boldsymbol}\alpha=(\alpha_e)_{e\in E_1}$ where $\alpha_e\in {\operatorname{Hom}}(U_{\sigma(e)},U_{\tau(e)})$. If $({\mathbf}U,{\boldsymbol}\alpha)$ is a representation of a quiver $(E_0,E_1)$, we shall refer to the quadruple $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ as a *quiver representation*. A quiver representation may be thought of as a collection of vector spaces together with linear maps between some of those spaces. 
If $({\mathbf}U,{\boldsymbol}\alpha)$ and $({\mathbf}U',{\boldsymbol}\alpha')$ are two representations of a quiver $(E_0,E_1)$, a *morphism* between those representations is a tuple $(X_a)_{a\in E_0}$ such that (i) $X_a\in {\operatorname{Hom}}(U_a,U'_a)$ for all $a\in E_0$; (ii) $\alpha'_e X_{\sigma(e)} = X_{\tau(e)} \alpha_e$ for all $e\in E_1$. This defines the category of representations of $(E_0,E_1)$. If $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ is a quiver representation, write ${\operatorname{End}}(Q)$ for the ring of all endomorphisms of $Q$ and ${\operatorname{Aut}}(Q)$ for the group of all automorphisms of $Q$. Let $N(Q)$ be the set of all nilpotent elements of ${\operatorname{End}}(Q)$. The group ${\operatorname{Aut}}(Q)$ acts naturally on the set ${\operatorname{End}}(Q)$ by conjugation: if ${\mathbf}{X}=(X_a)\in {\operatorname{End}}(Q)$ and ${\mathbf}{g}\in {\operatorname{Aut}}(Q)$, then $${\mathbf}{g}\circ {\mathbf}{X} = (g_a X_a g_a^{-1})_{a \in E_0}.$$ Assume that $K={\mathbb{F}}_q$, where $q$ is a prime power. We shall be concerned with the number of orbits of this action and, more generally, with the number of orbits of actions of certain subgroups of ${\operatorname{Aut}}(Q)$ on certain subsets of ${\operatorname{End}}(Q)$, such as $N(Q)$ for example. Note that ${\operatorname{Aut}}(Q)$ and $N(Q)$ may be defined in terms of the ring structure on ${\operatorname{End}}(Q)$, so they are preserved by ring isomorphisms. Let $$\theta(Q) = \gamma({\operatorname{Aut}}(Q), N(Q)).$$ All the problems discussed in the introduction may be stated in terms of quiver representations. If ${\mathbf}l = (l_1,l_2,\ldots,l_s)$ is a tuple of nonnegative integers with $m=l_1+\cdots+l_s$, let $R_{{\mathbf}l}=R_{{\mathbf}l}(q)$ be the quiver representation $$\xymatrix{ {\mathbb{F}}_q^{l_1}\ar@{^{(}->}[r] & {\mathbb{F}}_q^{l_1+l_2}\ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & {\mathbb{F}}_q^{l_1+\cdots+l_{s-1}}\ar@{^{(}->}[r] & {\mathbb{F}}_q^m }$$ where all the arrows represent injective linear maps. More formally, $$R_{{\mathbf}l}=([1,s],[1,s-1],{\mathbf}U,{\boldsymbol}\alpha)$$ with $\sigma(i)=i$, $\tau(i)=i+1$ for all $i\in [1,s-1]$, $U_i={\mathbb{F}}_q^{l_1+\cdots+l_i}$ and $\alpha_i$ injective. Obviously, these conditions define $R_{{\mathbf}l}$ up to isomorphism of representations. We may choose a basis $\{b_1,\ldots,b_m\}$ of $U_s={\mathbb{F}}_q^m$ so that, for each $i$, the image of $U_i={\mathbb{F}}_q^{l_1+\cdots+l_i}$ under $\alpha_{s-1}\cdots\alpha_{i+1}\alpha_i$ is equal to the span of $\{b_1,\ldots,b_{l_1+\cdots+l_i} \}$. Let $J_{{\mathbf}l}: {\operatorname{End}}(R_{{\mathbf}l}) {\rightarrow}{\operatorname{M}}_{m,m}(q)$ be the map which assigns to each ${\mathbf}X=(X_i)_{i\in [1,s]}\in {\operatorname{End}}(R_{{\mathbf}l})$ the matrix of $X_s\in {\operatorname{End}}({\mathbb{F}}_q^m)$ with respect to the basis $\{b_1,\ldots,b_m\}$. Then the following result is obvious. \[quivpar\] The map $J_{{\mathbf}l}$ is a ring isomorphism from ${\operatorname{End}}(R_{{\mathbf}l})$ onto $M^{{\mathbf}l}(q)$. Hence, $\rho_{{\mathbf}l}(q)=\theta(R_{{\mathbf}l})$. Call a quiver representation $Q'=(E'_0,E'_1,{\mathbf}U', {\boldsymbol}\alpha')$ an *extension* of a quiver representation $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ if (i) $E'_0\supseteq E_0$; (ii) $U'_a=U_a$ for all $a\in E_0$; (iii) for every ${\mathbf}X=(X_a)_{a\in E'_0}\in {\operatorname{End}}(Q')$, the tuple $(X_a)_{a\in E_0}$ belongs to ${\operatorname{End}}(Q)$. Let $Q'$ be an extension of $Q$. 
Define a map $\pi=\pi_Q^{Q'}:{\operatorname{End}}(Q') {\rightarrow}{\operatorname{End}}(Q)$ by $$(X_a)_{a\in E'_0} \mapsto (X_a)_{a\in E_0}.$$ If $B\subseteq {\operatorname{End}}(Q)$, let $B^{Q'}=\pi^{-1}(B)$. If $G$ is a subgroup of ${\operatorname{Aut}}(Q)$, define $G^{Q'}=\pi^{-1}(G)\cap {\operatorname{Aut}}(Q')$. (This will cause no ambiguity if a group is considered a distinct object from the set of its elements.) Let $Q=(E_0,E_1,{\mathbf}U, {\boldsymbol}\alpha)$ be a quiver representation, and let $e\in E_1$ with $\sigma(e)=a$, $\tau(e)=b$. Let $Y={\operatorname{ker}}(\alpha_e)\le U_a$, $Z={\operatorname{im}}(\alpha_e)\le U_b$. Define an extension $K(Q,e)=(E'_0,E'_1,{\mathbf}U',{\boldsymbol}\alpha')$ as follows: (i) $E'_0=E_0 \sqcup \{ c \}$; (ii) $U'_c= Y$ (and $U'_x=U_x$ for $x\in E_0$); (iii) $E'_1=E_1\sqcup \{ e' \}$ where $\sigma(e')=c$, $\tau(e')=a$; (iv) $\alpha'_{f}=\alpha_f$ for all $f\in E_1$, and $\alpha'_{e'}$ is the inclusion map $Y{\hookrightarrow}U_a$. Define another extension $I(Q,e)=(E''_0,E''_1,{\mathbf}U'',{\boldsymbol}\alpha'')$ as follows: (i) $E''_0=E_0 \sqcup \{ d \}$; (ii) $U''_d = Z$; (iii) $E''_1=(E_1\setminus \{ e \}) \sqcup \{ e^{\sharp}, e'' \}$, where $\sigma(e'')=d$, $\tau(e'')=b$, $\sigma(e^{\sharp})=a$, $\tau(e^{\sharp})=d$; (iv) $\alpha''_{f}=\alpha_f$ for all $f\in E_1 \setminus \{ e \}$, $\alpha''_{e''}$ is the inclusion map $Z{\hookrightarrow}U_b$, and $\alpha''_{e^{\sharp}}$ is the map $U_a{\rightarrow}Z$ given by $\alpha''_{e^{\sharp}}(v)=\alpha_e (v)$ $\forall v\in U_a$. Clearly, $K(Q,e)$ is an extension of $Q$. Also, $Q'':=I(Q,e)$ is an extension of $Q$: if ${\mathbf}X\in {\operatorname{End}}(Q'')$, then $\alpha_e X_a = X_b \alpha_e$ because $\alpha_e = \alpha''_{e''} \alpha''_{e^{\sharp}}$. \[KI\] Let $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ be a quiver representation, and let $e\in E_1$. Let $Q'=K(Q,e)$ and $Q''=I(Q,e)$. Then the maps $\pi^{Q'}_Q : {\operatorname{End}}(Q') {\rightarrow}{\operatorname{End}}(Q)$ and $\pi^{Q''}_Q: {\operatorname{End}}(Q'') {\rightarrow}{\operatorname{End}}(Q)$ are ring isomorphisms. As above, let $a=\sigma(e)$, $b=\tau(e)$, $Y={\operatorname{ker}}(\alpha_e)$, $Z={\operatorname{im}}(\alpha_e)$. Let ${\mathbf}X=(X_i)_{i\in E_0}\in {\operatorname{End}}(Q)$. Since $\alpha_e X_a = X_b \alpha_e$, the map $X_a$ preserves $Y$ and $X_b$ preserves $Z$. Any element ${\mathbf}X'\in \left(\pi_Q^{Q'} \right)^{-1}({\mathbf}X)$ satisfies $\alpha'_{e'} X'_c = X'_a \alpha'_{e'}$. Since $\alpha'_{e'}$ is injective, it follows that $\left(\pi_Q^{Q'}\right)^{-1}({\mathbf}X)=\{{\mathbf}X'\}$, where $X'_c = X_a |_Y$. Similarly, $\left(\pi_Q^{Q''}\right)^{-1} ({\mathbf}X)= \{ {\mathbf}X'' \}$, where $X''_d=X_b|_Z$. Hence, $\pi^{Q'}_Q$ and $\pi^{Q''}_Q$ are bijections.

Reduction to nilpotent endomorphisms {#rednil}
====================================

In this section we show that, if ${\mathcal}Y$ is a class of endomorphisms over ${\mathbb{F}}_q$, the problem of counting orbits of ${\mathcal}Y$-endomorphisms of a quiver representation may be reduced to that of counting orbits of nilpotent endomorphisms of various quiver representations. (A *${\mathcal}Y$-endomorphism* is an endomorphism ${\mathbf}X$ such that $X_a\in {\mathcal}Y$ for all $a$.) We use standard methods related to rational canonical forms. The results of this section are not used elsewhere in the paper, but provide some motivation for our later focus on nilpotent endomorphisms. Let $U$ be a vector space over a field $K$. Let $L$ be a field containing $K$.
Suppose $U'$ is a vector space over $L$ that becomes $U$ if one restricts the scalars to $K$; that is, $U'=U$ as sets and multiplication by scalars from $K$ in $U$ is the same as in $U'$. We shall say that $U'$ is an $L$-*expansion* of $U$ and $U$ is the *restriction* of $U'$ to $K$. Let $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ be a quiver representation over $K$. Say that a quiver representation $Q'=(E_0,E_1,{\mathbf}U',{\boldsymbol}\alpha)$ over $L$ is an $L$-*expansion* of $Q$ (and $Q$ is the *restriction* of $Q'$ to $K$) if, for each $a\in E_0$, the space $U'_a$ is an $L$-*expansion* of $U_a$. (Then $\alpha_e$ is $L$-linear for each $e\in E_1$.) Let ${\mathbf}X\in {\operatorname{End}}(Q)$. If $a\in E_0$ and $f$ is a monic irreducible polynomial over ${\mathbb{F}}_q$, let $$U_{{\mathbf}X,f,a} = \{ u\in U_a : f(X_a)^k u = 0 \text{ for some } k\in {\mathbb{N}}\}.$$ Then $U_{{\mathbf}X,f,a}=0$ for all but finitely many $f$. For all monic irreducible polynomials $f\in {\mathbb{F}}_q [T]$ and all $e\in E_1$, we have $\alpha_e (U_{{\mathbf}X,f,\sigma(e)}) \le U_{{\mathbf}X,f,\tau(e)}$. Thus, ${\mathbf}U_{{\mathbf}X,f}:=(U_{X,f,a})_{a\in E_0}$ induces a subrepresentation $Q_{{\mathbf}X,f}$ of $Q$. \[rednil1\] Let $Q$ be a quiver representation over ${\mathbb{F}}_q$. Suppose ${\mathbf}X\in{\operatorname{End}}(Q)$. Then $$Q = \bigoplus_{f} Q_{{\mathbf}X,f}$$ where the sum is over all monic irreducible polynomials $f$ over ${\mathbb{F}}_q$. Moreover, the isomorphism class of each component $Q_{{\mathbf}X,f}$ is an invariant of the ${\operatorname{Aut}}(Q)$-orbit of ${\mathbf}X$. The first statement follows from the fact that $U_{a}=\bigoplus_{f} U_{{\mathbf}X,f,a}$ for each $a$. The second statement is clear. We now consider each quiver representation $Q_{{\mathbf}X,f}$ separately. Fix a monic irreducible polynomial $f=f(T)$ over ${\mathbb{F}}_q$. Let $d$ be the degree of $f$. Suppose ${\mathbf}X$ is an endomorphism of a quiver representation $Q$ over ${\mathbb{F}}_q$ such that $Q=Q_{{\mathbf}X,f}$. Let ${\mathbb{F}}_q [T]_{(f)}$ be the localisation of ${\mathbb{F}}_q[T]$ at the ideal $(f)$; it consists of all fractions $g/h$ such that $g,h\in{\mathbb{F}}_q [T]$ and $h$ is not divisible by $f$. Then ${\mathbf}X$ induces an ${\mathbb{F}}_q [T]_{(f)}$-module structure on each $U_a$: multiplication by $T$ is given by the action of $X_a$. Moreover, each $\alpha_e$ becomes an ${\mathbb{F}}_q [T]_{(f)}$-module homomorphism. Let $S_{f}$ be the completion of the discrete valuation ring ${\mathbb{F}}_q [T]_{(f)}$. That is, $S_{f}$ is the inverse limit of the rings ${\mathbb{F}}_q [T]/(f^k)$, $k=1,2,\ldots$. Any finite ${\mathbb{F}}_q [T]_{(f)}$-module is annihilated by $f^k$ for large enough $k$. Thus, a finite ${\mathbb{F}}_q [T]_{(f)}$-module may be thought of as a (finite) $S_{f}$-module (and vice versa). Now, by [@Bourbaki §9, Proposition 3], $S_f$ is isomorphic to ${\mathbb{F}}_{q^d}[[Z]]$, the ring of formal power series over a variable $Z$. Indeed, let $r$ be the element of $S_f$ whose projection onto ${\mathbb{F}}_q [T]/(f^k)$ is $T^{q^{kd}}$ for each $k\in {\mathbb{N}}$. Then ${\mathbb{F}}_q [r]\subseteq S_f$ is a field isomorphic to ${\mathbb{F}}_{q^d}$, and each element of $S_f$ may be represented as a power series in $f$ with coefficients in ${\mathbb{F}}_q[r]$ (see *loc. cit.* for more detail). Hence, $Q$ gives rise to a finite ${\mathbb{F}}_{q^d} [[Z]]$-module. 
This module corresponds to an ${\mathbb{F}}_{q^d}$-expansion $Q'$ of $Q$ and a nilpotent endomorphism of $Q'$ (given by multiplication by $Z$). Conversely, an ${\mathbb{F}}_{q^d}$-expansion $Q'$ of $Q$ together with a nilpotent endomorphism of $Q'$ gives rise to an ${\mathbb{F}}_{q^d}[[Z]]$-module structure on each $U_a$ in such a way that all $\alpha_e$ are ${\mathbb{F}}_{q^d}[[Z]]$-endomorphisms. Identifying ${\mathbb{F}}_{q^d}[[Z]]$ with $S_f$ as above, we get an endomorphism ${\mathbf}X$ of $Q$ such that $Q_{{\mathbf}X,f}=Q$. We have proved the following result. \[rednil2\] Let $q$ be a prime power, and let $f$ be a monic irreducible polynomial over ${\mathbb{F}}_q$ of degree $d$. Suppose $Q$ is a quiver representation over ${\mathbb{F}}_q$. Then the ${\operatorname{Aut}}(Q)$-orbits of endomorphisms ${\mathbf}X$ of $Q$ such that $Q_{{\mathbf}X,f}=Q$ are in a one-to-one correspondence with ${\operatorname{Aut}}(Q)$-orbits of pairs $(Q',{\mathbf}Z)$ such that $Q'$ is an ${\mathbb{F}}_{q^d}$-expansion of $Q$ and ${\mathbf}Z\in N(Q')$. Let ${\mathcal}Y$ be a class of linear endomorphisms over ${\mathbb{F}}_q$, as defined in Section \[parintro\]. Let $${\operatorname{End}}_{{\mathcal}Y}(Q) = \{ {\mathbf}X\in {\operatorname{End}}(Q): X_a\in {\mathcal}Y \text{ for all } a\in E_0 \}.$$ Let $F$ be a subset of ${\overline}{{\mathbb{F}}}_q$ such that ${\mathcal}Y={\mathcal}Y_F$. Call a polynomial $f\in {\mathbb{F}}_q[T]$ a *${\mathcal}Y$-polynomial* if all the roots of $f$ (over ${\overline}{{\mathbb{F}}}_q$) are in $F$. Let $k_{{\mathcal}Y,d}(q)$ be the number of monic irreducible ${\mathcal}Y$-polynomials of degree $d$. Consider again an arbitrary quiver representation $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ over ${\mathbb{F}}_q$. Lemmata \[rednil1\] and \[rednil2\] allow us to express $\gamma({\operatorname{Aut}}(Q),{\operatorname{End}}_{{\mathcal}Y}(Q))$ in terms of $\theta(Q'_0)$ where $Q_0$ varies among direct summands of $Q$ and $Q'_0$ is an expansion of $Q_0$. Let ${\mathbf}X\in {\operatorname{End}}_{{\mathcal}Y}(Q)$. By Lemma \[rednil1\], there exists a finite set $\{f_i\}_{i\in J}$ of irreducible ${\mathcal}Y$-polynomials such that $Q=\bigoplus_{j\in J} Q_j$ where $Q_j=Q_{{\mathbf}X,f_j}$. Here, $J$ is a finite indexing set. Let $\epsilon(j)=\deg f_j$. By Lemma \[rednil2\], each $Q_j$ together with the restriction of ${\mathbf}X$ to $Q_j$ corresponds to an ${\mathbb{F}}_{q^{\epsilon(j)}}$-expansion $Q'_j$ of $Q_j$ together with a nilpotent endomorphism ${\mathbf}Z^{(j)}\in N(Q'_{j})$. Up to conjugation by ${\operatorname{Aut}}(Q'_j)$, the endomorphism ${\mathbf}Z^{(j)}$ may be chosen in $\theta(Q'_j)$ ways (by definition). Let ${\mathcal}A$ be the partition of $J$ that consists of the sets $$\{ j\in J: Q'_j\simeq Q'_i \}$$ where $i$ varies among the elements of $J$. In particular, $\epsilon(i)=\epsilon(j)$ whenever $i$ and $j$ are ${\mathcal}A$-equivalent. \[rednil3\] Let $Q$ be a quiver representation over ${\mathbb{F}}_q$. 
Then $$\begin{aligned} \gamma({\operatorname{Aut}}(Q),{\operatorname{End}}_{{\mathcal}Y}(Q)) & = & \sum_{(Q_i)_{i\in J}} \sum_{{\mathcal}A} \frac{1}{\prod_{A\in {\mathcal}A} |A|!} \sum_{\epsilon} \prod_{d=1}^{\infty} \frac{k_{{\mathcal}Y,d}(q)!}{(k_{{\mathcal}Y,d}(q)-|\epsilon^{-1}(d)|)!} \\ & \times & \sum_{(Q'_A)_{A\in {\mathcal}A}} \prod_{A\in{\mathcal}A} \theta(Q'_A)^{|A|}.\end{aligned}$$ Here, the first sum is over all decompositions $Q=\bigoplus_{i\in J} Q_i$ of $Q$ as a direct sum of representations; two such decompositions are considered to be the same if one may be obtained from the other by permuting the indices $i$ and replacing each $Q_i$ with an isomorphic representation (we assume that $J=[1,|J|]$). The second sum is over all partitions ${\mathcal}A$ of $J$ such that $Q_i\simeq Q_j$ whenever $i$ and $j$ are ${\mathcal}A$-equivalent. The third sum is over all maps $\epsilon: J{\rightarrow}{\mathbb{N}}$ such that $\epsilon(i)=\epsilon(j)$ whenever $i$ and $j$ are ${\mathcal}A$-equivalent. The last sum is over all isomorphism classes of tuples $(Q'_A)_{A\in {\mathcal}A}$ where $Q'_A$ is an ${\mathbb{F}}_{q^{\epsilon(i)}}$-expansion of $Q_i$ ($i$ is an arbitrary element of $A$) and the quiver representations $Q'_{A}$ ($A\in {\mathcal}A$) are pairwise not isomorphic. Suppose ${\mathcal}A$, $\epsilon$, $Q'_A=Q'_i$ ($i\in A$), as above, are fixed. There are $$\prod_{d=1}^{\infty}\frac{k_{{\mathcal}Y,d}(q)!}{(k_{{\mathcal}Y,d}(q)-|\epsilon^{-1}(d)|)!}$$ ways to assign a monic irreducible ${\mathcal}Y$-polynomial $f_i$ of degree $\epsilon(i)$ to each $i\in J$ so that all those polynomials are distinct. (Note that all but finitely many terms in the product are equal to $1$, so the product is well defined.) There are $$\prod_{A\in {\mathcal}A} \theta(Q'_A)^{|A|}$$ ways to choose, for each $i\in J$, an ${\operatorname{Aut}}(Q'_i)$-orbit in $N(Q'_i)$. By Lemmata \[rednil1\] and \[rednil2\], these assignments determine an ${\operatorname{Aut}}(Q)$-orbit in ${\operatorname{End}}_{{\mathcal}Y}(Q)$, and all orbits occur in this way. However, a permutation of the indices within a subset $A\in {\mathcal}A$ yields the same orbit. Thus, each orbit giving rise to this particular ${\mathcal}A$ is obtained by $ \prod_{A\in {\mathcal}A} (|A|!) $ such assignments. Hence, we must divide by this number to obtain the number of orbits. Now let ${\mathbf}l=(l_1,\ldots,l_s)$ be a tuple of nonnegative integers, and consider the quiver $R_{{\mathbf}l}(q)$. If ${\mathbf}l'=(l'_1,\ldots,l'_s)$ is another such $s$-tuple, let ${\mathbf}l+{\mathbf}l'=(l_1+l'_1,\ldots,l_s+l'_s)$; write ${\mathbf}l\le {\mathbf}l'$ if $l_i\le l'_i$ for all $i$. Also, if $b\in {\mathbb{Q}}$, let $b{\mathbf}l = (b l_1,\ldots, b l_s)$. It is easy to see that all decompositions of $R_{{\mathbf}l}$ into direct sums of subrepresentations are of the form $$R_{{\mathbf}l} = \bigoplus_{i=1}^n R_{{\mathbf}l^i}$$ where ${\mathbf}l = {\mathbf}l^1+\cdots +{\mathbf}l^n$. Let $d\in {\mathbb{N}}$. If not all $l_i$ are divisible by $d$, then there is no ${\mathbb{F}}_{q^d}$-expansion of $R_{{\mathbf}l}(q)$. If all $l_i$ are divisible by $d$, then $R_{{\mathbf}l}(q)$ has a unique (up to the action of ${\operatorname{Aut}}(R_{{\mathbf}l}(q))$) ${\mathbb{F}}_{q^d}$-expansion, namely $R_{{\mathbf}l/d}(q^d)$. 
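For instance, $R_{(2,4)}(q)$ admits the ${\mathbb{F}}_{q^2}$-expansion $R_{(1,2)}(q^2)$, whereas $R_{(1,2)}(q)$ admits no ${\mathbb{F}}_{q^d}$-expansion with $d>1$, since $d$ must divide every $l_i$.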
Consider the set of tuples $J=({\mathbf}l^1,\ldots,{\mathbf}l^n,d_1,\ldots,d_n)$ such that each ${\mathbf}l^i$ is an $s$-tuple of nonnegative integers, not all equal to zero, $d_i\in {\mathbb{N}}$ for each $i$ and $$\sum_{i=1}^n d_i {\mathbf}l^i = {\mathbf}l.$$ Let ${\mathscr}D({\mathbf}l)$ be a complete set of representatives of the action of the symmetric group $S_n$ on this set, where the action is by permuting the indices $1,\ldots,n$. Less formally, ${\mathscr}D({\mathbf}l)$ is in a one-to-one correspondence with ways to decompose $R_{{\mathbf}l}(q)$ as a direct sum of quivers $R_{{\mathbf}l'}(q)$ and to expand each $R_{{\mathbf}l'}(q)$ to $R_{{\mathbf}l'/d}(q^d)$ for some $d$. By Lemma \[quivpar\], the numbers $\gamma(P^{{\mathbf}l}(q),P^{{\mathbf}l}(q))$ and $\gamma(P^{{\mathbf}l}(q),M^{{\mathbf}l}(q))$ may both be expressed as $\gamma({\operatorname{Aut}}(R_{{\mathbf}l}(q)),{\operatorname{End}}_{{\mathcal}Y}(R_{{\mathbf}l}(q)))$ where ${\mathcal}Y$ is the class of all invertible endomorphisms or the class of all endomorphisms, respectively. If ${\mathcal}Y$ is one of those classes, then $k_{{\mathcal}Y,d}(q)$ is polynomial in $q$. We deduce the following from Proposition \[rednil3\]. \[rednil4\] Let ${\mathbf}l=(l_1,\ldots,l_s)$ be a tuple of nonnegative integers. Then there exist tuples $(a_J)_{J\in {\mathscr}D({\mathbf}l)}$ and $(b_{J})_{J\in {\mathscr}D({\mathbf}l)}$ where $a_{J}(T)$ and $b_{J}(T)$ are polynomials with rational coefficients such that, for all prime powers $q$, $$\begin{aligned} \gamma(P^{{\mathbf}l}(q),P^{{\mathbf}l}(q))& = & \sum_{J\in {\mathscr}D({\mathbf}l)} a_J (q) \prod_{i=1}^n \rho_{{\mathbf}l^i}(q^{d_i}) \quad \text{and } \\ \gamma(P^{{\mathbf}l}(q),M^{{\mathbf}l}(q)) & = & \sum_{J\in {\mathscr}D({\mathbf}l)} b_J (q) \prod_{i=1}^n \rho_{{\mathbf}l^i}(q^{d_i}).\end{aligned}$$ In each case, the sum is over all elements $J=({\mathbf}l^1,\ldots,{\mathbf}l^n,d_1,\ldots,d_n)$ of ${\mathscr}D({\mathbf}l)$.

Preliminary results {#parprem}
===================

In this section we state several standard and straightforward results. We prove the last of these results; the first two are easy exercises. \[normal2\] Let $G$ be a group acting on a set $Y$. Let $N$ be a normal subgroup of $G$, and let $S$ be the set of $N$-orbits on $Y$. The action of $G$ on $Y$ induces an action of $G/N$ on $S$ via $gN\circ Ny = Ngy$ (for all $g\in G$, $y\in Y$). Moreover, the $G$-orbits on $Y$ are in a one-to-one correspondence with the $G/N$-orbits on $S$. \[actprod\] Let $G$ be a group acting on finite sets $X$ and $Y$. Suppose $S$ is a subset of $X\times Y$ preserved by $G$. Let $R$ be a complete set of representatives of $G$-orbits on $X$ (so each $G$-orbit on $X$ contains exactly one element of $R$). Then $$\gamma(G,S)=\sum_{x\in R} \gamma({\operatorname{Stab}}_G (x), \{y\in Y: (x,y)\in S \}).$$ Recall that if $V$ and $W$ are vector spaces and $X\in {\operatorname{End}}(V)$, $Y\in {\operatorname{End}}(W)$, then $${\mathcal}I(X,Y)=\{ T\in {\operatorname{Hom}}(V,W): TX=YT \}.$$ \[dimI\] Let $V$ and $W$ be finite dimensional vector spaces over a field $K$. Suppose $X\in {\operatorname{End}}(V)$ and $Y\in {\operatorname{End}}(W)$. Then $$\dim {\mathcal}I(X,Y) = \dim {\mathcal}I(Y,X).$$ Let $V^*$ and $W^*$ be the dual spaces to $V$ and $W$. Let $X^*\in {\operatorname{End}}(V^*)$ and $Y^*\in {\operatorname{End}}(W^*)$ be the maps dual to $X$ and $Y$: $(X^* f)(v)= f(X(v))$ for all $f\in V^*$, $v\in V$.
Then $\dim {\mathcal}I(Y,X)= \dim{\mathcal}I(X^*,Y^*)$: a linear bijection between ${\mathcal}I(Y,X)$ and ${\mathcal}I(X^*,Y^*)$ is given by assigning to each map its dual. Also, $X^*$ and $X$ have the same rational canonical form, as do $Y$ and $Y^*$. Thus, $$\begin{array}{cr} \quad\qquad\qquad \dim{\mathcal}I(X,Y)= \dim{\mathcal}I(X^*,Y^*)= \dim{\mathcal}I(Y,X). \qquad\qquad\quad & \qed \end{array}$$

Action on the dual space {#dualact}
========================

Let $V$ be a finite dimensional vector space over a finite field ${\mathbb{F}}_q$. If $w\in V^*$ and $v\in V$, we shall write $(w,v)$ for $w(v)$. Suppose a group $G$ acts on $V$ by linear maps, so a representation of $G$ on $V$ is given: $$(g,v) \mapsto gv.$$ This action gives rise to a representation of $G$ on $V^*$ in the usual way: $$(gw,v)=(w,g^{-1}v),$$ where $g\in G$, $w\in V^*$, $v\in V$. We shall rely on the following simple result, which is a weak version of Brauer’s Theorem ([@CRI 11.9]). \[duality\] The number of $G$-orbits on $V$ is equal to the number of $G$-orbits on $V^*$. By a well-known orbit-counting formula, $$\gamma(G,V)=\frac{1}{|G|}\sum_{g\in G} {\operatorname{Fix}}_V (g),$$ where ${\operatorname{Fix}}_V (g)$ is the number of elements of $V$ fixed by $g$. The same holds for $V^*$. Therefore, it suffices to show that ${\operatorname{Fix}}_V (g)={\operatorname{Fix}}_{V^*} (g)$ for all $g\in G$. Let $g\in G$, and let $n=\dim V$. Fix a basis of $V$, and let $A$ be the matrix of the action of $g$ with respect to this basis. Then $(A^t)^{-1}$ is the matrix of the action of $g$ on $V^*$ with respect to the dual basis. Since ${\operatorname{rank}}(A-I)={\operatorname{rank}}(A^t-I)={\operatorname{rank}}((A^t)^{-1}-I)$, $$\qquad\qquad {\operatorname{Fix}}_V (g)=q^{n-{\operatorname{rank}}(A-I)}=q^{n-{\operatorname{rank}}((A^t)^{-1}-I)}={\operatorname{Fix}}_{V^*} (g). \qquad \qquad {}$$ Let $Q=(E_0,E_1,{\mathbf}U ,{\boldsymbol}\alpha)$ be a quiver representation. Let $a,b\in E_0$. Define an extension $\Omega(Q,a,b)=Q'=(E'_0, E'_1,{\mathbf}U',{\boldsymbol}\alpha')$ of $Q$ as follows: (i) $E'_0 = E_0 \sqcup \{ c \}$; (ii) $\dim U'_c = \dim U_a + \dim U_b$; (iii) $E'_1=E_1\sqcup \{e_1,e_2\}$ where $\sigma(e_1)=a$, $\tau(e_1)=c=\sigma(e_2)$, $\tau(e_2)=b$; (iv) $\alpha'_e=\alpha_e$ for all $e\in E_1$; (v) $\alpha_{e_1}$ is injective, $\alpha_{e_2}$ is surjective, and ${\operatorname{im}}(\alpha_{e_1})={\operatorname{ker}}(\alpha_{e_2})$, so the sequence $$0{\longrightarrow}U_a\stackrel{\alpha_{e_1}}{{\longrightarrow}} U'_c \stackrel{\alpha_{e_2}}{{\longrightarrow}} U_b {\longrightarrow}0$$ is exact. Write $\pi$ for $\pi^{Q'}_Q:{\operatorname{End}}(Q'){\rightarrow}{\operatorname{End}}(Q)$. Note that ${\operatorname{Aut}}(Q)$ acts on ${\operatorname{Hom}}(U_a,U_b)$ in a natural way: ${\mathbf}g\circ R=g_b R g_a^{-1}$ (${\mathbf}g\in {\operatorname{Aut}}(Q)$, $R\in {\operatorname{Hom}}(U_a,U_b)$). \[link\] If ${\mathbf}X$ is an endomorphism of $Q$ and $H$ is a subgroup of ${\operatorname{Aut}}(Q)$ that fixes ${\mathbf}X$, then $$\gamma(H^{Q'},\pi^{-1}({\mathbf}X)) = \gamma(H,{\mathcal}{I}(X_a,X_b)).$$ Let $k=\dim U_a$, $m=\dim U_b$. Let $\beta=\alpha_{e_1}$. Let $V=\beta(U_a)\le U'_c$. Choose a complement $W$ of $V$ in $U'_c$, so that $U'_c=V\oplus W$. Then $\epsilon:=\alpha_{e_2}|_{W}$ is a bijective linear map from $W$ onto $U_b$.
For any $R\in{\operatorname{Hom}}(U_b,U_a)$, let $\phi (R)$ be the unique element of $\pi^{-1}({\mathbf}X)$ such that, for $v\in V$, $w\in W$, $$\phi(R)_c (v+w)= \beta X_a \beta^{-1} \cdot v + \epsilon^{-1} X_b \epsilon \cdot w + \beta R \epsilon \cdot w.$$ Less formally, if one chooses bases for $U_a$ and $U_b$, uses $\beta$ and $\epsilon^{-1}$ to get bases of $V$ and $W$, combines these to get a basis of $U'_c$ and represents linear operators as matrices using these bases, then $$\phi(R)_c=\begin{pmatrix} X_a & R \\ 0 & X_b \end{pmatrix}.$$ Clearly, $\phi$ is a bijection from ${\operatorname{Hom}}(U_b,U_a)$ onto $\pi^{-1}({\mathbf}X)$. For $T\in{\operatorname{Hom}}(U_b,U_a)$, define $\psi(T)\in H^{Q'}$ as follows: $\psi(T)_d=I_{U_d}$ for $d\ne c$, and $$\psi(T)_c (v+w) = v + w + \beta T \epsilon \cdot w \quad \forall v\in V, w\in W.$$ As a matrix, $$\psi(T)_c = \begin{pmatrix} I_k & T \\ 0 & I_m \end{pmatrix}.$$ Consider the surjective group homomorphism $\pi':=\pi|_{H^{Q'}}: H^{Q'}{\rightarrow}H$. Let $A={\operatorname{ker}}\pi'$. Then $\psi:{\operatorname{Hom}}(U_b,U_a){\rightarrow}A$ is an isomorphism of abelian groups. A direct calculation (e.g. using matrices) produces a formula for the action of $A$ on $\pi^{-1}({\mathbf}X)$: $$\psi(T) \phi(R) \psi(T)^{-1} = \phi(R + T X_b - X_a T).$$ Let $$D=\{T X_b - X_a T: \: T\in {\operatorname{Hom}}(U_b,U_a) \}.$$ Then the map $\phi^{-1}: \pi^{-1}({\mathbf}X){\rightarrow}{\operatorname{Hom}}(U_b,U_a)$ establishes a one-to-one correspondence between $A$-orbits on $\pi^{-1}({\mathbf}X)$ and cosets of $D$ in ${\operatorname{Hom}}(U_b,U_a)$. Hence, the maps $\pi'$ and $\phi$ establish an isomorphism between the action of $H^{Q'}/A$ on the set of $A$-orbits in $\pi^{-1}({\mathbf}X)$ and the natural action of $H$ on ${\operatorname{Hom}}(U_b,U_a)/D$. It follows, by Lemma \[normal2\] (applied to the normal subgroup $A$ of $H^{Q'}$), that $$\gamma (H^{Q'},\pi^{-1} ({\mathbf}X)) = \gamma(H, {\operatorname{Hom}}(U_b,U_a)/D).$$ The dual space to ${\operatorname{Hom}}(U_b,U_a)$ may be identified with ${\operatorname{Hom}}(U_a,U_b)$, where $$(S,R)={\operatorname{tr}}(SR) \qquad \forall R\in {\operatorname{Hom}}(U_b,U_a), \; S\in {\operatorname{Hom}}(U_a,U_b),$$ and the dual action of $H$ on the space ${\operatorname{Hom}}(U_a,U_b)$ is induced by the natural action of ${\operatorname{GL}}(U_a)\times {\operatorname{GL}}(U_b)$ on this space. Hence the dual space of ${\operatorname{Hom}}(U_b,U_a)/D$ is $$D^{\perp}=\{S\in {\operatorname{Hom}}(U_a,U_b): {\operatorname{tr}}(RS)=0 \text{ for all } R\in D \}.$$ Any $S\in {\mathcal}{I}(X_a,X_b)$ belongs to $D^{\perp}$. Indeed, for all $T\in{\operatorname{Hom}}(U_b,U_a)$, $$\begin{aligned} {\operatorname{tr}}(S(T X_b - X_a T))& = & {\operatorname{tr}}(S T X_b)-{\operatorname{tr}}(S X_a T)={\operatorname{tr}}(X_b ST)-{\operatorname{tr}}(S X_a T) \\ & = & {\operatorname{tr}}((X_b S - S X_a)T)=0.\end{aligned}$$ Moreover, $$\dim D^{\perp}=km-\dim D=\dim {\mathcal}I(X_b,X_a) = \dim {\mathcal}I(X_a,X_b).$$ (The last equality holds by Lemma \[dimI\].) It follows that $D^{\perp}={\mathcal}I(X_a,X_b)$. By Lemma \[duality\], $$\gamma(H,{\operatorname{Hom}}(U_b,U_a)/D)=\gamma(H,D^{\perp})=\gamma(H,{\mathcal}{I}(X_a,X_b)),$$ and the result follows. Proposition \[link\] will allow us to replace the quiver $Q'=\Omega(Q,a,b)$ with quivers obtained from $Q$ by adding an arrow from $U_a$ to $U_b$ when counting ${\operatorname{Aut}}(Q')$-orbits. A more usable form of this proposition is given by the corollary below. 
Let $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$, $a,b\in E_0$, $Q'=\Omega(Q,a,b)$, $\pi=\pi^{Q'}_Q$ be as above. Let $G$ be a subgroup of ${\operatorname{Aut}}(Q)$. Let $\Xi=\Xi(Q,G,a,b)$ be a complete set of representatives of $G$-orbits on ${\operatorname{Hom}}(U_a,U_b)$ (the choice of $\Xi$ does not matter). For each $S\in\Xi$, let $Q^S$ be the quiver $(E_0, E_1 \sqcup \{ f \}, {\mathbf}U, \alpha^S)$ where $\sigma(f)=a$, $\tau(f)=b$, $\alpha^S_e=\alpha_e$ for $e\in E_1$, and $\alpha^S_f=S$. That is, $Q^S$ is obtained from $Q$ by adding the linear map $S:U_a{\rightarrow}U_b$. \[genlink\] Let $B$ be a subset of ${\operatorname{End}}(Q)$ preserved by a subgroup $G$ of ${\operatorname{Aut}}(Q)$. Then $$\gamma(G^{Q'},\pi^{-1} (B)) = \sum_{S\in\Xi} \gamma (G\cap {\operatorname{Aut}}(Q^S), B\cap {\operatorname{End}}(Q^S)).$$ Let $Y$ be a complete set of representatives of $G$-orbits on $B$. Let $$Z=\{ (S,{\mathbf}X)\in {\operatorname{Hom}}(U_a,U_b)\times B : S X_a = X_b S \}.$$ Then $$\begin{aligned} \gamma(G^{Q'},\pi^{-1} (B) ) & = & \sum_{{\mathbf}X\in Y} \gamma ({\operatorname{Stab}}_G ({\mathbf}X)^{Q'}, \pi^{-1}({\mathbf}X)) \\ & = & \sum_{{\mathbf}X\in Y} \gamma ({\operatorname{Stab}}_G ({\mathbf}X), {\mathcal}{I} (X_a,X_b)) \\ & = & \gamma (G, Z) = \sum_{S\in\Xi} \gamma (G\cap {\operatorname{Aut}}(Q^S), B\cap {\operatorname{End}}(Q^S)). \end{aligned}$$ The first equality holds by Lemma \[actprod\], applied to the set $$\{ ({\mathbf}X, {\mathbf}X')\in B\times{\operatorname{End}}(Q') : \pi({\mathbf}X')={\mathbf}X \}.$$ The second one follows from Proposition \[link\]. The third and fourth equalities follow from Lemma \[actprod\], applied to the set $Z$ in two different ways. The rest of this section is devoted to an informal sketch of a proof of Theorem \[pargen\] in the special case when ${\mathcal}Y$ is the class of nilpotent matrices. A formal proof of a more general statement is in the next section. Let $Q$ be the quiver representation consisting of vector spaces $U_a={\mathbb{F}}_q^k$ and $U_b={\mathbb{F}}_q^m$ without any arrows. Let $Q'=\Omega(Q,a,b)$, so $Q'$ is the quiver representation $$\xymatrix{ {\mathbb{F}}_q^k\ar@{^{(}->}[r] & {\mathbb{F}}_q^{k+m}\ar@{->>}[r] & {\mathbb{F}}_q^m }$$ where the first map is injective, the second one is surjective, and the sequence is exact at ${\mathbb{F}}_q^{k+m}$. It is not difficult to see that $\rho_{(k,m)}(q)=\theta(Q')$. The orbit of an element $S_1\in {\operatorname{Hom}}({\mathbb{F}}_q^k,{\mathbb{F}}_q^m)$ with respect to the action of ${\operatorname{Aut}}(Q)={\operatorname{GL}}_k (q)\times {\operatorname{GL}}_m (q)$ is determined by ${\operatorname{rank}}(S_1)$. Thus, by Corollary \[genlink\] (applied to $G={\operatorname{Aut}}(Q)$ and $B=N(Q)$), $\rho_{(k,m)}(q)=\theta(Q')$ may be expressed as a sum, with one summand for each possible value of ${\operatorname{rank}}(S_1)$. If $S_1=0$, we get the summand $\theta(Q)=p(k)p(m)$. Otherwise, the summand corresponding to $S_1$ is $\theta(Q_1)$ where $Q_1$ is the quiver representation $${\mathbb{F}}_q^k \stackrel{S_1}{{\longrightarrow}} {\mathbb{F}}_q^m.$$ Let ${\lambda}_1={\operatorname{rank}}S_1$. Then ${\operatorname{ker}}S_1$ and ${\operatorname{im}}S_1$ may be identified with ${\mathbb{F}}_q^{k-{\lambda}_1}$ and ${\mathbb{F}}_q^{{\lambda}_1}$ respectively. 
It follows from Lemma \[KI\] that $\theta(Q_1)=\theta({\overline}{Q}_1)$ where ${\overline}{Q}_1$ is the quiver representation $$\xymatrix{ {\mathbb{F}}_q^{k-{\lambda}_1} \ar@{^{(}->}[r] & {\mathbb{F}}_q^k \ar@{->>}[r]^{S_1} & {\mathbb{F}}_q^{{\lambda}_1}\ar@{^{(}->}[r] & {\mathbb{F}}_q^m }$$ with the sequence exact at ${\mathbb{F}}_q^k$. We apply Corollary \[genlink\] again: $\theta({\overline}{Q}_1)$ may be expressed as a sum of terms corresponding to maps $S_2\in {\operatorname{Hom}}({\mathbb{F}}_q^{k-{\lambda}_1},{\mathbb{F}}_q^{{\lambda}_1})$ of varying ranks. (Again, we have one map of each possible rank.) Thus, we branch into cases corresponding to the possible values of ${\lambda}_2:={\operatorname{rank}}S_2$. We continue this argument inductively, considering maps $S_3,S_4,\ldots$ of ranks ${\lambda}_i:={\operatorname{rank}}S_i$ until $S_{s+1}=0$ for some $s$. (Here, $S_{i+1} : {\mathbb{F}}_q^{k-{\lambda}_1-\cdots-{\lambda}_i} {\rightarrow}{\mathbb{F}}_q^{{\lambda}_i}$.) Ultimately, we express $\theta(Q')=\rho_{(k,m)}(q)$ as a sum of terms indexed by partitions ${\lambda}=({\lambda}_1,\ldots,{\lambda}_s)$. The term corresponding to ${\lambda}$ is equal to $\theta(Q_s)$ where $Q_s$ is the quiver representation $$\xymatrix{ {\mathbb{F}}_q^{k-|{\lambda}|} & {\mathbb{F}}_q^{{\lambda}_s} \ar@{^{(}->}[r] & {\mathbb{F}}_q^{{\lambda}_{s-1}}\ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & {\mathbb{F}}_q^{{\lambda}_1} \ar@{^{(}->}[r] & {\mathbb{F}}_q^m. }$$ It follows from Lemma \[quivpar\] that $\theta(Q_s)=p(k-|{\lambda}|) \nu_{{\lambda}}^m (q)$. Putting $j=|{\lambda}|$, we obtain $$\rho_{(k,m)}(q)= \sum_{j=0}^k p(k-j) \sum_{{\lambda}\in {\mathcal}P^m_j} \nu_{{\lambda}}^m (q),$$ as stated in Theorem \[pargen\].

An inductive argument {#indarg}
=====================

Fix a positive integer $m$. Let $W$ be an $m$-dimensional vector space over ${\mathbb{F}}_q$. Let $G$ be a subgroup of ${\operatorname{GL}}(W)$, and let $A\subseteq {\operatorname{End}}(W)$ be a set preserved by $G$. If $\lambda=(\lambda_1,\ldots,\lambda_s)\in {\mathcal}P^m$, let ${\mathcal}F_{\lambda}$ be the set of flags ${\mathbf}W = (W_1,W_2,\ldots,W_s)$ such that $$W\ge W_1 \ge W_2 \ge \cdots \ge W_s > 0$$ and $\dim W_i = {\lambda}_i$. If ${\mathbf}W=(W_1,\ldots,W_s)\in {\mathcal}F_{{\lambda}}$ and ${\mathbf}W'=(W_1,\ldots,W_{s+1}) \in {\mathcal}F_{{\lambda}'}$ for an appropriate ${\lambda}'=({\lambda}_1,\ldots,{\lambda}_{s+1})$ (that is, the first $s$ subspaces are the same for ${\mathbf}W$ and ${\mathbf}W'$), we shall write ${\mathbf}W' \succ {\mathbf}W$. For each $\lambda\in {\mathcal}P^m$, choose a complete set $F_{\lambda}$ of representatives of $G$-orbits on ${\mathcal}F_{\lambda}$. We may assume that these choices satisfy the following property: if ${\mathbf}W\in {\mathcal}F_{{\lambda}}$, ${\mathbf}W' \in F_{{\lambda}'}$ and ${\mathbf}W'\succ {\mathbf}W$, then ${\mathbf}W \in F_{{\lambda}}$.
If $s=0$ (so ${\lambda}=()$), we get the quiver representation $Q_{()}$ consisting just of the space $W$ without any arrows. Let ${\mathcal}Y$ be a class of linear operators over ${\mathbb{F}}_q$. We shall assume that $A\subseteq {\mathcal}Y$. Let $Q$ be an extension of $Q_{()}$. Let $\pi=\pi^{Q}_{Q_{()}}: {\operatorname{End}}(Q){\rightarrow}{\operatorname{End}}(Q_{()})$ be the natural projection map. Recall that $$\begin{aligned} G^Q & = & \pi^{-1} (G) \cap {\operatorname{Aut}}(Q), \qquad \text{and let} \\ A^Q_{{\mathcal}Y} & = & \pi^{-1} (A)\cap {\operatorname{End}}_{{\mathcal}Y}(Q).\end{aligned}$$ By Lemma \[quivpar\], for any ${\mathbf}W\in {\mathcal}F_{{\lambda}}$, ${\operatorname{Stab}}_G ({\mathbf}W)$-orbits on $\{R\in A: R(W_i)\subseteq W_i \;\forall i\}$ are in a one-to-one correspondence with $G^{Q_{{\lambda}}({\mathbf}W)}$-orbits on $A^{Q_{{\lambda}}({\mathbf}W)}$. It follows, by Lemma \[actprod\], that $$\label{xi} \xi_{{\lambda}} (G,A)= \sum_{{\mathbf}W\in F_{{\lambda}}} \gamma(G^{Q_{{\lambda}}({\mathbf}W)},A^{Q_{{\lambda}}({\mathbf}W)}).$$ Let ${\mathbf}W\in {\mathcal}F_{{\lambda}}$ and $k\in {\mathbb{Z}}_{\ge 0}$. Let $D^{k}_{{\lambda}}({\mathbf}W)$ be the quiver representation $$\xymatrix{ V & & W_s\ar@{^{(}->}[r] & W_{s-1} \ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & W_1 \ar@{^{(}->}[r] & W, }$$ where $\dim V=k$ and the maps are as in $Q_{{\lambda}}({\mathbf}W)$ (i.e. injective). Let $Q_{{\lambda}}^k ({\mathbf}W)$ be the quiver representation $\Omega (D_{{\lambda}}^k ({\mathbf}W), a, b)$ (as defined in the previous section) where $a$ and $b$ correspond to $V$ and $W_s$ respectively. Then $Q_{{\lambda}}^k ({\mathbf}W)$ may be depicted as $$\xymatrix{ V \ar@{^{(}->}[r] & Z \ar@{->>}[r] & W_s \ar@{^{(}->}[r] & W_{s-1} \ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & W_1 \ar@{^{(}->}[r] & W }$$ (Note that $\dim Z=k+{\lambda}_s$ and the sequence is exact at $Z$.) Consider again the case ${\lambda}=()$. (Then ${\mathbf}W$ is the empty sequence and is omitted from the notation.) The corresponding quiver representation $D_{()}^k$ does not have any arrows and consists just of the vector spaces $V$ ($\dim V=k$) and $W$. The quiver representation $Q_{()}^k$ is $$\label{Qk} \xymatrix{ V \ar@{^{(}->}[r] & Z \ar@{->>}[r] & W, }$$ where the sequence is exact at $Z$. Write $D^k$ for $D_{()}^k$ and $Q^k$ for $Q_{()}^k$. Let $G^{(k)}=G^{Q^k}$ and $A^{(k)}_{{\mathcal}Y}=A^{Q^k}_{{\mathcal}Y}$. Recall that $c(k,{\mathcal}Y)$ is the number of ${\operatorname{GL}}(V)$-orbits on ${\operatorname{End}}_{{\mathcal}Y} (V)$. We shall prove the following result using Corollary \[genlink\]. \[indlem\] Let ${\lambda}=({\lambda}_1,\ldots,{\lambda}_s)\in {\mathcal}P^m$ and $k\in {\mathbb{Z}}_{\ge 0}$. Let ${\mathbf}W \in F_{{\lambda}}$. Then $$\begin{aligned} \gamma\left(G^{Q_{{\lambda}}^k ({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}}^k ({\mathbf}W)}_{{\mathcal}Y}\right) & = & c(k,{\mathcal}Y) \gamma\!\left(G^{Q_{{\lambda}}({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}}({\mathbf}W)}_{{\mathcal}Y} \right) \\ & + & \sum_{{\lambda}_{s+1}=1}^{\min({\lambda}_s,k)} \sum_{\substack{{\mathbf}W'\in F_{{\lambda}'} \\ {\mathbf}W' \succ {\mathbf}W}} \gamma\left(G^{Q_{{\lambda}}^{k-{\lambda}_{s+1}}({\mathbf}W')}_{{\vphantom}Y}, A_{{\mathcal}Y}^{Q_{{\lambda}}^{k-{\lambda}_{s+1}} ({\mathbf}W')} \right),\end{aligned}$$ where ${\lambda}'=({\lambda}_1,\ldots,{\lambda}_s,{\lambda}_{s+1})$. 
Let $\Phi$ be the set of subspaces $W_{s+1}$ of $W_s$ such that $${\mathbf}W':=(W_1,\ldots,W_s,W_{s+1})\in F_{({\lambda}_1,\ldots,{\lambda}_s,{\lambda}_{s+1})}$$ for some ${\lambda}_{s+1}\in [1,k]$ (of course, then $\dim W_{s+1}={\lambda}_{s+1}$). Then $\Phi$ is a complete set of representatives of ${\operatorname{Stab}}_G ({\mathbf}W)$ on non-zero subspaces of $W_s$ of dimension at most $k$. For each $W_{s+1}\in \Phi$, choose a linear map $\alpha_{W_{s+1}}:V {\rightarrow}W_s$ with image $W_{s+1}$. Let $H=G^{D_{{\lambda}}^k ({\mathbf}W)}$. Then $H={\operatorname{GL}}(V)\times G^{Q_{{\lambda}}({\mathbf}W)}$. Thus, any two elements of ${\operatorname{Hom}}(V,W_s)$ are $H$-conjugate if and only if their images in $W_s$ are ${\operatorname{Stab}}_G ({\mathbf}W)$-conjugate. Let $\Xi$ be the set consisting of the zero map $V{\rightarrow}W_s$ and all the maps $\alpha_{W_{s+1}}$ where $W_{s+1}$ runs through $\Phi$. Then $\Xi$ is a complete set of representatives of $H$-orbits on ${\operatorname{Hom}}(V,W_s)$. For each $\alpha\in\Xi$, let $L^{\alpha}$ be the quiver obtained from $D_{{\lambda}}^k ({\mathbf}W)$ by adding the map $\alpha:V{\rightarrow}W_s$, that is, $$\xymatrix{ V \ar[r]^{\alpha} & W_s \ar@{^{(}->}[r] & W_{s-1} \ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & W_1 \ar@{^{(}->}[r] & W. }$$ By Corollary \[genlink\], applied to the group $G^{D_{{\lambda}}^k ({\mathbf}W)}\le {\operatorname{Aut}}(D_{{\lambda}}^k ({\mathbf}W))$ and to the set $A_{{\mathcal}Y}^{D_{{\lambda}}^k ({\mathbf}W)} \subseteq {\operatorname{End}}(D_{{\lambda}}^k ({\mathbf}W))$, $$\label{indlem1} \gamma\left(G^{Q_{{\lambda}}^k ({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}}^k ({\mathbf}W)}_{{\mathcal}Y}\right) = \sum_{\alpha\in\Xi} \gamma\left(G^{L^{\alpha}}_{{\vphantom}Y}, A^{L^{\alpha}}_{{\mathcal}Y} \right).$$ (Here we use the fact that, if ${\mathbf}X\in {\operatorname{End}}(Q_{{\lambda}}^k ({\mathbf}W))$ and the actions of ${\mathbf}X$ on $V$ and $W_s$ are ${\mathcal}Y$-endomorphisms, then so is the action of ${\mathbf}X$ on $Z$.) If $\alpha=0$, then, obviously, $$\gamma\left(G^{L^{\alpha}}_{{\vphantom}Y}, A^{L^{\alpha}}_{{\mathcal}Y} \right) = \gamma\left(G^{D_{{\lambda}}^k ({\mathbf}W)}_{{\vphantom}Y}, A^{D_{{\lambda}}^k ({\mathbf}W)}_{{\mathcal}Y} \right).$$ Since the quiver representation $D_{{\lambda}}^k ({\mathbf}W)$ is the disconnected union of $Q_{{\lambda}}({\mathbf}W)$ and the quiver representation that consists of the space $V$, we have $$\label{indlem2} \gamma \!\left(G^{L^{0}}_{{\vphantom}Y}, A^{L^{0}}_{{\mathcal}Y}\right) = c(k,{\mathcal}Y) \gamma \!\left(G^{Q_{{\lambda}}({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}}({\mathbf}W)}_{{\mathcal}Y} \right).$$ Now consider the case $\alpha\ne 0$. Then $\alpha=\alpha_{W_{s+1}}$ for some $W_{s+1}\in F_{{\lambda}'}$. Let $O$ be the quiver representation $I(K(L^{\alpha},e),e)$ where $e$ is the arrow from $V$ to $W_s$ in $L^{\alpha}$. Then $O$ may be depicted as $$\xymatrix{ V' \ar@{^{(}->}[r] & V \ar@{->>}[r] & W_{s+1} \ar@{^{(}->}[r] & W_s \ar@{^{(}->}[r] & \cdots \ar@{^{(}->}[r] & W_1 \ar@{^{(}->}[r] & W, }$$ where $V'={\operatorname{ker}}\alpha$ and the map $V {\twoheadrightarrow}W_{s+1}$ is induced by $\alpha$. By Lemma \[KI\], the $G^{L^{\alpha}}$-orbits on $A^{L^{\alpha}}_{{\mathcal}Y}$ are in a one-to-one correspondence with the $G^O$-orbits on $A^O_{{\mathcal}Y}$. However, renaming $V$ as $Z$ and $V'$ as $V$, we may identify $O$ with $Q_{{\lambda}'}^{k-{\lambda}_{s+1}}({\mathbf}W)$ where ${\lambda}'=({\lambda}_1,\ldots,{\lambda}_s,{\lambda}_{s+1})$. 
Thus, $$\label{indlem3} \gamma\left(G^{L^{\alpha}}_{{\vphantom}Y}, A^{L^{\alpha}}_{{\mathcal}Y} \right)= \gamma\left(G^{Q_{{\lambda}'}^{k-{\lambda}_{s+1}}({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}'}^{k-{\lambda}_{s+1}}({\mathbf}W)}_{{\mathcal}Y} \right).$$ The result now follows from , and . The main result of this paper may be stated in a very general form as follows. \[gensq\] Let ${\mathcal}Y$ be a class of linear operators over ${\mathbb{F}}_q$. Let $k\in {\mathbb{N}}$. Suppose that $G$ is a subgroup of ${\operatorname{GL}}(W)$ and that $A\subseteq {\operatorname{End}}(W)\cap {\mathcal}Y$ is preserved by $G$. Then $$\gamma(G^{(k)}_{{\vphantom}Y}, A^{(k)}_{{\mathcal}Y}) = \sum_{j=0}^k c(k-j,{\mathcal}Y) \sum_{{\lambda}\in{\mathcal}P_j^m} \xi_{{\lambda}} (G,A).$$ Recall that $l({\lambda})$ is the length (the number of parts) of a partition $\lambda$. If $s$ is a nonnegative integer, let $$\begin{aligned} a_s & = & \sum_{j=0}^k \sum_{\substack{{\lambda}\in{\mathcal}P_j^m \\ l({\lambda})=s }} c(k-j,{\mathcal}Y) \xi_{{\lambda}} (G,A), \\ b_s & = & \sum_{j=0}^k \sum_{\substack{{\lambda}\in{\mathcal}P_{k-j}^m \\ l({\lambda})=s}} \sum_{{\mathbf}W\in F_{{\lambda}}} \gamma(G^{Q_{{\lambda}}^j ({\mathbf}W)}, A^{Q_{{\lambda}}^j ({\mathbf}W)}_{{\mathcal}Y}). \end{aligned}$$ Since $A\subseteq {\mathcal}Y$, we have $A_{{\mathcal}Y}^{Q_{{\lambda}}({\mathbf}W)}=A^{Q_{{\lambda}}({\mathbf}W)}_{{\vphantom}Y}$. Thus, by , $$\label{xi2} \xi_{{\lambda}}(G,A)=\sum_{{\mathbf}W\in F_{{\lambda}}} \gamma\left(G^{Q_{{\lambda}}({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}}({\mathbf}W)}_{{\mathcal}Y}\right).$$ Observe that $$b_0=\gamma\left(G^{(k)}_{{\vphantom}Y}, A^{(k)}_{{\mathcal}Y} \right).$$ Indeed, the only non-zero summand of $b_0$ corresponds to the case $j=k$, ${\lambda}=()$. Applying Lemma \[indlem\] and rearranging the sums, we infer that, for any $s$, $$\begin{aligned} b_s & = & \sum_{j=0}^k \sum_{\substack{{\lambda}\in{\mathcal}P_{k-j}^m \\ l({\lambda})=s}} \sum_{{\mathbf}W\in F_{{\lambda}}} \gamma\left(G^{Q_{{\lambda}}^j ({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}}^j ({\mathbf}W)}_{{\mathcal}Y}\right) \nonumber\\ & = & \sum_{j=0}^k \sum_{\substack{{\lambda}\in{\mathcal}P_{k-j}^m \\ l({\lambda})=s}} \sum_{{\mathbf}W\in F_{{\lambda}}} c(j,{\mathcal}Y)\gamma\!\left(G^{Q_{{\lambda}}({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}}({\mathbf}W)}_{{\mathcal}Y}\right) \nonumber\\ & + & \sum_{i=0}^k \sum_{\substack{{\lambda}'\in{\mathcal}P_{k-i}^m \\ l({\lambda}')=s+1}} \sum_{{\mathbf}W'\in F_{{\lambda}'}} \gamma\left(G^{Q_{{\lambda}'}^i ({\mathbf}W)}_{{\vphantom}Y}, A^{Q_{{\lambda}'}^i ({\mathbf}W)}_{{\mathcal}Y}\right) \nonumber\\ & = & a_s+b_{s+1}. \label{gensq1}\end{aligned}$$ (The last equality follows from .) Note that, if $s>k$, then $a_s=b_s=0$ because any partition ${\lambda}$ with $l({\lambda})=s$ satisfies $|{\lambda}|\ge s>k$. Hence, by , $$b_0=a_0+a_1+\cdots+a_k,$$ and the result follows. We now deduce Theorem \[pargen\]. Let $k,m \in {\mathbb{N}}$, and let $W$ be a vector space over ${\mathbb{F}}_q$ of dimension $m$. Consider the quiver representation $Q^{(k)}=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ given by : say, $E_0=\{a,b,c\}$, $U_a=V$, $U_b=Z$, $U_c=W$ ($\dim V=k$). Put $G={\operatorname{GL}}(W)$ and $A={\mathcal}Y\cap {\operatorname{End}}(W)$. 
Then, for any partition ${\lambda}\in {\mathcal}P^m$, the set $F_{{\lambda}}$ consists of just one flag, so $$\label{pargenpr1} \xi_{{\lambda}}(G,A)=\kappa_{{\lambda}}^m ({\mathcal}Y).$$ Moreover, ${\mathbf}X=(X_a,X_b,X_c)\mapsto X_b$ induces a one-to-one correspondence between the $G^{(k)}$-orbits in $A^{(k)}$ and the ${\mathscr}P(Z;V)$-orbits in ${\mathcal}Y\cap {\operatorname{End}}(Z;V)$. (Here we identify $V$ with its image under the injective map $V{\rightarrow}Z$.) Hence, $$\label{pargenpr2} \gamma\left(G^{(k)},A^{(k)}_{{\mathcal}Y}\right)=\kappa_{(k)}^{k+m} ({\mathcal}Y).$$ Theorem \[pargen\] follows immediately from \[pargenpr1\], \[pargenpr2\] and Theorem \[gensq\]. Let ${\lambda}$ be a partition. It can be represented as $$\lambda=(\underbrace{s_1,s_1,\ldots,s_1}_{u_1}, \underbrace{s_2,\ldots,s_2}_{u_2}, \ldots, \underbrace {s_l,\ldots,s_l}_{u_l})$$ where $s_1>s_2>\ldots>s_l$. Let $\bar{\lambda}$ be the set $\{s_1,\ldots,s_l\}$. If $S\subset {\mathbb{N}}$ is a finite set and $k\in {\mathbb{N}}$, let $r(k,S)$ be the number of partitions ${\lambda}$ of $k$ such that $\bar{{\lambda}}=S$. That is, $r(k,S)$ is the number of partitions ${\lambda}$ of $k$ such that ${\lambda}_i\in S$ for all $i$ and, for each $s\in S$, there exists $i$ such that ${\lambda}_i=s$. (For instance, $r(5,\{1,2\})=2$, the relevant partitions being $(2,1,1,1)$ and $(2,2,1)$.) If $S=\{s_1,\ldots,s_l\}$ is a set and $s_1>\cdots>s_l$, we shall identify the set $S$ with the partition $(s_1,s_2,\ldots,s_l)$. (So the notation $\xi_S (G,A)$ makes sense, for example.) Let $W$, $G\le {\operatorname{GL}}(W)$ and $A\subseteq {\operatorname{End}}(W)\cap {\mathcal}Y$ be as above, with $m=\dim W$. It is clear that, if $\lambda\in{\mathcal}P^m$, then $$\label{coll1} \xi_{{\lambda}}(G,A)=\xi_{\bar{\lambda}} (G,A).$$ (Indeed, duplicating subspaces in a flag ${\mathbf}W$ does not add anything new to the structure.) Moreover, if $S=S'\sqcup \{m\}$ is a subset of $[1,m]$ containing $m$, then $$\label{coll2} \xi_S (G,A)=\xi_{S'} (G,A).$$ These identities allow us to simplify the expression in Theorem \[gensq\]. \[sqcor1\] Under the hypotheses of Theorem \[gensq\], $$\gamma(G^{(k)}, A^{(k)}_{{\mathcal}Y}) = \sum_{j=0}^k c(k-j,{\mathcal}Y) \sum_{S\subseteq [1,m-1]} (r(j,S)+r(j,S\cup \{m\})) \xi_S (G,A).$$ When calculating the last sum, we only need to consider those $S\subseteq [1,m-1]$ for which $\sum_{s\in S} s\le k$: if $\sum_{s\in S} s>k$, then $r(j,S)=r(j,S\cup \{m\})=0$ for all $j\in [0,k]$. Similarly, Theorem \[pargen\] implies the following. \[sqcor2\] Let $k,m\in {\mathbb{N}}$. Then $$\kappa_{(k)}^{k+m} ({\mathcal}Y) = \sum_{j=0}^k c(k-j,{\mathcal}Y) \sum_{S\subseteq [1,m-1]} (r(j,S)+r(j,S\cup \{m\})) \kappa_S^{m} ({\mathcal}Y).$$

Inverting the formula {#inv}
=====================

From now on, we shall assume that ${\mathcal}Y$ is the class ${\mathcal}N$ of all nilpotent endomorphisms, so $c(j,{\mathcal}Y)=p(j)$ for all $j$. Let $W$ be an $m$-dimensional vector space over ${\mathbb{F}}_q$, where $m\in {\mathbb{N}}$ is fixed throughout the section. Let $G$ be a subgroup of ${\operatorname{GL}}(W)$ preserving a subset $A$ of $N(W)$ (as in Section \[indarg\]). If $k\in {\mathbb{Z}}_{\ge 0}$, define $$\begin{aligned} \phi_k (G,A) & = & \sum_{S\subseteq [1,m]} r(k,S) \xi_S (G,A) \quad\text{and } \\ \psi_k (G,A) & = & \phi_k (G,A)-\phi_{k-m}(G,A)\end{aligned}$$ where, by convention, $\phi_k (G,A)=0$ if $k<0$. If $S\subseteq [1,m-1]$, then $$r(k,S \cup \{m\})=r(k-m,S) + r(k-m,S\cup \{m\})$$ for all $k\in {\mathbb{N}}$. Also, by \[coll2\], $\xi_S (G,A)= \xi_{S\cup \{m\}} (G,A)$.
Hence, $$\begin{aligned} \phi_{k-m}(G,A) & = & \sum_{S\subseteq [1,m-1]} r(k-m,S) \xi_S (G,A) \\ & + & \sum_{S\subseteq [1,m-1]} r(k-m,S \cup \{m\})\xi_{S \cup \{m\}} (G,A) \\ & = & \sum_{S\subseteq [1,m-1]} r(k, S\cup \{m\}) \xi_{S\cup\{m\}}(G,A).\end{aligned}$$ It follows that $$\label{psi} \psi_k (G,A) = \sum_{S\subseteq [1,m-1]} r(k,S) \xi_S (G,A).$$ We now use Corollary \[sqcor1\] to show that the numbers $\phi_k(G,A)$ and $\psi_k (G,A)$ may be expressed (independently of $G$ and $A$) in terms of $\gamma(G^{(r)},A^{(r)}_{{\mathcal}N})$, $r\in [0,k]$. \[expphi\] Let $k\in {\mathbb{N}}$. There exist integers $a_0,a_1,\ldots,a_k$ that depend only on $k$ and $m$ (but not on $G$ or $A$) such that $$\phi_k (G,A) = \sum_{j=0}^k a_j \gamma(G^{(j)},A^{(j)}_{{\mathcal}N}).$$ We prove the lemma by induction on $k$. Observe that $$\phi_0 (G,A) = \xi_{\varnothing}(G,A) = \gamma(G^{(0)}, A^{(0)}_{{\mathcal}N}) = \gamma(G,A).$$ By Corollary \[sqcor1\], $$\begin{aligned} \gamma(G^{(k)},A^{(k)}_{{\mathcal}N}) & = & \sum_{j=0}^k p(k-j) \phi_j (G,A), \qquad \text{so } \label{usephi}\\ \phi_k (G,A) & = & \gamma(G^{(k)},A^{(k)}_{{\mathcal}N})-p(k)\gamma(G,A)- \sum_{j=1}^{k-1} p(k-j) \phi_j (G,A).\nonumber\end{aligned}$$ The result for $\phi_k (G,A)$ now follows by the inductive hypothesis. \[exppsi\] Let $k\in {\mathbb{N}}$. There exist integers $a_0,a_1,\ldots,a_k$ that depend only on $k$ and $m$ (but not on $G$ or $A$) such that $$\psi_k (G,A) = \sum_{j=0}^k a_j \gamma(G^{(j)},A^{(j)}_{{\mathcal}N}).$$ Also, there exist integers $a'_0,a'_1,\ldots,a'_k$ depending only on $k$ and $m$ such that $$\gamma(G^{(k)},A^{(k)}_{{\mathcal}N}) = \sum_{j=0}^k a'_j \psi_j (G,A).$$ The first statement follows from Lemma \[expphi\]. The second statement follows from \[usephi\] and the identity $$\phi_j (G,A)= \sum_{i=0}^{\infty} \psi_{j-im}(G,A),$$ which is a consequence of the definition of $\psi_k (G,A)$. Let $n=m(m-1)/2$. It is proved in the Appendix that there exist (explicitly defined) integers $c_1,\ldots,c_n$, depending only on $m$, such that $$\label{cn} r(k,S)=-\sum_{j=k-n}^{k-1} c_{k-j} r(j,S)$$ for all $k>n$ and all $S\subseteq [1,m-1]$. \[expgen\] Let $n=m(m-1)/2$. Let $k>n$. Then there exist integers $a_{k0}, a_{k1},\ldots,a_{kn}$ depending only on $k$ and $m$ (but not on $G$ or $A$) such that $$\gamma(G^{(k)},A^{(k)}_{{\mathcal}N}) = \sum_{j=0}^n a_{kj} \gamma(G^{(j)},A^{(j)}_{{\mathcal}N}).$$ By \[psi\] and \[cn\], for all $k>n$, $$\psi_k (G,A)= - \sum_{j=k-n}^{k-1} c_{k-j} \psi_j (G,A).$$ An induction on $k$ shows that, for $k>n$, there exist integers $a'_{k1},\ldots,a'_{kn}$ such that $$\psi_k (G,A) = \sum_{j=1}^n a'_{kj} \psi_j (G,A).$$ By Corollary \[exppsi\], the result follows. The matrix formed by the numbers $r(k,S)$ as $k$ runs through ${\mathbb{N}}$ and $S$ runs through non-empty subsets of $[1,m-1]$ is of rank $m(m-1)/2=n$, as shown in the Appendix. Thus, Proposition \[expgen\] is the best result that may be obtained via the method described. We now show that Theorem \[rec\] is a particular case of this result. Let ${\mathbf}l=(l_1,\ldots,l_s)$ be a tuple of nonnegative integers with $l_1+\cdots+l_s=m$. As before, let $W={\mathbb{F}}_q^m$. Put $G=P^{{\mathbf}l}(q)\le {\operatorname{GL}}(W)$ and $A=N^{{\mathbf}l}(q)\subseteq {\operatorname{End}}(W)$. Then $A$ is preserved by $G$.
The quiver representation $Q^k$ may be depicted as $$\xymatrix{ U_a \ar@{^{(}->}[r] & U_b \ar@{->>}[r] & W=U_c }$$ where the sequence is exact. Then the map ${\operatorname{End}}(Q^k){\rightarrow}{\operatorname{End}}(U_b)$, ${\mathbf}X\mapsto X_b$ establishes an isomorphism between the $G^{(k)}$-action on $A^{(k)}_{{\mathcal}N}$ and the $P^{k,{\mathbf}l}(q)$-action on $N^{k,{\mathbf}l}(q)$. (Here we identify $P^{k,{\mathbf}l}(q)$ with the subgroup of ${\operatorname{GL}}(U_b)$ consisting of those maps $g$ that preserve the image of $U_a$ and act as an element of $G$ on $W$.) Thus, $$\label{rhoquiver} \rho_{k,{\mathbf}l}(q) = \gamma(G^{(k)}_{{\vphantom}N},A^{(k)}_{{\mathcal}N}).$$ Theorem \[rec\] now follows from Proposition \[expgen\]. With $n=m(m-1)/2$, as before, it is shown in the Appendix that there exist (explicitly defined) integers $d_1,\ldots,d_n$ such that $$\label{djeq} \sum_{j=1}^n d_j r(j,S) = \begin{cases} 1 & \text{if } S=[1,m-1], \\ 0 & \text{if } S \subsetneq [1,m-1]. \end{cases}$$ \[maxredgen\] Let $m\in {\mathbb{N}}$, and let $W={\mathbb{F}}_q^m$. Let $n=m(m-1)/2$. Then there exist integers $a_{0},a_1,\ldots,a_n$ depending only on $m$ such that, for any $G\le {\operatorname{GL}}(W)$ and $A\subseteq {\operatorname{N}}(W)$ with $G$ preserving $A$, $$\xi_{[1,m-1]}(G,A)=\sum_{j=0}^n a_j \gamma(G^{(j)}_{{\vphantom}N},A^{(j)}_{{\mathcal}N}).$$ By  and , $$\sum_{j=1}^n d_j \psi_j (G,A) = \xi_{[1,m-1]}(G,A).$$ The result now follows from Corollary \[exppsi\]. In order to deduce Theorem \[impl\], we put $G={\operatorname{GL}}_m (q)$ and $A={\operatorname{N}}_m (q)$. Then, by , $\rho_{(j,m)}(q)=\gamma(G^{(j)},A^{(j)})$. Also, $\xi_{[1,m-1]}(G,A)=\rho_{(1^m)}(q)$. Thus, Theorem \[impl\] follows from Proposition \[maxredgen\]. Groups associated to preordered sets {#presets} ==================================== Let $C$ be a finite set. A binary relation ${\preccurlyeq}$ on $C$ is a *preorder* if (i) $x{\preccurlyeq}x$ for all $x\in C$; and (ii) if $x{\preccurlyeq}y$ and $y{\preccurlyeq}z$, then $x{\preccurlyeq}z$, for all $x,y,z\in C$. A *preordered set* is a finite set together with a preorder on it. Two elements $x,y\in C$ are said to be *comparable* if either $x{\preccurlyeq}y$ or $y{\preccurlyeq}x$. If $(C,{\preccurlyeq})$ is a preordered set, one can define an equivalence relation on $C$ as follows: $x$ is equivalent to $y$ if and only if $x{\preccurlyeq}y$ and $y{\preccurlyeq}x$. The *clots* of $C$ are, by definition, the equivalence classes with respect to this relation. Say that a clot $D\subseteq C$ is *minimal* if there is no $x\in C\setminus D$ such that $x{\preccurlyeq}y$ for some $y\in D$. A *partially ordered set* is a preordered set each of whose clots contains only one element. The *dual* of a preordered set $(C,{\preccurlyeq})$ is $(C,{\preccurlyeq}')$ where $x{\preccurlyeq}' y$ if and only if $y{\preccurlyeq}x$. We denote the dual of a preordered set $C$ by $C^*$. If $(C_1,{\preccurlyeq}_1)$ and $(C_2,{\preccurlyeq}_2)$ are two preordered sets, their *disjoint union* is the set $C_1 \sqcup C_2$ with the preorder ${\preccurlyeq}$ defined as follows: $x{\preccurlyeq}y$ if and only if $x,y\in C_i$ and $x{\preccurlyeq}_i y$ for some $i\in\{1,2\}$. Where appropriate, $C_1 \sqcup C_2$ will denote the corresponding preordered set rather than just a set. We shall define preordered sets by diagrams as follows. 
Clots will correspond to nodes, with the number at each node equal to the number of elements in the corresponding clot; $x{\preccurlyeq}y$ if and only if one can get from the node of $x$ to the node of $y$ by going along arrows. For example, $$\label{expre} \begin{array}{c} \begin{picture}(40,12) \multiput(0,10)(20,0){3}{{\circle*{1.5}}} \put(0,0){{\circle*{1.5}}} \put(0,10){\usebox{\rvec}} \put(20,10){\usebox{\rvec}} \put(0,0){\usebox{\ruvec}} \put(1,11){$1$} \put(21,11){$2$} \put(41,11){$1$} \put(0,2){$1$} \end{picture} \end{array}$$ is (isomorphic to) the preordered set $\{1,2,3,4,5\}$ where ${\preccurlyeq}$ is defined as follows: $1{\preccurlyeq}3{\preccurlyeq}4 {\preccurlyeq}3 {\preccurlyeq}5$ and $2{\preccurlyeq}3$. Let $q$ be a prime power. Suppose $(C,{\preccurlyeq})$ is a (finite) preordered set. Let $M^C(q)=M^{(C,{\preccurlyeq})}(q)$ be the set of all $C\times C$ matrices $X=(x_{ij})$ over ${\mathbb{F}}_q$ such that $x_{ij}=0$ unless $i{\preccurlyeq}j$, for all $i,j\in C$. Let $P^C (q)$ be the group of all invertible matrices in $M^C (q)$, and let $N^C (q)$ be the set of all nilpotent matrices in $M^C (q)$. Let $$\rho_C (q) = \gamma(P^C (q), N^C (q)).$$ For example, if $C$ is the preordered set given by , then $M^{C}(q)$ is (up to a permutation of $C$) the set of matrices of the form $$\begin{pmatrix} * & 0 & * & * & * \\ 0 & * & * & * & * \\ 0 & 0 & * & * & * \\ 0 & 0 & * & * & * \\ 0 & 0 & 0 & 0 & * \\ \end{pmatrix}.$$ If ${\mathbf}l=(l_1,\ldots,l_s)$ is a tuple of nonnegative integers, we shall also write ${\mathbf}l$ for the preordered set $$\begin{picture}(70,2) \multiput(0,0)(20,0){2}{{\circle*{1.5}}} \multiput(50,0)(20,0){2}{{\circle*{1.5}}} \put(0,0){\usebox{\rvec}} \put(50,0){\usebox{\rvec}} \put(33,0){$\cdots$} \put(1,1){$l_1$} \put(21,1){$l_2$} \put(51,1){$l_{s-1}$} \put(71,1){$l_s$} \end{picture}$$ Note that then $P^{{\mathbf}l}(q)$, $\rho_{{\mathbf}l}(q)$, etc. are as defined previously. If $C$ is a preordered set, we shall aim to reduce the problem of finding $\rho_C (q)$ to orbit-counting problems for matrices of size less than $|C|$, using Theorem \[gensq\]. In some cases, we shall be able to express $\rho_C (q)$ in terms of $\rho_{O}(q)$ where $O$ varies among preordered sets of size less than $|C|$. This will allow us to calculate $\rho_C (q)$ using recursion for some $C$. Let $(C,{\preccurlyeq})$ be a finite preordered set. Let $D\subseteq C$ be a minimal clot in $C$ with $|D|=k$, say. Let $E=\{x\in C: y\not{\preccurlyeq}x \; \forall y\in D \}$. Let $\bar{D}=C\setminus D$, $\bar E=C\setminus E$ and $C'=C\setminus (D\cup E)$. We shall view $C\times C$ matrices as endomorphisms of a vector space $V$ over ${\mathbb{F}}_q$ equipped with a basis $\{e_i\}_{i\in C}$. Let $V_D= {\operatorname{span}}\{e_i\}_{i\in D}$, $V_E={\operatorname{span}}\{e_i\}_{i\in E}$, $V_{\bar D}=V/V_D$ and $V_{C'}=V/(V_D+V_E)$. We may view $M^{\bar D}(q)$ as a subring of ${\operatorname{End}}(V_{\bar D})$ using the projections of $e_i$, $i\in \bar D$ as a basis of $V_{\bar D}$. Similarly, we may view $M^{C'}(q)$ as a subring of ${\operatorname{End}}(V_{C'})$. (By abuse of notation, we shall view $\{e_i\}_{i\in C'}$ as a basis of $V_{C'}$.) Let $m=|C'|$. If $$S=\{s_1>\cdots>s_r\}\subseteq [1,m-1],$$ let ${\mathcal}H_S={\mathcal}H_S (C')$ be the set of all flags ${\mathbf}W=(W_1,\ldots, W_r)$ such that $$W_r \le W_{r-1} \le \cdots \le W_1 \le V_{C'}$$ and $\dim W_i=s_i$ for all $i$. Let $\pi: V_{\bar D}{\twoheadrightarrow}V_{C'}$ be the natural projection. 
If ${\mathbf}W\in {\mathcal}H_S$, write $\pi^{-1}({\mathbf}W)$ for the flag $(\pi^{-1}(W_1),\ldots,\pi^{-1}(W_r))$ in $V_{\bar D}$. Let $M^{\bar D}_{{\mathbf}W}(q) = {\mathscr}P(V_{\bar D}; \pi^{-1}({\mathbf}W))$. That is, $M^{\bar D}_{{\mathbf}W}(q)$ is the ring of all elements of $M^{\bar D}(q)$ whose action on $V_{C'}$ preserves ${\mathbf}W$. Let $P^{\bar D}_{{\mathbf}W}(q)$ be the group of all invertible elements in $M^{\bar D}_{{\mathbf}W}(q)$, and let $N^{\bar D}_{{\mathbf}W}(q)$ be the set of all nilpotent elements in $M^{\bar D}_{{\mathbf}W}(q)$. \[preord\] With the notation as in the previous paragraph, suppose that $H_S$ is a complete set of representatives of the $P^{C'}(q)$-orbits on ${\mathcal}H_S$ for each $S\subseteq [1,m-1]$. Then $$\rho_C (q) = \sum_{j=0}^k p(k-j) \sum_{S\subseteq [1,m-1]} (r(j,S)+r(j,S\cup \{m\})) \sum_{{\mathbf}W\in H_S} \gamma(P^{\bar D}_{{\mathbf}W}(q), N^{\bar D}_{{\mathbf}W} (q)).$$ Let $Z$ be a complete set of representatives of $P^{\bar D}(q)$-orbits on $N^{\bar D}(q)$. Let $\pi_{D}: M^C (q) {\rightarrow}M^{\bar D}(q)$ be the natural projection. If $X\in Z$, let $G_X = \pi_D^{-1}({\operatorname{Stab}}_{P^{\bar D}(q)} (X)) \cap P^{C}(q)$ and $A_X = \pi_D^{-1}(X) \cap N^C (q)$. By Lemma \[actprod\], $$\label{preord1} \rho_C (q) = \gamma(P^C (q), N^C (q))= \sum_{X\in Z} \gamma(G_X, A_X).$$ Let $\pi_E: M^C (q) {\rightarrow}M^{\bar E}(q)$ be the natural projection. Let $X\in Z$. It is easy to see that $\pi_E$ establishes a bijection between the $G_X$-orbits on $A_X$ and the $\pi_E (G_X)$-orbits on $\pi_E (A_X)$. So $$\label{preord2} \gamma(G_X, A_X)= \gamma(\pi_E (G_X), \pi_E (A_X))$$ Let $Q$ be the quiver representation $$\xymatrix{ V_D \ar@{^{(}->}[r] & V_{\bar E} \ar@{->>}[r] & V_{C'} }$$ where the first map is induced by the inclusion $V_D {\hookrightarrow}V_{C}$ and the second map is the natural projection. Then $Q$ is isomorphic to $Q^k$ in the notation of Section \[indarg\] ($V_{C'}$ plays the role of $W$). Let $\chi: {\operatorname{End}}(Q) {\rightarrow}{\operatorname{End}}(V_{\bar E}; V_D)$ be the isomorphism which maps an endomorphism ${\mathbf}Y$ of $Q$ to the action of ${\mathbf}Y$ on $V_{\bar E}$. Our aim is to express $\gamma(\pi_E (G_X), \pi_E (A_X))$ by applying Corollary \[sqcor1\] to the quiver $Q^k$. In fact, this expression is the only substantial step of the proof. Let $G'_X\le P^{C'}(q)$ be the image of $G_X$ under the natural map $P^C (q) {\rightarrow}P^{C'}(q)$. Let $\eta: M^{\bar D}(q){\rightarrow}M^{C'}(q)$ be the natural projection, and let $A'_X=\{\eta(X)\}\subseteq N^{C'}(q)$. Let $\omega: {\operatorname{End}}(V_{\bar E}; V_D){\rightarrow}{\operatorname{M}}_{C',C'}(q)$ be the natural projection, $(y_{ij})_{i,j\in \bar E}\mapsto (y_{ij})_{i,j\in C'}$. Since $d{\preccurlyeq}c$ for all $d\in D$ and $c\in C'$, $$\omega^{-1}(M^{C'}(q))=M^{\bar E}(q).$$ We identify the quiver representation $Q$ with $Q^k$; let $(G'_X)^{(k)}$ and $(A'_X)^{(k)}_{{\mathcal}N}$ be as defined in Section \[indarg\]. Then, by those definitions, $$\begin{aligned} \chi((G'_X)^{(k)})& = &\omega^{-1}(G'_X)\cap P^{\bar E}(q) \quad \text{and} \\ \chi((A'_X)^{(k)}_{{\mathcal}N}) & = & \omega^{-1}(A'_X)\cap N^{\bar E}(q).\end{aligned}$$ Hence, clearly, $\chi((A'_X)^{(k)}_{{\mathcal}N}) = \pi_E (A_X)$. We claim that $\chi((G'_X)^{(k)}) = \pi_E (G_X)$. It is clear that $\pi_E (G_X)\subseteq \omega^{-1}(G'_X) \cap P^{\bar E}(q)$. For the converse, suppose that $Y=(y_{ij})\in P^{\bar E}(q)$ satisfies $\omega(Y)\in G'_X$. 
Then there exists a matrix $B=(b_{ij})\in M^C (q)$ such that $\pi_D (B)$ fixes $X$ and $b_{ij}=y_{ij}$ for all $i,j\in C'$. Define $R=(r_{ij})\in M^{C}(q)$ by ‘gluing together’ $\pi_D (B)$ and $Y$: $r_{ij}=y_{ij}$ if $i,j \in D\cup C'$ and $r_{ij}=b_{ij}$ if $i,j\in E\cup C'$. Then $\pi_D (R)=\pi_D (B)$, so $R\in G_X$, and $\pi_E (R)=Y$. Hence, $Y\in \pi_E (G_X)$. We are now in a position to apply Corollary \[sqcor1\]: $$\begin{aligned} \gamma(\pi_E (G_X), \pi_E (A_X)) & = & \gamma((G'_X)_{{\vphantom}N}^{(k)}, (A'_X)_{{\mathcal}N}^{(k)}) \nonumber\\ & = & \sum_{j=0}^k p(k-j) \sum_{S\subseteq [1,m-1]} (r(j,S)+r(j,S\cup \{m\})) \xi_S (G'_X, A'_X). \label{preord3}\end{aligned}$$ By definition, $$\begin{aligned} \xi_S (G'_X,A'_X) & = &\gamma(G'_X, \{{\mathbf}W\in {\mathcal}H_{S}: X(\pi^{-1}({\mathbf}W))= \pi^{-1} ({\mathbf}W) \}) \\ & = & \gamma({\operatorname{Stab}}_{P^{\bar D}(q)}(X), \{{\mathbf}W\in {\mathcal}H_{S}: X(\pi^{-1}({\mathbf}W))= \pi^{-1}({\mathbf}W) \} ).\end{aligned}$$ Let $$L=\{ (X,{\mathbf}W)\in N^{\bar D}(q)\times {\mathcal}H_S: X(\pi^{-1}({\mathbf}W)) = \pi^{-1}({\mathbf}W) \}.$$ By Lemma \[actprod\] (applied in two different ways), $$\sum_{X\in Z} \xi_S (G'_X, A'_X ) = \gamma(P^{\bar D}(q), L) = \sum_{{\mathbf}W\in H_S} \gamma(P^{\bar D}_{{\mathbf}W} (q), N^{\bar D}_{{\mathbf}W}(q)). \label{preord4}$$ The result follows by combining , , and . Therefore, in order to compute $\rho_C (q)$, it is enough to find a set of representatives $H_S$ for each $S\subseteq [1,m-1]$ and to calculate $\gamma(P^{\bar D}_{{\mathbf}W}(q), N^{\bar D}_{{\mathbf}W}(q))$ for every ${\mathbf}W\in H_S$. Suppose that any two elements of $C'$ are comparable. Then $C'$ is isomorphic to some ${\mathbf}l=(l_1,\ldots,l_s)$. Let $S=\{t_1>\cdots>t_u\} \subseteq [1,m-1]$. The orbits of $P^{{\mathbf}l}(q)$ on ${\mathcal}H_S ({\mathbf}l)$ are well understood. (In fact, these orbits correspond to orbits of pairs of flags, of appropriate dimensions, under the action of ${\operatorname{GL}}_m (q)$.) Let ${\mathcal}A_S^{{\mathbf}l}$ be the set of tuples ${\mathbf}n=(n_{ij})_{i\in [1,s], j\in [1,u]}$ of nonnegative integers satisfying $$l_i\ge n_{i1}\ge n_{i2} \ge \cdots \ge n_{iu} \quad \text{for all } i\in [1,s] \; \text{ and }$$ $$\sum_{i=1}^s n_{ij} = t_j \qquad\quad \text{ for all } j\in [1,u].$$ Relabel the basis $\{e_i\}_{i\in C'}$ of $V_{C'}$ as $\{f_{ij}\}_{i\in [1,s],j\in [1,l_i]}$ so that $P^{{\mathbf}l}(q)=P^{C'}(q)$ is the stabiliser of the flag consisting of the subspaces ${\operatorname{span}}\{f_{ij} \}_{i\in [1,r],j\in [1,l_i]}$, $r=1,\ldots,s$. (This relabelling is given by an isomorphism between $C'$ and ${\mathbf}l$.) For each ${\mathbf}n\in {\mathcal}A_S^{{\mathbf}l}$, define a flag ${\mathbf}W^{{\mathbf}n}\in {\mathcal}H_S ({\mathbf}l)$ as follows: $$W^{{\mathbf}n}_a = {\operatorname{span}}\{ f_{ij} : \: 1\le i \le s,\: 1\le j \le n_{ia} \}, \quad a=1,2,\ldots,u.$$ \[doublefl\] *[@MWZ Example 2.10]* Let ${\mathbf}l=(l_1,\ldots,l_s)$ be a tuple of nonnegative integers with $m=l_1+\cdots+l_s$. Let $S\subseteq [1,m-1]$. Then $\{{\mathbf}W^{{\mathbf}n}: {\mathbf}n\in {\mathcal}A_S^{{\mathbf}l} \}$ is a complete set of representatives of $P^{{\mathbf}l}(q)$-orbits on ${\mathcal}H_S ({\mathbf}l)$. If $C'$ is isomorphic to some ${\mathbf}l$, we choose a set of representatives $H_S$ as given by Lemma \[doublefl\].
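As a small illustration of this parametrisation (it is not needed for the arguments below), take ${\mathbf}l=(1,2)$, so $s=2$ and $m=3$, and let $S=\{1\}$, so $u=1$ and $t_1=1$. The conditions on ${\mathbf}n$ reduce to $n_{11}+n_{21}=1$ with $n_{11}\le 1$ and $n_{21}\le 2$, whence $${\mathcal}A_S^{{\mathbf}l}=\{(1,0),\,(0,1)\}, \qquad W^{(1,0)}_1=\langle f_{1,1}\rangle, \qquad W^{(0,1)}_1=\langle f_{2,1}\rangle.$$ These two lines represent the two $P^{(1,2)}(q)$-orbits on the set of one-dimensional subspaces of ${\mathbb{F}}_q^3$: the distinguished line $\langle f_{1,1}\rangle$ itself, and the lines different from it.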
In particular, for every ${\mathbf}W\in H_S$, each $W_j$ is spanned by a subset of the standard basis $\{e_i\}_{i\in C'}$. Let ${\mathbf}n \in {\mathcal}A_S^{{\mathbf}l}$. Write $n_{i0}=l_i$ and $n_{i,u+1}=0$ for all $i\in [1,s]$. Define a new preorder ${\preccurlyeq}'$ on $\bar D$ as follows: (i) \[neword1\] suppose $i_1,i_2 \in C'$; let $a_1,b_1,a_2,b_2$ be such that $e_{i_1}=f_{a_1,b_1}$ and $e_j=f_{a_2,b_2}$; then $i_1{\preccurlyeq}' i_2$ if and only if $a_1 \le a_2$ and there exists $c\in [0,u]$ such that $b_1 \le n_{a_1,c}$ and $b_2> n_{a_2,c+1}$; (ii) if either $i\notin C'$ or $j\notin C'$, then $i{\preccurlyeq}' j$ if and only if $i{\preccurlyeq}j$. Note that, in case , $i_1{\preccurlyeq}i_2$ if and only if $a_1\le a_2$. It is straightforward to check that, if $i_1,i_2\in C'$, then $i_1{\preccurlyeq}' i_2$ if and only if $i_1{\preccurlyeq}i_2$ and, for all $j\in [1,u]$, $e_{i_1}\in W^{{\mathbf}n}_j$ implies $e_{i_2}\in W^{{\mathbf}n}_j$. (In fact, this is our motivation for defining ${\preccurlyeq}'$). Let $O=O(S,{\mathbf}n)$ be the preordered set $(\bar D,{\preccurlyeq}')$. It follows that $M^{\bar D}_{{\mathbf}W^{{\mathbf}n}}(q)=M^{O(S,{\mathbf}n)}(q)$. Hence, $$\gamma(P^{\bar D}_{{\mathbf}W^{{\mathbf}n}}(q), N^{\bar D}_{{\mathbf}W^{{\mathbf}n}}(q)) = \rho_{O(S,{\mathbf}n)}(q),$$ and we deduce the following from Proposition \[preord\]. \[preordcor\] Let $(C,{\preccurlyeq})$ be a preordered set with a minimal clot $D$. Let $k=|D|$. Let $C'=\{x\in C: y{\preccurlyeq}x \;\forall y\in D\}\setminus D$. Let $m=|C'|$. Suppose the preordered set $C'$ is isomorphic to some ${\mathbf}l=(l_1,\ldots,l_s)$. Then $$\rho_C (q) = \sum_{j=0}^k p(k-j) \sum_{S\subseteq [1,m-1]} (r(j,S)+r(j,S\cup \{m\})) \sum_{{\mathbf}n\in {\mathcal}A_S^{{\mathbf}l}} \rho_{O(S,{\mathbf}n)} (q),$$ where $O(S,{\mathbf}n)$ is defined in terms of $\bar D=C\setminus D$, $C'$ and ${\preccurlyeq}$ as above. Since $|\bar{D}|<|C|$, this allows us to compute $\rho_C (q)$ by recursion as long as we can always find a minimal clot $D$ such that any two elements of $C'$ are comparable. Using this method, one may compute $\rho_{(l_1,\ldots,l_s)}(q)$ for all tuples ${\mathbf}l$ with $l_1+\cdots+l_s\le 6$, thus proving Proposition \[rhosmall\] (the explicit expressions are given at the end of Section \[dualrep\]). Indeed, one can choose the clots $D$ in such a way that the only preordered set occurring in that computation for which Corollary \[preordcor\] is not applicable is the one considered in Example \[Delta2ex\] below. For more detail on the computation, see [@thesis Appendix B]. (The computation also uses the symmetry $\rho_C (q)=\rho_{C^*}(q)$ proved in Section \[dualrep\] below.) We now consider some examples of computations using Proposition \[preord\] and Corollary \[preordcor\]. For ease of notation, we list the flags ${\mathbf}W={\mathbf}W^{{\mathbf}n}\in H_S$ directly, without giving ${\mathbf}n$. \[ex1111\] Suppose $C$ is isomorphic to $(1,1,1,1)$: say, $C=[1,4]$ with the usual order. The only minimal clot is $D=\{1\}$. Then $E=\varnothing$ and $C'=\{2,3,4\}$. The only non-zero terms in Corollary \[preordcor\] are given by $S=\varnothing$ and $S=\{1\}$. If $S=\varnothing$, we get a summand $\rho_{(1^3)}(q)$. Assume $S=\{1\}$. Then any ${\mathbf}W\in {\mathcal}H_S$ consists of a single $1$-dimensional space $W_1$. So $H_S$ consists of $\langle e_2 \rangle$, $\langle e_3 \rangle$ and $\langle e_4 \rangle$. If $W_1=\langle e_2 \rangle$, then $O\simeq (1^3)$, so we get another term $\rho_{(1^3)}(q)$. 
If $W_1=\langle e_3 \rangle$, $O$ is isomorphic to the preordered set $\Delta_1$ given by the diagram $$\label{Delta1} \begin{array}{c} \begin{picture}(20,12) \multiput(0,10)(20,0){2}{{\circle*{1.5}}} \put(0,0){{\circle*{1.5}}} \put(0,10){\usebox{\rvec}} \put(0,0){\usebox{\ruvec}} \put(1,11){1} \put(21,11){1} \put(0,2){1} \end{picture} \end{array}$$ Finally, if $W_1=\langle e_4 \rangle$, then $O\simeq (1,1) \sqcup (1)$, so we get a term $\rho_{(1,1)}(q)$. Hence, $$\rho_{(1^4)}(q)= 2\rho_{(1^3)}(q)+\rho_{\Delta_1}(q)+\rho_{(1^2)}(q).$$ We compute $\rho_{\Delta_1}(q)$, where $\Delta_1$ is given by . We may take for $D$ either of the minimal elements of $\Delta_1$. Then $E$ consists of the other minimal element, and $C'$ consists of the maximal element (label the elements $1,2,3$ so that $3$ is the maximal element). If $S=\varnothing$, we get $\rho_{(1^2)}(q)$. If $S=\{1\}$, then necessarily $W_1=\langle e_3 \rangle$ and we get $\rho_{(1^2)}(q)$ again. Hence, $\rho_{\Delta_1}(q)=2\rho_{(1^2)}(q)=4$ for all $q$. \[Delta2ex\] We now compute $\rho_{\Delta_2}(q)$ where $\Delta_2$ is the partially ordered set $$\begin{picture}(40,12) \multiput(0,10)(20,0){3}{{\circle*{1.5}}} \put(20,0){{\circle*{1.5}}} \put(0,10){\usebox{\rvec}} \put(20,10){\usebox{\rvec}} \put(0,10){\usebox{\rdvec}} \put(20,0){\usebox{\ruvec}} \put(1,11){1} \put(21,11){1} \put(20,2){1} \put(41,11){1} \end{picture}$$ We may assume that the elements of $\Delta_2$ are $1,2,3,4$ where $1$ is the smallest element and $4$ is the largest. The only minimal clot is $D=\{1\}$. We have $C'=\{2,3,4\}$. In this case, $C'$ is not isomorphic to any ${\mathbf}l$, so we apply Proposition \[preord\] directly. If $S=\varnothing$, we get the term $\rho_{\Delta_1}(q)$. Assume $S=\{1\}$. The following is a complete set of representatives for the action of $P^{C'}(q)$ on the $1$-dimensional subspaces $W_1$ in $V_{C'}$: $$\{ \langle e_2 \rangle, \langle e_3 \rangle, \langle e_4 \rangle, \langle e_2+e_3 \rangle \}.$$ In each of the first two cases, $\gamma(P^{\bar D}_{{\mathbf}W}(q), N^{\bar D}_{{\mathbf}W} (q))=\rho_{\Delta_1}(q)$. If $W_1=\langle e_4 \rangle$, then $N^{\bar D}_{{\mathbf}W}(q)$ contains just the zero matrix, so the corresponding term in the sum is equal to $1$. Finally, suppose $W_1=\langle e_2+e_3 \rangle$. Then we do not get a term of the form $\rho_{O}(q)$ for a preordered set $O$. It is possible to apply Corollary \[genlink\] to find the corresponding term, but in this case we may find the orbits directly. With respect to the basis $\{e_2,e_3,e_4\}$, we have: $$\begin{aligned} P^{C'}_{{\mathbf}W} (q) & = & \left\{ \begin{pmatrix} a & 0 & * \\ 0 & a & * \\ 0 & 0 & b \end{pmatrix} : a,b \in {\mathbb{F}}_q \setminus \{0\} \right\} \quad \text{and} \\ N^{C'}_{{\mathbf}W} (q) & = & \left\{\begin{pmatrix} 0 & 0 & * \\ 0 & 0 & * \\ 0 & 0 & 0 \end{pmatrix}\right\}.\end{aligned}$$ It is easy to see that a complete set of representatives of $P^{C'}_{{\mathbf}W}(q)$-orbits on $N^{C'}_{{\mathbf}W}(q)$ is $$\left\{ \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{pmatrix}, \begin{pmatrix} 0 & 0 & a \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix} : a\in {\mathbb{F}}_q \right\}.$$ So in this case $\gamma(P^{C'}_{{\mathbf}W}(q),N^{C'}_{{\mathbf}W}(q))=q+2$. Thus, $$\rho_{\Delta_2}(q)=3\rho_{\Delta_1}(q)+ 1+ (q+2) = q+15.$$ \[ex222\] We consider $\rho_{(2,2,2)}(q)$. $(2,2,2)$ is isomorphic to $C=[1,6]$ with clots $\{1,2\}$, $\{3,4\}$ and $\{5,6\}$ (in the increasing order). 
The only minimal clot is $D=\{1,2\}$, so $C'=\{3,4,5,6\}$. The only non-zero terms in Corollary \[preordcor\] come from $S=\varnothing$, $S=\{1\}$ and $S=\{2\}$. If $S=\varnothing$, we get $p(2)\rho_{(2,2)}=2\rho_{(2,2)}$. If $S=\{1\}$, $H_S$ contains two flags ${\mathbf}W=(W_1)$, namely $W_1=\langle e_3 \rangle$ and $W_1=\langle e_5 \rangle$. In the first case, we get $\rho_{(1,1,2)}(q)$. In the second case, $O$ is isomorphic to the preordered set $\Delta_3$ given by $$\begin{picture}(20,12) \multiput(0,10)(20,0){2}{{\circle*{1.5}}} \put(0,0){{\circle*{1.5}}} \put(0,10){\usebox{\rvec}} \put(0,0){\usebox{\ruvec}} \put(1,11){2} \put(21,11){1} \put(0,2){1} \end{picture}$$ These two terms occur with the multiple $2$ in the sum because $r(2,\{1\})=r(1,\{1\})=1$. Finally, if $S=\{2\}$, the possibilities for $W_1$ are: $\langle e_3,e_4 \rangle$, $\langle e_3,e_5 \rangle$ and $\langle e_5,e_6 \rangle$. These give rise to the terms $\rho_{(2,2)}(q)$, $\rho_{\Delta_2}(q)$ and $4$ respectively. (In the last case, $O\simeq (2) \sqcup (2)$.) Thus, $$\rho_{(2,2,2)}(q)=3\rho_{(2,2)}(q) + 2\rho_{(1,1,2)}(q)+ 2\rho_{\Delta_3}(q) + \rho_{\Delta_2}(q) + 4.$$ Dual quiver representations {#dualrep} =========================== Let $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ be a quiver representation over a field ${\mathbb{F}}_q$. Define the dual representation $Q^*=(E_0,E_1^*,{\mathbf}U^*, {\boldsymbol}\alpha^*)$ as follows: (i) $E_1^*$ is obtained by inverting all the arrows in $E_1$: for each $e\in E_1$, $\sigma^{*}(e)=\tau(e)$ and $\tau^*(e)=\sigma(e)$; (ii) ${\mathbf}U=(U_a^*)_{a\in E_0}$ where $U_a^*$ is the dual space to $U_a$; (iii) ${\boldsymbol}\alpha^*=(\alpha_e^*)_{e\in E_1}$ where $\alpha_e^*$ is the dual map to $\alpha_e$. Recall that $\theta(Q)=\gamma({\operatorname{Aut}}(Q),N(Q))$. If ${\mathbf}X\in {\operatorname{End}}(Q)$, let ${\mathbf}X^*\in {\operatorname{End}}(Q^*)$ be the endomorphism $(X_a^*)_{a\in E_0}$. \[dualquiv\] Let $Q=(E_0,E_1,{\mathbf}U,{\boldsymbol}\alpha)$ be a quiver representation. The map ${\mathbf}X\mapsto {\mathbf}X^*$ is a ring isomorphism between ${\operatorname{End}}(Q)$ and ${\operatorname{End}}(Q^*)^{{\operatorname{op}}}$. Hence, $\theta(Q)=\theta(Q^*)$. Clearly, ${\mathbf}X\mapsto {\mathbf}X^*$ is a ring homomorphism from ${\operatorname{End}}(Q)$ to ${\operatorname{End}}(Q^*)^{{\operatorname{op}}}$. If $V$ is a finite dimensional vector space, $V^{**}$ is naturally equivalent to $V$. This equivalence establishes an isomorphism from $Q^{**}$ onto $Q$, which identifies ${\mathbf}X^{**}$ and ${\mathbf}X$ for all ${\mathbf}X\in {\operatorname{End}}(Q)$. Hence, ${\mathbf}X\mapsto {\mathbf}X^*$ is a bijection. Now let $(C,{\preccurlyeq})$ be a preordered set. Let $V$ be a vector space over ${\mathbb{F}}_q$ with a basis $\{e_i\}_{i\in C}$. As in the previous section, we may then identify $M^C (q)$ with a subring of ${\operatorname{End}}(V)$ using this basis. For each $i\in C$, let ${\mathcal}D(i)=\{j\in C: j{\preccurlyeq}i\}$. Let $T_C=T_C(q)$ be the quiver representation $(C\sqcup\{0\},E_1,{\mathbf}U,{\boldsymbol}\alpha)$ such that (i) $E_1$ is equal to $C$ as a set, $\sigma(i)=i$ and $\tau(i)=0$ for all $i\in C$; (ii) $U_0=V$, and $U_i={\operatorname{span}}\{e_j\}_{j\in {\mathcal}D(i)}\le V$ for all $i\in C$; (iii) $\alpha_i$ is the inclusion map $U_i {\hookrightarrow}V$ for each $i\in C$. It is easy to check that ${\mathbf}X\mapsto X_0$ is a ring isomorphism from ${\operatorname{End}}(T_C)$ onto $M^C (q)$. 
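For the reader's convenience, here is a sketch of that check, using the identifications above. An endomorphism ${\mathbf}X$ of $T_C$ consists of $X_0\in{\operatorname{End}}(V)$ together with $X_i\in{\operatorname{End}}(U_i)$ for $i\in C$, subject to $X_0\alpha_i=\alpha_i X_i$ for every $i$; since each $\alpha_i$ is an inclusion, these conditions say exactly that $$X_0(U_i)\subseteq U_i \quad\text{and}\quad X_i=X_0|_{U_i} \qquad (i\in C),$$ so ${\mathbf}X$ is determined by $X_0$. Moreover, $X_0$ preserves every $U_i={\operatorname{span}}\{e_j\}_{j\in {\mathcal}D(i)}$ if and only if its matrix $(x_{ij})$ with respect to $\{e_i\}_{i\in C}$ satisfies $x_{ij}=0$ unless $i{\preccurlyeq}j$, that is, if and only if $X_0\in M^C(q)$.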
Hence, $$\label{TCeq} \rho_C (q) = \theta(T_C(q)).$$ The following is an easy exercise. \[TC\] Let $C$ be a preordered set. Then the rings ${\operatorname{End}}(T_C^*)$ and $M^{C^*}(q)$ are isomorphic. Hence, $\rho_{C^*}(q) = \theta(T_C^*)$. Combining , Proposition \[dualquiv\] and Proposition \[TC\], we obtain the following result. \[dualpre\] For any finite preordered set $C$, for all prime powers $q$, $$\rho_C (q) = \rho_{C^*}(q).$$ In particular, if $(l_1,\ldots,l_s)$ is a tuple of nonnegative integers, then $$\rho_{(l_1,\ldots,l_s)}(q) = \rho_{(l_s,\ldots,l_1)}(q).$$ The following table gives the values of $\rho_{{\mathbf}l}(q)$ for all tuples ${\mathbf}l=(l_1,\ldots,l_s)$ with $l_1+\cdots+l_s\le 6$ (see [@thesis Appendix B] for a detailed computation). In view of Corollary \[dualpre\], we list only one representative for each pair of distinct tuples $(l_1,\ldots,l_s)$ and $(l_s,\ldots,l_1)$.

  ${\mathbf}l$        $\rho_{{\mathbf}l}(q)$
  ------------------- ------------------------
  $(1)$               1
  $(2)$               2
  $(1,1)$             2
  $(3)$               3
  $(1,2)$             4
  $(1,1,1)$           5
  $(4)$               5
  $(1,3)$             7
  $(2,2)$             10
  $(1,1,2)$           12
  $(1,2,1)$           11
  $(1,1,1,1)$         16
  $(5)$               7
  $(1,4)$             12
  $(2,3)$             18
  $(1,2,2)$           30
  $(2,1,2)$           31
  $(1,1,3)$           23
  $(1,3,1)$           21
  $(1,1,1,2)$         43
  $(1,1,2,1)$         40
  $(1,1,1,1,1)$       61
  $(6)$               11
  $(1,5)$             19
  $(2,4)$             34
  $(3,3)$             37
  $(1,2,3)$           63
  $(1,3,2)$           62
  $(2,1,3)$           66
  $(1,1,4)$           43
  $(1,4,1)$           38
  $(2,2,2)$           $q+89$
  $(1,1,1,3)$         93
  $(1,1,3,1)$         84
  $(1,1,2,2)$         $q+121$
  $(1,2,1,2)$         120
  $(1,2,2,1)$         113
  $(2,1,1,2)$         $q+127$
  $(1,1,1,1,2)$       $q+185$
  $(1,1,1,2,1)$       173
  $(1,1,2,1,1)$       $q+170$
  $(1,1,1,1,1,1)$     $q+273$

Let $S$ be a subset of ${\mathbb{N}}$. If $k\in {\mathbb{Z}}$, let $p(k,S)$ be the number of partitions ${\lambda}$ such that $|{\lambda}|=k$ and ${\lambda}_i\in S$ for all $i$. Recall that $r(k,S)$ is the number of partitions ${\lambda}$ such that $|{\lambda}|=k$, ${\lambda}_i\in S$ for all $i$ and, for each $s\in S$, there exists $i$ such that ${\lambda}_i=s$. Fix a positive integer $m$, and let $n=m(m+1)/2$. Let $\mathsf P$ be the matrix whose columns are indexed by non-empty subsets of $[1,m]$, whose rows are indexed by nonnegative integers and whose $(k,S)$ entry is $p(k,S)$. Let $\mathsf R$ be the matrix whose rows are indexed by positive integers $k$ and whose columns are indexed by non-empty subsets $S$ of $[1,m]$, with the $(k,S)$ entry equal to $r(k,S)$. Thus, $\mathsf P$ and $\mathsf R$ have infinitely many rows and $2^m-1$ columns each. Our aim is to find the ranks of $\mathsf P$ and $\mathsf R$ and to find linear relations between rows of $\mathsf P$ and $\mathsf R$. If $S\subseteq [1,m]$, let $$P_S(X)=\sum_{k=0}^{\infty} p(k,S) X^k$$ be the generating function of the sequence $(p(k,S))_{k=0}^{\infty}$. Here, and in the sequel, $X$ is a formal variable, and all expressions involving $X$ are assumed to be elements of the ring ${\mathbb{Q}}[[X]]$ of formal power series over ${\mathbb{Q}}$. Observe that $$P_{\{i\}}(X)=\sum\limits_{k\geq 0}X^{ik}=\frac{1}{1-X^i}.$$ It follows that, for any non-empty finite subset $S\subset {\mathbb{N}}$, $$\label{GFeq2} P_S(X)=\frac{1}{\prod_{i\in S}(1-X^i)}.$$ Also, by the inclusion-exclusion formula, for all $S\subseteq [1,m]$ and all $k\in{\mathbb{N}}$, $$\label{inex} r(k,S)=\sum_{S'\subseteq S} (-1)^{|S|-|S'|} p(k,S').$$ (Note that $p(k,\varnothing)=0$ for $k>0$.) The following result can be easily proved by induction. \[sums\] For every natural number $k\in [1,n]$ there exists a non-empty subset $S_k$ of $[1,m]$ such that $k=\sum\limits_{i\in S_k} i$.
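Since Lemma \[sums\] is used repeatedly below, it may be worth recording a small instance and one possible induction. For $m=3$ (so $n=6$) one may take $$S_1=\{1\},\quad S_2=\{2\},\quad S_3=\{3\},\quad S_4=\{1,3\},\quad S_5=\{2,3\},\quad S_6=\{1,2,3\}.$$ In general, assuming the statement for $m-1$ (with $n'=(m-1)m/2$), the subsets provided by the inductive hypothesis already work for $k\le n'$, while any $k$ with $n'<k\le n$ satisfies $0\le k-m\le n'$, so one may take $S_k=\{m\}$ if $k=m$ and $S_k=S_{k-m}\cup\{m\}$ otherwise.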
Let $$\Delta(X)=\prod_{i=1}^m (1-X^i).$$ Define integers $c_0,c_1,\ldots,c_n$ by $$\Delta(X)=\sum_{i=0}^n c_i^{(m)} X^i.$$ \[prank\] ${\operatorname{rank}}(\mathsf P)=n=m(m+1)/2$. Moreover, for all $k\ge n$ and all non-empty $S\subseteq [1,m]$, $$p(k,S)=-\sum_{j=k-n}^{k-1} c_{k-j} p(j,S).$$ Let $S$ be a non-empty subset of $[1,m]$. By , $$\begin{aligned} P_S(X) \Delta(X) & = & \prod_{i\in [1,m]\setminus S} (1-X^i), \quad \text{hence}, \label{GFeq}\\ \sum_{k=0}^{\infty} \sum_{j=\max(0,k-n)}^k c_{k-j} p(j,S) X^k & = & \prod_{i\in [1,m]\setminus S} (1-X^i). \end{aligned}$$ If $k\ge n$, the coefficient in $X^k$ on the right-hand side is $0$ (since $S\ne\varnothing$), so $$\sum_{j=k-n}^k c_{k-j} p(j,S) = 0.$$ Then the expression for $p(k,S)$ follows from the fact that $c_0=1$. It follows that ${\operatorname{rank}}(\mathsf P)\le n$. To show that ${\operatorname{rank}}(\mathsf P)\ge n$, it is enough to prove that the polynomials $P_S(X)\Delta(X)$ span a subspace of dimension at least $n$ in ${\mathbb{Q}}[[X]]$ as $S$ varies among the non-empty subsets of $[1,m]$. By , $$\deg(P_S\cdot \Delta)=\sum\limits_{i\in [1,m]\setminus S} i = n-\sum\limits_{i\in S} i.$$ Hence, by Lemma \[sums\], for each integer $k\in [0,n-1]$, there exists a non-empty $S\subseteq [1,m]$ such that $\deg(P_S \cdot \Delta)=k$. The result follows. \[rrank\] ${\operatorname{rank}}(\mathsf R)=n$. Moreover, for all $k>n$ and all non-empty $S\subseteq [1,m]$, $$r(k,S)=-\sum_{j=k-n}^{k-1} c_{k-j} r(j,S).$$ The expression for $r(k,S)$ follows from Theorem \[prank\] and . By , ${\operatorname{rank}}(\mathsf R)$ is equal to the rank of the matrix obtained by removing the $0$-row from $\mathsf P$. By Theorem \[prank\], $$c_n p(0,S) = - \sum_{j=1}^n c_{n-j} p(j,S),$$ so the $0$-row of $\mathsf P$ is a linear combination of the next $n$ rows ($c_n\ne 0$). Hence, ${\operatorname{rank}}(\mathsf R)=n$. Define $d_j=\sum_{i=0}^{n-j} c_i$ for $j=1,2,\ldots n$. \[dj\] For all non-empty $S\subseteq [1,m]$, $$\sum_{j=1}^n d_j r(j,S) = \sum_{j=1}^n d_j p(j,S) = \begin{cases} 1 & \text{if } S=[1,m], \\ 0 & \text{otherwise.} \end{cases}$$ Let $f(X)=(P_S (X) - 1) \Delta(X)$. Then $$f(X) = \prod_{i\in [1,m]\setminus S} (1-X^i)-\Delta(X).$$ Thus, $f(X)$ is a polynomial of degree $n$, and $f(1)$ is the sum of the coefficients of $f(X)$ in $X^0,X^1,\ldots,X^n$. On the other hand, $$f(X)=\sum_{j=1}^{\infty} \sum_{i=0}^n c_i p(j,S) X^{i+j}.$$ Therefore, $$f(1)=\sum_{j=1}^n \sum_{i=0}^{n-j} c_i p(j,S) = \sum_{j=1}^n d_j p(j,S).$$ Hence, $$\sum_{j=1}^n d_j p(j,S)= f(1)= \begin{cases} 1 & \text{if } S=[1,m], \\ 0 & \text{otherwise.} \end{cases}$$ By , $\sum_{j=1}^n d_j r(j,S)$ is equal to this too. [^1]: Selwyn College, Cambridge, CB3 9DQ, UK, [email protected] [^2]: Mathematical Institute, 24-29 St Giles’, Oxford, OX1 3LB, UK, [email protected]
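As a quick check of Corollary \[dj\] in the smallest nontrivial case, take $m=2$, so $n=3$ and $\Delta(X)=(1-X)(1-X^2)=1-X-X^2+X^3$, giving $c_0=1$, $c_1=-1$, $c_2=-1$, $c_3=1$ and hence $d_1=-1$, $d_2=0$, $d_3=1$. Then $$\sum_{j=1}^3 d_j p(j,\{1\})=-1+0+1=0,\qquad \sum_{j=1}^3 d_j p(j,\{2\})=0,\qquad \sum_{j=1}^3 d_j p(j,\{1,2\})=-1+0+2=1,$$ in accordance with the corollary.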
--- title: | A new application of random matrices:\ $\Ext(\Cred(F_2))$ is not a group --- -12pt *Dedicated to the memory of Gert Kj[æ]{}rg[å]{}rd Pedersen* **Abstract** In the process of developing the theory of free probability and free entropy, Voiculescu introduced in 1991 a random matrix model for a free semicircular system. Since then, random matrices have played a key role in von Neumann algebra theory (cf. [@V8], [@V9]). The main result of this paper is the following extension of Voiculescu’s random matrix result: Let $(X_1^{(n)},\dots,X_r^{(n)})$ be a system of $r$ stochastically independent $n\times n$ Gaussian self-adjoint random matrices as in Voiculescu’s random matrix paper [@V3], and let $(x_1,\dots,x_r)$ be a semi-circular system in a $C^*$-probability space. Then for every polynomial $p$ in $r$ noncommuting variables $$\lim_{n\to\infty} \big\|p\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\big\| =\|p(x_1,\dots,x_r)\|,$$ for almost all $\omega$ in the underlying probability space. We use the result to show that the $\Ext$-invariant for the reduced $C^*$-algebra of the free group on 2 generators is not a group but only a semi-group. This problem has been open since Anderson in 1978 found the first example of a $C^*$-algebra $\CA$ for which $\Ext(\CA)$ is not a group. -20pt Introduction {#sec0} ============ -6pt A random matrix $X$ is a matrix whose entries are real or complex random variables on a probability space $(\Omega,\CF,P)$. As in [@T], we denote by $\SGRM(n,\sigma^2)$ the class of complex self-adjoint $n\times n$ random matrices $$X = (X_{ij})^n_{i,j=1},$$ for which $(X_{ii})_i$, $(\sqrt{2} \re X_{ij})_{i<j}$, $(\sqrt{2} \im X_{ij})_{i<j}$ are $n^2$ independent identically distributed (i.i.d.) Gaussian random variables with mean value 0 and variance $\sigma^2$. In the terminology of Mehta’s book [@Me], $X$ is a Gaussian unitary ensemble (GUE). In the following we put $\sigma^2=\frac 1n$ which is the normalization used in Voiculescu’s random matrix paper [@V3]. We shall need the following basic definitions from free probability theory (cf. [@V1], [@vdn]): - A $C^*$-probability space is a pair $(\CB,\tau)$ consisting of a unital $C^*$-algebra $\CB$ and a state $\tau$ on $\CB$. - A family of elements $(a_i)_{i\in I}$ in a $C^*$-probability space $(\CB,\tau)$ is free if for all $n\in\N$ and all polynomials $p_1,\dots,p_n\in\C[X]$, one has $$\tau(p_1(a_{i_1})\cdots p_n(a_{i_n})) = 0,$$ whenever $i_1\ne i_2, i_2\ne i_3,\dots,i_{n-1}\ne i_n$ and $\varphi(p_k(a_{i_k}))=0$ for $k=1,\dots,n$. - A family $(x_i)_{i\in I}$ of elements in a $C^*$-probability space $(\CB,\tau)$ is a semicircular family, if $(x_i)_{i\in I}$ is a free family, $x_i=x_i^*$ for all $i\in I$ and $$\tau(x_i^k) = \frac{1}{2\pi} \int^2_{-2} t^k\sqrt{4-t^2}\6t= \begin{cases} \frac{1}{k/2+1}\binom{k}{k/2}, &\mbox{if $k$ is even}, \\ 0, &\mbox{if $k$ is odd}, \end{cases}$$ for all $k\in\N$ and $i\in I$. We can now formulate Voiculescu’s random matrix result from [@V4]: Let, for each $n\in\N$, $(X_i^{(n)})_{i\in I}$ be a family of independent random matrices from the class $\SGRM(n,\frac 1n)$, and let $(x_i)_{i\in I}$ be a semicircular family in a $C^*$-probability space $(\CB,\tau)$. 
Then for all $p\in \N$ and all $i_1,\dots,i_p\in I$, we have $$\label{eq0-1} \lim_{n\to\infty}\E\big\{\tr_n\big(X_{i_1}^{(n)}\cdots X_{i_p}^{(n)}\big)\big\}=\tau(x_{i_1}\cdots x_{i_p}),$$ where $\tr_n$ is the normalized trace on $M_n(\C)$, i.e., $\tr_n=\frac{1}{n}\Tr_n$, where $\Tr_n(A)$ is the sum of the diagonal elements of $A$. Furthermore, $\E$ denotes expectation (or integration) with respect to the probability measure $P$. The special case $|I|=1$ is Wigner’s semi-circle law (cf. [@Wi], [@Me]). The strong law corresponding to also holds, i.e., $$\label{eq0-2} \lim_{n\to\infty} \tr_n\big(X_{i_1}^{(n)}(\omega)\cdots X_{i_p}^{(n)}(\omega)\big)=\tau(x_{i_1}\cdots x_{i_p}),$$ for almost all $\omega\in\Omega$ (cf. [@Ar] for the case $|I|=1$ and [@HP2], [@T Cor. 3.9] for the general case). Voiculescu’s result is actually more general than the one quoted above. It also involves sequences of non random diagonal matrices. We will, however, only consider the case, where there are no diagonal matrices. The main result of this paper is that the strong version of Voiculescu’s random matrix result also holds for the operator norm in the following sense: $$\label{eq0-3} \lim_{n\to\infty}\big\|p\big(X_1^{(n)}(\omega), \dots,X_r^{(n)}(\omega)\big)\big\|=\|p(x_1,\dots,x_r)\|.$$ The proof of Theorem A is given in Section \[sec6\]. The special case $$\lim_{n\to\infty}\big\|X_1^{(n)}(\omega)\big\|=\|x_1\|=2$$ is well known (cf. [@BY], [@Ba Thm. 2.12] or [@HT1 Thm. 3.1]). From Theorem A above, it is not hard to obtain the following result (cf. §\[ext-resultat\]). $c_1,\dots,c_m\in\C$[:]{} $$\lim_{n\to\infty} \Big\| \sum^m_{j=1} c_j\pi_n(h_j)\Big\|=\Big\|\sum^m_{j=1} c_j\lambda(h_j)\Big\|.$$ The invariant $\Ext(\CA)$ for separable unital $C^*$-algebras $\CA$ was introduced by Brown, Douglas and Fillmore in 1973 (cf. [@BDF1], [@BDF2]). $\Ext(\CA)$ is the set of equivalence classes $[\pi]$ of one-to-one $*$-homomorphisms $\pi\colon \CA\to \CC(\CH)$, where $\CC(\CH)=\CB(\CH)/\CK(\CH)$ is the Calkin algebra for the Hilbert space $\CH=\ell^2(\N)$. The equivalence relation is defined as follows: $$\pi_1\sim\pi_2\iff \exists u\in\CU(\CB(\CH))\ \forall a\in\CA\colon \pi_2(a)=\rho(u)\pi_1(a)\rho(u)^*,$$ where $\CU(\CB(\CH))$ denotes the unitary group of $\CB(\CH)$ and $\rho\colon \CB(\CH)\to \CC(\CH)$ is the quotient map. Since $\CH\oplus \CH\simeq \CH$, the map $(\pi_1,\pi_2)\to\pi_1\oplus\pi_2$ defines a natural semi-group structure on $\Ext(\CA)$. By Choi and Effros [@CE], $\Ext(\CA)$ is a group for every separable unital nuclear $C^*$-algebra and by Voiculescu [@V0], $\Ext(\CA)$ is a unital semi-group for all separable unital $C^*$-algebras $\CA$. Anderson [@An] provided in 1978 the first example of a unital $C^*$-algebra $\CA$ for which $\Ext(\CA)$ is not a group. The $C^*$-algebra $\CA$ in [@An] is generated by the reduced $C^*$-algebra $\Cred(F_2)$ of the free group $F_2$ on 2 generators and a projection $p\in \CB(\ell^2(F_2))$. Since then, it has been an open problem whether $\Ext(\Cred(F_2))$ is a group. In [@V5 Sect. 5.14], Voiculescu shows that if one could prove Theorem B, then it would follow that $\Ext(\Cred(F_r))$ is not a group for any $r\ge 2$. Hence we have The problem of proving Corollary 1 has been considered by a number of mathematicians; see [@V5 §5.11] for a more detailed discussion. In Section 9 we extend Theorem A (resp. Theorem B) to polynomials (resp. linear combinations) with coefficients in an arbitrary unital exact $C^*$-algebra. 
The first of these two results is used to provide new proofs of two key results from our previous paper [@HT2]: “Random matrices and $K$-theory for exact $C^*$-algebras”. Moreover, we use the second result to make an exact computation of the constants $C(r)$, $r\in\N$, introduced by Junge and Pisier [@JP] in connection with their proof of $$\CB(\CH)\mathop{\otimes}_{\max} \CB(\CH)\ne \CB(\CH)\mathop{\otimes}_{\min} \CB(\CH).$$ Specifically, we prove the following: Pisier proved in [@P3] that $C(r)\ge 2\sqrt{r-1}$ and Valette proved subsequently in [@V] that $C(r)=2\sqrt{r-1}$, when $r$ is of the form $r=p+1$ for an odd prime number $p$. We end Section 9 by using Theorem A to prove the following result on powers of “circular” random matrices (cf. §9): [*Let $Y$ be a random matrix in the class $\GRM(n,\frac 1n)$[,]{} i.e.[,]{} the entries of $Y$ are independent and identically distributed complex Gaussian random variables with density $z\mapsto\frac{n}{\pi}\e^{-n|z|^2}$[,]{} $z\in\C$. Then for every $p\in\N$ and almost all*]{} $\omega\in\Omega$, $$\lim_{n\to\infty}\big\|Y(\omega)^p\big\| =\bigg(\frac{(p+1)^{p+1}}{p^p}\bigg)^\frac12.$$ Note that for $p=1$, Corollary 3 follows from Geman’s result [@Ge]. In the remainder of this introduction, we sketch the main steps in the proof of Theorem A. Throughout the paper, we denote by $\CA_{\rm sa}$ the real vector space of self-adjoint elements in a $C^*$-algebra $\CA$. In Section 2 we prove the following “linearization trick”: . The linearization trick allows us to conclude (see §\[sec6\]): [*In order to prove Theorem [A,]{} it is sufficient to prove the following[:]{} With $(X_1^{(n)},\dots,X_r^{(n)})$ and $(x_1,\dots,x_r)$ as in Theorem [A,]{} one has for all $m\in\N$[,]{} all matrices $a_0,\dots,a_r$ in $\Mmsa$ and all $\varepsilon >0$ that $$\spe\big(a_0\otimes\unit_n + \textstyle{\sum^r_{i=1} a_i\otimes X_i^{(n)}(\omega)}\big)\subseteq\spe(a_0\otimes\unit_\CB + \sum^r_{i=1}a_i\otimes x_i\big)+{}]-\ep,\ep[,$$ eventually as $n\to\infty$[,]{} for almost all $\omega\in\Omega$[,]{} and where $\unit_n$ denotes the unit of $M_n(\C)$.*]{} In the rest of this section, $(X_1^{(n)},\dots,X_r^{(n)})$ and $(x_1,\dots,x_r)$ are defined as in Theorem A. Moreover we let $a_0,\dots,a_r\in \Mmsa$ and put $$\begin{aligned} s &=& a_0\otimes\unit_{\CB} + \sum^r_{i=1} a_i\otimes x_i\\ S_n &=& a_0\otimes\unit_n+\sum^r_{i=1} a_i\otimes X_i^{(n)},\quad n\in\N.\end{aligned}$$ It was proved by Lehner in [@Le] that Voiculescu’s $R$-transform of $s$ with amalgamation over $M_m(\C)$ is given by $$\label{eq0-4} \CR_s(z) = a_0+\sum^r_{i=1} a_i z a_i, \quad z\in M_m(\C).$$ For $\lambda\in M_m(\C)$, we let $\im\lambda$ denote the self-adjoint matrix $\im\lambda = \frac{1}{2\i}(\lambda-\lambda^*)$, and we put $$\CO = \big\{\lambda\in M_m(\C)\mid\im \lambda\ \mbox{is positive definite}\big\} .$$ From one gets (cf. §6) that the matrix-valued Stieltjes transform of $s$, $$G(\lambda) = (\id_m\otimes\tau)\big[(\lambda\otimes\unit_\CB-s)^{-1}\big]\in M_m(\C),$$ is defined for all $\lambda\in\CO$, and satisfies the matrix equation $$\label{eq0-5} \sum^r_{i=1} a_iG(\lambda)a_i G(\lambda)+(a_0-\lambda)G(\lambda)+\unit_m=0 .$$ For $\lambda\in\CO$, we let $H_n(\lambda)$ denote the $M_m(\C)$-valued random variable $$H_n(\lambda) = (\id_m\otimes\tr_n)\big[(\lambda\otimes \unit_n-S_n)^{-1}\big],$$ and we put $$G_n(\lambda) = \E\big\{H_n(\lambda)\big\}\in M_m(\C).$$ Then the following analogy to holds (cf. 
§3): [*For all $\lambda\in\CO$ and*]{} $n\in\N$[:]{} $$\label{eq0-6} \E\Big\{\sum^r_{i=1} a_iH_n(\lambda)a_iH_n(\lambda) +(a_0-\lambda)H_n(\lambda)+\unit_m\Big\}=0.$$ The proof of is completely different from the proof of . It is based on the simple observation that the density of the standard Gaussian distribution, $\varphi(x)=\frac{1}{\sqrt{2\pi}}\e^{-x^2/2}$ satisfies the first order differential equation $\varphi'(x)+x\varphi(x)=0$. In the special case of a single $\SGRM(n,\frac{1}{n})$ random matrix (i.e., $r=m=1$ and $a_0=0,a_1=1$), equation occurs in a recent paper by Pastur (cf. [@pas Formula (2.25)]). Next we use the so-called “Gaussian Poincaré inequality” (cf. §4) to estimate the norm of the difference $$\E\Big\{\sum^r_{i=1} a_iH_n(\lambda)a_iH_n(\lambda)\Big\} - \sum^r_{i=1} a_i\E\{H_n(\lambda)\}a_i\E\{H_n(\lambda)\},$$ and we obtain thereby (cf. §4): [*For all $\lambda\in\CO$ and all $n\in\N$[,]{} we have $$\label{eq0-7} \Big\|\sum^r_{i=1} a_iG_n(\lambda)a_iG_n(\lambda)-(a_0-\lambda)G_n(\lambda)+\unit_m\Big\| \le\frac{C}{n^2}\big\|(\im\lambda)^{-1}\big\|^4,$$ where $C=m^3\big\|\sum^r_{i=1} a^2_i\big\|^2$.*]{} In Section 5, we deduce from and that $$\label{eq0-8} \|G_n(\lambda)-G(\lambda)\|\le\frac{4C}{n^2} \big(K+\|\lambda\|\big)^2\big\|(\im\lambda)^{-1}\big\|^7,$$ where $C$ is as above and $K=\|a_0\|+4\sum^r_{i=1} \|a_i\|$. The estimate implies that for every $\varphi\in C^\infty_c(\R,\R)$: $$\label{eq0-9} \E\big\{(\tr_m\otimes\tr_n)\varphi(S_n)\big\} = (\tr_m\otimes\tau)(\varphi(s))+ O\big(\textstyle{\frac{1}{n^2}}\big),$$ for $n\to\infty$ (cf. §6). Moreover, a second application of the Gaussian Poincaré inequality yields that $$\label{eq0-10} \V\big\{(\tr_m\otimes\tr_n)\varphi(S_n)\big\} \le\frac{1}{n^2}\E\big\{(\tr_m\otimes\tr_n)(\varphi'(S_n)^2)\big\},$$ where $\V$ denotes the variance. Let now $\psi$ be a $C^\infty$-function with values in $[0,1]$, such that $\psi$ vanishes on a neighbourhood of the spectrum $\spe(s)$ of $s$, and such that $\psi$ is 1 on the complement of $\spe(s)+{}]-\varepsilon,\varepsilon[$. By applying and to $\varphi=\psi-1$, one gets $$\begin{aligned} \E\big\{(\tr_m\otimes\tr_n)\psi(S_n)\big\} &=& O(n^{-2}),\\[.2cm] \V\big\{(\tr_m\otimes\tr_n)\psi(S_n)\big\} &=& O(n^{-4}),\end{aligned}$$ and by a standard application of the Borel-Cantelli lemma, this implies that $$(\tr_m\otimes\tr_n)\psi(S_n(\omega))=O(n^{-4/3}),$$ for almost all $\omega\in\Omega$. But the number of eigenvalues of $S_n(\omega)$ outside $\spe(s)+{}]-\varepsilon,\varepsilon[$ is dominated by $mn(\tr_m\otimes\tr_n)\psi(S_n(\omega))$, which is $O(n^{-1/3})$ for $n\to\infty$. Being an integer, this number must therefore vanish eventually as $n\to\infty$, which shows that for almost all $\omega\in\Omega$, $$\spe(S_n(\omega))\subseteq\spe(s)+{}]-\varepsilon,\varepsilon[,$$ eventually as $n\to\infty$, and Theorem A now follows from Lemma 1. A linearization trick {#sec1} ===================== Throughout this section we consider two unital $C^*$-algebras $\CA$ and $\CB$ and self-adjoint elements $x_1,\dots,x_r\in\CA$, $y_1,\dots,y_r\in\CB$. We put $$\CA_0=C^*(\unit_{\CA},x_1,\dots,x_r) \quad \textrm{and} \quad \CB_0=C^*(\unit_{\CB},y_1,\dots,y_r).$$ Note that since $x_1,\dots,x_r$ and $y_1,\dots,y_r$ are self-adjoint, the complex linear spaces $$E=\textrm{span}_{\C}\{\unit_{\CA},x_1,\dots,x_r,\textstyle{\sum_{i=1}^r x_i^2}\} \enspace \textrm{and} \enspace F=\textrm{span}_{\C}\{\unit_{\CB},y_1,\dots,y_r,\sum_{i=1}^ry_i^2\}$$ are both operator systems. 
\[lin1\] Assume that $u_0\colon E\to F$ is a unital completely positive [(]{}linear[)]{} mapping[,]{} such that $$u_0(x_i)=y_i, \qquad i=1,2,\dots,r,$$ and $$u_0\big(\textstyle{\sum_{i=1}^rx_i^2}\big)=\sum_{i=1}^ry_i^2.$$ Then there exists a surjective $*$[-]{}homomorphism $u\colon\CA_0\to\CB_0$[,]{} such that $$u_0=u_{\mid E}.$$ The proof is inspired by Pisier’s proof of [@P2 Prop. 1.7]. We may assume that $\CB$ is a unital sub-algebra of $\CB(\CH)$ for some Hilbert space $\CH$. Combining Stinespring’s theorem ([@pa Thm. 4.1]) with Arveson’s extension theorem ([@pa Cor. 6.6]), it follows then that there exists a Hilbert space $\CK$ containing $\CH$, and a unital $*$-homomorphism $\pi\colon\CA\to\CB(\CK)$, such that $$u_0(x)=p\pi(x)p \qquad (x\in E),$$ where $p$ is the orthogonal projection of $\CK$ onto $\CH$. Note in particular that - $u_0(\unit_{\CA})=p\pi(\unit_{\CA})p=p=\unit_{\CB(\CH)}$, - $y_i=u_0(x_i)=p\pi(x_i)p$,  $i=1,\dots,r$, - $\sum_{i=1}^ry_i^2=u_0\big(\sum_{i=1}^rx_i^2\big) =\sum_{i=1}^rp\pi(x_i)^2p$. From (b) and (c), it follows that $p$ commutes with $\pi(x_i)$ for all $i$ in $\{1,2,\dots,r\}$. Indeed, using (b) and (c), we find that $$\sum_{i=1}^rp\pi(x_i)p\pi(x_i)p=\sum_{i=1}^ry_i^2=\sum_{i=1}^rp\pi(x_i)^ 2p,$$ so that $$\sum_{i=1}^rp\pi(x_i)\big(\unit_{\CB(\CK)}-p\big)\pi(x_i)p=0.$$ Thus, putting $b_i=(\unit_{\CB(\CK)}-p)\pi(x_i)p$, $i=1,\dots,r$, we have that $\sum_{i=1}^rb_i^*b_i=0$, so that $b_1=\cdots=b_r=0$. Hence, for each $i$ in $\{1,2,\dots,r\}$, we have $$\begin{aligned} [p,\pi(x_i)] &=&p\pi(x_i)-\pi(x_i)p\\ &=& p\pi(x_i)(\unit_{\CB(\CK)}-p)-(\unit_{\CB(\CK)}-p)\pi(x_i)p= b_i^*-b_i = 0,\end{aligned}$$ as desired. Since $\pi$ is a unital $*$-homomorphism, we may conclude further that $p$ commutes with all elements of the $C^*$-algebra $\pi(\CA_0)$. Now define the mapping $u\colon\CA_0\to\CB(\CH)$ by $$u(a)=p\pi(a)p, \quad (a\in\CA_0).$$ Clearly $u(a^*)=u(a)^*$ for all $a$ in $\CA_0$, and, using (a) above, $u(\unit_{\CA})=u_0(\unit_{\CA})\break=\unit_{\CB}$. Furthermore, since $p$ commutes with $\pi(\CA_0)$, we find for any $a,b$ in $\CA_0$ that $$u(ab)=p\pi(ab)p=p\pi(a)\pi(b)p=p\pi(a)p\pi(b)p=u(a)u(b).$$ Thus, $u\colon\CA_0\to\CB(\CH)$ is a unital $*$-homomorphism, which extends $u_0$, and $u(\CA_0)$ is a $C^*$-sub-algebra of $\CB(\CH)$. It remains to note that $u(\CA_0)$ is generated, as a $C^*$-algebra, by the set $u(\{\unit_{\CA},x_1,\dots,x_r\})=\{\unit_{\CB},y_1,\dots,y_r\}$, so that $u(\CA_0)=C^*(\unit_{\CB},y_1,\dots,y_r)=\CB_0$, as desired. For any element $c$ of a $C^*$-algebra $\CC$, we denote by $\spe(c)$ the spectrum of $c$, i.e., $$\spe(c)=\{\lambda\in\C\mid c-\lambda\unit_{\CC} \ \textrm{is not invertible}\}.$$ \[lin2\] Assume that the self[-]{}adjoint elements $x_1,\dots,x_r\in\CA$ and $y_1,\dots,y_r\in\CB$ satisfy the property[:]{} $$\begin{gathered} \label{e1.1} \forall m\in\N \ \forall a_0,a_1,\dots,a_r\in\Mmsa\colon \\ \spe\big(a_0\otimes\unit_{\CA}+\textstyle{\sum_{i=1}^ra_i\otimes x_i}\big) \supseteq \spe\big(a_0\otimes\unit_{\CB}+\textstyle{\sum_{i=1}^ra_i\otimes y_i}\big).\end{gathered}$$ Then there exists a unique surjective unital $*$[-]{}homomorphism $\varphi\colon\CA_0\to\CB_0$[,]{} such that $$\varphi(x_i)=y_i, \qquad i=1,2,\dots,r.$$ Before the proof of Theorem \[lin2\], we make a few observations: \[obs\] (1)  In connection with condition above, let $V$ be a subspace of $M_m(\C)$ containing the unit $\unit_m$. 
Then the condition: $$\begin{gathered} \label{e1.3} \forall a_0, a_1,\dots,a_r\in V\colon \\ \spe\big(a_0\otimes\unit_{\CA}+\textstyle{\sum_{i=1}^ra_i\otimes x_i}\big) \supseteq \spe\big(a_0\otimes\unit_{\CB}+\textstyle{\sum_{i=1}^ra_i\otimes y_i}\big)\end{gathered}$$ is equivalent to the condition: $$\begin{aligned} \label{e1.4} \forall a_0, a_1,\dots,a_r\in V\colon && a_0\otimes\unit_{\CA}+\textstyle{\sum_{i=1}^ra_i\otimes x_i} \ \textrm{is invertible}\\ && \Longrightarrow a_0\otimes\unit_{\CB}+\textstyle{\sum_{i=1}^ra_i\otimes y_i} \ \textrm{is invertible}.\nonumber\end{aligned}$$ Indeed, it is clear that implies , and the reverse implication follows by replacing, for any complex number $\lambda$, the matrix $a_0\in V$ by $a_0-\lambda\unit_m\in V$. \(2)  Let $\CH_1$ and $\CH_2$ be Hilbert spaces and consider the Hilbert space direct sum $\CH=\CH_1\oplus\CH_2$. Consider further the operator $R$ in $\CB(\CH)$ given in matrix form as $$R= \begin{pmatrix} x & y \\ z & \unit_{\CB(\CH_2)}, \end{pmatrix}$$ where $x\in\CB(\CH_1), y\in\CB(\CH_2,\CH_1)$ and $z\in\CB(\CH_1,\CH_2)$. Then $R$ is invertible in $\CB(\CH)$ if and only if $x-yz$ is invertible in $\CB(\CH_1)$. This follows immediately by writing $$\begin{pmatrix} x & y \\ z & \unit_{\CB(\CH_2)} \end{pmatrix} = \begin{pmatrix} \unit_{\CB(\CH_1)} & y \\ 0 & \unit_{\CB(\CH_2)} \end{pmatrix} \cdot \begin{pmatrix} x-yz & 0 \\ 0 & \unit_{\CB(\CH_2)} \end{pmatrix} \cdot \begin{pmatrix} \unit_{\CB(\CH_1)} & 0 \\ z & \unit_{\CB(\CH_2)} \end{pmatrix},$$where the first and last matrix on the right-hand side are invertible with inverses given by: $$\begin{aligned} \begin{pmatrix} \unit_{\CB(\CH_1)} & y \\ 0 & \unit_{\CB(\CH_2)} \end{pmatrix}^{-1} &=& \begin{pmatrix} \unit_{\CB(\CH_1)} & -y \\ 0 & \unit_{\CB(\CH_2)} \end{pmatrix}\\ \noalign{\noindent and} \begin{pmatrix} \unit_{\CB(\CH_1)} & 0 \\ z & \unit_{\CB(\CH_2)} \end{pmatrix}^{-1} &=& \begin{pmatrix} \unit_{\CB(\CH_1)} & 0 \\ -z & \unit_{\CB(\CH_2)} \end{pmatrix}.\end{aligned}$$ By Lemma \[lin1\], our objective is to prove the existence of a unital completely positive map $u_0\colon E\to F$, satisfying that $u_0(x_i)=y_i$, $i=1,2,\dots,r$ and $u_0(\sum_{i=1}^rx_i^2)=\sum_{i=1}^ry_i^2$. 
We show first that the assumption is equivalent to the seemingly stronger condition: $$\begin{gathered} \label{e1.2} \forall m\in\N \ \forall a_0, a_1,\dots,a_r\in M_m(\C)\colon \\ \spe\big(a_0\otimes\unit_{\CA}+\textstyle{\sum_{i=1}^ra_i\otimes x_i}\big) \supseteq \spe\big(a_0\otimes\unit_{\CB}+\textstyle{\sum_{i=1}^ra_i\otimes y_i}\big).\end{gathered}$$ Indeed, let $a_0,a_1,\dots,a_r$ be arbitrary matrices in $M_m(\C)$ and consider then the self-adjoint matrices $\tilde{a}_0,\tilde{a}_1,\dots,\tilde{a}_r$ in $M_{2m}(\C)$ given by: $$\tilde{a}_i= \begin{pmatrix} 0 & a_i^* \\ a_i & 0 \end{pmatrix}, \qquad i=0,1,\dots,r.$$ Note then that $$\begin{aligned} &&\hskip-6pt \tilde{a}_0\otimes\unit_{\CA}+\sum_{i=1}^r\tilde{a}_i\otimes x_i \\ &&\qquad = \begin{pmatrix} 0 & a_0^*\otimes\unit_{\CA}+\sum_{i=1}^ra_i^*\otimes x_i \\ a_0\otimes\unit_{\CA}+\sum_{i=1}^ra_i\otimes x_i & 0 \end{pmatrix} \\ &&\qquad = \begin{pmatrix} 0 & \unit_{\CA} \\ \unit_{\CA} & 0 \end{pmatrix} \cdot \begin{pmatrix} a_0\otimes\unit_{\CA}+\sum_{i=1}^ra_i\otimes x_i & 0\\ 0 & a_0^*\otimes\unit_{\CA}+\sum_{i=1}^ra_i^*\otimes x_i \end{pmatrix}.\end{aligned}$$ Therefore, $\tilde{a}_0\otimes\unit_{\CA}+\sum_{i=1}^r\tilde{a}_i\otimes x_i$ is invertible in $M_{2m}(\CA)$ if and only if $a_0\otimes\unit_{\CA}+\sum_{i=1}^ra_i\otimes x_i$ is invertible in $M_m(\CA)$, and similarly, of course, $\tilde{a}_0\otimes\unit_{\CB}+\sum_{i=1}^r\tilde{a}_i\otimes y_i$ is invertible in $M_{2m}(\CB)$ if and only if $a_0\otimes\unit_{\CB}+\sum_{i=1}^ra_i\otimes y_i$ is invertible in $M_m(\CB)$. It follows that $$\begin{split} a_0\otimes\unit_{\CA}+\sum_{i=1}^ra_i\otimes x_i \ \textrm{is invertible} &\iff \tilde{a}_0\otimes\unit_{\CA}+\sum_{i=1}^r\tilde{a}_i\otimes x_i \ \textrm{is invertible} \\[.2cm] &\implies \tilde{a}_0\otimes\unit_{\CB}+\sum_{i=1}^r\tilde{a}_i\otimes y_i \ \textrm{is invertible} \\[.2cm] &\iff a_0\otimes\unit_{\CB}+\sum_{i=1}^ra_i\otimes y_i \ \textrm{is invertible}, \end{split}$$ where the second implication follows from the assumption . Since the argument above holds for arbitrary matrices $a_0,a_1,\dots,a_r$ in $M_m(\C)$, it follows from Remark \[obs\](1) that condition is satisfied. We prove next that the assumption implies the condition: $$\begin{split} \forall m\in\N \ \forall a_0,a_1,\dots,a_r,a_{r+1}&\in M_m(\C)\colon \\[.2cm] \spe\big(a_0\otimes\unit_{\CA}+&\textstyle{\sum_{i=1}^ra_i\otimes x_i}+a_{r+1}\otimes\sum_{i=1}^rx_i^2\big) \\[.2cm] &\supseteq \spe\big(a_0\otimes\unit_{\CB}+\textstyle{\sum_{i=1}^ra_i\otimes y_i} + a_{r+1}\otimes\sum_{i=1}^ry_i^2\big). \label{e1.5} \end{split}$$ Using Remark \[obs\](1), we have to show, given $m$ in $\N$ and $a_0,a_1,\dots,a_{r+1}$ in $M_m(\C)$, that invertibility of $a_0\otimes\unit_{\CA}+\textstyle{\sum_{i=1}^ra_i\otimes x_i}+a_{r+1}\otimes\sum_{i=1}^rx_i^2$ in $M_m(\CA)$ implies invertibility of $a_0\otimes\unit_{\CA}+\textstyle{\sum_{i=1}^ra_i\otimes y_i}+a_{r+1}\otimes\sum_{i=1}^ry_i^2$ in $M_m(\CB)$. 
For this, consider the matrices: $$S= \begin{pmatrix} a_0\otimes\unit_{\CA} & -\unit_m\otimes x_1 & -\unit_m\otimes x_2 & \cdots & -\unit_m\otimes x_r \\ a_1\otimes\unit_{\CA}+a_{r+1}\otimes x_1 & \unit_m\otimes\unit_{\CA} & \ & \ & O \\ a_2\otimes\unit_{\CA}+a_{r+1}\otimes x_2 & \ & \unit_m\otimes\unit_{\CA} & \ & \ \\ \vdots & \ & \ & \ddots & \ \\ a_r\otimes\unit_{\CA}+a_{r+1}\otimes x_r & O & \ & \ & \unit_m\otimes\unit_{\CA} \end{pmatrix} \in M_{(r+1)m}(\CA)$$ and $$T= \begin{pmatrix} a_0\otimes\unit_{\CB} & -\unit_m\otimes y_1 & -\unit_m\otimes y_2 & \cdots & -\unit_m\otimes y_r \\ a_1\otimes\unit_{\CB}+a_{r+1}\otimes y_1 & \unit_m\otimes\unit_{\CB} & \ & \ & O \\ a_2\otimes\unit_{\CB}+a_{r+1}\otimes y_2 & \ & \unit_m\otimes\unit_{\CB} & \ & \ \\ \vdots & \ & \ & \ddots & \ \\ a_r\otimes\unit_{\CB}+a_{r+1}\otimes y_r & O & \ & \ & \unit_m\otimes\unit_{\CB} \end{pmatrix} \in M_{(r+1)m}(\CB).$$ By Remark \[obs\](2), invertibility of $S$ in $M_{(r+1)m}(\CA)$ is equivalent to invertibility of $$\begin{split} a_0\otimes\unit_{\CA}+\textstyle{\sum_{i=1}^r(\unit_m\otimes x_i)}\cdot(&a_i\otimes\unit_{\CA} + a_{r+1}\otimes x_i) \\[.2cm] &=a_0\otimes\unit_{\CA}+\textstyle{\sum_{i=1}^ra_i\otimes x_i}+a_{r+1}\otimes\sum_{i=1}^rx_i^2 \end{split}$$ in $M_m(\CA)$. Similarly, $T$ is invertible in $M_{(r+1)m}(\CB)$ if and only if $$a_0\otimes\unit_{\CB}+\textstyle{\sum_{i=1}^ra_i\otimes y_i}+a_{r+1}\otimes\sum_{i=1}^ry_i^2$$ is invertible in $M_m(\CB)$. It remains thus to show that invertibility of $S$ implies that of $T$. This, however, follows immediately from Step I, since we may write $S$ and $T$ in the form: $$S=b_0\otimes\unit_{\CA}+\sum_{i=1}^rb_i\otimes x_i \quad \textrm{and} \quad T=b_0\otimes\unit_{\CB}+\sum_{i=1}^rb_i\otimes y_i,$$ for suitable matrices $b_0,b_1,\dots,b_r$ in $M_{(r+1)m}(\C)$; namely $$b_0= \begin{pmatrix} a_0 & 0 & 0 & \cdots & 0 \\ a_1 & \unit_m & \ & \ & \textrm{\large O} \\ a_2 & \ & \unit_m & \ & \ \\ \vdots & \ & \ & \ddots & \ \\ a_r & \textrm{\large O} & \ & \ & \unit_m \end{pmatrix}$$ and $$b_i= \begin{pmatrix} 0 & \cdots & 0 & -\unit_m & 0 & \cdots & 0 \\ \vdots & \ & \ & \ & \ & \ & \ \\ 0 & \ & \ & \ & \ & \ & \ \\ a_{r+1} & \ & \ & \textrm{\Huge O} & & \ & \ \\ 0 & \ & \ & \ & \ & \ & \ \\ \vdots & \ & \ & \ & \ & \ & \ \\ 0 & \ & \ & \ & \ & \ & \ \end{pmatrix}, \qquad i=1,2,\dots,r.$$ For $i$ in $\{1,2,\dots,r\}$, the (possible) nonzero entries in $b_i$ are at positions$(1,i+1)$ and $(i+1,1)$. This concludes Step II. We show, finally, the existence of a unital completely positive mapping $u_0\colon E\to F$, satisfying that $u_0(x_i)=y_i$, $i=1,2,\dots,r$ and $u_0(\sum_{i=1}^rx_i^2)=\sum_{i=1}^ry_i^2$. Using Step II in the case $m=1$, it follows that for any complex numbers $a_0,a_1,\dots,a_{r+1}$, we have that $$\begin{gathered} \spe\big(a_0\unit_{\CA}+\textstyle{\sum_{i=1}^ra_ix_i} + a_{r+1}\sum_{i=1}^rx_i^2\big) \\ \supseteq \spe\big(a_0\unit_{\CB}+\textstyle{\sum_{i=1}^ra_iy_i} + a_{r+1}\sum_{i=1}^ry_i^2\big). \label{e1.6}\end{gathered}$$ If $a_0,a_1,\dots,a_{r+1}$ are real numbers, then the operators $$a_0\unit_{\CA}+\textstyle{\sum_{i=1}^ra_ix_i} + a_{r+1}\sum_{i=1}^rx_i^2 \quad \textrm{and} \quad a_0\unit_{\CB}+\textstyle{\sum_{i=1}^ra_iy_i} + a_{r+1}\sum_{i=1}^ry_i^2$$ are self-adjoint, since $x_1,\dots,x_r$ and $y_1,\dots,y_r$ are self-adjoint. 
Hence implies that $$\begin{split} \forall a_0,&\ldots,a_{r+1}\in\R\colon \\[.2cm] &\big\|a_0\unit_{\CA}+\textstyle{\sum_{i=1}^ra_ix_i} + a_{r+1}\sum_{i=1}^rx_i^2\big\| \ge \big\|a_0\unit_{\CB}+\textstyle{\sum_{i=1}^ra_iy_i} + a_{r+1}\sum_{i=1}^ry_i^2\big\|. \label{e1.7} \end{split}$$ Let $E'$ and $F'$ denote, respectively, the $\R$-linear span of $\{\unit_{\CA},x_1,\dots,x_r,\sum_{i=1}^rx_i^2\}$ and $\{\unit_{\CB},y_1,\dots,y_r,\sum_{i=1}^ry_i^2\}$: $$\begin{aligned} E'&=&\textrm{span}_{\R}\{\unit_{\CA},x_1,\dots,x_r,\textstyle{\sum_{i=1}^ rx_i^2}\} \\ \noalign{\noindent and} F'&=&\textrm{span}_{\R}\{\unit_{\CB},y_1,\dots,y_r,\sum_{i=1}^ry_i^2\}.\end{aligned}$$ It follows then from that there is a (well-defined) $\R$-linear mapping $u_0'\colon E'\to F'$ satisfying that $u_0'(\unit_{\CA})=\unit_{\CB}$, $u_0'(x_i)=y_i$, $i=1,2,\dots,r$ and $u_0'(\sum_{i=1}^rx_i^2)=\sum_{i=1}^ry_i^2$. For an arbitrary element $x$ in $E$, note that $\re(x)=\frac{1}{2}(x+x^*)\in E'$ and ${\rm Im}(x)=\frac{1}{2\i}(x-x^*)\in E'$. Hence, we may define a mapping $u_0\colon E\to F$ by setting: $$u_0(x)=u'_0(\re(x))+\i u_0'({\rm Im}(x)), \qquad (x\in E).$$ It is straightforward, then, to check that $u_0$ is a $\C$-linear mapping from $E$ onto $F$, which extends $u_0'$. Finally, it follows immediately from Step II that for all $m$ in $\N$, the mapping $\id_{M_m(\C)}\otimes u_0$ preserves positivity. In other words, $u_0$ is a completely positive mapping. This concludes the proof. In Section \[sec6\], we shall need the following strengthening of Theorem \[lin2\]: \[thm1-4\] Assume that the self adjoint elements $x_1,\dots,x_r\in\CA$[,]{} $y_1,\dots,y_r\in\CB$ satisfy the property $$\begin{gathered} \label{eq1-8} \forall m\in\N \ \forall a_0,\dots,a_r\in M_m(\Q + \i\Q)_{\rm sa} :\\ \spe\big(a_0\otimes\unit_A+\sum^r_{i=1} a_i\otimes x_i\big)\supseteq \spe\big(a_0\otimes 1_B+\sum_{i=1}^ra_i\otimes y_i\big).\end{gathered}$$ Then there exists a unique surjective unital $*$[-]{}homomorphism $\varphi\colon A_0\to B_0$ such that $\varphi(x_i)=y_i$[,]{} $i=1,\dots,r$. By Theorem \[lin2\], it suffices to prove that condition is equivalent to condition of that theorem. Clearly $\Rightarrow$ . It remains to be proved that $\Rightarrow$ . Let $d_H(K,L)$ denote the Hausdorff distance between two subsets $K$, $L$ of $\C$: $$\label{eq1-9} d_H(K,L)=\max\Big\{\sup_{x\in K} d(x,L), \ \sup_{y\in L} d(y,K)\Big\}.$$ For normal operators $A,B$ in $M_m(\C)$ or $\CB(\CH)$ ($\CH$ a Hilbert space) one has $$\label{eq1-10} d_H(\spe(A),\spe(B))\le\|A-B\|$$ (cf. [@Da Prop. 2.1]). Assume now that is satisfied, let $m\in\N$, $b_0,\dots,b_r \in M_m(\C)$ and let $\varepsilon >0$. 
Since $M_m(\Q+ \i\Q)_{\rm sa}$ is dense in $M_m(\C)_{\rm sa}$, we can choose $a_0,\dots,a_r\in M_m(\Q+\i\Q)_{\rm sa}$ such that $$\|a_0-b_0\|+\sum^r_{i=1} \|a_i-b_i\| \| x_i\| <\varepsilon$$ and $$\|a_0-b_0\|+\sum^r_{i=1} \|a_i-b_i\| \| y_i\| <\varepsilon.$$ Hence, by , $$d_H\big(\spe\big(a_0\otimes 1 + \textstyle{\sum^r_{i=1}} a_i\otimes x_i\big), \spe\big(b_0\otimes 1 + \sum^r_{i=1} b_i\otimes x_i\big)\big)<\varepsilon$$ and $$d_H\big(\spe\big(a_0\otimes 1 + \textstyle{\sum^r_{i=1}} a_i\otimes y_i\big), \spe\big(b_0\otimes 1 + \sum^r_{i=1} b_i\otimes y_i\big)\big)<\varepsilon.$$ By these two inequalities and we get $$\begin{aligned} \spe\big(b_0\otimes 1 + \textstyle{\sum^r_{i=1}}b_i\otimes y_i\big) &\subseteq & \spe\big(a_0\otimes 1 + \textstyle{\sum^r_{i=1}}a_i\otimes y_i\big) \ + \ ]-\varepsilon,\varepsilon[\\ &\subseteq & \spe\big(a_0\otimes 1 + \textstyle{\sum^r_{i=1}} a_i\otimes x_i\big) \ + \ ]-\varepsilon,\varepsilon[\\ &\subseteq & \spe\big(b_0\otimes 1 + \textstyle{\sum^r_{i=1}}b_i\otimes x_i) \ + \ ]-2\varepsilon,2\varepsilon[.\end{aligned}$$ Since $\spe(b_0\otimes 1 + \sum^r_{i=1} b_i\otimes y_i)$ is compact and $\varepsilon >0$ is arbitrary, it follows that $$\spe\big(b_0\otimes 1 + \textstyle{\sum^r_{i=1}}b_i\otimes y_i\big)\subseteq\spe\big(b_0\otimes 1 + \sum^r_{i=1} b_i\otimes x_i\big),$$ for all $m\in\N$ and all $b_0,\dots,b_r\in M_m(\C)_{\rm sa}$, i.e. holds. This completes the proof of Theorem \[thm1-4\]. The master equation {#sec2} =================== Let $\CH$ be a Hilbert space. For $T\in \CB(\CH)$ we let $\im T$ denote the self adjoint operator $\im T = \frac{1}{2\i}(T-T^*)$. We say that a matrix $T$ in $M_m(\C)_{\rm sa}$ is positive definite if all its eigenvalues are strictly positive, and we denote by $\lmax(T)$ and $\lmin(T)$ the largest and smallest eigenvalues of $T$, respectively. \[imaginaerdel\] - Let $\CH$ be a Hilbert space and let $T$ be an operator in $\CB(\CH)$[,]{} such that the imaginary part $\im T$ satisfies one of the two conditions[:]{} $$\im T\ge\varepsilon\unit_{\CB(\CH)} \qquad \textrm{or} \qquad \im T\le-\varepsilon\unit_{\CB(\CH)},$$ for some $\varepsilon$ in $]0,\infty[$. Then $T$ is invertible and $\|T^{-1}\|\le\frac{1}{\varepsilon}$. - Let $T$ be a matrix in $M_m(\C)$ and assume that $\im T$ is positive definite. Then $T$ is invertible and $\|T^{-1}\|\le \|(\im T)^{-1}\|$. Note first that (ii) is a special case of (i). Indeed, since $\im T$ is self-adjoint, we have that $\im T\ge\lmin(\im T)\unit_m$. Since $\im T$ is positive definite, $\lmin(\im T)>0$, and hence (i) applies. Thus, $T$ is invertible and furthermore $$\|T^{-1}\|\le\frac{1}{\lmin(\im T)}=\lmax\big((\im T)^{-1}\big)=\|(\im T)^{-1}\|,$$ since $(\im T)^{-1}$ is positive. To prove (i), note first that by replacing, if necessary, $T$ by $-T$, it suffices to consider the case where $\im T\ge\varepsilon\unit_{\CB(\CH)}$. Let $\|\cdot\|$ and $\<\cdot,\cdot\>$ denote, respectively, the norm and the inner product on $\CH$. Then, for any unit vector $\xi$ in $\CH$, we have $$\begin{aligned} \|T\xi\|^2&=&\|T\xi\|^2\|\xi\|^2\ge |\<T\xi,\xi\>|^2 \\ &=&\big|\<\re(T)\xi,\xi\>+\i\<\im T\xi,\xi\>\big|^2 \ge \<\im T\xi,\xi\>^2 \ge \varepsilon^2\|\xi\|^2,\end{aligned}$$ where we used that $\<\re(T)\xi,\xi\>,\<\im T\xi,\xi\>\in\R$. 
Note further, for any unit vector $\xi$ in $\CH$, that $$\|T^*\xi\|^2\ge|\<T^*\xi,\xi\>|^2=|\<T\xi,\xi\>|^2\ge \varepsilon^2\|\xi\|^2.$$ Altogether, we have verified that $\|T\xi\|\ge\varepsilon\|\xi\|$ and that $\|T^*\xi\|\ge\varepsilon\|\xi\|$ for any (unit) vector $\xi$ in $\CH$, and by [@pe Prop. 3.2.6] this implies that $T$ is invertible and that $\|T^{-1}\|\le\frac{1}{\varepsilon}$. \[Kaplansky trick\] Let $\CA$ be a unital $C^*$[-]{}algebra and denote by $\GL(\CA)$ the group of invertible elements of $\CA$. Let further $A\colon I\to\GL(\CA)$ be a mapping from an open interval $I$ in $\R$ into $\GL(\CA)$[,]{} and assume that $A$ is differentiable[,]{} in the sense that $$A'(t_0):=\lim_{t\to t_0}\frac{1}{t-t_0}\big(A(t)-A(t_0)\big)$$ exists in the operator norm[,]{} for any $t_0$ in $I$. Then the mapping $t\mapsto A(t)^{-1}$ is also differentiable and $$\frac{\rm d}{{\rm d}t}A(t)^{-1}=-A(t)^{-1}A'(t)A(t)^{-1}, \qquad (t\in I).$$ The lemma is well known. For the reader’s convenience we include a proof. For any $t,t_0$ in $I$, we have $$\begin{split} \frac{1}{t-t_0}\big(A(t)^{-1}-A(t_0)^{-1}\big) &= \frac{1}{t-t_0}A(t)^{-1}\big(A(t_0)-A(t)\big)A(t_0)^{-1} \\[.2cm] &=-A(t)^{-1}\Big(\frac{1}{t-t_0}\big(A(t)-A(t_0)\big)\Big)A(t_0)^{-1} \\[.2cm] &\underset{t\to t_0}{\longrightarrow} -A(t_0)^{-1}A'(t_0)A(t_0)^{-1}, \end{split}$$ where the limit is taken in the operator norm, and we use that the mapping $B\mapsto B^{-1}$ is a homeomorphism of $\GL(\CA)$ with respect to the operator norm. \[Gauss trick\] Let $\sigma$ be a positive number[,]{} let $N$ be a positive integer and let $\gamma_1,\dots,\gamma_N$ be $N$ independent identically distributed real valued random variables with distribution $N(0,\sigma^2)$[,]{} defined on the same probability space $(\Omega,\CF,P)$. Consider further a finite dimensional vector space $E$ and a $C^1$[-]{}mapping[:]{} $$(x_1,\dots,x_N)\mapsto F(x_1,\dots,x_N)\colon\R^N\to E,$$ satisfying that $F$ and all its first order partial derivatives $\frac{\partial F}{\partial x_1},\dots,\frac{\partial F}{\partial x_N}$ are polynomially bounded. For any $j$ in $\{1,2,\dots,N\}$[,]{} we then have $$\E\big\{\gamma_jF(\gamma_1,\dots,\gamma_N)\big\} = \sigma^2\E\big\{\textstyle{\frac{\partial F}{\partial x_j}}(\gamma_1,\dots,\gamma_N)\big\},$$ where $\E$ denotes expectation with respect to $P$. Clearly it is sufficient to treat the case $E=\C$. The joint distribution of $\gamma_1,\dots,\gamma_N$ is given by the density function $$\varphi(x_1,\dots,x_N) = (2\pi\sigma^2)^{-\frac n2} \exp\big(\textstyle{-\frac{1}{2\sigma^2}\sum^N_{i=1}x_i^2}\big), \qquad (x_1,\dots,x_N)\in\R^N.$$ Since $$\frac{\partial\varphi}{\partial x_j} (x_1,\dots,x_N) = -\frac{1}{\sigma^2} x_j \varphi(x_1,\dots,x_N),$$ we get by partial integration in the variable $x_j$, $$\begin{aligned} \E\big\{\gamma_j F(\gamma_1,\dots,\gamma_N)\big\} &=& \int_{\R^N} F(x_1,\dots,x_N)x_j\varphi(x_1,\dots,x_N)\6x_1,\dots,\6x_N\\ &=& -\sigma^2 \int_{\R^N} F(x_1,\dots,x_N)\frac{\partial\varphi}{\partial x_j} (x_1,\dots,x_N)\6x_1,\dots,\6x_N\\ &=& \sigma^2 \int_{\R^N} \frac{\partial F}{\partial x_j}(x_1,\dots,x_N) \varphi (x_1,\dots,x_N)\6x_1,\dots,\6x_N\\ &=& \sigma^2\E\bigg\{\frac{\partial F}{\partial x_j} (\gamma_1,\dots,\gamma_N)\bigg\}. \end{aligned}$$ -24pt Let $r$ and $n$ be positive integers. In the following we denote by $\CE_{r,n}$ the [*real*]{} vector space $(\Mnsa)^r$. 
Let $r$ and $n$ be positive integers. In the following we denote by $\CE_{r,n}$ the [*real*]{} vector space $(\Mnsa)^r$. Note that $\CE_{r,n}$ is a Euclidean space with inner product $\<\cdot,\cdot\>_e$ given by $$\begin{gathered} \<(A_1,\dots,A_r),(B_1,\dots,B_r)\>_{e} \\=\Tr_n\Big(\sum_{j=1}^rA_jB_j\Big), \qquad ((A_1,\dots,A_r),(B_1,\dots,B_r)\in\CE_{r,n}),\end{gathered}$$ and with norm given by $$\|(A_1,\dots,A_r)\|^2_{e} = \Tr_n\Big(\sum_{j=1}^rA_j^2\Big) =\sum_{j=1}^r\|A_j\|^2_{2,\Tr_n}, \qquad ((A_1,\dots,A_r)\in\CE_{r,n}).$$ Finally, we shall denote by $S_1(\CE_{r,n})$ the unit sphere of $\CE_{r,n}$ with respect to $\|\cdot\|_e$. \[kanonisk iso\] Let $r,n$ be positive integers, and consider the linear isomorphism $\Psi_0$ between $\Mnsa$ and $\R^{n^2}$ given by $$\Psi_0((a_{kl})_{1\le k,l\le n})=\big((a_{kk})_{1\le k\le n},(\sqrt{2}\re(a_{kl}))_{1\le k<l\le n}, (\sqrt{2}{\rm Im}(a_{kl}))_{1\le k<l\le n}\big), \label{e2.0}$$ for $(a_{kl})_{1\le k,l\le n}$ in $\Mnsa$. We denote further by $\Psi$ the natural extension of $\Psi_0$ to a linear isomorphism between $\CE_{r,n}$ and $\R^{rn^2}$: $$\Psi(A_1,\dots,A_r)=(\Psi_0(A_1),\dots,\Psi_0(A_r)), \qquad (A_1,\dots,A_r\in\Mnsa).$$ We shall identify $\CE_{r,n}$ with $\R^{rn^2}$ via the isomorphism $\Psi$. Note that under this identification, the norm $\|\cdot\|_e$ on $\CE_{r,n}$ corresponds to the usual Euclidean norm on $\R^{rn^2}$. In other words, $\Psi$ is an isometry. Consider next independent random matrices $X_1^{(n)},\dots,X_r^{(n)}$ from $\SGRM(n,\frac{1}{n})$ as defined in the introduction. Then $\X=(X_1^{(n)},\dots,X_r^{(n)})$ is a random variable taking values in $\CE_{r,n}$, so that $\Y=\Psi(\X)$ is a random variable taking values in $\R^{rn^2}$. From the definition of $\SGRM(n,\frac{1}{n})$ and the fact that $X_1^{(n)},\dots,X_r^{(n)}$ are independent, it is easily seen that the distribution of $\Y$ on $\R^{rn^2}$ is the product measure $\mu=\nu\otimes\nu\otimes\cdots\otimes\nu$ ($rn^2$ terms), where $\nu$ is the Gaussian distribution with mean $0$ and variance $\frac{1}{n}$. In the following, we consider a given family $a_0,\dots,a_r$ of matrices in $\Mmsa$, and, for each $n$ in $\N$, a family $X_1^{(n)},\dots,X_r^{(n)}$ of independent random matrices in $\SGRM(n,\frac{1}{n})$. Furthermore, we consider the following random variable with values in $M_m(\C)\otimes M_n(\C)$: $$S_n= a_0\otimes \unit_n + \sum_{i=1}^ra_i\otimes X_i^{(n)}. \label{e2.0a}$$ \[lemma til master eq\] For each $n$ in $\N$[,]{} let $S_n$ be as above. For any matrix $\lambda$ in $M_m(\C)$[,]{} for which $\im\lambda$ is positive definite[,]{} we define a random variable with values in $M_m(\C)$ by [(]{}cf. Lemma [\[imaginaerdel\]),]{} $$H_n(\lambda)=(\id_m\otimes\tr_n) \big[(\lambda\otimes\unit_n-S_n)^{-1}\big].$$ Then[,]{} for any $j$ in $\{1,2,\dots,r\}$[,]{} we have $$\E\big\{H_n(\lambda)a_jH_n(\lambda)\big\}= \E\big\{ (\id_m\otimes\tr_n)\big[(\unit_m\otimes X_j^{(n)})\cdot(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\}.$$ Let $\lambda$ be a fixed matrix in $M_m(\C)$, such that $\im\lambda$ is positive definite. Consider the canonical isomorphism $\Psi\colon\CE_{r,n}\to\R^{rn^2}$, introduced in Remark \[kanonisk iso\], and then define the mappings $\tilde{F}\colon\CE_{r,n}\to M_m(\C)\otimes M_n(\C)$ and $F\colon\R^{rn^2}\to M_m(\C)\otimes M_n(\C)$ by (cf.
Lemma \[imaginaerdel\]) $$\begin{gathered} \tilde{F}(v_1,\dots,v_r)\\ =\big(\lambda\otimes\unit_n - a_0\otimes\unit_n -\textstyle{\sum_{i=1}^ra_i\otimes v_i}\big)^{-1}, \qquad (v_1,\dots,v_r\in\Mnsa),\end{gathered}$$ and $$F=\tilde{F}\circ\Psi^{-1}.$$ Note then that $$\big(\lambda\otimes\unit_n-S_n\big)^{-1} = F(\Psi(X_1^{(n)},\dots,X_r^{(n)})),$$ where $\Y=\Psi(X_1^{(n)},\dots,X_r^{(n)})$ is a random variable taking values in $\R^{rn^2}$, and the distribution of $\Y$ equals that of a tuple $(\gamma_1,\dots,\gamma_{rn^2})$ of $rn^2$ independent identically $N(0,\frac{1}{n})$-distributed real-valued random variables. Now, let $j$ in $\{1,2,\dots,r\}$ be fixed, and then define $$\begin{split} X_{j,k,k}^{(n)}&=(X_j^{(n)})_{kk}, \qquad (1\le k\le n), \\[.2cm] Y_{j,k,l}^{(n)}&=\sqrt{2}\re(X_j^{(n)})_{k,l}, \qquad (1\le k<l\le n), \\[.2cm] Z_{j,k,l}^{(n)}&=\sqrt{2}{\rm Im}(X_j^{(n)})_{k,l}, \qquad (1\le k<l\le n). \end{split}$$ Note that $\big((X_{j,k,k}^{(n)})_{1\le k\le n},(Y_{j,k,l}^{(n)})_{1\le k<l\le n}, (Z_{j,k,l}^{(n)})_{1\le k<l\le n}\big)=\Psi_0(X_j^{(n)})$, where $\Psi_0$ is the mapping defined in of Remark \[kanonisk iso\]. Note also that the standard orthonormal basis for $\R^{n^2}$ corresponds, via $\Psi_0$, to the following orthonormal basis for $\Mnsa$: $$\begin{aligned} &&e_{k,k}^{(n)}, \qquad (1\le k\le n)\label{e2.1} \\ &&f_{k,l}^{(n)}=\textstyle{\frac{1}{\sqrt{2}}}\big(e_{k,l}^{(n)}+e_{l,k}^ {(n)}\big) \qquad (1\le k<l\le n), \nonumber\\ &&g_{k,l}^{(n)}=\textstyle{\frac{\i}{\sqrt{2}}}\big(e_{k,l}^{(n)}- e_{l,k}^{(n)}\big) \qquad (1\le k<l\le n). \nonumber \end{aligned}$$ In other words, $\big((X_{j,k,k}^{(n)})_{1\le k\le n},(Y_{j,k,l}^{(n)})_{1\le k<l\le n}, (Z_{j,k,l}^{(n)})_{1\le k<l\le n}\big)$ are the coefficients of $X_j^{(n)}$ with respect to the orthonormal basis set out in . Combining now the above observations with Lemma \[Gauss trick\], it follows that $$\begin{split} \frac{1}{n}\E\Big\{\diff\big(\lambda\otimes\unit_n-S_n-ta_j\otimes e_{k,k}^{(n)}\big)^{-1}\Big\} &=\E\big\{X_{j,k,k}^{(n)}\cdot\big(\lambda\otimes\unit_n-S_n\big)^{ -1}\big\}, \\[.2cm] \frac{1}{n}\E\Big\{\diff\big(\lambda\otimes\unit_n-S_n-ta_j\otimes f_{k,l}^{(n)}\big)^{-1}\Big\} &=\E\big\{Y_{j,k,l}^{(n)}\cdot\big(\lambda\otimes\unit_n-S_n\big)^{ -1}\big\}, \\[.2cm] \frac{1}{n}\E\Big\{\diff\big(\lambda\otimes\unit_n-S_n-ta_j\otimes g_{k,l}^{(n)}\big)^{-1}\Big\} &=\E\big\{Z_{j,k,l}^{(n)}\cdot\big(\lambda\otimes\unit_n-S_n\big)^{ -1}\big\}, \end{split}$$ for all values of $k,l$ in $\{1,2,\dots,n\}$ such that $k<l$. 
On the other hand, it follows from Lemma \[Kaplansky trick\] that for any vector $v$ in $\Mnsa$, $$\diff\big(\lambda\otimes\unit_n-S_n-ta_j\otimes v\big)^{-1} =(\lambda\otimes\unit_n-S_n)^{-1}(a_j\otimes v)(\lambda\otimes\unit_n-S_n)^{-1},$$ and we obtain thus the identities: $$\begin{aligned} &&\E\big\{X_{j,k,k}^{(n)}\cdot\big(\lambda\otimes\unit_n-S_n\big)^{ -1}\big\} \label{e2.2} \\ &&\qquad\qquad =\frac{1}{n}\E\big\{(\lambda\otimes\unit_n-S_n)^{-1}(a_j\otimes e_{k,k}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big\} \nonumber\\ && \E\big\{Y_{j,k,l}^{(n)}\cdot\big(\lambda\otimes\unit_n-S_n\big)^{ -1}\big\} \label{e2.3}\\ &&\qquad\qquad =\frac{1}{n}\E\big\{(\lambda\otimes\unit_n-S_n)^{-1}(a_j\otimes f_{k,l}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big\} \nonumber \\ &&\E\big\{Z_{j,k,l}^{(n)}\cdot\big(\lambda\otimes\unit_n-S_n\big)^{ -1}\big\} \label{e2.4}\\ &&\qquad\qquad =\frac{1}{n}\E\big\{(\lambda\otimes\unit_n-S_n)^{-1}(a_j\otimes g_{k,l}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big\} \nonumber\end{aligned}$$ for all relevant values of $k,l$, $k<l$. Note next that for $k<l$, we have $$\begin{split} (X_j^{(n)})_{k,l} &=\textstyle{\frac{1}{\sqrt{2}}}\big(Y_{j,k,l}^{(n)}+\i Z_{j,k,l}^{(n)}\big), \\[.2cm] (X_j^{(n)})_{l,k} &=\textstyle{\frac{1}{\sqrt{2}}}\big(Y_{j,k,l}^{(n)}-\i Z_{j,k,l}^{(n)}\big), \\[.2cm] e_{k,l}^{(n)} &=\textstyle{\frac{1}{\sqrt{2}}}\big(f_{k,l}^{(n)}-\i g_{k,l}^{(n)}\big), \\[.2cm] e_{l,k}^{(n)} &=\textstyle{\frac{1}{\sqrt{2}}}\big(f_{k,l}^{(n)}+\i g_{k,l}^{(n)}\big), \\[.2cm] \end{split}$$ and combining this with –, it follows that $$\begin{gathered} \E\big\{(X_j^{(n)})_{k,l}\cdot\big(\lambda\otimes\unit_n-S_n\big)^{ -1}\big\} \\ =\frac{1}{n}\E\big\{(\lambda\otimes\unit_n-S_n)^{-1}(a_j\otimes e_{l,k}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big\}, \label{e2.5}\end{gathered}$$ and that $$\begin{gathered} \E\big\{(X_j^{(n)})_{l,k}\cdot\big(\lambda\otimes\unit_n-S_n\big)^{ -1}\big\} \\ =\frac{1}{n}\E\big\{(\lambda\otimes\unit_n-S_n)^{-1}(a_j\otimes e_{k,l}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big\}, \label{e2.6}\end{gathered}$$ for all $k,l$, $k<l$. Taking also into account, it follows that actually holds for all $k,l$ in $\{1,2,\dots,n\}$. By adding the equation for all values of $k,l$ and by recalling that $$X_j^{(n)}=\sum_{1\le k,l\le n}(X_j^{(n)})_{k,l}e_{k,l}^{(n)},$$ we conclude that $$\begin{gathered} \E\big\{(\unit_m\otimes X_j^{(n)}) (\lambda\otimes\unit_n-S_n)^{-1}\big\} \\ =\frac{1}{n}\sum_{1\le k,l\le n}\E\big\{(\unit_m\otimes e_{k,l}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1} (a_j\otimes e_{l,k}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big\}. \label{e2.7}\end{gathered}$$ To calculate the right-hand side of , we write $$\big(\lambda\otimes\unit_n-S_n\big)^{-1}=\sum_{1\le u,v\le n}F_{u,v}\otimes e_{u,v},$$ where, for all $u,v$ in $\{1,2,\dots,n\}$, $F_{u,v}\colon\Omega\to M_m(\C)$ is an $M_m(\C)$-valued random variable. Recall then that for any $k,l,u,v$ in $\{1,2,\dots,n\}$, $$e_{k,l}^{(n)}\cdot e_{u,v}^{(n)} = \begin{cases} e_{k,v}, &\textrm{if} \ l=u, \\ 0, &\textrm{if} \ l\ne u. \end{cases}$$ For any fixed $u,v$ in $\{1,2,\dots,n\}$, it follows thus that $$\sum_{1\le k,l\le n}(\unit_m\otimes e_{k,l}^{(n)})(F_{u,v}\otimes e_{u,v}^{(n)})(a_j\otimes e_{l,k}^{(n)})= \begin{cases} (F_{u,u}\cdot a_j)\otimes\unit_n, &\textrm{if} \ u=v, \\ 0, &\textrm{if} \ u\ne v. 
\end{cases} \label{e2.8}$$ Adding the equation for all values of $u,v$ in $\{1,2,\dots,n\}$, it follows that $$\sum_{1\le k,l\le n}(\unit_m\otimes e_{k,l}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1} (a_j\otimes e_{l,k}^{(n)})= \big(\textstyle{\sum_{u=1}^nF_{u,u}a_j}\big)\otimes\unit_n.$$ Note here that $$\sum_{u=1}^nF_{u,u}=n\cdot\id_m\otimes\tr_n\big[(\lambda\otimes\unit_n- S_n)^{-1}\big] =n\cdot H_n(\lambda),$$ so that $$\sum_{1\le k,l\le n}(\unit_m\otimes e_{k,l}^{(n)})(\lambda\otimes\unit_n-S_n)^{-1} (a_j\otimes e_{l,k}^{(n)})= nH_n(\lambda)a_j\otimes\unit_n.$$ Combining this with , we find that $$\E\big\{(\unit_m\otimes X_j^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big\} =\E\big\{(H_n(\lambda)a_j\otimes\unit_n) (\lambda\otimes\unit_n-S_n)^{-1}\big\}. \label{e2.9}$$ Applying finally $\id_m\otimes\tr_n$ to both sides of , we conclude that $$\begin{aligned} &&\hskip-36pt\E\big\{\id_m\otimes\tr_n\big[(\unit_m\otimes X_j^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\}\\ &&\qquad =\E\big\{H_n(\lambda)a_j\cdot \id_m\otimes\tr_n\big[(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\} \\ &&\qquad=\E\big\{H_n(\lambda)a_jH_n(\lambda)\big\},\end{aligned}$$ which is the desired formula. \[master eq\] Let[,]{} for each $n$ in $\N$[,]{} $S_n$ be the random matrix introduced in [,]{} and let $\lambda$ be a matrix in $M_m(\C)$ such that ${\rm Im}(\lambda)$ is positive definite. Then with $$H_n(\lambda)=(\id_m\otimes\tr_n) \big[(\lambda\otimes\unit_n-S_n)^{-1}\big]$$ [(]{}cf. Lemma [\[imaginaerdel\]),]{} we have the formula $$\E\Big\{\sum_{i=1}^r a_iH_n(\lambda)a_iH_n(\lambda) + (a_0-\lambda)H_n(\lambda) +\unit_m\Big\}=0, \label{e2.10}$$ as an $M_m(\C)$[-]{}valued expectation. By application of Lemma \[lemma til master eq\], we find that $$\begin{aligned} &&\hskip-36pt \E\Big\{\sum_{j=1}^ra_jH_n(\lambda)a_jH_n(\lambda)\Big\} \\[.2cm] &&\qquad=\sum_{j=1}^ra_j\E\big\{H_n(\lambda)a_jH_n(\lambda)\big\} \\[.2cm] &&\qquad= \sum_{j=1}^ra_j\E\big\{\id_m\otimes\tr_n\big[(\unit_m\otimes X_j^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\} \\[.2cm] &&\qquad=\sum_{j=1}^r\E\big\{\id_m\otimes\tr_n\big[(a_j\otimes\unit_n)(\unit_m\otimes X_j^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\} \\[.2cm] &&\qquad=\sum_{j=1}^r\E\big\{\id_m\otimes\tr_n\big[(a_j\otimes X_j^{(n)})(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\}.\end{aligned}$$ Moreover, $$\begin{aligned} \E\{a_0 H_n(\lambda)\} &=& \E\{a_0(\id_m\otimes\tr_n)((\lambda\otimes \unit_n-S_n)^{-1})\}\\ &=& \E\{(\id_m\otimes\tr_n)((a_0\otimes\unit_n)(\lambda\otimes\unit_n- S_n)^{-1})\}.\end{aligned}$$ Hence, $$\begin{split} \E\Big\{a_0H_n(\lambda)+\sum^r_{j=1} &a_jH_n(\lambda)a_jH_n(\lambda)\Big\} \\[.2cm] &=\E\big\{\id_m\otimes\tr_n\big[S_n(\lambda\otimes\unit_n-S_n)^{ -1}\big]\big\} \\[.2cm] &=\E\big\{\id_m\otimes\tr_n\big[\big(\lambda\otimes\unit_n -(\lambda\otimes\unit_n-S_n)\big)(\lambda\otimes\unit_n-S_n)^{ -1}\big]\big\} \\[.2cm] &=\E\big\{\id_m\otimes\tr_n\big[(\lambda\otimes\unit_n)(\lambda\otimes\unit_n-S_n)^{-1} -\unit_m\otimes\unit_n\big]\big\} \\[.2cm] &=\E\big\{\lambda H_n(\lambda)-\unit_m\big\}, \end{split}$$ from which follows readily.
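Before turning to the variance estimates, we note, as a purely illustrative aside, that the master equation of Theorem \[master eq\] lends itself to a simple numerical sanity check. The Python sketch below (assuming only `numpy`; the matrices $a_0,a_1,a_2$, the value of $\lambda$ and the sample sizes are arbitrary choices made for illustration) draws independent $\SGRM(n,\frac1n)$ matrices, forms $H_n(\lambda)$ for each sample and averages the left-hand side of the master equation; the norm of the averaged matrix is small, of the order of the Monte Carlo error.

```python
import numpy as np

# Illustrative Monte Carlo check of the master equation of Theorem [master eq]:
#   E{ sum_i a_i H_n a_i H_n + (a_0 - lambda) H_n + 1_m } = 0.
# The coefficient matrices and lambda below are arbitrary illustrative choices.
rng = np.random.default_rng(1)
m, n, samples = 2, 100, 400
a0 = np.array([[0.0, 1.0], [1.0, -0.5]])
a = [np.array([[1.0, 0.0], [0.0, -1.0]]),
     np.array([[0.0, 1.0], [1.0, 0.0]])]
lam = 0.3 * np.eye(m) + 2.0j * np.eye(m)          # Im(lambda) positive definite

def sgrm(n):
    # one sample from SGRM(n, 1/n): a self-adjoint Gaussian random matrix
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    return (g + g.conj().T) / np.sqrt(2 * n)

def H_sample():
    # H_n(lambda) = (id_m (x) tr_n)[(lambda (x) 1_n - S_n)^{-1}] for one sample of S_n
    S = np.kron(a0, np.eye(n)) + sum(np.kron(ai, sgrm(n)) for ai in a)
    R = np.linalg.inv(np.kron(lam, np.eye(n)) - S)
    return R.reshape(m, n, m, n).trace(axis1=1, axis2=3) / n   # partial trace over M_n

acc = np.zeros((m, m), dtype=complex)
for _ in range(samples):
    Hn = H_sample()
    acc += sum(ai @ Hn @ ai @ Hn for ai in a) + (a0 - lam) @ Hn + np.eye(m)
print(np.linalg.norm(acc / samples))              # close to 0 (Monte Carlo error)
```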
Variance estimates {#sec3}
==================

Let $K$ be a positive integer. Then we denote by $\|\cdot\|$ the usual Euclidean norm on $\C^K$; i.e., $$\|(\zeta_1,\dots,\zeta_K)\| =\big(|\zeta_1|^2+\cdots+|\zeta_K|^2\big)^{1/2}, \qquad (\zeta_1,\dots,\zeta_K\in\C).$$ Furthermore, we denote by $\|\cdot\|_{2,\Tr_K}$ the Hilbert-Schmidt norm on $M_K(\C)$, i.e., $$\|T\|_{2,\Tr_K}=\big(\Tr_K(T^*T)\big)^{1/2}, \qquad (T\in M_K(\C)).$$ We shall also, occasionally, consider the norm $\|\cdot\|_{2,\tr_K}$ given by: $$\|T\|_{2,\tr_K}=\big(\tr_K(T^*T)\big)^{1/2}=K^{-1/2}\|T\|_{2,\Tr_K}, \qquad (T\in M_K(\C)).$$ \[concentration\] Let $N$ be a positive integer and equip $\R^N$ with the probability measure $\mu=\nu\otimes\nu\otimes\cdots\otimes\nu$ [(]{}$N$ terms[),]{} where $\nu$ is the Gaussian distribution on $\R$ with mean $0$ and variance $1$. Let $f\colon\R^N\to\C$ be a $C^1$-function[,]{} such that $\E\{|f|^2\}<\infty$. Then with $\V\{f\}=\E\{|f-\E\{f\}|^2\}$[,]{} we have $$\V\{f\}\le \E\big\{\|\grad(f)\|^2\big\}.$$ See [@Cn Thm. 2.1]. The Gaussian Poincaré inequality is a folklore result which goes back to the 1930s (cf. Beckner [@Be]). It was rediscovered by Chernoff [@Cf] in 1981 in the case $N=1$ and by Chen [@Cn] in 1982 for general $N$. The original proof as well as Chernoff’s proof is based on an expansion of $f$ in Hermite polynomials (or tensor products of Hermite polynomials in the case $N\ge 2$). Chen gives in [@Cn] a self-contained proof which does not rely on Hermite polynomials. In a preliminary version of this paper, we proved the slightly weaker inequality: $\V\{f\}\le\frac{\pi^2}{8}\E\{\| \grad f \|^2\}$ using the method of proof of [@P1 Lemma 4.7]. We wish to thank Gilles Pisier for bringing the papers by Beckner, Chernoff and Chen to our attention. \[cor3-2\] Let $N\in\N$[,]{} and let $Z_1,\dots,Z_N$ be $N$ independent and identically distributed real Gaussian random variables with mean zero and variance $\sigma^2$ and let $f\colon\R^N\to\C$ be a $C^1$[-]{}function[,]{} such that $f$ and $\grad(f)$ are both polynomially bounded. Then $$\V\big\{ f(Z_1,\dots,Z_N)\big\}\le\sigma^2\E\big\{\|(\grad f)(Z_1,\dots,Z_N)\|^2\big\}.$$ In the case $\sigma =1$, this is an immediate consequence of Proposition \[concentration\]. In the general case, put $Y_j=\frac{1}{\sigma} Z_j$, $j=1,\dots,N$, and define $g\in C^1(\R^N)$ by $$\label{eq31} g(y) = f(\sigma y),\qquad (y\in\R^N).$$ Then $$\label{eq32} (\grad g)(y) = \sigma(\grad f)(\sigma y),\qquad (y\in\R^N).$$ Since $Y_1,\dots,Y_N$ are independent standard Gaussian distributed random variables, we have from Proposition \[concentration\] that $$\label{eq33} \V\big\{g(Y_1,\dots,Y_N)\big\}\le\E\big\{\|(\grad g)(Y_1,\dots,Y_N)\|^2\big\}.$$ Since $Z_j=\sigma Y_j$, $j=1,\dots,N$, it follows from , , and that $ \displaystyle{\V\big\{f(Z_1,\dots,Z_N)\big\} \le\sigma^2\E\big\{\|(\grad f)(Z_1,\dots,Z_N)\|^2\big\}}. $
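As with Lemma \[Gauss trick\], Corollary \[cor3-2\] is easy to illustrate numerically. The following Python sketch (again assuming only `numpy`, with an arbitrary smooth, polynomially bounded test function $f$) estimates both sides of the inequality by Monte Carlo; the empirical variance is indeed dominated by the Poincaré bound. The sketch is only an illustration and is not used anywhere below.

```python
import numpy as np

# Monte Carlo illustration of Corollary [cor3-2]: for Z_1,...,Z_N i.i.d. N(0, sigma^2),
#   V{f(Z)} <= sigma^2 E{ ||(grad f)(Z)||^2 }.
# The test function f below is an arbitrary illustrative choice.
rng = np.random.default_rng(2)
N, sigma, samples = 5, 0.8, 1_000_000

def f(z):                     # z has shape (N, samples)
    return np.sin(z[0]) + z[1] * z[2] + 0.5 * z[3] ** 2 + z[4]

def grad_norm_sq(z):          # ||grad f(z)||^2, computed from the exact partials
    return (np.cos(z[0]) ** 2 + z[2] ** 2 + z[1] ** 2
            + z[3] ** 2 + 1.0)

Z = sigma * rng.standard_normal((N, samples))
var_f = np.var(f(Z))
bound = sigma ** 2 * np.mean(grad_norm_sq(Z))
print(var_f, bound)           # the empirical variance is dominated by the bound
```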
\[matrix concentration\] Consider the canonical isomorphism $\Psi\colon\CE_{r,n}\to\R^{rn^2}$ introduced in Remark \[kanonisk iso\]. Consider further independent random matrices $X_1^{(n)},\dots,X_r^{(n)}$ from $\SGRM(n,\frac{1}{n})$. Then $\X=(X_1^{(n)},\dots,X_r^{(n)})$ is a random variable taking values in $\CE_{r,n}$, so that $\Y=\Psi(\X)$ is a random variable taking values in $\R^{rn^2}$. As mentioned in Remark \[kanonisk iso\], it is easily seen that the distribution of $\Y$ on $\R^{rn^2}$ is the product measure $\mu=\nu\otimes\nu\otimes\cdots\otimes\nu$ ($rn^2$ terms), where $\nu$ is the Gaussian distribution with mean $0$ and variance $\frac{1}{n}$. Now, let $\tilde{f}\colon\R^{rn^2}\to\C$ be a $C^1$-function, such that $\tilde{f}$ and $\grad\tilde{f}$ are both polynomially bounded, and consider further the $C^1$-function $f\colon\CE_{r,n}\to\C$ given by $f=\tilde{f}\circ\Psi$. Since $\Psi$ is a linear isometry (i.e., an orthogonal transformation), it follows from Corollary \[cor3-2\] that $$\V\big\{f(\X)\big\}\le \frac 1n \E\big\{\big\|\grad f(\X)\big\|_e^2\big\}. \label{e3.0a}$$ \[tensor estimat\] Let $m,n$ be positive integers[,]{} and assume that $a_1,\dots,a_r\in \Mmsa$ and $w_1,\dots,w_r\in M_n(\C)$. Then $$\Big\|\sum_{i=1}^ra_i\otimes w_i\Big\|_{2,\Tr_m\otimes\Tr_n} \le m^{1/2}\Big\|\sum_{i=1}^ra_i^2\Big\|^{1/2} \Big(\sum_{i=1}^r\|w_i\|_{2,\Tr_n}^2\Big)^{1/2}.$$ We find that $$\begin{split} \big\|\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big\|_{2,\Tr_m\otimes\Tr_n} &\le\textstyle{\sum_{i=1}^r\|a_i\otimes w_i\|_{2,\Tr_m\otimes\Tr_n}} \\[.2cm] &=\textstyle{\sum_{i=1}^r\|a_i\|_{2,\Tr_m}\cdot\|w_i\|_{2,\Tr_n}} \\[.2cm] &\le \big(\textstyle{\sum_{i=1}^r\|a_i\|_{2,\Tr_m}^2}\big)^{1/2} \big(\textstyle{\sum_{i=1}^r\|w_i\|_{2,\Tr_n}^2}\big)^{1/2} \\[.2cm] &=\big(\Tr_m\big(\textstyle{\sum_{i=1}^ra_i^2}\big)\big)^{1/2}\cdot \big(\textstyle{\sum_{i=1}^r\|w_i\|_{2,\Tr_n}^2}\big)^{1/2} \\[.2cm] &\le m^{1/2}\big\|\textstyle{\sum_{i=1}^ra_i^2}\big\|^{1/2}\cdot \big(\textstyle{\sum_{i=1}^r\|w_i\|_{2,\Tr_n}^2}\big)^{1/2}. \end{split}$$ Note, in particular, that if $w_1,\dots,w_r\in\Mnsa$, then Lemma \[tensor estimat\] provides the estimate: $$\big\|\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big\|_{2,\Tr_m\otimes\Tr_n} \le m^{1/2}\big(\textstyle{\sum_{i=1}^r}\|a_i\|^2\big)^{1/2}\cdot \big\|(w_1,\dots,w_r)\big\|_e.$$ \[master ineq\] Let $\lambda$ be a matrix in $M_m(\C)$ such that ${\rm Im}(\lambda)$ is positive definite. Consider further the random matrix $H_n(\lambda)$ introduced in Theorem [\[master eq\]]{} and put $$G_n(\lambda)=\E\big\{H_n(\lambda)\big\}\in M_m(\C).$$ Then $$\Big\|\sum_{i=1}^ra_iG_n(\lambda)a_iG_n(\lambda) +(a_0-\lambda) G_n(\lambda) + \unit_m\Big\| \le \frac{C}{n^2}\big\|({\rm Im}(\lambda))^{-1}\big\|^4,$$ where $C= m^3\|\sum_{i=1}^ra_i^2\|^2$. We put $$K_n(\lambda)=H_n(\lambda)-G_n(\lambda) =H_n(\lambda)-\E\big\{H_n(\lambda)\big\}.$$ Then, by Theorem \[master eq\], we have $$\begin{split} \E\Big\{\sum_{i=1}^ra_i&K_n(\lambda)a_iK_n(\lambda)\Big\} \\ &=\E\Big\{\sum_{i=1}^ra_i\big(H_n(\lambda)-G_n(\lambda)\big) a_i\big(H_n(\lambda)-G_n(\lambda)\big)\Big\} \\ &=\E\Big\{\sum_{i=1}^ra_iH_n(\lambda)a_iH_n(\lambda)\Big\} -\sum_{i=1}^ra_iG_n(\lambda)a_iG_n(\lambda) \\ &=\Big( -(a_0-\lambda) \E\big\{H_n(\lambda)\big\}-\unit_m\Big)- \sum_{i=1}^ra_iG_n(\lambda)a_iG_n(\lambda) \\ &=-\Big(\sum_{i=1}^ra_iG_n(\lambda)a_iG_n(\lambda) + (a_0- \lambda) G_n(\lambda) + \unit_m\Big). \end{split}$$ Hence, we can make the following estimates: $$\begin{aligned} &&\hskip-.75in\Big\|\sum_{i=1}^ra_iG_n(\lambda)a_iG_n(\lambda) + (a_0-\lambda)G_n(\lambda) + \unit_m\Big\| \\ &&= \Big\|\E\Big\{\sum_{i=1}^ra_iK_n(\lambda)a_iK_n(\lambda)\Big\}\Big\| \\ &&\le \E\Big\{\Big\|\sum_{i=1}^ra_iK_n(\lambda)a_iK_n(\lambda)\Big\|\Big\} \\ &&\le \E\Big\{\Big\|\sum_{i=1}^r a_iK_n(\lambda)a_i\Big\|\cdot \big\|K_n(\lambda)\big\|\Big\}.\end{aligned}$$ Note here that since $a_1,\dots,a_r$ are self-adjoint, the mapping $v\mapsto \sum_{i=1}^ra_iva_i: M_m(\C)\to M_m(\C)$ is completely positive. Therefore, it attains its norm at the unit $\unit_m$, and the norm is $\|\sum_{i=1}^ra_i^2\|$.
Using this in the estimates above, we find that $$\begin{split} \Big\|\sum_{i=1}^ra_iG_n(\lambda)a_iG_n(\lambda)+(a_0-\lambda) G_n(\lambda) + \unit_m\Big\| &\le\Big\|\sum_{i=1}^ra_i^2\Big\|\cdot \E\Big\{\big\|K_n(\lambda)\big\|^2\Big\} \\[.2cm] &\le\Big\|\sum_{i=1}^ra_i^2\Big\|\cdot \E\Big\{\big\|K_n(\lambda)\big\|_{2,\Tr_m}^2\Big\}, \end{split} \label{e3.1}$$ where the last inequality uses that the operator norm of a matrix is always dominated by the Hilbert-Schmidt norm. It remains to estimate $\E\{\|K_n(\lambda)\|_{2,\Tr_m}^2\}$. For this, let $H_{n,j,k}(\lambda)$, ($1\le j,k\le m$) denote the entries of $H_n(\lambda)$; i.e., $$H_n(\lambda)=\sum_{j,k=1}^mH_{n,j,k}(\lambda)e(m,j,k), \label{e3.2}$$ where $e(m,j,k)$, ($1\le j,k\le m$) are the usual $m\times m$ matrix units. Let, correspondingly, $K_{n,j,k}(\lambda)$ denote the entries of $K_n(\lambda)$. Then $K_{n,j,k}(\lambda)=H_{n,j,k}(\lambda)-\E\{H_{n,j,k}(\lambda)\}$, for all $j,k$, so that $\V\{H_{n,j,k}(\lambda)\}=\E\{|K_{n,j,k}(\lambda)|^2\}$. Thus it follows that $$\E\Big\{\big\|K_n(\lambda)\big\|_{2,\Tr_m}^2\Big\} =\E\Big\{\sum_{j,k=1}^m|K_{n,j,k}(\lambda)|^2\Big\} =\sum_{j,k=1}^m\V\big\{H_{n,j,k}(\lambda)\big\}. \label{e3.2d}$$ Note further that by $$\begin{split} H_{n,j,k}(\lambda)&=\Tr_m\big(e(m,k,j)H_n(\lambda)\big) \\[.2cm] &=m\cdot\tr_m\big(e(m,k,j)\cdot(\id_m\otimes\tr_n) \big[(\lambda\otimes\unit_n-S_n)^{-1}\big]\big) \\[.2cm] &=m\cdot\tr_m\otimes\tr_n\big[(e(m,k,j)\otimes\unit_n) (\lambda\otimes\unit_n-S_n)^{-1}\big]. \end{split}$$ For any $j,k$ in $\{1,2,\dots,m\}$, consider next the mapping $f_{n,j,k}\colon\CE_{r,n}\to\C$ given by: $$\begin{gathered} f_{n,j,k}(v_1,\dots,v_r)\\ =m\cdot(\tr_m\otimes\tr_n) \big[(e(m,k,j)\otimes\unit_n)(\lambda\otimes\unit_n -a_0\otimes\unit_n -\textstyle{\sum_{i=1}^ra_i\otimes v_i})^{-1}\big],\end{gathered}$$ for all $v_1,\dots,v_r$ in $\Mnsa$. Note then that $$H_{n,j,k}(\lambda)=f_{n,j,k}(X_1^{(n)},\dots,X_r^{(n)}),$$ for all $j,k$. Using now the “concentration estimate” in Remark \[matrix concentration\], it follows that for all $j,k$, $$\V\big\{H_{n,j,k}(\lambda)\big\}\le \frac 1n \E\Big\{\big\|\grad f_{n,j,k}(X_1^{(n)},\dots,X_r^{(n)})\big\|_e^2\Big\}. \label{e3.2c}$$ For fixed $j,k$ in $\{1,2,\dots,m\}$ and $v=(v_1,\dots,v_r)$ in $\CE_{r,n}$, note that $\grad f_{n,j,k}(v)$ is the vector in $\CE_{r,n}$, characterized by the property that $$\big\langle\grad f_{n,j,k}(v),w\big\rangle_e = \diff f_{n,j,k}(v+tw),$$ for any vector $w=(w_1,\dots,w_r)$ in $\CE_{r,n}$. It follows thus that $$\begin{aligned} \label{e3.2b} \big\|\grad f_{n,j,k}(v)\big\|_e^2& =& \max_{w\in S_1(\CE_{r,n})}\big|\big\langle\grad f_{n,j,k}(v),w\big\rangle_e\big|^2 \\ &=&\max_{w\in S_1(\CE_{r,n})}\Big|\diff f_{n,j,k}(v+tw)\Big|^2. \nonumber\end{aligned}$$ Let $v=(v_1,\dots,v_r)$ be a fixed vector in $\CE_{r,n}$, and put $\Sigma=a_0\otimes\unit_n + \sum_{i=1}^ra_i\otimes v_i$. Let further $w=(w_1,\dots,w_r)$ be a fixed vector in $S_1(\CE_{r,n})$. It follows then by Lemma \[Kaplansky trick\] that $$\begin{aligned} &&\\ \label{e3.2a} &&\hskip-12pt \diff f_{n,j,k}(v+tw)\nonumber \\[.1cm] &&= \diff\!
m\cdot(\tr_m\otimes\tr_n)\nonumber\\[.1cm] &&\quad\cdot \big[(e(m,k,j)\otimes\unit_n)\big(\lambda\otimes\unit_n - a_0\otimes\unit_n -\textstyle{\sum_{i=1}^ra_i\otimes(v_i+tw_i)}\big)^{-1}\big] \nonumber\\[.1cm] &&=m\cdot(\tr_m\otimes\tr_n)\nonumber\\[.1cm] &&\quad\cdot \Big[(e(m,k,j)\otimes\unit_n)\,\diff\!\!\big(\lambda\otimes\unit_n - a_0\otimes\unit_n -\textstyle{\sum_{i=1}^ra_i\otimes(v_i+tw_i)}\big)^{-1}\Big] \nonumber\\[.1cm] &&=m\cdot(\tr_m\otimes\tr_n)\nonumber\\[.1cm] &&\quad\cdot \big[(e(m,k,j)\otimes\unit_n)\big(\lambda\otimes\unit_n-\Sigma\big)^{-1} \big(\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big) \big(\lambda\otimes\unit_n-\Sigma\big)^{-1}\big].\nonumber\end{aligned}$$ Using next the Cauchy-Schwartz inequality for $\Tr_n\otimes\Tr_m$, we find that $$\begin{aligned} &&m^2\big|(\tr_m\otimes\tr_n) \big[e(m,k,j)\otimes\unit_n\label{e3.3} \\[.1cm] &&\qquad \cdot\big(\lambda\otimes\unit_n- \Sigma\big)^{-1} \big(\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big) \big(\lambda\otimes\unit_n-\Sigma\big)^{-1}\big]\big|^2\nonumber \\[.1cm] &&=\frac{1}{n^2}\big|(\Tr_m\otimes\Tr_n) \big[e(m,k,j)\otimes\unit_n\nonumber\\[.1cm] &&\qquad \cdot\big(\lambda\otimes\unit_n- \Sigma\big)^{-1} \big(\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big) \big(\lambda\otimes\unit_n-\Sigma\big)^{-1}\big]\big|^2 \nonumber\\[.1cm] &&\le\frac{1}{n^2}\big\|e(m,j,k) \otimes\unit_n\big\|_{2,\Tr_m\otimes\Tr_n}^2\nonumber\\[.1cm] &&\qquad\cdot \big\|\big(\lambda\otimes\unit_n-\Sigma\big)^{-1} \big(\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big) \big(\lambda\otimes\unit_n-\Sigma\big)^{ -1}\big\|_{2,\Tr_m\otimes\Tr_n}^2 \nonumber\\[.1cm] &&=\frac{1}{n}\big\|\big(\lambda\otimes\unit_n-\Sigma\big)^{-1} \big(\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big) \big(\lambda\otimes\unit_n-\Sigma\big)^{ -1}\big\|_{2,\Tr_m\otimes\Tr_n}^2. \nonumber\end{aligned}$$ Note here that $$\begin{aligned} &&\hskip-12pt\big\|\big(\lambda\otimes\unit_n-\Sigma\big)^{-1} \big(\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big) \big(\lambda\otimes\unit_n-\Sigma\big)^{ -1}\big\|_{2,\Tr_m\otimes\Tr_n}^2 \\[.2cm] &&\qquad\le \big\|\big(\lambda\otimes\unit_n-\Sigma\big)^{-1}\big\|^2\cdot \big\|\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big\|_{2,\Tr_m\otimes\Tr_n}^2\cdot \big\|\big(\lambda\otimes\unit_n-\Sigma\big)^{-1}\big\|^2 \\[.2cm] &&\qquad\le \big\|\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big\|_{2,\Tr_m\otimes\Tr_n}^2 \cdot\big\|\big({\rm Im}(\lambda)\big)^{-1}\big\|^4,\end{aligned}$$ where the last inequality uses Lemma \[imaginaerdel\] and the fact that $\Sigma$ is self-adjoint: $$\begin{aligned} \big\|\big(\lambda\otimes\unit_n-\Sigma\big)^{-1}\big\| &\le& \big\|\big({\rm Im}(\lambda\otimes\unit_n-\Sigma\big)^{-1}\big\| \\ &=&\big\|\big({\rm Im}(\lambda\otimes\unit_n)\big)^{-1}\big\| =\big\|\big({\rm Im}(\lambda)\big)^{-1}\big\|.\end{aligned}$$ Note further that by Lemma \[tensor estimat\], $\|\sum_{i=1}^ra_i\otimes w_i\|_{2,\Tr_m\otimes\Tr_n} \!\le \! m^{1/2}\big\|\textstyle{\sum_{i=1}^ra_i^2}\big\|^{1/2}$, since $w=(w_1,\dots,w_r)\in S_1(\CE_{r,n})$. We conclude thus that $$\begin{gathered} \big\|\big(\lambda\otimes\unit_n-\Sigma\big)^{-1} \big(\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big) \big(\lambda\otimes\unit_n-\Sigma\big)^{ -1}\big\|_{2,\Tr_m\otimes\Tr_n}^2 \\ \le m\big\|\textstyle{\sum_{i=1}^ra_i^2}\big\| \cdot\big\|\big({\rm Im}(\lambda)\big)^{-1}\big\|^4. 
\label{e3.5}\end{gathered}$$ Combining now formulas –, it follows that for any $j,k$ in $\{1,2,\dots,m\}$, any vector $v=(v_1,\dots,v_r)$ in $\CE_{r,n}$ and any unit vector $w=(w_1,\dots,w_r)$ in $\CE_{r,n}$, we have that $$\Big|\diff f_{n,j,k}(v+tw)\Big|^2 \le\frac{m}{n}\big\|\textstyle{\sum_{i=1}^ra_i^2}\big\| \cdot\big\|\big({\rm Im}(\lambda)\big)^{-1}\big\|^4;$$ hence, by , $$\big\|\grad f_{n,j,k}(v)\big\|_e^2 \le \frac{m}{n}\big\|\textstyle{\sum_{i=1}^ra_i^2}\big\| \cdot\big\|\big({\rm Im}(\lambda)\big)^{-1}\big\|^4.$$ Note that this estimate holds at any point $v=(v_1,\dots,v_r)$ in $\CE_{r,n}$. Using this in conjunction with , we may thus conclude that $$\V\big\{H_{n,j,k}(\lambda)\big\}\le \frac{m}{n^2}\big\|\textstyle{\sum_{i=1}^ra_i^2}\big\| \cdot\big\|\big({\rm Im}(\lambda)\big)^{-1}\big\|^4,$$ for any $j,k$ in $\{1,2\ldots,m\}$, and hence, by , $$\E\Big\{\big\|K_n(\lambda)\big\|_{2,\Tr_m}^2\Big\}\le \frac{m^3}{n^2}\big\|\textstyle{\sum_{i=1}^ra_i^2}\big\| \cdot\big\|\big({\rm Im}(\lambda)\big)^{-1}\big\|^4. \label{e3.6}$$ Inserting finally into , we find that $$\begin{gathered} \Big\|\sum_{i=1}^ra_iG_n(\lambda)a_iG_n(\lambda)+(a_0-\lambda) G_n(\lambda) + \unit_m\Big\|\\ \le \frac{m^3}{n^2}\big\|\textstyle{\sum_{i=1}^ra_i^2}\big\|^2 \cdot\big\|\big({\rm Im}(\lambda)\big)^{-1}\big\|^4,\end{gathered}$$ and this is the desired estimate. \[kaederegel\] Let $N$ be a positive integer[,]{} let $I$ be an open interval in $\R$[,]{} and let $t\mapsto a(t)\colon I\to M_N(\C)_{\rm sa}$ be a $C^1$[-]{}function. Consider further a function $\varphi$ in $C^1(\R)$. Then the function $t\mapsto\tr_N[\varphi(a(t))]$ is a $C^1$[-]{}function on $I$[,]{} and $$\frac{\rm d}{{\rm d}t}\tr_N\big[\varphi(a(t))\big] =\tr_N\big[\varphi'(a(t))\cdot a'(t)\big].$$ This is well known. For the reader’s convenience we include a proof: Note first that for any $k$ in $\N$, $$\frac{\rm d}{{\rm d}t}\big(a(t)^k\big) = \sum^{k-1}_{j=0} a(t)^j a'(t)a(t)^{k-j-1}.$$ Hence, by the trace property $\tr_N(xy)=\tr_N(yx)$, we get $$\frac{\rm d}{{\rm d}t} \tr_N\big(a(t)^k\big) = \tr_N\big(k\, a(t)^{k-1}a'(t)\big).$$ Therefore $$\frac{\rm d}{{\rm d}t} \tr_N(p(a(t))) = \tr_N(p'(a(t))a'(t))$$ for all polynomials $p\in\C[X]$. The general case $\varphi\in C^1(\R)$ follows easily from this by choosing a sequence of polynomials $p_n\in\C[X]$, such that $p_n\to\varphi$ and $p'_n\to\varphi'$ uniformly on compact subsets of $\R$, as $n\to\infty$.
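Lemma \[kaederegel\] can also be checked numerically by a finite-difference computation. The Python sketch below (pure `numpy`; the path $a(t)$, the point $t_0$, the step size and the function $\varphi$ are arbitrary illustrative choices) computes both sides of the identity and is included only as an illustration.

```python
import numpy as np

# Finite-difference check of Lemma [kaederegel]:
#   d/dt tr_N[phi(a(t))] = tr_N[phi'(a(t)) a'(t)],
# for a smooth self-adjoint path a(t) and phi(x) = x^3 + sin(x) (illustrative choices).
rng = np.random.default_rng(3)
N = 6
b = rng.standard_normal((N, N)); b = (b + b.T) / 2     # a(0)
c = rng.standard_normal((N, N)); c = (c + c.T) / 2     # a'(t) (constant derivative)
a = lambda t: b + t * c

def apply(func, x):            # spectral mapping: func applied to a self-adjoint matrix
    w, u = np.linalg.eigh(x)
    return (u * func(w)) @ u.conj().T

phi  = lambda x: x ** 3 + np.sin(x)
dphi = lambda x: 3 * x ** 2 + np.cos(x)

t0, h = 0.4, 1e-6
numeric  = (np.trace(apply(phi, a(t0 + h))) - np.trace(apply(phi, a(t0 - h)))) / (2 * h) / N
analytic = np.trace(apply(dphi, a(t0)) @ c) / N
print(numeric.real, analytic.real)    # agree to several decimal places
```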
\[diff-kvotient trick\] Let $a_0,a_1,\dots,a_r$ be matrices in $\Mmsa$ and put as in $$S_n=a_0\otimes\unit_n+\sum_{i=1}^ra_i\otimes X_i^{(n)}.$$ Let further $\varphi\colon\R\to\C$ be a $C^1$-function with compact support[,]{} and consider the random matrices $\varphi(S_n)$ and $\varphi'(S_n)$ obtained by applying the spectral mapping associated to the self[-]{}adjoint [(]{}random[)]{} matrix $S_n$. We then have[:]{} $$\V\big\{(\tr_m\otimes\tr_n)[\varphi(S_n)]\big\} \le \frac{1}{n^2}\Big\|\sum_{i=1}^ra_i^2\Big\|^2 \E\big\{(\tr_m\otimes\tr_n)\big[|\varphi'|^2(S_n)\big]\big\}.$$ Consider the mappings $g\colon\CE_{r,n}\to M_{nm}(\C)_{\rm sa}$ and $f\colon\CE_{r,n}\to\C$ given by $$g(v_1,\dots,v_r)=a_0\otimes\unit_n+\sum_{i=1}^ra_i\otimes v_i, \qquad (v_1,\dots,v_r\in\Mnsa),$$ and $$f(v_1,\dots,v_r)=(\tr_m\otimes\tr_n)\big[\varphi(g(v_1,\dots,v_r))\big], \qquad (v_1,\dots,v_r\in\Mnsa).$$ Note then that $S_n=g(X_1^{(n)},\dots,X_r^{(n)})$ and that $$(\tr_m\otimes\tr_n)[\varphi(S_n)]=f(X_1^{(n)},\dots,X_r^{(n)}).$$ Note also that $f$ is a bounded function on $\CE_{r,n}$, and, by Lemma \[kaederegel\], it has bounded continuous partial derivatives. Hence, we obtain from in Remark \[matrix concentration\] that $$\V\big\{(\tr_m\otimes\tr_n)[\varphi(S_n)]\big\}\le \frac{1}{n}\E\Big\{\big\|\grad f(X_1^{(n)},\dots,X_r^{(n)})\big\|_e^2\Big\}. \label{e3.7}$$ Recall next that for any $v$ in $\CE_{r,n}$, $\grad f(v)$ is the vector in $\CE_{r,n}$, characterized by the property that $$\big\langle\grad f(v),w\big\rangle_e = \diff f(v+tw),$$ for any vector $w=(w_1,\dots,w_r)$ in $\CE_{r,n}$. It follows thus that $$\big\|\grad f(v)\big\|_e^2 = \max_{w\in S_1(\CE_{r,n})}\big|\big\langle\grad f(v),w\big\rangle_e\big|^2 =\max_{w\in S_1(\CE_{r,n})}\Big|\diff f(v+tw)\Big|^2, \label{e3.8}$$ at any point $v=(v_1,\dots,v_r)$ of $\CE_{r,n}$. Now, let $v=(v_1,\dots,v_r)$ be a fixed point in $\CE_{r,n}$ and let $w=(w_1,\dots,w_r)$ be a fixed point in $S_1(\CE_{r,n})$. By Lemma \[kaederegel\], we have then that $$\begin{split} \diff f(v+tw) &= \diff(\tr_m\otimes\tr_n)\big[\varphi(g(v+tw))\big] \\[.2cm] &=(\tr_m\otimes\tr_n)\Big[\varphi'(g(v))\cdot\diff g(v+tw)\Big] \\[.2cm] &=(\tr_m\otimes\tr_n)\big[\varphi'(g(v))\cdot \textstyle{\sum_{i=1}^ra_i\otimes w_i}\big]. \end{split}$$ Using then the Cauchy-Schwartz inequality for $\Tr_m\otimes\Tr_n$, we find that $$\begin{split} \Big|\diff f(v+tw)\Big|^2 &= \frac{1}{m^2n^2}\Big|(\Tr_m\otimes\Tr_n)\big[\varphi'(g(v))\cdot \textstyle{\sum_{i=1}^ra_i\otimes w_i}\big]\Big|^2 \\[.2cm] &\le\frac{1}{n^2m^2}\big\|\overline{\varphi}'(g(v))\big\|_{2,\Tr_m\otimes\Tr_n}^2\cdot \big\|\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big\|_{2,\Tr_m\otimes\Tr_n}^2. \end{split}$$ Note that $$\big\|\overline{\varphi}'(g(v))\big\|_{2,\Tr_m\otimes\Tr_n}^2 =\Tr_m\otimes\Tr_n\big[|\varphi'|^2(g(v))\big] =mn\cdot\tr_m\otimes\tr_n\big[|\varphi'|^2(g(v))\big],$$ and, by Lemma \[tensor estimat\], $$\big\|\textstyle{\sum_{i=1}^ra_i\otimes w_i}\big\|_{2,\Tr_m\otimes\Tr_n}^2 \le m\big\|\textstyle{\sum_{i=1}^r}a_i^2\big\|,$$ since $w$ is a unit vector with respect to $\|\cdot\|_e$. We find thus that $$\Big|\diff f(v+tw)\Big|^2 \le \frac{1}{n}\big\|\textstyle{\sum_{i=1}^r}a_i^2\big\| \tr_m\otimes\tr_n\big[|\varphi'|^2(g(v))\big].$$ Since this estimate holds for any unit vector $w$ in $\CE_{r,n}$, we conclude, using , that $$\big\|\grad f(v)\big\|^2_e\le \frac{1}{n}\big\|\textstyle{\sum_{i=1}^r}a_i^2\big\| \tr_m\otimes\tr_n\big[|\varphi'|^2(g(v))\big],$$ for any point $v$ in $\CE_{r,n}$. Combining this with , we obtain the desired estimate.

Estimation of $\|G_n(\lambda)-G(\lambda)\|$ {#sec4}
===========================================

\[middelvaerdi af norm\] For each $n$ in $\N$[,]{} let $X_n$ be a random matrix in $\SGRM(n,\frac{1}{n})$. Then $$\E\big\{\|X_n\|\big\}\le 2+2\sqrt{\frac{\log(2n)}{2n}}, \qquad (n\in\N).
\label{e4.0}$$ In particular[,]{} it follows that $$\E\big\{\|X_n\|\big\}\le 4, \label{e4.0a}$$ for all $n$ in $\N$. In [@HT1 Proof of Lemma 3.3] it was proved that for any $n$ in $\N$ and any positive number $t$, we have $$\E\big\{\Tr_n(\exp(tX_n))\big\}\le n\exp\big(2t+\textstyle{\frac{t^2}{2n}}\big). \label{e4.1}$$ Let $\lambda_{\max}(X_n)$ and $\lambda_{\min}(X_n)$ denote the largest and smallest eigenvalue of $X_n$ as functions of $\omega\in\Omega$. Then $$\begin{split} \exp(t\|X_n\|)&=\max\{\exp(t\lmax(X_n)),\exp(-t\lmin(X_n))\} \\[.2cm] &\le\exp(t\lmax(X_n))+\exp(-t\lmin(X_n))\\[.2cm] &\le \Tr_n\big(\exp(tX_n)+\exp(-tX_n)\big). \end{split}$$ Using this in connection with Jensen’s inequality, we find that $$\begin{aligned} \exp\big(t\E\{\|X_n\|\}\big)&\le& \E\big\{\exp(t\|X_n\|)\big\} \label{e4.2}\\[.2cm] &\le&\E\big\{\Tr_n(\exp(tX_n))\big\} + \E\big\{\Tr_n(\exp(-tX_n))\big\} \nonumber\\[.2cm] &=&2\E\big\{\Tr_n(\exp(tX_n))\big\}, \nonumber\end{aligned}$$ where the last equality is due to the fact that $-X_n\in\SGRM(n,\frac{1}{n})$ too. Combining and we obtain the estimate $$\exp\big(t\E\{\|X_n\|\}\big)\le 2n\exp\big(2t+\textstyle{\frac{t^2}{2n}}\big),$$ and hence, after taking logarithms and dividing by $t$, $$\E\{\|X_n\|\}\le \frac{\log(2n)}{t}+2+\frac{t}{2n}. \label{e4.3}$$ This estimate holds for all positive numbers $t$. As a function of $t$, the right-hand side of attains its minimal value at $t_0=\sqrt{2n\log(2n)}$ and the minimal value is $2+2\sqrt{\log(2n)/2n}$. Combining this with we obtain . The estimate follows subsequently by noting that the function $t\mapsto\log(t)/t$ ($t>0$) attains its maximal value at $t=\e$, and thus $2+2\sqrt{\log(t)/t}\le 2+2\sqrt{1/\e}\approx 3.21$ for all positive numbers $t$. In the following we consider a fixed positive integer $m$ and fixed self-adjoint matrices $a_0,\dots,a_r$ in $\Mmsa$. We consider further, for each positive integer $n$, independent random matrices $X_1^{(n)},\dots,X_r^{(n)}$ in $\SGRM(n,\frac{1}{n})$. As in Sections \[sec2\] and \[sec3\], we define $$S_n=a_0\otimes\unit_n + \sum_{i=1}^ra_i\otimes X_i^{(n)},$$ and, for any matrix $\lambda$ in $M_m(\C)$ such that ${\rm Im}(\lambda)$ is positive definite, we put $$H_n(\lambda)=(\id_m\otimes\tr_n) \big[(\lambda\otimes\unit_n-S_n)^{-1}\big],$$ and $$G_n(\lambda)=\E\{H_n(\lambda)\}.$$ \[estimat1\] Let $\lambda$ be a matrix in $M_m(\C)$ such that ${\rm Im}(\lambda)$ is positive definite. Then $G_n(\lambda)$ is invertible and $$\big\|G_n(\lambda)^{-1}\big\|\le \big(\|\lambda\|+K\big)^2\big\|(\im\lambda)^{-1}\big\|,$$ where $K= \|a_0\|+ 4\sum_{i=1}^r\|a_i\|$. We note first that $$\begin{split} \im&\big((\lambda\otimes\unit_n-S_n)^{-1}\big)\\[.2cm] &=\frac{1}{2\i}\big((\lambda\otimes\unit_n-S_n)^{-1} -(\lambda^*\otimes\unit_n-S_n)^{-1}\big) \\[.2cm] &=\frac{1}{2\i}\big((\lambda\otimes\unit_n-S_n)^{-1} \big((\lambda^*\otimes\unit_n-S_n)-(\lambda\otimes\unit_n-S_n)\big) (\lambda^*\otimes\unit_n-S_n)^{-1}\big) \\[.2cm] &=-(\lambda\otimes\unit_n-S_n)^{-1}({\rm Im}(\lambda)\otimes\unit_n) (\lambda^*\otimes\unit_n-S_n)^{-1}.
\end{split}$$ From this it follows that $-{\rm Im}((\lambda\otimes\unit_n-S_n)^{-1})$ is positive definite at any $\omega$ in $\Omega$, and the inverse is given by $$\big(-{\rm Im}((\lambda\otimes\unit_n-S_n)^{-1})\big)^{-1} = (\lambda^*\otimes\unit_n-S_n)((\im\lambda)^{ -1}\otimes\unit_n)(\lambda\otimes\unit_n-S_n).$$ In particular, it follows that $$0\le \big(-{\rm Im}((\lambda\otimes\unit_n-S_n)^{-1})\big)^{-1} \le\big\|\lambda\otimes\unit_n-S_n\big\|^2\big\|(\im\lambda)^{-1}\big\| \cdot\unit_m\otimes\unit_n,$$ and this implies that $$-{\rm Im}\big((\lambda\otimes\unit_n-S_n)^{ -1}\big)\ge\frac{1}{\|\lambda\otimes\unit_n-S_n\|^2\|(\im\lambda)^{ -1}\|} \cdot\unit_m\otimes\unit_n.$$ Since the slice map $\id_m\otimes\tr_n$ is positive, we have thus established that $$\begin{aligned} -\im H_n(\lambda)&\ge& \frac{1}{\|\lambda\otimes\unit_n-S_n\|^2\|(\im\lambda)^{-1}\|} \cdot\unit_m \\ &\ge& \frac{1}{(\|\lambda\|+\|S_n\|)^2\|(\im\lambda)^{-1}\|} \cdot\unit_m,\end{aligned}$$ so that $$-\im G_n(\lambda) = \E\{-\im H_n(\lambda)\}\ge\frac{1}{\|(\im\lambda)^{-1}\|} \E\Big\{\frac{1}{(\|\lambda\|+\|S_n\|)^2}\Big\}\unit_m.$$ Note here that the function $t\mapsto\frac{1}{(\|\lambda\|+t)^2}$ is convex, so applying Jensen’s inequality to the random variable $\|S_n\|$ yields the estimate $$\E\Big\{\frac{1}{(\|\lambda\|+\|S_n\|)^2}\Big\}\ge \frac{1}{(\|\lambda\|+\E\{\|S_n\|\})^2},$$ where $$\begin{aligned} \E\{\|S_n\|\}&\le&\E\Big\{ \|a_0\| + \sum_{i=1}^r\|a_i\|\cdot\|X_i^{(n)}\|\Big\}\\ & =& \|a_0\| + \sum_{i=1}^r\|a_i\|\cdot\E\big\{\|X_i^{(n)}\|\big\} \le \|a_0\| + 4\sum_{i=1}^r\|a_i\|,\end{aligned}$$ by application of Lemma \[middelvaerdi af norm\]. Putting $K=\|a_0\|+4\sum_{i=1}^r\|a_i\|$, we may thus conclude that $$-\im G_n(\lambda) \ge \frac{1}{\|(\im\lambda)^{-1}\|}\frac{1}{(\|\lambda\|+K)^2}\unit_m.$$ By Lemma \[imaginaerdel\], this implies that $G_n(\lambda)$ is invertible and that $$\big\|G_n(\lambda)^{ -1}\big\|\le(\|\lambda\|+K)^2\cdot\big\|(\im\lambda)^{-1}\big\|,$$ as desired. \[kor til estimat1\] Let $\lambda$ be a matrix in $M_m(\C)$ such that $\im\lambda$ is positive definite. Then $$\Big\| a_0 + \sum_{i=1}^ra_iG_n(\lambda)a_i + G_n(\lambda)^{-1} - \lambda\Big\| \le \frac{C}{n^2}(K+\|\lambda\|)^2\big\|(\im\lambda)^{-1}\big\|^5, \label{e4.3c}$$ where[,]{} as before[,]{} $C=m^3\|\sum_{i=1}^ra_i^2\|^2$ and $K=\|a_0\| + 4\sum_{i=1}^r\|a_i\|$. Note that $$\begin{gathered} a_0 + \sum_{i=1}^ra_iG_n(\lambda)a_i + G_n(\lambda)^{-1} -\lambda\\ = \Big(\sum_{i=1}^ra_iG_n(\lambda)a_iG_n(\lambda)+(a_0 -\lambda) G_n(\lambda) +\unit_m\Big)G_n(\lambda)^{-1}.\end{gathered}$$ Hence, follows by combining Theorem \[master ineq\] with Proposition \[estimat1\]. In addition to the given matrices $a_0,\dots,a_r$ in $\Mmsa$, we consider next, as replacement for the random matrices $X_1^{(n)},\dots,X_r^{(n)}$, free self-adjoint operators $x_1,\dots,x_r$ in some $C^*$-probability space $(\CB,\tau)$. We assume that $x_1,\dots,x_r$ are identically semi-circular distributed, such that $\tau(x_i)=0$ and $\tau(x_i^2)=1$ for all $i$.
Then put $$\label{eq4-7a} s=a_0\otimes \unit_{\CB} + \sum_{i=1}^ra_i\otimes x_i\in M_m(\C)\otimes\CB.$$ Consider further the subset $\CO$ of $M_m(\C)$, given by $$\begin{aligned} \CO&=&\{\lambda\in M_m(\C)\mid {\rm Im}(\lambda) \ \textrm{is positive definite}\} \label{e4.3a}\\ &= &\{\lambda\in M_m(\C)\mid\lmin(\im\lambda)>0\} \nonumber\end{aligned}$$ and for each positive number $\delta$, put $$\CO_{\delta}=\{\lambda\in\CO\mid \|(\im\lambda)^{-1}\|<\delta\} =\{\lambda\in\CO\mid \lmin(\im\lambda)>\delta^{-1}\}. \label{e4.3b}$$ Note that $\CO$ and $\CO_{\delta}$ are open subsets of $M_m(\C)$. If $\lambda\in\CO$, then it follows from Lemma \[imaginaerdel\] that $\lambda\otimes\unit_{\CB}-s$ is invertible, since $s$ is self-adjoint. Hence, for each $\lambda$ in $\CO$, we may define $$G(\lambda)=\id_m\otimes\tau\big[(\lambda\otimes\unit_{\CB}-s)^{-1}\big].$$ As in the proof of Lemma \[estimat1\], it follows that $G(\lambda)$ is invertible for any $\lambda$ in $\CO$. Indeed, for $\lambda$ in $\CO$, we have $$\begin{split} \im&\big((\lambda\otimes\unit_{\CB}-s)^{-1}\big)\\[.2cm] &=\frac{1}{2\i}\big((\lambda\otimes\unit_{\CB}-s)^{-1} \big((\lambda^*\otimes\unit_{\CB}-s)-(\lambda\otimes\unit_{\CB}-s)\big) (\lambda^*\otimes\unit_{\CB}-s)^{-1}\big) \\[.2cm] &=-(\lambda\otimes\unit_{\CB}-s)^{-1}({\rm Im}(\lambda)\otimes\unit_{\CB}) (\lambda^*\otimes\unit_{\CB}-s)^{-1}, \end{split}$$ which shows that $-{\rm Im}((\lambda\otimes\unit_{\CB}-s)^{-1})$ is positive definite and that $$\begin{split} 0\le\big(-{\rm Im}((\lambda\otimes\unit_{\CB}-s)^{-1})\big)^{-1} &= (\lambda^*\otimes\unit_{\CB}-s)((\im\lambda)^{ -1}\otimes\unit_{\CB})(\lambda\otimes\unit_{\CB}-s) \\[.2cm] &\le\big\|\lambda\otimes\unit_{\CB}-s\big\|^2\big\|(\im\lambda)^{ -1}\big\| \cdot\unit_m\otimes\unit_{\CB}. \end{split}$$ Consequently, $$-{\rm Im}\big((\lambda\otimes\unit_{\CB}-s)^{-1}\big)\ge \frac{1}{\|\lambda\otimes\unit_{\CB}-s\|^2\|(\im\lambda)^{-1}\|} \cdot\unit_m\otimes\unit_{\CB},$$ so that $$-\im G(\lambda)\ge \frac{1}{\|\lambda\otimes\unit_{\CB}-s\|^2\|(\im\lambda)^{-1}\|} \cdot\unit_m.$$ By Lemma \[imaginaerdel\], this implies that $G(\lambda)$ is invertible and that $$\big\|G(\lambda)^{-1}\big\|\le \big\|(\lambda\otimes\unit_{\CB}-s)\big\|^2\big\|(\im\lambda)^{ -1}\big\|.$$ The following lemma shows that the estimate in Corollary \[kor til estimat1\] becomes an exact equation, when $G_n(\lambda)$ is replaced by $G(\lambda)$. \[formel for G(lambda)\] With $\CO$ and $G(\lambda)$ defined as above[,]{} we have that $$a_0 + \sum_{i=1}^ra_iG(\lambda)a_i + G(\lambda)^{-1} = \lambda,$$ for all $\lambda$ in $\CO$. We start by recalling the definition of the R-transform $\CR_s$ of (the distribution of) $s$ with amalgamation over $M_m(\C)$: It can be shown (cf. [@V6]) that the expression $$G(\lambda)=\id_m\otimes\tau\big[(\lambda\otimes\unit_{\CB}-s)^{-1}\big],$$ gives rise to a well-defined and bijective mapping on a region of the form $$\CU_{\delta}=\big\{\lambda\in M_m(\C)\mid \lambda \ \textrm{is invertible and} \ \|\lambda^{-1}\|<\delta\big\},$$ where $\delta$ is a (suitably small) positive number. 
Denoting by $G^\brinv$ the inverse of the mapping $\lambda\mapsto G(\lambda)$ $(\lambda\in\CU_{\delta})$, the R-transform $\CR_s$ of $s$ with amalgamation over $M_m(\C)$ is defined as $$\CR_s(\rho)=G^\brinv(\rho)-\rho^{-1}, \qquad (\rho\in G(\CU_{\delta})).$$ In [@Le] it was proved that $$\CR_s(\rho)=a_0 + \sum_{i=1}^ra_i\rho a_i,$$ so that $$G^\brinv(\rho)=a_0 + \sum_{i=1}^ra_i\rho a_i + \rho^{-1}, \qquad (\rho\in G(\CU_{\delta}));$$ hence $$a_0 + \sum_{i=1}^ra_i G(\lambda)a_i + G(\lambda)^{-1} = \lambda, \qquad (\lambda\in\CU_{\delta}). \label{e4.4}$$ Note now that by Lemma \[imaginaerdel\], the set $\CO_{\delta}$, defined in , is a subset of $\CU_{\delta}$, and hence holds, in particular, for $\lambda$ in $\CO_{\delta}$. Since $\CO_{\delta}$ is an open, nonempty subset of $\CO$ (defined in ) and since $\CO$ is a nonempty connected (even convex) subset of $M_m(\C)$, it follows then from the principle of uniqueness of analytic continuation (for analytical functions in $m^2$ complex variables) that formula actually holds for all $\lambda$ in $\CO$, as desired. For $n$ in $\N$ and $\lambda$ in the set $\CO$ (defined in ), we introduce further the following notation: $$\begin{aligned} \Lambda_n(\lambda)&=&a_0 + \sum_{i=1}^ra_iG_n(\lambda)a_i+G_n(\lambda)^{-1}, \label{e4.4d} \\[.2cm] \varepsilon(\lambda)&=&\frac{1}{\|(\im\lambda)^{-1}\|} \ = \ \lmin(\im\lambda), \label{e4.4c} \\[.2cm] \CO_n'&=&\big\{\lambda\in\CO\bigm | \textstyle{\frac{C}{n^2}(K+\|\lambda\|)^2\varepsilon(\lambda)^{ -6}<\frac{1}{2}}\big\}, \label{e4.4b}\end{aligned}$$ where, as before, $C=m^3\|\sum_{i=1}^ra_i^2\|^2$ and $K=\|a_0\| + 4\sum_{i=1}^r\|a_i\|$. Note that $\CO'_n$ is an open subset of $M_m(\C)$, since the mapping $\lambda\mapsto\varepsilon(\lambda)$ is continuous on $\CO$. With the above notation we have the following: \[formel for Gn(lambda)\] For any positive integer $n$ and any matrix $\lambda$ in $\CO_n',$ $$\im\Lambda_n(\lambda)\ge\frac{\varepsilon(\lambda)}{2}\unit_m. \label{e4.4a}$$ In particular[,]{} $\Lambda_n(\lambda)\in\CO$. Moreover $$a_0 + \sum_{i=1}^ra_iG(\Lambda_n(\lambda))a_i + G(\Lambda_n(\lambda))^{-1} = a_0 + \sum_{i=1}^ra_iG_n(\lambda)a_i + G_n(\lambda)^{-1}, \label{e4.5}$$ for any $\lambda$ in $\CO_n'$. Note that the right-hand side of is nothing else than $\Lambda_n(\lambda)$. Therefore, follows from Lemma \[formel for G(lambda)\], once we have established that $\Lambda_n(\lambda)\in\CO$ for all $\lambda$ in $\CO_n'$. This, in turn, is an immediate consequence of . It suffices thus to verify . Note first that for any $\lambda$ in $\CO$, we have by Corollary \[kor til estimat1\] that $$\begin{split} \big\|\im\Lambda_n(\lambda)-\im\lambda\big\| \le \big\|\Lambda_n(\lambda)-\lambda\big\| &=\Big\|a_0 + \sum_{i=1}^ra_iG_n(\lambda)a_i + G_n(\lambda)^{-1} -\lambda\Big\| \\[.2cm] &\le \frac{C}{n^2}(K+\|\lambda\|)^2\varepsilon(\lambda)^{-5}. \end{split}$$ In particular, $\im\Lambda_n(\lambda)-\im\lambda\ge -\frac{C}{n^2}(K+\|\lambda\|)^2\varepsilon(\lambda)^{-5}\unit_m$, and since also $\im\lambda\ge\varepsilon(\lambda)\unit_m$, by definition of $\varepsilon(\lambda)$, we conclude that $$\im\Lambda_n(\lambda) = \im\lambda + (\im\Lambda_n(\lambda)-\im\lambda) \ge\big(\varepsilon(\lambda) -\textstyle{\frac{C}{n^2}}(K+\|\lambda\|)^2\varepsilon(\lambda)^{ -5}\big)\unit_m, \label{e4.6}$$ for any $\lambda$ in $\CO$. Assume now that $\lambda\in\CO_n'$.
Then $\frac{C}{n^2}(K+\|\lambda\|)^2\varepsilon(\lambda)^{ -5}<\frac{1}{2}\varepsilon(\lambda)$, and inserting this in , we find that $$\im\Lambda_n(\lambda)\ge\textstyle{\frac{1}{2}}\varepsilon(\lambda)\unit _m,$$ as desired. \[sammenhaeng mellem G og Gn\] Let $n$ be a positive integer. Then with $G$[,]{} $G_n$ and $\CO_n'$ as defined above[,]{} we have that $$G(\Lambda_n(\lambda))=G_n(\lambda),$$ for all $\lambda$ in $\CO'_n$. Note first that the functions $\lambda\mapsto G_n(\lambda)$ and $\lambda\mapsto G(\Lambda_n(\lambda))$ are both analytical functions (of $m^2$ complex variables) defined on $\CO_n'$ and taking values in $M_m(\C)$. Applying the principle of uniqueness of analytic continuation, it suffices thus to prove the following two assertions: - The set $\CO_n'$ is an open connected subset of $M_m(\C)$. - The formula $G(\Lambda_n(\lambda))=G_n(\lambda)$ holds for all $\lambda$ in some open, nonempty subset $\CO_n''$ of $\CO_n'$. [*Proof of*]{} (a). We have already noted that $\CO_n'$ is open. Consider the subset $I_n$ of $\R$ given by: $$I_n=\big\{t\in{}]0,\infty[ \bigm | \textstyle{\frac{C}{n^2}}(K+t)^2t^{-6}<\frac{1}{2}\big\},$$ with $C$ and $K$ as above. Note that since the function $t\mapsto(K+t)^2t^{-6}$ $(t>0)$ is continuous and strictly decreasing, $I_n$ has the form: $I_n={}]t_n,\infty[$, where $t_n$ is uniquely determined by the equation: $\textstyle{\frac{C}{n^2}}(K+t)^2t^{-6}=\frac{1}{2}$. Note further that for any $t$ in $I_n$, $\i t\unit_m\in\CO_n'$, and hence the set $$\CI_n=\{\i t\unit_m\mid t\in I_n\},$$ is an arc-wise connected subset of $\CO_n'$. To prove (a), it suffices then to show that any $\lambda$ in $\CO_n'$ is connected to some point in $\CI_n$ via a continuous curve $\gamma_{\lambda}$, which is entirely contained in $\CO_n'$. So let $\lambda$ from $\CO_n'$ be given, and note that $0\le\varepsilon(\lambda)=\lmin(\im\lambda)\le\|\lambda\|$. Thus, $$\frac{C}{n^2}(K+\varepsilon(\lambda))^2\varepsilon(\lambda)^{-6}\le \frac{C}{n^2}(K+\|\lambda\|)^2\varepsilon(\lambda)^{-6}<\frac{1}{2},$$ and therefore $\varepsilon(\lambda)\in I_n$ and $\i\varepsilon(\lambda)\unit_m\in\CI_n$. Now, let $\gamma_{\lambda}\colon[0,1]\to M_m(\C)$ be the straight line from $\i\varepsilon(\lambda)\unit_m$ to $\lambda$, i.e., $$\gamma_{\lambda}(t)=(1-t)\i\varepsilon(\lambda)\unit_m+t\lambda, \qquad (t\in[0,1]).$$ We show that $\gamma_{\lambda}(t)\in\CO_n'$ for all $t$ in $[0,1]$. Note for this that $$\im\gamma_{\lambda}(t) = (1-t)\varepsilon(\lambda)\unit_m+t\im\lambda, \qquad (t\in[0,1]),$$ so obviously $\gamma_{\lambda}(t)\in\CO$ for all $t$ in $[0,1]$. Furthermore, if $0\le r_1\le r_2\le\cdots\le r_m$ denote the eigenvalues of ${\rm Im}(\lambda)$, then, for each $t$ in $[0,1]$, $(1-t)\varepsilon(\lambda)+tr_j$ ($j=1,2,\dots,m)$ are the eigenvalues of $\im\gamma_{\lambda}(t)$. In particular, since $r_1=\varepsilon(\lambda)$, $\varepsilon(\gamma_{\lambda}(t))=\lmin(\im\gamma_{\lambda}(t)) =\varepsilon(\lambda)$ for all $t$ in $[0,1]$. Note also that $$\|\gamma_{\lambda}(t)\|\le (1-t)\varepsilon(\lambda)+t\|\lambda\| \le (1-t)\|\lambda\|+t\|\lambda\|=\|\lambda\|,$$ for all $t$ in $[0,1]$. Altogether, we conclude that $$\frac{C}{n^2}(K+\|\gamma_{\lambda}(t)\|)^2\varepsilon(\gamma_{\lambda}(t ))^{-6} \le \frac{C}{n^2}(K+\|\lambda\|)^2\varepsilon(\lambda)^{-6} < \frac{1}{2},$$ and hence $\gamma_{\lambda}(t)\in\CO_n'$ for all $t$ in $[0,1]$, as desired. Consider, for the moment, a fixed matrix $\lambda$ from $\CO_n'$, and put $\zeta=G_n(\lambda)$ and $\upsilon=G(\Lambda_n(\lambda))$. 
Then Lemma \[formel for Gn(lambda)\] asserts that $$a_0 + \sum_{i=1}^ra_i\upsilon a_i + \upsilon^{-1} = a_0 + \sum_{i=1}^ra_i\zeta a_i + \zeta^{-1},$$ so that $$\upsilon\Big(\sum_{i=1}^ra_i\upsilon a_i + \upsilon^{-1}\Big)\zeta =\upsilon\Big( \sum_{i=1}^ra_i\zeta a_i + \zeta^{-1}\Big)\zeta;$$ hence $$\sum_{i=1}^r\upsilon a_i(\upsilon-\zeta)a_i\zeta = \upsilon-\zeta.$$ In particular, it follows that $$\Big(\|\upsilon\|\|\zeta\|\sum_{i=1}^r\|a_i\|^2\Big)\|\upsilon-\zeta\| \ge \|\upsilon-\zeta\|. \label{e4.7}$$ Note here that by Lemma \[imaginaerdel\], $$\begin{aligned} \|\zeta\|&=&\|G_n(\lambda)\|= \big\|\id_m\otimes\tr_n\big[(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\| \label{e4.8}\\[.2cm] &\le& \big\|(\lambda\otimes\unit_n-S_n)^{-1}\big\| \le\big\|(\im\lambda)^{-1}\big\|=\frac{1}{\varepsilon(\lambda)}. \nonumber\end{aligned}$$ Similarly, it follows that $$\|\upsilon\|=\|G(\Lambda_n(\lambda))\|\le \big\|(\Lambda_n(\lambda)\otimes\unit_{\CB}-s)^{-1}\big\| \le \big\|(\im\Lambda_n(\lambda))^{ -1}\big\|\le\frac{2}{\varepsilon(\lambda)}, \label{e4.9}$$ where the last inequality follows from in Lemma \[formel for Gn(lambda)\]. Combining –, it follows that $$\Big(\frac{2}{\varepsilon(\lambda)^2}\sum_{i=1}^r\|a_i\|^2\Big)\|\upsilon-\zeta\| \ge \|\upsilon-\zeta\|. \label{e4.10}$$ This estimate holds for all $\lambda$ in $\CO_n'$. If $\lambda$ satisfies, in addition, that $\frac{2}{\varepsilon(\lambda)^2}\sum_{i=1}^r\|a_i\|^2<1$, then implies that $\zeta=\upsilon$, i.e., $G_n(\lambda)=G(\Lambda_n(\lambda))$. Thus, if we put $$\CO_n''=\big\{\lambda\in\CO_n'\bigm | \varepsilon(\lambda)>\textstyle{\sqrt{2\sum_{i=1}^r\|a_i\|^2}}\big\},$$ we have established that $G_n(\lambda)=G(\Lambda_n(\lambda))$ for all $\lambda$ in $\CO_n''$. Since $\varepsilon(\lambda)$ is a continuous function of $\lambda$, $\CO_n''$ is clearly an open subset of $\CO_n'$, and it remains to check that $\CO_n''$ is nonempty. Note, however, that for any positive number $t$, the matrix $\i t\unit_m$ is in $\CO$ and it satisfies that $\|\i t\unit_m\|=\varepsilon(\i t\unit_m)=t$. From this, it follows easily that $\i t\unit_m\in\CO_n''$ for all sufficiently large positive numbers $t$. This concludes the proof of (b) and hence the proof of Proposition \[sammenhaeng mellem G og Gn\]. \[estimat af Gn(lambda)-G(lambda)\] Let $r,m$ be positive integers[,]{} let $a_0,a_1,\dots,a_r$ be self[-]{}adjoint matrices in $M_m(\C)$ and[,]{} for each positive integer $n$[,]{} let $X_1^{(n)},\dots,X_r^{(n)}$ be independent random matrices in $\SGRM(n,\frac{1}{n})$. Consider further free self[-]{}adjoint identically semi[-]{}circular distributed operators $x_1,\dots,x_r$ in some $C^*$[-]{}probability space $(\CB,\tau)$[,]{} and normalized such that $\tau(x_i)=0$ and $\tau(x_i^2)=1$ for all $i$.
Then put as in and [:]{} $$\begin{aligned} s&=&a_0 \otimes\unit_{\CB} + \sum_{i=1}^ra_i\otimes x_i\in M_m(\C)\otimes\CB\\ S_n &=&a_0\otimes\unit_n + \sum_{i=1}^ra_i\otimes X_i^{(n)}\in M_m(\C)\otimes M_n(\C), \quad (n\in\N),\end{aligned}$$ and for $\lambda$ in $\CO=\{\lambda\in M_m(\C)\mid {\rm Im}(\lambda) \ \textrm{is positive definite}\}$ define $$\begin{aligned} G_n(\lambda)&=&\E\big\{ (\id_m\otimes\tr_n) \big[(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\}\\ G(\lambda)&=&(\id_m\otimes\tau) \big[(\lambda\otimes\unit_{\CB}-s)^{-1}\big].\end{aligned}$$ Then[,]{} for any $\lambda$ in $\CO$ and any positive integer $n$[,]{} we have $$\big\|G_n(\lambda)-G(\lambda)\big\|\le \frac{4C}{n^2}(K+\|\lambda\|)^2\big\|(\im\lambda)^{-1}\big\|^7, \label{e4.11}$$ where $C=m^3\|\sum_{i=1}^r a_i^2\|^2$ and $K=\|a_0\| + 4\sum_{i=1}^r\|a_i\|$. Let $n$ in $\N$ be fixed, and assume first that $\lambda$ is in the set $\CO_n'$ defined in . Then, by Proposition \[sammenhaeng mellem G og Gn\], we have $$\begin{split} \big\|G_n(\lambda)-G(\lambda)\big\| &= \big\|G(\Lambda_n(\lambda))-G(\lambda)\big\| \\[.2cm] &=\big\|\id_m\otimes\tau\big[(\Lambda_n(\lambda)\otimes\unit_{\CB}- s)^{-1} -(\lambda\otimes\unit_{\CB}-s)^{-1}\big]\big\| \\[.2cm] &\le\big\|(\Lambda_n(\lambda)\otimes\unit_{\CB}-s)^{-1} -(\lambda\otimes\unit_{\CB}-s)^{-1}\big\|. \end{split}$$ Note here that $$\begin{gathered} (\Lambda_n(\lambda)\otimes\unit_{\CB}-s)^{-1} -(\lambda\otimes\unit_{\CB}-s)^{-1} \\[.2cm] =(\Lambda_n(\lambda)\otimes\unit_{\CB}-s)^{-1} \big((\lambda-\Lambda_n(\lambda))\otimes\unit_{\CB}\big) (\lambda\otimes\unit_{\CB}-s)^{-1},\end{gathered}$$ and therefore, taking Lemma \[imaginaerdel\] into account, $$\begin{split} \big\|G_n(\lambda)-G(\lambda)\big\|&\le \big\|(\Lambda_n(\lambda)\otimes\unit_{\CB}-s)^{-1}\big\|\cdot \big\|\lambda-\Lambda_n(\lambda)\big\|\cdot \big\|(\lambda\otimes\unit_{\CB}-s)^{-1}\big\| \\[.2cm] &\le \big\|(\im\Lambda_n(\lambda))^{-1}\big\|\cdot \big\|\lambda-\Lambda_n(\lambda)\big\|\cdot \big\|(\im\lambda)^{-1}\big\|. \end{split}$$ Now, $\|(\im\lambda)^{-1}\|=1/\varepsilon(\lambda)$ (cf.  ), and hence, by in Lemma \[formel for Gn(lambda)\], $\|(\im\Lambda_n(\lambda))^{-1}\|\le2/\varepsilon(\lambda)= 2\|(\im\lambda)^{-1}\|$. Furthermore, by and Corollary \[kor til estimat1\], $$\begin{aligned} \big\|\Lambda_n(\lambda)-\lambda\big\|& =& \Big\|a_0+ \sum_{i=1}^ra_iG_n(\lambda)a_i+G_n(\lambda)^{-1}-\lambda\Big\| \\ &\le& \frac{C}{n^2}(K+\|\lambda\|)^2\big\|(\im\lambda)^{-1}\big\|^5.\end{aligned}$$ Thus, we conclude that $$\big\|G_n(\lambda)-G(\lambda)\big\|\le \frac{2C}{n^2}(K+\|\lambda\|)^2\big\|(\im\lambda)^{-1}\big\|^7,$$ which shows, in particular, that holds for all $\lambda$ in $\CO_n'$. Assume next that $\lambda\in\CO\setminus\CO_n'$, so that $$\frac{C}{n^2}(K+\|\lambda\|)^2\big\|(\im\lambda)^{-1}\big\|^6= \frac{C}{n^2}(K+\|\lambda\|)^2\varepsilon(\lambda)^{-6}\ge\frac{1}{2}. \label{e4.12}$$ By application of Lemma \[imaginaerdel\], it follows that $$\big\|G(\lambda)\big\|\le \big\|(\lambda\otimes\unit_{\CB}-s)^{-1}\big\|\le \big\|(\im\lambda)^{-1}\big\|, \label{e4.13}$$ and similarly we find that $$\big\|\id_m\otimes\tr_n\big[(\lambda\otimes\unit_n-S_n(\omega))^{ -1}\big]\big\| \le\big\|(\im\lambda)^{-1}\big\|,$$ at all points $\omega$ in $\Omega$. Hence, after integrating with respect to $\omega$ and using Jensen’s inequality, $$\|G_n(\lambda)\|\le \E\big\{\big\|\id_m\otimes\tr_n \big[(\lambda\otimes\unit_n-S_n)^{-1}\big]\big\|\big\} \le\big\|(\im\lambda)^{-1}\big\|.
\label{e4.14}$$ Combining –, we find that $$\begin{gathered} \big\|G_n(\lambda)-G(\lambda)\big\|\le 2\big\|(\im\lambda)^{-1}\big\| \\ =\frac{1}{2}\cdot4\big\|(\im \lambda)^{-1}\big\| \le \frac{4C}{n^2}(K+\|\lambda\|)^2\big\|(\im\lambda)^{-1}\big\|^7,\end{gathered}$$ verifying that holds for $\lambda$ in $\CO\setminus\CO_n'$ too. The spectrum of $S_n$ {#sec5} ===================== Let $r,m\in\N$, let $a_0,\dots,a_r\in M_m(\C)_{\rm sa}$ and for each $n\in\N$, let $X_1^{(n)},\dots\break\dots,X_r^{(n)}$ be $r$ independent random matrices in $\SGRM(n,\frac1n)$. Let further $x_1,\dots,x_r$ be a semi-circular family in a $C^*$-probability space $(\CB,\tau)$, and define $S_n$, $s$, $G_n(\lambda)$ and $G(\lambda)$ as in Theorem \[estimat af Gn(lambda)-G(lambda)\]. \[cor5-2\] For $\lambda\in\C$ with $\im\lambda >0$[,]{} put $$\label{eq5-6} g_n(\lambda) = \E\big\{(\tr_m\otimes\tr_n)[(\lambda\unit_{mn}-S_n)^{-1}]\big\}$$ and $$\label{eq5-7} g(\lambda) = (\tr_m\otimes\tau)\big[(\lambda(\unit_{m}\otimes\unit_{\CB})-s)^{ -1}\big].$$ Then $$\label{eq5-8} |g_n(\lambda)-g(\lambda)| \le \frac{4C}{n^2} \big(K+|\lambda|\big)^2(\im\lambda)^{-7}$$ where $C$[,]{} $K$ are the constants defined in Theorem [\[estimat af Gn(lambda)-G(lambda)\].]{} This is immediate from Theorem \[estimat af Gn(lambda)-G(lambda)\] because $$g_n(\lambda) = \tr_m(G_n(\lambda \unit_m))$$ and $ \displaystyle{g(\lambda) = \tr_m(G(\lambda \unit_m)). } $ Let Prob$(\R)$ denote the set of Borel probability measures on $\R$. We equip Prob$(\R)$ with the weak$^*$-topology given by $C_0(\R)$, i.e., a net $(\mu_\alpha)_{\alpha\in A}$ in Prob$(\R)$ converges in weak$^*$-topology to $\mu\in\mbox{Prob}(\R)$, if and only if $$\lim_\alpha\bigg(\int_\R\varphi\6\mu_\alpha\bigg) = \int_\R \varphi\6\mu$$ for all $\varphi\in C_0(\R)$. Since $S_n$ and $s$ are self-adjoint, there are, by Riesz’ representation theorem, unique probability measures $\mu_n$, $n=1,2,\dots$ and $\mu$ on $\R$, such that $$\begin{aligned} \label{eq5-9} \int_\R \varphi\6\mu_n &=& \E\big\{(\tr_m\otimes\tr_n)\varphi(S_n)\big\}\\ \label{eq5-10} \int_\R \varphi\6\mu &=& (\tr_m\otimes\tau)\varphi(s)\end{aligned}$$ for all $\varphi\in C_0(\R)$. Note that $\mu$ is compactly supported while $\mu_n$, in general, is not compactly supported. \[thm5-3\] Let $S_n$ and $s$ be given by and and let $C=\frac{\pi^2}{8} m^3\|\sum^r_{i=1} a_i^2\|^2$ and $K=\|a_0\|+4\sum^r_{i=1} \|a_i\|$. Then for all $\varphi\in C_c^\infty(\R,\R)$[,]{} $$\label{eq5-11} \E\big\{(\tr_m\otimes\tr_n)\varphi(S_n)\big\} = (\tr_m\otimes\tau)\varphi(s)+R_n$$ where $$\label{eq5-12} |R_n| \le\frac{4C}{315\pi n^2}\int_\R\big|((1+D)^8\varphi)(x)\big|\big(K+2+|x|\big)^2 \6x$$ and $D=\frac{\rm d}{{\rm d}x}$. In particular $R_n=O(\frac{1}{n^2})$ for $n\to\infty$. Let $g_n,g,\mu_n,\mu$ be as in , , and . Then for any complex number $\lambda$, such that ${\rm Im}(\lambda)>0$, we have $$\begin{aligned} \label{eq5-13} g_n(\lambda) &=& \int_\R \frac{1}{\lambda -x}\6\mu_n(x)\\ \label{eq5-14} g(\lambda) &=& \int_\R \frac{1}{\lambda-x}\6\mu(x).\end{aligned}$$ Hence $g_n$ and $g$ are the Stieltjes transforms (or Cauchy transforms, in the terminology of [@vdn]) of $\mu_n$ and $\mu$ in the half plane $\im\lambda>0$. Hence, by the inverse Stieltjes transform, $$\mu_n = \lim_{y\to 0^+}\Big(-\frac{1}{\pi}{\rm Im}(g_n(x+\i{}y))\6x\Big)$$ where the limit is taken in the weak$^*$-topology on Prob$(\R)$. 
In particular, for all $\varphi$ in $C_c^\infty(\R,\R)$: $$\label{eq5-15} \int_\R \varphi(x) \6\mu_n(x) = \lim_{y\to 0^+} \Big[-\frac1\pi \im\Big(\int_\R \varphi(x)g_n(x+\i{}y)\6x\Big)\Big].$$ In the same way we get for $\varphi\in C_c^\infty(\R,\R)$: $$\label{eq5-16} \int_\R \varphi(x) \6\mu(x) = \lim_{y\to 0^+} \Big[-\frac1\pi \im\int_\R \varphi(x)g(x+\i{}y)\6x\Big].$$ In the rest of the proof, $n\in\N$ is fixed, and we put $h(\lambda)=g_n(\lambda)-g(\lambda)$. Then by and $$\label{eq5-17} \Big| \int_\R \varphi(x)\6\mu_n(x)-\int_{\R}\varphi(x)\6\mu(x)\Big| \le\frac1\pi \limsup_{y\to 0^+} \Big|\int_\R \varphi(x)h(x+\i{}y)\6x\Big|.$$ For $\im\lambda >0$ and $p\in\N$, put $$\label{eq5-18} I_p(\lambda) = \frac{1}{(p-1)!} \int^\infty_0 h(\lambda +t)t^{p-1}\e^{-t}\6t.$$ Note that $I_p(\lambda)$ is well defined because, by and , $h(\lambda)$ is uniformly bounded in any half-plane of the form $\im\lambda\ge\varepsilon$, where $\varepsilon >0$. Also, it is easy to check that $I_p(\lambda)$ is an analytic function of $\lambda$, and its first derivative is given by $$\label{eq5-19} I_p'(\lambda) = \frac{1}{(p-1)!} \int^\infty_0 h'(\lambda +t)t^{p-1} \e^{-t} \6t$$ where $h'=\frac{{\rm d}h}{{\rm d}\lambda}$. We claim that $$\begin{aligned} \label{eq5-20} I_1(\lambda)-I'_1(\lambda) &=& h(\lambda)\\[.2cm] \label{eq5-21} I_p(\lambda)-I'_p(\lambda) &=& I_{p-1}(\lambda),\quad p\ge 2.\end{aligned}$$ Indeed, by and partial integration we get $$\begin{aligned} I_1'(\lambda) &=& \big[h(\lambda +t)\e^{-t}\big]^\infty_0 + \int^\infty_0 h(\lambda +t)\e^{-t}\6t\\[.2cm] &=& -h(\lambda)+I_1(\lambda),\end{aligned}$$ which proves and in the same way we get for $p\ge 2$, $$\begin{aligned} I'_p(\lambda) &=& \frac{1}{(p-1)!} \int^\infty_0 h'(\lambda +t)t^{p-1}\e^{-t} \6t\\[.2cm] &=& -\frac{1}{(p-1)!} \int^\infty_0 h(\lambda +t)((p-1)t^{p-2}-t^{p-1})\e^{-t}\6t\\[.2cm] &=& -I_{p-1} (\lambda)+I_p(\lambda),\end{aligned}$$ which proves . Assume now that $\varphi\in C_c^\infty(\R,\R)$ and that $y>0$. Then, by and partial integration, we have $$\begin{aligned} \int_\R\varphi(x)h(x+\i{}y)\6x &=& \int_\R \varphi(x)I_1(x+\i{}y)\6x - \int_\R \varphi(x)I'_1(x+\i{}y)\6x\\[.2cm] &=& \int_\R \varphi(x)I_1(x+\i{}y)\6x + \int_\R \varphi'(x)I_1(x+\i{}y)\6x\\[.2cm] &=& \int_\R ((1+D)\varphi)(x)\cdot I_1(x+\i{}y)\6x,\end{aligned}$$ where $D=\frac{\rm d}{{\rm d}x}$. Using , we can continue to perform partial integrations, and after $p$ steps we obtain $$\int_\R \varphi(x)h(x+\i{}y)\6x = \int_\R ((1+D)^p\varphi)(x)\cdot I_p(x+\i{}y)\6x.$$ Hence, by , we have for all $p\in\N$: $$\begin{gathered} \label{eq5-22} \Big|\int_\R \varphi(x)\6\mu_n(x)-\int_{\R}\varphi(x)\6\mu(x)\Big| \\ \le\frac1\pi \limsup_{y\to 0^+} \Big|\int_\R ((1+D)^p\varphi)(x)\cdot I_p(x+\i{}y)\6x\Big|.\end{gathered}$$ Next, we use to show that for $p=8$ and $\im\lambda >0$ one has $$\label{eq5-23} |I_8(\lambda)| \le \frac{4C(K+2+|\lambda|)^2}{315n^2}.$$ To prove , we apply Cauchy’s integral theorem to the function $$F(z) = \frac{1}{7!} h(\lambda +z)z^7\e^{-z},$$ which is analytic in the half-plane $\im z>-\im\lambda$. Hence for $r>0$ $$\int_{[0,r]}F(z)\6z + \int_{[r,r+\i{}r]} F(z)\6z + \int_{[r+\i{}r,0]} F(z)\6z=0$$ where $[\alpha,\beta]$ denotes the line segment connecting $\alpha$ and $\beta$ in $\C$ oriented from $\alpha$ to $\beta$. Put $$M(\lambda) = \sup\big\{ |h(w)| \bigm | \im w \ge \im\lambda\big\}.$$ Then by and , $M(\lambda) \le\frac{2}{|\im\lambda|}<\infty$. 
Hence $$\begin{aligned} \Big|\int_{[r,r+\i{}r]} F(z)\6z\Big| &\le & \frac{M(\lambda)}{7!} \int^r_0|r+\i{}t|^7 \e^{-r}\6t\\[.2cm] &\le & \frac{M(\lambda)}{7!} (2r)^7 r\cdot \e^{-r}\\[.2cm] &\to & 0, \qquad\mbox{for $r\to\infty$}.\end{aligned}$$ Therefore, $$\begin{aligned} I_8(\lambda) &=& \frac{1}{7!}\int^\infty_0 h(\lambda +t)t^7 \e^{-t}\6t \\[.2cm] &=& \lim_{r\to\infty} \int_{[0,r]} F(z)\6z \nonumber \\[.2cm] &=& \lim_{r\to\infty} \int_{[0,r+\i{}r]} F(z)\6z \nonumber \\[.2cm] \label{eq5-24} &=& \frac{1}{7!} \int^\infty_0 h(\lambda +(1+\i)t)((1+\i)t)^7 \e^{-(1+\i)t}(1+\i)\6t.\nonumber\end{aligned}$$ By , $$|h(w)| \le\frac{4C}{n^2} (K+|w|)^2(\im w)^{-7},\qquad \im w > 0.$$ Inserting this in we get $$\begin{aligned} |I_8(\lambda)| &\le & \frac{4C}{7!n^2}\int^\infty_0 \frac{\big(K+|\lambda|+\sqrt{2} t\big)^2}{(\im\lambda +t)^7} (\sqrt{2} t)^7 \e^{-t}\sqrt{2}\6t\\[.2cm] &\le & \frac{2^6 C}{7! n^2}\int^\infty_0\big(K+|\lambda|+\sqrt{2} t\big)^2 \e^{-t}\6t\\ &=& \frac{4C}{315 n^2}\big((K+|\lambda|)^2+2\sqrt{2}(K+|\lambda|)+4\big)\\[.2cm] &\le & \frac{4C}{315n^2} (K+|\lambda|+2)^2.\end{aligned}$$ This proves . Now, combining and , we have $$\begin{split} \Big|\int_\R\varphi(x)\6\mu_n(x)&-\int_{\R}\varphi(x)\6\mu(x)\Big| \\[.2cm] &\le\frac{4C}{315\pi n^2} \limsup_{y\to 0^+}\int_\R \big|((1+D)^8\varphi)(x)\big|\big(K+2+|x+\i{}y|\big)^2\6x\\[.2cm] &=\frac{4C}{315\pi n^2} \int_\R\big|((1+D)^8\varphi)(x)\big|\big(K+2+|x|\big)^2\6x \end{split}$$ for all $\varphi\in C_c^\infty(\R,\R)$. Together with and this proves Theorem \[thm5-3\]. \[lemma5-4\] Let $S_n$ and $s$ be given by and and let $\varphi\colon\R\to\R$ be a $C^\infty$-function which is constant outside a compact subset of $\R$. Assume further that $$\label{eq5-25} \supp(\varphi)\cap \spe(s)=\emptyset .$$ Then $$\begin{aligned} \label{eq5-26} \E\big\{(\tr_m\otimes\tr_n)\varphi(S_n)\big\} &=& O\big(\textstyle{\frac{1}{n^2}}\big),\qquad\mbox{for $n\to\infty$}\\[.2cm] \label{eq5-27} \V\big\{(\tr_m\otimes\tr_n)\varphi(S_n)\big\} &=& O\big(\textstyle{\frac{1}{n^4}}\big),\qquad\mbox{for $n\to\infty$}\end{aligned}$$ where $\V$ is the absolute variance of a complex random variable [(]{}cf. [§\[sec3\]).]{} Moreover $$\label{eq5-28} (\tr_m\otimes\tr_n)\varphi(S_n(\omega)) = O(n^{-4/3})$$ for almost all $\omega$ in the underlying probability space $\Omega$. By the assumptions, $\varphi=\psi+c$, for some $\psi$ in $C_c^\infty(\R,\R)$ and some constant $c$ in $\R$. By Theorem \[thm5-3\] $$\E\big\{(\tr_m\otimes\tr_n)\psi(S_n)\big\} = (\tr_m\otimes\tau)\psi(s)+O\big(\textstyle{\frac{1}{n^2}}\big), \qquad \mbox{for $n\to\infty$},$$ and hence also $$\E\big\{(\tr_m\otimes\tr_n)\varphi(S_n)\big\} = (\tr_m\otimes\tau)\varphi(s)+O\big(\textstyle{\frac{1}{n^2}}\big), \qquad\mbox{for $n\to\infty$}.$$ But since $\varphi$ vanishes on $\spe(s)$, we have $\varphi(s)=0$. This proves . Moreover, applying Proposition \[diff-kvotient trick\] to $\psi\in C_c^\infty(\R)$, we have $$\label{eq5-29} \V\big\{(\tr_m\otimes\tr_n)\psi(S_n)\big\}\le\frac{1}{n^2} \Big\|\sum^r_{i=1} a_i^2\Big\|^2\E\big\{(\tr_m\otimes\tr_n)(\psi'(S_n))^2\big\}.$$ By , $\psi'=\varphi'$ also vanishes on $\spe(s)$. 
Hence, by Theorem \[thm5-3\] $$\E\big\{(\tr_m\otimes\tr_n)|\psi'(S_n)|^2\big\} = O\big(\textstyle{\frac{1}{n^2}}\big),\qquad\mbox{as $n\to\infty$}.$$ Therefore, by $$\V\big\{(\tr_m\otimes\tr_n)\psi(S_n)\big\} = O\big(\textstyle{\frac{1}{n^4}}\big),\qquad\mbox{as $n\to\infty$.}$$ Since $\varphi(S_n)=\psi(S_n)+c\unit_{mn}$, $\V\big\{(\tr_m\otimes\tr_n)\varphi(S_n)\big\} =\V\big\{(\tr_m\otimes\tr_n)\psi(S_n)\big\}$. This proves . Now put $$\begin{aligned} Z_n &=& (\tr_m\otimes\tr_n)\varphi(S_n)\\[.2cm] \Omega_n &=& \big\{\omega\in\Omega\bigm | |Z_n(\omega)|\ge n^{-4/3}\big\}.\end{aligned}$$ By and $$\E\big\{|Z_n|^2\big\} = |\E\{Z_n\}|^2 +\V\{Z_n\} =O\big(\textstyle{\frac{1}{n^4}}\big), \qquad \mbox{for $n\to\infty$}.$$ Hence $$P(\Omega_n)=\int_{\Omega_n}\6P(\omega)\le \int_{\Omega_n}\big|n^{4/3}Z_n(\omega)\big|^2\6P(\omega) \le n^{8/3}\E\big\{|Z_n|^2\big\}=O(n^{-4/3}),$$ for $n\to\infty$. In particular $\sum^\infty_{n=1} P(\Omega_n)<\infty$. Therefore, by the Borel-Cantelli lemma (see e.g. [@bre]), $\omega\notin\Omega_n$ eventually, as $n\to\infty$, for almost all $\omega\in\Omega$; i.e., $|Z_n(\omega)| < n^{-4 /3}$ eventually, as $n\to\infty$, for almost all $\omega\in\Omega$. This proves . \[thm5-5\] Let $m\in\N$ and let $a_0,\dots,a_r\in M_m(\C)_{\rm sa}$[,]{} $S_n$ and $s$ be as in Theorem [\[estimat af Gn(lambda)-G(lambda)\].]{} Then for any $\varepsilon>0$ and for almost all $\omega\in\Omega$[,]{} $$\spe(S_n(\omega))\subseteq \spe(s) \ + \ ]-\varepsilon,\varepsilon[,$$ eventually as $n\to\infty$. Put $$\begin{aligned} K &=& \spe(s)+\big[-\textstyle{\frac{\varepsilon}{2}, \frac{\varepsilon}{2}}\big]\\[.2cm] F &=& \big\{t\in\R\mid d(t,\spe(s))\ge\varepsilon\big\}.\end{aligned}$$ Then $K$ is compact, $F$ is closed and $K\cap F=\emptyset$. Hence there exists $\varphi\in C^\infty(\R)$, such that $0\le\varphi\le 1$, $\varphi(t)=0$ for $t\in K$ and $\varphi(t)=1$ for $t\in F$ (cf. [@F (8.18) p. 237]). Since $\C\backslash F$ is a bounded set, $\varphi$ satisfies the requirements of lemma \[lemma5-4\]. Hence by , there exists a $P$-null set $N\subseteq\Omega$, such that for all $\omega\in\Omega\backslash N$: $$(\tr_m\otimes\tr_n)\varphi(S_n(\omega)) = O(n^{-4/3}),\qquad\mbox{as $n\to\infty$}.$$ Since $\varphi\ge 1_F$, it follows that $$(\tr_m\otimes\tr_n)1_F(S_n(\omega))=O(n^{-4/3}),\qquad\mbox{as $n\to\infty$}.$$ But for fixed $\omega\in\Omega\backslash N$, the number of eigenvalues (counted with multiplicity) of the matrix $S_n(\omega)$ in the set $F$ is equal to $mn(\tr_m\otimes\tr_n)1_F(S_n(\omega))$, which is $O(n^{-1/3})$ as $n\to\infty$. However, for each $n\in\N$ the above number is an integer. Hence, the number of eigenvalues of $S_n(\omega)$ in $F$ is zero eventually as $n\to\infty$. This shows that $$\spe(S_n(\omega)) \subseteq\C\backslash F = \spe(s) \ + \ ]-\varepsilon,\varepsilon[$$ eventually as $n\to\infty$, when $\omega\in\Omega\backslash N$. Proof of the main theorem {#sec6} ========================= Throughout this section, $r\in\N\cup\{\infty\}$, and, for each $n$ in $\N$, we let $(X_i^{(n)})_{i=1}^r$ denote a finite or countable set of independent random matrices from $\SGRM(n,\frac{1}{n})$, defined on the same probability space $(\Omega,\CF,P)$. In addition, we let $(x_i)_{i=1}^r$ denote a corresponding semi-circular family in a $C^*$-probability space $(\CB,\tau)$, where $\tau$ is a faithful state on $\CB$. Furthermore, as in [@vdn], we let $\C\<(X_i)_{i=1}^r\>$ denote the algebra of all polynomials in $r$ noncommuting variables. 
Note that $\C\<(X_i)_{i=1}^r\>$ is a unital $*$-algebra, with the $*$-operation given by: $$(cX_{i_1}X_{i_2}\cdots X_{i_k})^* = \overline{c}X_{i_k}X_{i_{k-1}}\cdots X_{i_2}X_{i_1},$$ for $c$ in $\C$, $k$ in $\N$ and $i_1,i_2,\dots,i_k$ in $\{1,2,\dots,r\}$, when $r$ is finite, and in $\N$ when $r=\infty$. The purpose of this section is to conclude the proof of the main theorem (Theorem \[thm6-1\] below) by combining the results of the previous sections. \[thm6-1\] Let $r$ be in $\N\cup\{\infty\}$. Then there exists a $P$[-]{}null[-]{}set$N\subseteq\Omega$[,]{} such that for all $p$ in $\C\<(X_i)_{i=1}^r\>$ and all $\omega$ in $\Omega\setminus N$[,]{} we have $$\lim_{n\to\infty}\big\|p\big((X_i^{(n)}(\omega))_{i=1}^r\big)\big\| =\big\|p\big((x_i)_{i=1}^r\big)\big\|.$$ We start by proving the following \[lemma6-2\] Assume that $r\in\N$. Then there exists a $P$[-]{}null set $N_1\subseteq\Omega$[,]{} such that for all $p$ in $\C\<(X_i)_{i=1}^r\>$ and all $\omega$ in $\Omega\backslash N_1$[:]{} $$\label{eq6-2} \liminf_{n\to\infty}\big\| p\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\big\| \ge \|p(x_1,\dots,x_r)\|.$$ We first prove that for each $p$ in $\C\<X_1,\dots,X_r\>$, there exists a $P$-null-set $N(p)$, depending on $p$, such that holds for all $\omega$ in $\Omega\setminus N(p)$. This assertion is actually a special case of [@T Prop. 4.5], but for the readers convenience, we include a more direct proof: Consider first a fixed $p\in\C\<X_1,\dots,X_r\>$. Let $k\in\N$ and put $q=(p^*p)^k$. By [@T Cor. 3.9] or [@HP2], $$\label{eq6-3} \lim_{n\to\infty} \tr_n\big(q(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega))\big) =\tau\big(q(x_1,\dots,x_r)\big),$$ for almost all $\omega\in\Omega$. For $s\ge 1$, $Z\in M_n(\C)$ and $z\in\CB$, put $\|Z\|_s=\tr_n(|Z|^s)^{1/s}$ and $\|z\|_s=\tau(|z|^s)^{1/s}$. Then can be rewritten as $$\label{eq6-4} \lim_{n\to\infty} \big\|p\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\big\|_{2k}^{2 k} =\big\|p(x_1,\dots,x_r)\big\|^{2k}_{2k}$$ for $\omega\in\Omega\backslash N(p)$, where $N(p)$ is a $P$-null-set. Since $\N$ is a countable set, we can assume that $N(p)$ does not depend on $k\in\N$. For every bounded Borel function $f$ on a probability space, one has $$\|f\|_{\infty}=\lim_{k\to\infty}\|f\|_k, \label{eq6-4a}$$ (cf. [@F Exercise 7, p. 179]). Put $a=p(x_1,\dots,x_r)$, and let $\Gamma\colon \CD\to C(\hat{\CD})$ be the Gelfand transform of the Abelian $C^*$-algebra $\CD$ generated by $a^*a$ and $\unit_{\CB}$, and let $\mu$ be the probability measure on $\hat{\CD}$ corresponding to $\tau_{\mid\CD}$. Since $\tau$ is faithful, $\supp(\mu)=\hat{\CD}$. Hence, $\|\Gamma(a^*a)\|_{\infty}=\|\Gamma(a^*a)\|_{\sup}=\|a^*a\|$. Applying then to the function $f=\Gamma(a^*a)$, we find that $$\|a\|=\|a^*a\|^{1/2}=\lim_{k\to\infty}\|a^*a\|_k^{1/2}= \lim_{k\to\infty}\|a\|_{2k}. \label{eq6-4b}$$ Let $\varepsilon>0$. By , we can choose $k$ in $\N$, such that $$\|p(x_1,\dots,x_r)\|_{2k} > \|p(x_1,\dots,x_r)\|-\varepsilon.$$ Since $\|Z\|_s\le\|Z\|$ for all $s\ge 1$ and all $Z\in M_n(\C)$, we have by $$\liminf_{n\to\infty} \big\|p\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\big\| \ge\|p(x_1,\dots,x_r)\|_{2k} > \|p(x_1,\dots,x_r)\|-\varepsilon,$$ for all $\omega\in\Omega\backslash N(p)$, and since $N(p)$ does not depend on $\varepsilon$, it follows that holds for all $\omega\in\Omega\backslash N(p)$. Now put $N'= \bigcup_{p\in\CP}N(p)$, where $\CP$ is the set of polynomials from $\C\<X_1,\dots,X_r\>$ with coefficients in $\Q+\i\Q$. 
Then $N'$ is again a null set, and holds for all $p\in\CP$ and all $\omega\in\Omega\backslash N'$. By [@Ba Thm. 2.12] or [@HT1 Thm. 3.1], $\lim_{n\to\infty}\|X_i^{(n)}(\omega)\| = 2$, $i=1,\dots,r$, for almost all $\omega\in\Omega$. In particular $$\label{eq6-6} \sup_{n\in\N} \|X_i^{(n)}(\omega)\| < \infty,\quad i=1,\dots,r,$$ for almost all $\omega\in\Omega$. Let $N''\subseteq\Omega$ be the set of $\omega\in\Omega$ for which fails for some $i\in\{1,\dots,r\}$. Then $N_1=N'\cup N''$ is a null set, and a simple approximation argument shows that holds for all $p$ in $\C\< X_1,\dots,X_r\>$, when $\omega\in\Omega\backslash N_1$. In order to complete the proof of Theorem \[thm6-1\], we have to prove \[prop6-3\] Assume that $r\in\N$. Then there is a $P$[-]{}null set$N_2\subseteq\Omega$[,]{} such that for all polynomials $p$ in $r$ noncommuting variables and all $\omega\in\Omega\backslash N_2$[,]{} $$\limsup_{n\to\infty} \big\|p\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\big\| \le\|p(x_1,\dots,x_r)\|.$$ The proof of Proposition \[prop6-3\] relies on Theorem \[thm5-5\] combined with the linearization trick in the form of Theorem \[thm1-4\]. Following the notation of [@BK] we put $$\prod_n M_n(\C) = \big\{(Z_n)^\infty_{n=1} \bigm | Z_n\in M_n(\C), \ \textstyle{\sup_{n\in\N}} \|Z_n\| <\infty\big\}$$ and $$\sum_n M_n(\C) = \big\{(Z_n)^\infty_{n=1} \bigm | Z_n\in M_n(\C), \ \textstyle{\lim_{n\to\infty}}\|Z_n\|=0\big\},$$ and we let $\CC$ denote the quotient $C^*$-algebra $$\label{eq6-7} \CC = \prod_n M_n(\C)\Big/ \sum_n M_n(\C).$$ Moreover, we let $\rho: \prod_n M_n(\C)\to \CC$ denote the quotient map. By [@RLL Lemma 6.13], the quotient norm in $\CC$ is given by $$\label{eq6-8} \big\|\rho\big((Z_n)^\infty_{n=1}\big)\big\| = \limsup_{n\to\infty} \|Z_n\|,$$ for $(Z_n)^\infty_{n=1}\in\prod M_n(\C)$. Let $m\in\N$. Then we can identify $M_m(\C)\otimes \CC$ with $$\prod_n M_{mn}(\C)\ / \ \sum M_{mn}(\C),$$ where $\prod_n M_{mn}(\C)$ and $\sum_n M_{mn}(\C)$ are defined as $\prod_n M_n(\C)$ and $\sum_n M_n(\C)$, but with $Z_n\in M_{mn}(\C)$ instead of $Z_n\in M_n(\C)$. Moreover, for $(Z_n)^\infty_{n=1}\in\prod_n M_{mn}(\C)$, we have, again by [@RLL Lemma 6.13], $$\label{eq6-9} \big\|(\id_m\otimes\rho)\big((Z_n)^\infty_{n=1}\big)\big\| = \limsup_{n\to\infty}\|Z_n\|.$$ \[lemma6-4\] Let $m\in\N$ and let $Z=(Z_n)^\infty_{n=1}\in\prod_n M_{mn}(\C)$[,]{} such that each $Z_n$ is normal. Then for all $k\in\N$ $$\spe\big((\id_m\otimes\rho)(Z)\big)\subseteq\overline{\bigcup^\infty_{n= k} \spe(Z_n)}.$$ Assume $\lambda\in\C$ is not in the closure of $\bigcup^\infty_{n=k} \spe(Z_n)$. Then there exists an $\varepsilon >0$, such that $d(\lambda,\spe(Z_n))\ge\varepsilon$ for all $n\ge k$. Since $Z_n$ is normal, it follows that $\|(\lambda\unit_{mn}-Z_n)^{-1}\|\le\frac{1}{\varepsilon}$ for all $n\ge k$. Now put $$y_n = \begin{cases} 0, & \quad \textrm{if} \ 1\le n\le k-1,\\ (\lambda\unit_{mn}-Z_n)^{-1}, &\quad \textrm{if} \ n\ge k. \end{cases}$$ Then $y=(y_n)^\infty_{n=1}\in\prod_n M_{mn}(\C)$, and one checks easily that $\lambda\unit_{M_m(\C)\otimes \CC}-(\id_m\otimes\rho)(Z)$ is invertible in $M_m(\C)\otimes \CC = \prod_n M_{mn}(\C)\ / \ \sum_n M_{mn}(\C)$ with inverse $(\id_m\otimes\rho)y$. Hence $\lambda\notin \spe((\id_m\otimes\rho)(Z))$. [*Proof of Proposition [\[prop6-3\]]{} and Theorem [\[thm6-1\]]{}*]{}. Assume first that $r\in\N$. Put $$\Omega_0 = \big\{\omega\in\Omega\bigm | \textstyle{\sup_{n\in\N}}\|X_i^{(n)}(\omega)\| <\infty, \ i=1,\dots,r\big\}.$$ By , $\Omega\backslash\Omega_0$ is a $P$-null set. 
For every $\omega\in\Omega_0$, we define $$y_i(\omega)\in \CC=\prod_n M_n(\C)\Big/\sum_n M_n(\C)$$ by $$\label{eq6-9a} y_i(\omega) = \rho\big((X_i^{(n)}(\omega))^\infty_{n=1}\big),\quad i=1,\dots,r.$$ Then for every noncommutative polynomial $p\in\C\< X_1,\dots,X_r\>$ and every $\omega$ in $\Omega_0$, we get by that $$\label{eq6-10} \big\|p(y_1(\omega),\dots,y_r(\omega))\big\| = \limsup_{n\to\infty} \big\|p\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\big\|.$$ Let $j\in\N$ and $a_0,a_1,\dots,a_r\in\Mmsa$. Then by Theorem \[thm5-5\] there exists a null set $N(m,j,a_0,\dots,a_r)$, such that for $$\spe\big(a_0\otimes\unit_n + \textstyle{\sum^r_{i=1}}a_i\otimes X_i^{(n)}(\omega)\big) \subseteq \spe\big(a_0\otimes\unit_{\CB} +\textstyle{\sum^r_{i=1}}a_i\otimes x_i\big) \ + \ \big]\textstyle{-\frac1j,\frac1j}\big[,$$ eventually, as $n\to\infty$, for all $\omega\in\Omega\backslash N(m,j,a_0,\dots,a_r)$. Let $N_0=\break \bigcup N(m,j,a_0,\dots,a_r)$, where the union is taken over all $m,j\in\N$ and $a_0,\dots,a_r\in M_n(\Q + \i\Q)_{\rm sa}$. This is a countable union. Hence $N_0$ is again a $P$-null set, and by Lemma \[lemma6-4\] $$\spe\big(a_0\otimes\unit_n + \textstyle{\sum^r_{i=1}}a_i\otimes y_i(\omega)\big)\subseteq\spe\big(a_0\otimes\unit_{\CB} + \textstyle{\sum^r_{i=1}}a_i\otimes x_i\big)+ \big[\textstyle{-\frac1j,\frac1j}\big],$$ for all $\omega\in\Omega_0\backslash N_0$, all $m,j\in\N$ and all $a_0,\dots,a_r\in M_n(\Q + \i\Q)_{\rm sa}$. Taking intersection over $j\in\N$ on the right-hand side, we get $$\spe\big(a_0\otimes\unit_n + \textstyle{\sum^r_{i=1}}a_i\otimes y_i(\omega)\big)\subseteq\spe\big(a_0\otimes\unit_{\CB} + \textstyle{\sum^r_{i=1}}a_i\otimes x_i\big),$$ for $\omega\in\Omega_0\backslash N_0$, $m\in\N$ and $a_0,\dots,a_r\in M_n(\Q+ \i\Q)_{\rm sa}$. Hence, by Theorem \[thm1-4\], $$\big\|p(y_1(\omega),\dots,y_r(\omega))\big\|\le\|p(x_1,\dots,x_r)\|,$$ for all $p\in\C\< X_1,\dots,X_r \>$ and all $\omega\in\Omega_0\backslash N_0$, which, by , implies that $$\limsup_{n\to\infty} \big\|p\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\big\| \le\|p(x_1,\dots,x_r)\|,$$ for all $p\in\C \< X_1,\dots,X_r\>$ and all $\omega\in\Omega_0\backslash N_0$. This proves Proposition \[prop6-3\], which, together with Lemma \[lemma6-2\], proves Theorem \[thm6-1\] in the case $r\in\N$. The case $r=\infty$ follows from the case $r\in\N$, because $\C\<(X_i)_{i=1}^\infty\>=\cup_{r=1}^\infty\C\<(X_i)_{i=1}^r\>$. $\Ext(\Cred(F_r))$ is not a group {#ext-resultat} ================================= We start this section by translating Theorem \[thm6-1\] into a corresponding result, where the self-adjoint Gaussian random matrices are replaced by random unitary matrices and the semi-circular system is replaced by a free family of Haar-unitaries. Define $C^1$-functions $\varphi\colon\R\to\R$ and $\psi\colon\R\to\C$ by $$\varphi(t)= \begin{cases} -\pi, &\textrm{if} \quad t\le-2, \\ \int_0^t \sqrt{4-s^2}\6s, &\textrm{if} \quad -2<t<2, \\ \pi, &\textrm{if} \quad t\ge2. \end{cases} \label{e7.1}$$ and $$\psi(t)=\e^{\i\varphi(t)}, \qquad (t\in\R). \label{e7.2}$$ Let $\mu$ be the standard semi-circle distribution on $\R$: $$\6\mu(t)=\frac{1}{2\pi}\sqrt{4-t^2}\cdot 1_{[-2,2]}(t)\6t,$$ and let $\varphi(\mu)$ denote the push-forward measure of $\mu$ by $\varphi$, i.e., $\varphi(\mu)(B)=\mu(\varphi^{-1}(B))$ for any Borel subset $B$ of $\R$. 
Since $\varphi'(t)=\sqrt{4-t^2}\cdot1_{[-2,2]}(t)$ for all $t$ in $\R$, it follows that $\varphi(\mu)$ is the uniform distribution on $[-\pi,\pi]$, and, hence, $\psi(\mu)$ is the Haar measure on the unit circle $\TT$ in $\C$. The following lemma is a simple application of Voiculescu’s results in [@V2]. \[C\*-iso\] Let $r\in\N\cup\{\infty\}$ and let $(x_i)_{i=1}^r$ be a semi[-]{}circular system in a $C^*$[-]{}probability space $(\CB,\tau)$[,]{} where $\tau$ is a faithful state on $\CB$. Let $\psi\colon\R\to\TT$ be the function defined in and then put $$u_i=\psi(x_i), \qquad (i=1,\dots,r).$$ Then there is a [(]{}surjective[)]{} $*$[-]{}isomorphism $\Phi\colon\Cred(F_r)\to C^*((u_i)_{i=1}^r)$[,]{} such that $$\Phi\big(\lambda(g_i)\big)=u_i, \qquad (i=1,\dots,r),$$ where $g_1,\dots,g_r$ are the generators of the free group $F_r$[,]{} and $\lambda\colon F_r\to\CB(\ell^2(F_r))$ is the left regular representation of $F_r$ on $\ell^2(F_r)$. Recall that $\Cred(F_r)$ is, by definition, the $C^*$-algebra in $\CB(\ell^2(F_r))$ generated by $\lambda(g_1),\dots,\lambda(g_r)$. Let $e$ denote the unit in $F_r$ and let $\delta_e\in\ell^2(F_r)$ denote the indicator function for $\{e\}$. Recall then that the vector state $\eta=\<\cdot\delta_e,\delta_e\>\colon\CB(\ell^2(F_r))\to\C$, corresponding to $\delta_e$, is faithful on $\Cred(F_r)$. We recall further from [@V2] that $\lambda(g_1),\dots,\lambda(g_r)$ are $*$-free operators with respect to $\eta$, and that each $\lambda(g_i)$ is a Haar unitary, i.e., $$\eta(\lambda(g_i)^n)= \begin{cases} 1, &\textrm{if} \quad n=0,\\ 0, &\textrm{if} \quad n\in\Z\setminus\{0\}. \end{cases}$$ Now, since $(x_i)_{i=1}^r$ are free self-adjoint operators in $(\CB,\tau)$, $(u_i)_{i=1}^r$ are $*$-free unitaries in $(\CB,\tau)$, and since, as noted above, $\psi(\mu)$ is the Haar measure on $\TT$, all the $u_i$’s are Haar unitaries as well. Thus, the $*$-distribution of $(\lambda(g_i))_{i=1}^r$ with respect to $\eta$ (in the sense of [@V2]) equals that of $(u_i)_{i=1}^r$ with respect to $\tau$. Since $\eta$ and $\tau$ are both faithful, the existence of a $*$-isomorphism $\Phi$, with the properties set out in the lemma, follows from [@V2 Remark 1.8]. Let $r\in\N\cup\{\infty\}$. As in Theorem \[thm6-1\], we consider next, for each $n$ in $\N$, independent random matrices $(X_i^{(n)})_{i=1}^r$ in $\SGRM(n,\frac{1}{n})$. We then define, for each $n$, random unitary $n\times n$ matrices $(U_i^{(n)})_{i=1}^r$, by setting $$U_i^{(n)}(\omega)=\psi(X_i^{(n)}(\omega)), \qquad (i=1,2,\dots,r), \label{e7.3}$$ where $\psi\colon\R\to\TT$ is the function defined in . Consider further the (free) generators $(g_i)_{i=1}^r$ of $F_r$. Then, by the universal property of a free group, there exists, for each $n$ in $\N$ and each $\omega$ in $\Omega$, a unique group homomorphism: $$\pi_{n,\omega}\colon F_r\to\CU(n)=\CU(M_n(\C)),$$ satisfying $$\pi_{n,\omega}(g_i)=U_i^{(n)}(\omega), \qquad (i=1,2,\dots,r). \label{e7.4}$$ \[unitaer version\] Let $r\in\N\cup\{\infty\}$ and let[,]{} for each $n$ in $\N$[,]{} $(U_i^{(n)})_{i=1}^r$ be the random unitaries given by . Let further for each $n$ in $\N$ and each $\omega$ in $\Omega$[,]{} $\pi_{n,\omega}\colon F_r\to\CU(n)$ be the group homomorphism given by . 
Then there exists a $P$-null set $N\subseteq\Omega$[,]{} such that for all $\omega$ in $\Omega\setminus N$ and all functions $f\colon F_r\to\C$ with finite support[,]{} we have $$\lim_{n\to\infty}\Big\|\sum_{\gamma\in F_r}f(\gamma)\pi_{n,\omega}(\gamma)\Big\| = \Big\|\sum_{\gamma\in F_r}f(\gamma)\lambda(\gamma)\Big\|,$$ where[,]{} as above[,]{} $\lambda$ is the left regular representation of $F_r$ on $\ell^2(F_r)$. In the proof we shall need the following simple observation: If $a_1,\dots,a_s$, $b_1,\dots,b_s$ are $2s$ operators on a Hilbert space $\CK$, such that $\|a_i\|$,$\|b_i\|\le1$ for all $i$ in $\{1,2,\dots,s\}$, then $$\|a_1a_2\cdots a_s - b_1b_2\cdots b_s\|\le \sum_{i=1}^s\|a_i-b_i\|. \label{e7.5}$$ We shall need further that for any positive $\varepsilon$ there exists a polynomial $q$ in one variable, such that $$|q(t)|\le1, \qquad (t\in[-3,3]), \label{e7.6}$$ and $$|\psi(t)-q(t)|\le\varepsilon, \qquad (t\in[-3,3]). \label{e7.7}$$ Indeed, by Weierstrass’ approximation theorem we may choose a polynomial $q_0$ in one variable, such that $$|\psi(t)-q_0(t)|\le\varepsilon/2, \qquad (t\in[-3,3]). \label{e7.7a}$$ Then put $q=(1+\varepsilon/2)^{-1}q_0$ and note that since $|\psi(t)|=1$ for all $t$ in $\R$, it follows from that holds. Furthermore, $$|q_0(t)- q(t)|\le\textstyle{\frac{\varepsilon}{2}}|q(t)|\le\frac{\varepsilon}{2}, \qquad (t\in[-3,3]),$$ which, combined with , shows that holds. After these preparations, we start by proving the theorem in the case $r\in\N$. For each $n$ in $\N$, let $X_1^{(n)},\dots,X_r^{(n)}$ be independent random matrices in $\SGRM(n,\frac{1}{n})$ defined on $(\Omega,\CF,P)$, and define the random unitaries $U_1^{(n)},\dots,U_r^{(n)}$ as in . Then let $N$ be a $P$-null set as in the main theorem (Theorem \[thm6-1\]). By considering, for each $i$ in $\{1,2,\dots,r\}$, the polynomial $p(X_1,\dots,X_r)=X_i$, it follows then from the main theorem that $$\lim_{n\to\infty}\big\|X_i^{(n)}(\omega)\big\|=2,$$ for all $\omega$ in $\Omega\setminus N$. In particular, for each $\omega$ in $\Omega\setminus N$, there exists an $n_{\omega}$ in $\N$, such that $$\big\|X_i^{(n)}(\omega)\big\|\le 3, \qquad \mbox{whenever $n\ge n_{\omega}$ and $i\in\{1,2,\dots,r\}$.}$$ Considering then the polynomial $q$ introduced above, it follows from and that for all $\omega$ in $\Omega\setminus N$, we have $$\big\|q\big(X_i^{(n)}(\omega)\big)\big\|\le1, \qquad \mbox{whenever $n\ge n_{\omega}$ and $i\in\{1,2,\dots,r\}$,} \label{e7.8}$$ and $$\big\|U_i^{(n)}(\omega)- q\big(X_i^{(n)}(\omega)\big)\big\|\le\varepsilon, \qquad \mbox{whenever $n\ge n_{\omega}$ and $i\in\{1,2,\dots,r\}$.} \label{e7.9}$$ Next, if $\gamma\in F_r\setminus\{e\}$, then $\gamma$ can be written (unambiguesly) as a reduced word: $\gamma=\gamma_1\gamma_2\cdots\gamma_s$, where $\gamma_j\in\{g_1,g_2,\dots,g_r,g_1^{-1},g_2^{-1},\dots,g_r^{-1}\}$ for each $j$ in $\{1,2,\dots,s\}$, and where $s=|\gamma|$ is the length of the reduced word for $\gamma$. 
It follows then, by , that $\pi_{n,\omega}(\gamma)=a_1a_2\cdots a_s$, where $$\begin{aligned} a_j&=&\pi_{n,\omega}(\gamma_j)\\ &\in&\big\{U_1^{(n)}(\omega),\dots,U_r^{(n)}( \omega), U_1^{(n)}(\omega)^*,\dots,U_r^{(n)}(\omega)^*\big\}, \qquad (j=1,2,\dots,s).\end{aligned}$$ Combining now , and , it follows that for any $\gamma$ in $F_r\setminus\{e\}$, there exists a polynomial $p_{\gamma}$ in $\C\<X_1,\dots,X_r\>$, such that $$\begin{gathered} \big\|\pi_{n,\omega}(\gamma)- p_{\gamma}\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\big\|\le |\gamma|\varepsilon,\\ \mbox{whenever $n\ge n_{\omega}$ and $\omega\in \Omega\setminus N$}. \label{e7.10}\end{gathered}$$ Now, let $\{x_1,\dots,x_r\}$ be a semi-circular system in a $C^*$-probability space $(\CB,\tau)$, and put $u_i=\psi(x_i)$, $i=1,2,\dots,r$. Then, by Lemma \[C\*-iso\], there is a surjective $*$-isomorphism $\Phi\colon\Cred(F_r)\to C^*(u_1,\dots,u_r)$, such that $(\Phi\circ\lambda)(g_i)=u_i$, $i=1,2,\dots,r$. Since $\|x_i\|\le 3$, $i=1,2,\dots,r$, the arguments that lead to show also that for any $\gamma$ in $F_r\setminus\{e\}$, $$\big\|(\Phi\circ\lambda)(\gamma)-p_{\gamma}(x_1,\dots,x_r)\big\| \le|\gamma|\varepsilon, \label{e7.11}$$ where $p_{\gamma}$ is the same polynomial as in . Note that and also hold in the case $\gamma=e$, if we put $p_e(X_1,\dots,X_r)=1$, and $|e|=0$. Consider now an arbitrary function $f\colon F_r\to\C$ with finite support, and then define the polynomial $p$ in $\C\<X_1,\dots,X_r\>$, by: $p=\sum_{\gamma\in F_r}f(\gamma)p_{\gamma}$. Then, for any $\omega$ in $\Omega\setminus N$ and any $n\ge n_{\omega}$, we have $$\Big\|\sum_{\gamma\in F_r}f(\gamma)\pi_{n,\omega}(\gamma) -p\big(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega)\big)\Big\| \le \Big(\sum_{\gamma\in F_r}|f(\gamma)|\cdot | \gamma | \Big)\varepsilon, \label{e7.12}$$ and $$\Big\|\sum_{\gamma\in F_r}f(\gamma)\cdot(\Phi\circ\lambda)(\gamma) -p(x_1,\dots,x_r)\Big\| \le \Big(\sum_{\gamma\in F_r}|f(\gamma)|\cdot | \gamma | \Big)\varepsilon, \label{e7.13}$$ Taking also Theorem \[thm6-1\] into account, we may, on the basis of and , conclude that for any $\omega$ in $\Omega\setminus N$, we have $$\limsup_{n\to\infty}\Bigg|\Big\|\sum_{\gamma\in F_r}f(\gamma)\pi_{n,\omega}(\gamma)\Big\| - \Big\|\sum_{\gamma\in F_r}f(\gamma)\cdot(\Phi\circ\lambda)(\gamma)\Big\|\Bigg| \le 2\varepsilon\Big(\sum_{\gamma\in F_r}|f(\gamma)|\cdot | \gamma | \Big).$$ Since $\varepsilon>0$ is arbitrary, it follows that for any $\omega$ in $\Omega\setminus N$, $$\lim_{n\to\infty}\Big\|\sum_{\gamma\in F_r}f(\gamma)\pi_{n,\omega}(\gamma)\Big\|= \Big\|\sum_{\gamma\in F_r}f(\gamma)\cdot(\Phi\circ\lambda)(\gamma)\Big\| =\Big\|\sum_{\gamma\in F_r}f(\gamma)\lambda(\gamma)\Big\|,$$ where the last equation follows from the fact that $\Phi$ is a $*$-isomorphism. This proves Theorem \[unitaer version\] in the case where $r\in\N$. The case $r=\infty$ follows by trivial modifications of the above argument. \[aabent spoergsmaal\] The distributions of the random unitaries $U_1^{(n)},\dots,U_r^{(n)}$ in Theorem \[unitaer version\] are quite complicated. For instance, it is easily seen that for all $n$ in $\N$, $$P\big(\big\{\omega\in\Omega\bigm | U_1^{(n)}(\omega)=-\unit_n\big\}\big)>0.$$ It would be interesting to know whether Theorem \[unitaer version\] also holds, if, for each $n$ in $\N$, $U_1^{(n)},\dots,U_r^{(n)}$ are replaced be stochastically independent random unitaries $V_1^{(n)},\dots,V_r^{(n)}$, which are all distributed according to the normalized Haar measure on $\CU(n)$. 
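The construction above lends itself to a quick numerical illustration. The short Python sketch below is not part of the argument; the helper names (`sgrm`, `phi`, `psi_of`) and the choices $r=3$, $n=1000$ are ours and purely illustrative. It samples independent matrices from $\SGRM(n,\frac1n)$, applies $\psi$ through the spectral theorem to obtain the random unitaries $U_i^{(n)}=\psi(X_i^{(n)})$, and compares the norm of $U_1^{(n)}+\cdots+U_r^{(n)}$ with $2\sqrt{r-1}$, the norm of $\lambda(g_1)+\cdots+\lambda(g_r)$ obtained from the identity of [@P3] recalled in the proof of Corollary \[cor8-6\] (take $v_i=\unit$ there). Theorem \[unitaer version\], applied to the indicator function $f=1_{\{g_1,\dots,g_r\}}$, predicts that the two numbers agree in the limit $n\to\infty$ for almost every sample.

```python
import numpy as np

def sgrm(n, rng):
    # One sample from SGRM(n, 1/n): self-adjoint, Gaussian entries of variance 1/n.
    g = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * np.sqrt(0.5 / n)
    return (g + g.conj().T) / np.sqrt(2.0)

def phi(t):
    # phi(t) = int_0^t sqrt(4 - s^2) ds for |t| <= 2, extended constantly by +/- pi;
    # the closed form below is the elementary antiderivative of sqrt(4 - s^2).
    t = np.clip(t, -2.0, 2.0)
    return 0.5 * t * np.sqrt(4.0 - t * t) + 2.0 * np.arcsin(0.5 * t)

def psi_of(x):
    # Apply psi = exp(i * phi) to a self-adjoint matrix via its spectral decomposition.
    w, v = np.linalg.eigh(x)
    return (v * np.exp(1j * phi(w))) @ v.conj().T

rng = np.random.default_rng(seed=1)
r, n = 3, 1000                                # illustrative choices only
U = [psi_of(sgrm(n, rng)) for _ in range(r)]  # the random unitaries U_i^{(n)}
print(np.linalg.norm(sum(U), 2))              # operator norm of U_1 + ... + U_r
print(2.0 * np.sqrt(r - 1.0))                 # limit predicted for almost every sample
```

Replacing `psi_of(sgrm(n, rng))` by independent Haar-distributed unitaries gives an equally quick empirical probe of the open question above, although such experiments of course prove nothing.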
\[MF-algebra\] For any $r$ in $\N\cup\{\infty\}$[,]{} the $C^*$[-]{}algebra $\Cred(F_r)$ has a unital embedding into the quotient $C^*$[-]{}algebra $$\CC = \prod_n M_n(\C)\ \Big/ \sum_n M_n(\C),$$ introduced in Section [\[sec6\].]{} In particular[,]{} $\Cred(F_r)$ is an MF-algebra in the sense of Blackadar and Kirchberg [(]{}cf. [[@BK])]{}. This follows immediately from Theorem \[unitaer version\] and formula . In fact, one only needs the existence of one $\omega$ in $\Omega$ for which the convergence in Theorem \[unitaer version\] holds! We remark that Corollary \[MF-algebra\] could also have been proved directly from the main theorem (Theorem \[thm6-1\]) together with Lemma \[C\*-iso\]. \[ext is not a group\]For any $r$ in $\{2,3,\ldots\}\cup\{\infty\}$[,]{} the semi[-]{}group$\Ext(\Cred(F_r))$ is [not]{} a group. In Section 5.14 of Voiculescu’s paper [@V5], it is proved that$\Ext(\Cred(F_r))$ cannot be a group, if there exists a sequence $(\pi_n)_{n\in\N}$ of unitary representations $\pi_n\colon F_r\to \CU(n)$, with the property that $$\lim_{n\to\infty}\Big\|\sum_{\gamma\in F_r}f(\gamma)\pi_n(\gamma)\Big\| =\Big\|\sum_{\gamma\in F_r}f(\gamma)\lambda(\gamma)\Big\|, \label{e7.14}$$ for any function $f\colon F_r\to\C$ with finite support. For any $r\in\{2,3,\ldots\}\cup\{\infty\}$, the existence of such a sequence $(\pi_n)_{n\in\N}$ follows immediately from Theorem \[unitaer version\], by considering one single $\omega$ from the sure event $\Omega\setminus N$ appearing in that theorem. \[voiculescus argument\] Let us briefly outline Voiculescu’s argument in [@V5] for the fact that implies Corollary \[ext is not a group\]. It is obtained by combining the following two results of Rosenberg [@Ro] and Voiculescu [@V4], respectively: 1. If $\Gamma$ is a discrete countable nonamenable group, then $\Cred(\Gamma)$ is not quasi-diagonal ([@Ro]). 2. A separable unital $C^*$-algebra $\CA$ is quasi-diagonal if and only if there exists a sequence of natural numbers $(n_k)_{k\in\N}$ and a sequence $(\varphi_k)_{k\in\N}$of completely positive unital maps $\varphi_k\colon\CA\to M_{n_k}(\C)$, such that$\lim_{k\to\infty}\|\varphi_k(a)\|=\|a\|$ and $\lim_{k\to\infty}\|\varphi_k(ab)-\varphi_k(a)\varphi_k(b)\|=0$ for all $a,b\in\CA$ ([@V4]). Let $\CA$ be a separable unital $C^*$-algebra. Then, as mentioned in the introduction, $\Ext(\CA)$ is the set of equivalence classes $[\pi]$ of one-to-one unital $*$-homomorphisms $\pi$ of $\CA$ into the Calkin algebra $\CC(\CH)=\CB(\CH)/\CK(\CH)$ over a separable infinite dimensional Hilbert space $\CH$. Two such $*$-homomorphisms are equivalent if they are equal up to a unitary transformation of $\CH$. $\Ext(\CA)$ has a natural semi-group structure and $[\pi]$ is invertible in $\Ext(\CA)$ if and only if $\pi$ has a unital completely positive lifting: $\psi\colon\CA\to \CB(\CH)$ (cf. [@Arv]). Let now $\CA=\Cred(F_r)$, where $r\in\{2,3,\dots\}\cup \{\infty\}$. Moreover, let $\pi_n\colon F_r\to\CU_n$, $n\in\N$, be a sequence of unitary representations satisfying and let $\CH$ be the Hilbert space $\CH=\bigoplus^\infty_{n=1} \C^n$. Clearly, $\prod_n M_n(\C)/\sum_n M_n(\C)$ embeds naturally into the Calkin algebra $\CC(\CH)=\CB(\CH)/\CK(\CH)$. Hence, there exists a one-to-one $*$-homomorphism $\pi\colon\CA\to \CC(\CH)$, such that $$\pi(\lambda(h)) = \rho\begin{pmatrix} \pi_1(h) & \ & 0 \\ \ & \pi_2(h) & \ \\ 0 & \ & \ddots \end{pmatrix},$$ for all $h\in F_r$ (here $\rho$ denotes the quotient map from $\CB(\CH)$ to $\CC(\CH)$). Assume $[\pi]$ is invertible in $\Ext(\CA)$. 
Then $\pi$ has a unital completely positive lifting $\varphi\colon\CA\to \CB(\CH)$. Put $\varphi_n(a)=p_n\varphi(a)p_n$, $a\in\CA$, where $p_n\in \CB(\CH)$ is the orthogonal projection onto the component $\C^n$ of $\CH$. Then each $\varphi_n$ is a unital completely positive map from $\CA$ to $M_n(\C)$, and it is easy to check that $$\lim_{n\to\infty} \|\varphi_n(\lambda(h))-\pi_n(h)\|=0,\qquad (h\in F_r).$$ From this it follows that $$\lim_{n\to\infty}\|\varphi_n(a)\|=\|a\| \quad \mbox{and} \quad \lim_{n\to\infty} \|\varphi_n(ab)-\varphi_n(a)\varphi_n(b)\|=0,\qquad (a,b\in \CA)$$ so by (ii), $\CA=\Cred(F_r)$ is quasi-diagonal. But since $F_r$ is not amenable for $r\ge2$, this contradicts (i). Hence $[\pi]$ is not invertible in $\Ext(\CA)$. \[rem7-7\] let $\CA$ be a separable unital $C^*$-algebra and let $\pi\colon\CA\to \CC(\CH)=\CB(\CH)/\CK(\CH)$ be a one-to-one \*-homomorphism. Then $\pi$ gives rise to an extension of $\CA$ by the compact operators $\CK=\CK(\CH)$, i.e., a $C^*$-algebra $\CB$ together with a short exact sequence of \*-homomorphisms $$0\to \CK\stackrel{\iota}{\rightarrow} \CB \stackrel{q}{\rightarrow}\CA\to0.$$ Specifically, with $\rho\colon\CB(\CH)\to\CC(\CH)$ the quotient map, $\CB=\rho^{-1}(\pi(\CA))$, $\iota$ is the inclusion map of $\CK$ into $\CB$ and $q=\pi^{-1}\circ\rho$. Let now $\CA=\Cred(F_r)$, let $\pi\colon\CA\to \CC(\CH)$ be the one-to-one unital \*-homomorphism from Remark \[voiculescus argument\], and let $\CB$ be the compact extension of $\CA$ constructed above. We then have - $\CA=\Cred(F_r)$ is an exact $C^*$-algebra, but the compact extension $\CB$ of $\CA$ is not exact. - $\CA=\Cred(F_r)$ is not quasi-diagonal but the compact extension $\CB$ of $\CA$ is quasi-diagonal. To prove a), note that $\Cred(F_r)$ is exact by [@DH Cor. 3.12] or [@Ki2 p. 453, l. 1–3]. Assume $\CB$ is also exact. Then, in particular, $\CB$ is locally reflexive (cf. [@Ki2]). Hence by the lifting theorem in [@EH] and the nuclearity of $\CK$, the identity map $\CA\to\CA$ has a unital completely positive lifting $\varphi\colon\CA\to\CB$. If we consider $\varphi$ as a map from $\CA$ to $\CB(\CH)$, it is a unital completely positive lifting of $\pi\colon\CA\to \CC(\CH)$, which contradicts that $[\pi]$ is not invertible in $\Ext(\CA)$. To prove b), note that by Rosenberg’s result, quoted in (i) above, $\Cred(F_r)$ is not quasi-diagonal. On the other hand, by the definition of $\pi$ in Remark \[voiculescus argument\], every $x\in\CB$ is a compact perturbation of an operator of the form $$y= \begin{pmatrix} y_1 && 0\\ & y_2 & \\ 0 && \ddots\end{pmatrix},$$ where $y_n\in M_n(\C)$, $n\in\N$. Hence $\CB$ is quasi-diagonal. Other applications {#sec8} ================== Recall that a $C^*$-algebra $\CA$ is called exact if, for every pair $(\CB,\CJ)$ consisting of a $C^*$-algebra $\CB$ and closed two-sided ideal $\CJ$ in $\CB$, the sequence $$\label{eq8-1} 0\to \CA\mathop{\otimes}_{\min} \CJ\to \CA\mathop{\otimes}_{\min}\CB \to \CA\mathop{\otimes}_{\min}(\CB/\CJ)\to0$$ is exact (cf. [@Ki1]). 
In generalization of the construction described in the paragraph preceding Lemma \[lemma6-4\], we may, for any sequence $(\CA_n)^\infty_{n=1}$ of $C^*$-algebras, define two $C^*$-algebras $$\begin{aligned} \prod_n \CA_n &=& \big\{(a_n)^\infty_{n=1} \mid a_n\in \CA_n,\ \textstyle{\sup_{n\in\N}}\|a_n\| < \infty\big\}\\ \sum_n \CA_n &=& \big\{(a_n)^\infty_{n=1} \mid a_n\in \CA_n,\ \textstyle{\lim_{n\to\infty}}\|a_n\|=0\big\}.\end{aligned}$$ The latter $C^*$-algebra is a closed two-sided ideal in the first, and the norm in the quotient $C^*$-algebra $\prod_n\CA_n/\sum_n\CA_n$ is given by $$\label{eq8-2} \big\|\rho\big((x_n)^\infty_{n=1}\big)\big\| = \limsup_{n\to\infty} \|x_n\|,$$ where $\rho$ is the quotient map (cf. [@RLL Lemma 6.13]). In the following we let $\CA$ denote an exact $C^*$-algebra. By we have the following natural identification of $C^*$-algebras $$\CA \mathop{\otimes}_{\min} \Big(\prod_n M_n(\C)\Big/ \sum_nM_n(\C)\Big) = \Big(\CA\mathop{\otimes}_{\min}\prod_n M_n(\C)\Big)\Big/ \Big(\CA\mathop{\otimes}_{\min} \sum_nM_n(\C)\Big).$$ Moreover, we have (without assuming exactness) the following natural identification $$\CA\mathop{\otimes}_{\min}\sum_n M_n(\C)=\sum_n M_n(\CA)$$ and the natural inclusion $$\CA\mathop{\otimes}_{\min}\prod_n M_n(\C)\subseteq\prod_n M_n(\CA).$$ If $\dim(\CA)<\infty$, the inclusion becomes an identity, but in general the inclusion is proper. Altogether we have for all exact $C^*$-algebras $\CA$ a natural inclusion $$\label{eq8-3} \CA\mathop{\otimes}_{\min}\Big(\prod_n M_n(\C)\Big/ \sum_nM_n(\C)\Big)\subseteq \prod_nM_n(\CA)\Big/ \sum_n M_n(\CA).$$ Similarly, if $n_1<n_2<n_3<\cdots$ are natural numbers, then $$\label{eq8-4} \CA\mathop{\otimes}_{\min}\Big(\prod_kM_{n_k}(\C)\Big/\sum_k M_{n_k}(\C)\Big)\subseteq \prod_kM_{n_k}(\CA)\Big/\sum_kM_{n_k}(\CA).$$ After these preparations we can now prove the following generalizations of Theorems \[thm6-1\] and \[unitaer version\]. \[thm8-1\] Let $(\Omega,\CF,P)$, $N$, $(X_i^{(n)})_{i=1}^r$ and $(x_i)_{i=1}^r$ be as in Theorem \[thm6-1\], and let $\CA$ be a unital exact $C^*$-algebra. Then for all polynomials $p$ in $r$ noncommuting variables and with coefficients in $\CA$ (i.e., $p$ is in the algebraic tensor product $\CA\otimes\C\<(X_i)_{i=1}^r\>$), and all $\omega\in\Omega\backslash N$, $$\label{eq8-5} \lim_{n\to\infty}\big\|p\big((X_i^{(n)}(\omega))_{i=1}^r\big)\big\|_{M_n(\CA)} =\big\|p\big((x_i)_{i=1}^r\big)\big\| _{\CA\otimes_{\min}C^*((x_i)_{i=1}^r,\unit_{\CB})}.$$ We consider only the case $r\in\N$. The case $r=\infty$ is proved similarly. By Theorem \[thm6-1\] we can for each $\omega\in\Omega\backslash N$ define a unital embedding $\pi_\omega$ of $C^*(x_1,\dots,x_r,\unit_{\CB})$ into $\prod_n M_n(\C)/\sum_n M_n(\C)$, such that $$\pi_\omega(x_i) = \rho\big(\big(X_i^{(n)}(\omega)\big)^\infty_{n=1}\big),\quad i=1,\dots,r,$$ where $\rho\colon\prod_n M_n(\C)\to\prod_n M_n(\C)/\sum_n M_n(\C)$ is the quotient map. Since $\CA$ is exact, we can, by , consider $\id_\CA\otimes\pi_\omega$ as a unital embedding of $\CA\mathop{\otimes}_{\min}C^*(x_1,\dots,x_r,\unit_{\CB})$ into $\prod_n M_n(\CA)/\sum_n M_n(\CA)$, for which $$(\id_\CA\otimes\pi_\omega)(a\otimes x_i)=\tilde{\rho}\big(\big(a\otimes X_i^{(n)}(\omega)\big)^\infty_{n=1}\big), \quad i=1,\dots,r,$$ where $\tilde{\rho}\colon\prod_n M_n(\CA)\to\prod_n M_n(\CA)/\sum_n M_n(\CA)$ is the quotient map.
Hence, for every $p$ in $\CA\otimes\C\<X_1,\dots,X_r\>$, $$(\id_\CA\otimes\pi_\omega)\big(p(x_1,\dots,x_r)\big) =\tilde{\rho}\big(\big( p(X_1^{(n)}(\omega),\dots,X_r^{(n)}(\omega))\big)^\infty_{n=1}\big).$$ By it follows that for all $\omega\in\Omega\backslash N$, and every $p$ in $\CA\otimes\C\<X_1,\dots,X_r\>$, $$\big\|p(x_1,\dots,x_r)\big\| _{\CA\otimes_{\min}C^*(x_1,\dots,x_r,\unit_{\CB})} = \limsup_{n\to\infty}\big\|p\big(X_1^{(n)}(\omega), \dots,X_r^{(n)}(\omega)\big)\big\|_{M_n(\CA)}.$$ Consider now a fixed $\omega\in\Omega\backslash N$. Put $$\alpha = \liminf_{n\to\infty}\big\|p\big(X_1^{(n)}(\omega), \dots,X_r^{(n)}(\omega)\big)\big\|_{M_n(\CA)},$$ and choose natural numbers $n_1<n_2<n_3<\cdots$, such that $$\alpha = \lim_{k\to\infty}\big\|p\big(X_1^{(n_k)}(\omega), \dots,X_r^{(n_k)}(\omega)\big)\big\|_{M_{n_k}(\CA)}.$$ By Theorem \[thm6-1\] there is a unital embedding $\pi'_\omega$ of $C^*(x_1,\dots,x_r,\unit_{\CB})$ into the quotient $\prod_k M_{n_k}(\C)/\sum_k M_{n_k}(\C)$, such that $$\pi'_\omega(x_i)=\rho'\big(\big(X_i^{(n_k)}(\omega)\big)^\infty_{k=1}\big), \quad i=1,\dots,r,$$ where $\rho'\colon\prod_k M_{n_k}(\C)\to \prod_k M_{n_k}(\C)/\sum_k M_{n_k}(\C)$ is the quotient map. Using instead of , we get, as above, that $$\begin{aligned} \|p(x_1,\dots,x_r)\|_{\CA\otimes_{\min}C^*(x_1,\dots,x_r,\unit_{\CB})} &=& \limsup_{k\to\infty} \big\|p\big(X_1^{(n_k)}(\omega), \dots,X_r^{(n_k)}(\omega)\big)\big\|_{M_{n_k}(\CA)}\\[.2cm] &=& \alpha\\[.2cm] &=& \liminf_{n\to\infty} \big\|p\big(X_1^{(n)}(\omega), \dots,X_r^{(n)}(\omega)\big)\big\|_{M_n(\CA)}.\end{aligned}$$ This completes the proof of . \[thm8-2\] Let $(\Omega,\CF,P)$, $(U_i^{(n)})_{i=1}^r$, $\pi_{n,\omega},\lambda$ and $N$ be as in Theorem \[unitaer version\]. Then for every unital exact $C^*$-algebra $\CA$, every function $f\colon F_r\to \CA$ with finite support (i.e., $f$ is in the algebraic tensor product $\CA\otimes\C F_r$), and for every $\omega\in\Omega\backslash N$, $$\lim_{n\to\infty}\Big\|\sum_{\gamma\in F_r}f(\gamma)\otimes \pi_{n,\omega}(\gamma)\Big\|_{M_n(\CA)} =\Big\|\sum_{\gamma\in F_r}f(\gamma)\otimes\lambda(\gamma)\Big\| _{\CA\otimes_{\min}\Cred(F_r)}.$$ This follows from Theorem \[unitaer version\] in the same way as Theorem \[thm8-1\] follows from Theorem \[thm6-1\], so we leave the details of the proof to the reader. In Corollary \[cor8-3\] below we use Theorem \[thm8-1\] to give new proofs of two key results from our previous paper [@HT2]. As in [@HT2] we denote by $\GRM(n,n,\sigma^2)$ or $\GRM(n,\sigma^2)$ the class of $n\times n$ random matrices $Y=(y_{ij})_{1\le i,j\le n}$, whose entries $y_{ij}$, $1\le i,j\le n$, are $n^2$ independent and identically distributed complex Gaussian random variables with density $(\pi\sigma^2)^{-1}\exp(-|z|^2/\sigma^2)$, $z\in\C$. It is elementary to check that $Y$ is in $\GRM(n,\sigma^2)$, if and only if $Y=\frac{1}{\sqrt{2}}(X_1+\i X_2)$, where $$X_1 = \frac{1}{\sqrt{2}}(Y+Y^*),\quad X_2 = \frac{1}{\i\sqrt{2}}(Y-Y^*)$$ are two stochastically independent self-adjoint random matrices from the class $\SGRM(n,\sigma^2)$. \[cor8-3\] Let $\CH,\CK$ be Hilbert spaces, let $c>0$, let $r\in\N$ and let $a_1,\dots,a_r\in\CB(\CH,\CK)$ such that $$\Big\|\sum_{i=1}^r a^*_ia_i\Big\|\le c\quad\mbox{and}\quad \Big\|\sum_{i=1}^r a_ia^*_i\Big\|\le 1,$$ and such that $\{a_i^*a_j\mid i,j=1,\dots,r\}\cup\{\unit_{\CB(\CH)}\}$ generates an exact $C^*$-algebra $\CA\subseteq \CB(\CH)$.
Assume further that $Y_1^{(n)},\dots,Y_r^{(n)}$ are stochastically independent random matrices from the class $\GRM(n,\frac 1n)$[,]{} and put $S_n=\sum^r_{i=1} a_i\otimes Y_i^{(n)}$. Then for almost all $\omega$ in the underlying probability space $\Omega$[,]{} $$\label{eq8-6} \limsup_{n\to\infty}\max\big\{\spe(S_n(\omega)^*S_n(\omega))\big\}\le (\sqrt{c}+1)^2.$$ If[,]{} furthermore[,]{} $c>1$ and $\sum^r_{i=1} a_i^*a_i=c\unit_{\CB(\CH)}$[,]{} then $$\label{eq8-7} \liminf_{n\to\infty}\min\big\{\spe(S_n(\omega)^*S_n(\omega))\big\} \ge(\sqrt{c}-1)^2.$$ By the comments preceding Corollary \[cor8-3\], we can write $$Y_i^{(n)} = \frac{1}{\sqrt{2}}(X_{2i-1}^{(n)}+\i X_{2i}^{(n)}),\quad i=1,\dots,r,$$ where $X_1^{(n)},\dots,X_{2r}^{(n)}$ are independent self-adjoint random matrices from$\SGRM(n,\frac 1n)$. Hence $S^*_nS_n$ is a second order polynomial in $(X_1^{(n)},\dots,X_{2r}^{(n)})$ with coefficient in the exact unital $C^*$-algebra $\CA$ generated by $\{a_i^*a_j\mid i,j=1,\dots,r\}\cup \{\unit_{\CB(\CH)}\}$. Hence, by Theorem \[thm8-1\], there is a $P$-null set $N$ in the underlying probability space $(\Omega,\CF,P)$ such that $$\lim_{n\to\infty}\|S^*_n(\omega)S_n(\omega)\| = \Big\|\Big(\sum^r_{i=1} a_i\otimes y_i\Big)^*\Big(\sum^r_{i=1} a_i\otimes y_i\Big)\Big\|,$$ where $y_i=\frac{1}{\sqrt{2}}(x_{2i-1}+\i x_{2i})$ and $(x_1,\dots,x_{2r})$ is any semicircular system in a $C^*$-probability space $(\CB,\tau)$ with $\tau$ faithful. Hence, in the terminology of [@V2], $(y_1,\dots,y_r)$ is a circular system with the normalization $\tau(y^*_iy_i)=1$, $i=1,\dots,r$. By [@V2], a concrete model for such a circular system is $$y_i = \ell_{2i-1}+\ell^*_{2i},\quad i=1,\dots,r$$ where $\ell_1,\dots,\ell_{2r}$ are the creation operators on the full Fock space $$\CT=\CT(\CL)=\C\oplus \CL\oplus \CL^{\otimes 2}\oplus\cdots$$ over a Hilbert space $\CL$ of dimension $2r$, and $\tau$ is the vector state given by the unit vector $1\in\C\subseteq\CT(\CL)$. Moreover, $\tau$ is a faithful trace on the $C^*$-algebra $\CB=C^*(y_1,\dots,y_{2r},\unit_{\CB(\CT(\CL))})$. The creation operators $\ell_1,\dots,\ell_{2r}$ satisfy $$\ell_i^*\ell_j = \begin{cases} 1, &\textrm{if} \ i=j, \\ 0, &\textrm{if} \ i\ne j. \end{cases}$$ Hence, we get $$\sum^r_{i=1} a_i\otimes y_i = \Big(\sum^r_{i=1} a_i\otimes\ell_{2i-1}\Big)+\Big(\sum^r_{i=1} a_i\otimes\ell_{2i}^*\Big)=z+w,$$ where $$z^*z = \Big(\sum^r_{i=1} a_i^*a_i \Big)\otimes \unit_{\CB(\CT)} \quad\mbox{and}\quad ww^*=\Big(\sum_{i=1}^ra_ia^*_i\Big)\otimes\unit_{\CB(\CT)}.$$ Thus, $$\Big\| \sum^r_{i=1} a_i\otimes y_i\Big\| \le \|z\|+\|w\| \le \Big\|\sum^r_{i=1} a_i^*a_i\Big\|^\frac12 + \Big\|\sum^r_{i=1} a_ia_i^*\Big\|^\frac12 \le \sqrt{c}+1.$$ This proves . If, furthermore, $c>1$ and $\sum^r_{i=1} a^*_ia_i=c\cdot\unit_{\CB(\CH)}$, then $z^*z=c\unit_{\CA\otimes\CB(\CT)}$ and, as before, $\|w\|\le1$. Thus, for all $\xi\in \CH\otimes\CT$, $\|z\xi\|=\sqrt{c}\|\xi\|$ and $\|w\xi\|\le\|\xi\|$. 
Hence $$(\sqrt{c}-1)\|\xi\|\le\|(z+w)\xi\|\le(\sqrt{c}+1)\|\xi\|,\quad (\xi\in \CH\otimes\CT),$$ which is equivalent to $$(\sqrt{c}-1)^2\unit_{\CB(\CH\otimes\CT)}\le(z+w)^*(z+w)\le(\sqrt{c}+1)^2 \unit_{\CB(\CH\otimes\CT)},$$ i.e., $$-2\sqrt{c} \unit_{\CB(\CH\otimes\CT)}\le (z+w)^*(z+w)-(c+1) \unit_{\CB(\CH\otimes\CT)}\le 2\sqrt{c} \unit_{\CB(\CH\otimes\CT)},$$ and therefore $$\label{eq8-8} \big\|(z+w)^*(z+w)-(c+1)\unit_{\CB(\CH\otimes\CT)}\big\|\le 2\sqrt{c}.$$ Since $S_n^*S_n$ is a second order polynomial in $(X_1^{(n)},\dots,X_{2r}^{(n)})$ with coefficients in $\CA$, the same holds for $S_n^*S_n-(c+1)\unit_{M_n(\CA)}$. Hence, by Theorem \[thm8-1\] and , $$\begin{aligned} &&\hskip-.75in \lim_{n\to\infty}\big\|S_n(\omega)^* S_n(\omega)-(c+1)\unit_{M_n(\CA)}\big\| \\[.2cm] &&= \Big\|\Big(\sum_{i=1}^ra_i\otimes y_i\Big)^*\Big(\sum_{i=1}^ra_i\otimes y_i\Big)-(c+1)\unit_{\CB(\CH\otimes\CT)}\Big\|\\[.2cm] &&\le 2\sqrt{c}.\end{aligned}$$ Therefore, $\liminf_{n\to\infty} \min\{\spe(S_n(\omega)^*S_n(\omega))\}\ge(c+1)-2\sqrt{c}$, which proves . \[rem8-4\] The condition that $\{a_i^*a_j\mid i,j=1,\dots,r\}\cup\{\unit_{\CB(\CH)}\}$ generates an exact $C^*$-algebra is essential for Corollary \[cor8-3\] and hence also for Theorem \[thm8-1\]. Both and are false in the general nonexact case (cf. [@HT2 Prop. 4.9] and [@HT3]). We turn next to a result about the constants $C(r)$, $r\in\N$, introduced by Junge and Pisier in connection with their proof of $$\label{eq8-9} \CB(\CH)\mathop{\otimes}_{\max}\CB(\CH)\ne \CB(\CH)\mathop{\otimes}_{\min} \CB(\CH).$$ \[def8-5\] For $r\in\N$, let $C(r)$ denote the infimum of all $C\in\R_+$ for which there exists a sequence of natural numbers $(n(m))_{m=1}^\infty$ and a sequence of $r$-tuples of $n(m)\times n(m)$ unitary matrices $$(u_1^{(m)},\dots,u_r^{(m)})^\infty_{m=1}$$ such that for all $m,m'\in\N$, $m\ne m'$, $$\label{eq8-10} \big\|\sum^r_{i=1} u_i^{(m)}\otimes\bar{u}_i^{(m')}\big\| \le C,$$ where $\bar{u}_i^{(m')}$ is the unitary matrix obtained by complex conjugation of the entries of $u_i^{(m')}$. To obtain , Junge and Pisier proved that $\lim_{r\to\infty} \frac{C(r)}{r}=0$. Subsequently, Pisier [@P3] proved that $C(r) \ge 2\sqrt{r-1}$ for all $r\ge 2$. Moreover, using Ramanujan graphs, Valette [@V] proved that $C(r)=2\sqrt{r-1}$, when $r=p+1$ for an odd prime number $p$. From Theorem \[thm8-2\] we obtain \[cor8-6\] $C(r)=2\sqrt{r-1}$ for all $r\in\N$, $r\ge 2$. Let $r\ge 2$, and let $g_1,\dots,g_r$ be the free generators of $F_r$ and let $\lambda$ denote the left regular representation of $F_r$ on $\ell^2(F_r)$. Recall from [@P3 Formulas (4) and (7)] that $$\label{eq8-11} \Big\|\sum^r_{i=1} \lambda(g_i)\otimes v_i\Big\| = 2\sqrt{r-1}$$ for all unitaries $v_1,\dots,v_r$ on a Hilbert space $\CH$. Let $C> 2\sqrt{r-1}$. We will construct natural numbers $(n(m))^\infty_{m=1}$ and $r$-tuples of $n(m)\times n(m)$ unitary matrices $$(u_1^{(m)},\dots,u_r^{(m)})^\infty_{m=1}$$ such that holds for $m,m'\in\N$, $m\ne m'$. Note that by symmetry it is sufficient to check for $m'<m$. Put first $$n(1)=1\quad\mbox{and}\quad u_1^{(1)}=\cdots = u_r^{(1)}=1.$$ Proceeding by induction, let $M\in\N$ and assume that we have found $n(m)\in\N$ and $r$-tuples of $n(m)\times n(m)$ unitaries $(u_1^{(m)},\dots,u_r^{(m)})$ for $2\le m\le M$, such that holds for $1\le m' < m\le M$. By , $$\Big\|\sum^r_{i=1} \lambda(g_i)\otimes\bar{u}_i^{(m)}\Big\| = 2\sqrt{r-1},$$ for $m=1,2,\dots,M$.
Applying Theorem \[thm8-2\] to the exact $C^*$-algebras $\CA_{m'} = M_{n(m')}(\C)$, $m'=1,\dots,M$, we have $$\lim_{n\to\infty} \Big\|\sum^r_{i=1} \pi_{n,\omega}(g_i)\otimes\bar{u}_i^{(m')}\Big\|=2\sqrt{r-1}<C, \quad (m'=1,2,\dots,M),$$ where $\pi_{n,\omega}\colon F_r\to\CU(n)$ are the group homomorphisms given by . Hence, we can choose $n\in\N$ such that $$\Big\|\sum^r_{i=1} \pi_{n,\omega}(g_i)\otimes\bar{u}_i^{(m')}\Big\|< C,\quad m'=1,\dots,M.$$ Put $n(M+1)=n$ and $u_i^{(M+1)}=\pi_{n,\omega}(g_i)$, $i=1,\dots,r$. Then is satisfied for all $m,m'$ for which $1\le m'<m\le M+1$. Hence, by induction we get the desired sequence of numbers $n(m)$ and $r$-tuples of $n(m)\times n(m)$ unitary matrices. We close this section with an application of Theorem \[thm6-1\] to powers of random matrices: \[cor8-7\] Let for each $n\in\N$ $Y_n$ be a random matrix in the class $\GRM(n,\frac 1n)$[,]{} i.e.[,]{} the entries of $Y_n$ are $n^2$ independent and identically distributed complex Gaussian variables with density $\frac{n}{\pi} \e^{-n|z|^2}$[,]{} $z\in\C$. Then for all $p\in\N$ $$\lim_{n\to\infty} \|Y_n(\omega)^p\| = \bigg(\frac{(p+1)^{p+1}}{p^p}\bigg)^\frac12,$$ for almost all $\omega$ in the underlying probability space $\Omega$. By the remarks preceding Corollary \[cor8-3\], we have $$(Y_n)^p = \bigg(\frac{1}{\sqrt{2}}\big(X_1^{(n)}+\i X_2^{(n)}\big)\bigg)^p,$$ where, for each $n\in\N$, $X_1^{(n)},X_2^{(n)}$ are two independent random matrices from $\SGRM(n,\frac 1n)$. Hence, by Theorem \[thm6-1\], we have for almost all $\omega\in\Omega$: $$\lim_{n\to\infty}\|Y_n(\omega)^p\|=\|y^p\|,$$ where $y=\frac{1}{\sqrt{2}}(x_1+\i x_2)$, and $\{x_1,x_2\}$ is a semicircular system in a $C^*$-probability space $(\CB,\tau)$ with $\tau$ faithful. Hence, $y$ is a circular element in $\CB$ with the standard normalization $\tau(y^*y)=1$. By [@La Prop. 4.1], we therefore have $\|y^p\|=((p+1)^{p+1}/p^p)^\frac12$. \[rem8-7\] For $p=1$, Corollary \[cor8-7\] is just the complex version ofGeman’s result [@Ge] for square matrices (see [@Ba Thm. 2.16] or [@HT1 Thm. 7.1]), but for $p\ge 2$ the result is new. In [@We Example 1, p.125], Wegmann proved that the empirical eigenvalue distribution of $(Y^p_n)^*Y^p_n$ converges almost surely to a probability measure $\mu_p$ on $\R$ with $$\max(\supp(\mu_p)) = \frac{(p+1)^{p+1}}{p^p}.$$ This implies that for all $\varepsilon>0$, the number of eigenvalues of $(Y^p_n)^*Y^p_n$, which are larger than $(p+1)^{p+1}/p^p+\varepsilon$, grows slower than $n$, as $n\to\infty$ (almost surely). Corollary \[cor8-7\] shows that this number is, in fact, eventually 0 as $n\to\infty$ (almost surely). , A $C^*$-algebra $A$ for which $\mathrm{Ext}(A$) is not a group, [*Ann. of Math*]{}. [**107**]{} (1978), 455–458. , On the asymptotic distribution of the eigenvalues of random matrices, [*J. Math. Anal. Appl*]{}. [**20**]{} (1967), 262–268. , Notes on extensions of $C^*$-algebras, [ *Duke Math. J.*]{} [**44**]{} (1977), 329–355. , Methodologies in spectral analysis of large dimensional random matrices, A review, [*Statistica Sinica*]{} [**9**]{} (1999), 611–677. , and P.  , Unitary equivalence modulo the compact operators and extensions of $C^*$-algebras, [*Proc. Conf. on Operator Theory*]{}, [*Lecture Notes in Math*]{}. [**34**]{} (1973), 58–128, [*Springer-Verlag*]{}, New York. , Extensions of $C^*$-algebras and $K$-homology, [*Ann. of Math*]{}. [**105**]{} (1977), 265–324. , A generalized Poincaré inequality for Gaussian measures, [*Proc. Amer. Math. Soc*]{}. [**105**]{} (1989), 397–400. 
and , Generalized inductive limits of finite dimensional $C^*$-algebras, [*Math. Ann*]{}. [**307**]{} (1997), 343–380. , [*Probability*]{}, [*Classics in Applied Mathematics*]{} [**7**]{}, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA (1992). and , Necessary and sufficient conditions for almost sure convergence of the largest eigenvalue of a Wigner matrix, [*Ann. of Probab*]{}. [**16**]{} (1988), 1729–1741. and , The completely positive lifting problem for $C^*$-algebras, [*Ann. of Math*]{}. [**104**]{} (1976), 585–609. , A note on an inequality involving the normal distribution, [*Ann. Probab*]{}. [**9**]{} (1981), 533–535. , An inequality for the multivariate normal distribution, [*J. Multivariate Anal*]{}. [**12**]{} (1982), 306–315. , The distance between unitary orbits of normal operators, [*Acta Sci. Math*]{}. [**50**]{} (1986), 213–223. and , Multipliers of the Fourier algebras of some simple Lie groups and their discrete subgroups, [*Amer. J. Math*]{}. [**107**]{} (1984), 455–500. and , Lifting problems and local reflexivity for $C^*$-algebras, [*Duke Math. J*]{}. [**52**]{} (1985), 103–128. , [*Real Analysis*]{}, [*Modern Techniques and their Applications*]{}, [*Pure and Applied Mathematics*]{}, [*John Wiley and Sons*]{}, New York (1984). , A limit theorem for the norm of random matrices, [*Annals Prob*]{}. [**8**]{} (1980), 252–261. and , Asymptotic freeness almost everywhere for random matrices, [*Acta Sci. Math.*]{} (Szeged) [**66**]{} (2000), 809–834. and , Random matrices with complex Gaussian entries, [*Expo. Math*]{}. [**21**]{} (2003), 293–337. , Random matrices and K-theory for exact $C^*$-algebras, [*Documenta Math*]{}. [**4**]{} (1999), 330–441. , Random matrices and nonexact $C^*$-algebras, in [*$C^*$-algebras*]{} (J. Cuntz, S. Echterhoff, eds.), 71–91, Springer-Verlag, New York (2000). and , Bilinear forms on exact operator spaces and $B(H)\otimes B(H)$, [*Geom. Funct. Anal*]{}. [**5**]{} (1995), 329–363. , The Fubini theorem for Exact $C^*$-algebras, [*J. Operator Theory*]{} [**10**]{} (1983), 3–8. , On nonsemisplit extensions, tensor products and exactness of group $C^*$-algebras, [*Invent. Math*]{}. [**112**]{} (1993), 449–489. and , [*Fundamentals of the Theory of Operator Algebras*]{}, Vol. II, [*Academic Press*]{}, New York (1986). , Powers of $R$-diagonal elements, [*J. Operator Theory*]{} [**47**]{} (2002), 197–212. , Computing norms of free operators with matrix coefficients, [*Amer. J. Math*]{}. [**121**]{} (1999), 453–486. , [*Random Matrices*]{}, second edition, Academic Press, New York (1991). , A simple approach to global regime of random matrix theory, in [*Mathematical Results in Statistical Mechanics*]{} (Marseilles, 1998) (S. Miracle-Sole, J. Ruiz and V. Zagrebnov, eds.), 429–454, World Sci. Publishing, River Edge, NJ (1999). V, [*Completely Bounded Maps and Dilations*]{}, [*Pitman Research Notes in Mathematics*]{} [**146**]{}, Longman Scientific & Technical, New York (1986). , [*Analysis Now*]{}, [*Grad. Texts in Math*]{}. [**118**]{}, Springer-Verlag, New York (1989). , [*The Volume of Convex Bodies and Banach Space Geometry*]{}, Cambridge Univ. Press, Cambridge (1989). , A simple proof of a Theorem of Kirchberg and related results on $C^*$-norms, [*J. Operator Theory*]{} [**35**]{} (1996), 317–335. , Quadratic forms in unitary operators, [*Linear Algebra and its Appl*]{}. [**267**]{} (1997), 125–137. M. R[ø]{}rdam, F. Larsen, and N.J. Laustsen, [*An Introduction to $K$-theory for $C^*$-algebras*]{}, [Cambridge Univ.
Press]{}, Cambridge (2000). , Quasidiagonality and nuclearity (appendix to strongly quasidiagonal operators by D. Hadwin), [*J. Operator Theory*]{} [**18**]{} (1987), 15–18. , Mixed moments of Voiculescu’s Gaussian random matrices, [*J. Funct. Anal*]{}. [**176**]{} (2000), 213–246. , An application of Ramanujan graphs to $C^*$-algebra tensor products, [*Discrete Math*]{}. [**167**]{} (1997), 597–603. , A non-commutative Weyl-von Neumann theorem, [*Rev. Roum. Pures et Appl*]{}. [**21**]{} (1976), 97–113. , Symmetries of some reduced free group $C^*$-algebras, in [*Operator Algebras and Their Connections with Topology and Ergodic Theory*]{}, [*Lecture Notes in Math*]{}. [**1132**]{}, Springer-Verlag, New York (1985), 556–588. , Circular and semicircular systems and free product factors, in [*Operator Algebras*]{}, [*Unitary Representations*]{}, [*Algebras, and Invariant Theory*]{} (Paris, 1989), [*Progress in Math*]{}. [**92**]{}, Birkhäuser Boston, Boston, MA (1990), 45–60. , Limit laws for random matrices and free products, [*Invent. Math*]{}. [**104**]{} (1991), 202–220. , A note on quasi-diagonal $C^*$-algebras and homotopy, [*Duke Math. J*]{}. [**62**]{} (1991), 267–271. , Around quasidiagonal operators, [*Integral Equations and Operator Theory*]{} [**17**]{} (1993), 137–149. , Operations on certain noncommutative random variables, in [*Recent Advances in Operator Algebras*]{} (Orléans 1992), [*Astérisque*]{} [**232**]{} (1995), 243–275. , Free probability theory: Random matrices and von Neumann algebras, [*Proc. of the International Congress of Mathematicians*]{} (Zürich 1994), Vol. 1, Birkhäuser, Basel (1995), 227–241. , Free entropy, [*Bull. London Math. Soc.*]{} [**34**]{} (2002), 257–278. , and , [*Free Random Variables*]{}, [*CMR Monograph Series*]{} [**1**]{}, A. M. S., Providence, RI (1992). , The asymptotic eigenvalue-distribution for a certain class of random matrices, [*J. Math. Anal. Appl*]{}. [**56**]{} (1976), 113–132. , Characteristic vectors of bordered matrices with infinite dimensions, [*Ann. of Math*]{}. [**62**]{} (1955), 548–564.
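The limit in Corollary \[cor8-7\] is straightforward to probe numerically. The following is a minimal sketch, not part of the original argument: it assumes a Python/NumPy environment, and the matrix size and random seed are arbitrary choices, so for finite $n$ the computed norm only approximates the almost-sure limit.

```python
# Numerical illustration of Corollary [cor8-7]: for Y_n in GRM(n, 1/n),
# ||Y_n^p|| -> ((p+1)^(p+1) / p^p)^(1/2) almost surely as n -> infinity.
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # finite-size proxy for the n -> infinity limit

# Entries are i.i.d. complex Gaussian with density (n/pi) exp(-n|z|^2),
# i.e. real and imaginary parts are independent N(0, 1/(2n)).
Y = (rng.normal(scale=np.sqrt(0.5 / n), size=(n, n))
     + 1j * rng.normal(scale=np.sqrt(0.5 / n), size=(n, n)))

for p in (1, 2, 3):
    observed = np.linalg.norm(np.linalg.matrix_power(Y, p), ord=2)  # largest singular value
    predicted = np.sqrt((p + 1) ** (p + 1) / p ** p)
    print(f"p={p}: ||Y^p|| ~ {observed:.3f}, limiting value {predicted:.3f}")
```

For $p=1$ the limiting value is $2$, recovering Geman's result; for $p=2$ and $p=3$ it is approximately $2.598$ and $3.079$.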
--- abstract: 'The discovery of lithium-rich giants contradicts expectations from canonical stellar evolution. Here we report on the serendipitous discovery of 20 Li-rich giants observed during the [[*Gaia-ESO Survey*]{}]{}, which includes the first nine Li-rich giant stars known towards the [[*CoRoT*]{}]{} fields. Most of our Li-rich giants have near-solar metallicities, and stellar parameters consistent with being before the luminosity bump. This is difficult to reconcile with deep mixing models proposed to explain lithium enrichment, because these models can only operate at later evolutionary stages: at or past the luminosity bump. In an effort to shed light on the Li-rich phenomenon, we highlight recent evidence of the tidal destruction of close-in hot Jupiters at the sub-giant phase. We note that when coupled with models of planet accretion, the observed destruction of hot Jupiters actually predicts the existence of Li-rich giant stars, and suggests Li-rich stars should be found early on the giant branch and occur more frequently with increasing metallicity. A comprehensive review of all known Li-rich giant stars reveals that this scenario is consistent with the data. However more evolved or metal-poor stars are less likely to host close-in giant planets, implying that their Li-rich origin requires an alternative explanation, likely related to mixing scenarios rather than external phenomena.' author: - | A. R. Casey$^1$[^1], G. Ruchti$^2$, T. Masseron$^1$, S. Randich$^3$, G. Gilmore$^1$, K. Lind$^{4,5}$,G. M. Kennedy$^1$, S. E. Koposov$^{1}$, A. Hourihane$^1$, E. Franciosini$^3$, J. R. Lewis$^{1}$,L. Magrini$^{3}$, L. Morbidelli$^3$, G. G. Sacco$^3$, C. C. Worley$^1$, S. Feltzing$^2$, R. D. Jeffries$^{6}$, A. Vallenari$^{7}$, T. Bensby$^2$, A. Bragaglia$^{8}$, E. Flaccomio$^{9}$, P. Francois$^{10}$, A. J. Korn$^5$, A. Lanzafame$^{11}$, E. Pancino$^{3,12}$, A. Recio-Blanco$^{13}$,R. Smiljanic$^{14}$, G. Carraro$^{15}$, M. T. Costado$^{16}$, F. Damiani$^{9}$, P. Donati$^{8,17}$, A. Frasca$^{18}$, P. Jofré$^1$, C. Lardo$^{19}$, P. de Laverny$^{13}$, L. Monaco$^{20}$, L. Prisinzano$^{9}$,L. Sbordone$^{21,22}$, S. G. Sousa$^{23}$, G. Tautvaišienė$^{24}$, S. Zaggia$^{7}$, T. Zwitter$^{25}$, E. Delgado Mena$^{23}$, Y. Chorniy$^{24}$, S. L. Martell$^{26}$, V. Silva Aguirre$^{27}$, A. Miglio$^{28}$, C. Chiappini$^{29}$, J. Montalban$^{30}$, T. Morel$^{31}$, M. Valentini$^{29}$\ Affiliations can be found at the end of this Article. date: 'Accepted 2016 XX XX. Received 2016 February XX; in original form 2016 May XX' title: 'The Gaia-ESO Survey: Revisiting the Li-rich giant problem' --- === === \[firstpage\] stars: abundances Introduction {#sec:introduction} ============ Lithium is a fragile element which cannot be easily replenished. Given its fragility, canonical stellar evolution models predict that a star’s Li abundance should decrease as it ascends the giant branch. Observations since @Bonsack_1959 have repeatedly confirmed these predictions. However, Population I stars show lithium abundances approximately ten times higher than older Population II stars, implying some kind of Galactic lithium enrichment. More puzzlingly, there exists a growing number of giant stars with lithium abundances that are near to, or exceed Big Bang nucleosynthesis predictions. Although rare, these stars constitute a fundamental outstanding problem for stellar evolution. Stellar evolution theory suggests that the depth of the convective envelope increases when a star leaves the main-sequence. 
In doing so, the star experiences first dredge-up: material from deep internal layers is mixed towards the surface [@Iben_1967a; @Iben_1967b]. The inner material is hot enough that Li has been destroyed, therefore first dredge up dilutes the surface Li abundance. Consequently, stellar evolution theory predicts the observable Li abundance should be $\sim$1.5 dex lower for evolved stars than their main-sequence counterparts. [e.g., @Iben_1967a; @Lagarde_2012] Stars on the upper red giant branch (RGB) may be even more depleted in Li due to mixing occurring just after the RGB bump [@Sweigart_Mengel_1979; @Charbonnel_1994; @Charbonnel_1995]. Other changes to surface abundances are also predicted: increases in $^4$He, $^{14}$N, $^{13}$C, and decreases in $^{12}$C [@Iben_1964; @Chaname_2005; @Charbonnel_2006; @Charbonnel_Lagarde_2010; @Karakas_2010; @Lattanzio_2015]. Detailed observations have repeatedly provided convincing evidence of these predictions [e.g., @Lambert_1980; @Spite_1982; @Gratton_2000; @Lind_2009b; @Mucciarelli_2012; @Tautvaisiene_2013]. The existence of Li-rich (A(Li) $\gtrsim 2$) giant stars implies an additional mechanism that produces and/or preserves surface Li. This process may be internal or external. In the right conditions stars can produce Li internally through the Cameron-Fowler mechanism [@Cameron_Fowler_1971]: $^3{\rm He}(\alpha,\gamma)^7{\rm Be}(e^-,\nu)^7{\rm Li}$. The temperature must be hot enough for $^7$Be to be produced, but $^7$Be must be quickly transported towards cooler regions so that fresh $^7$Li can be created without being immediately destroyed by proton capture. The Cameron-Fowler mechanism can operate in red giants in two different stages. During hot bottom burning (HBB), the bottom of the convective envelope is hot enough for $^7$Be production. The convection carries $^7$Be to cooler regions where it can capture an electron to produce $^7$Li. In the absence of HBB, a radiative zone exists between the shell and the convective envelope. A mechanism is then required to mix material down to the outer part of the shell – where temperatures are high enough to produce $^7$Be – and then fresh $^7$Be must be mixed across the radiative zone to the convective envelope. The mechanism for mixing through the radiative zone is under debate, but various mechanisms are collectively referred to as ‘deep-mixing’ or ‘extra-mixing’. Moreover, because the conditions required to produce $^7$Li are also sufficient to destroy it (e.g., by mixing fresh $^7$Li back to hotter regions), the level of subsequent Li-enhancement due to extra mixing is critically sensitive to the mixing speed, geometry, and episodicity [e.g., @Sackmann_Boothroyd_1999]. Several scenarios have been proposed to reconcile the existence of Li-rich giant stars, including ones that aim to minimise the amount of partial burning (i.e., preserve existing Li). However using *Hipparcos* parallaxes [@van_Leeuwen_2007] and stellar tracks to precisely estimate stellar masses and evolutionary states, @Charbonnel_2000 highlight fifteen Li-rich giants where Li preservation is insufficient: a Li-*production* mechanism is required to match the data. While precise, Li abundance measurements can be limited by the stellar tracks employed (e.g., including the horizontal or asymptotic branch for low-mass stars), emphasising the need for accurate knowledge of the evolutionary status. @Charbonnel_2000 propose two distinct episodes of Li production that depend on the stellar mass. 
For low-mass RGB stars at the bump in the luminosity function, the outward-moving hydrogen shell burns through the mean molecular weight discontinuity produced during first dredge up, enabling extra mixing and facilitating the Cameron-Fowler mechanism. However in intermediate-mass stars, the composition discontinuity is not destroyed until after the star begins core He burning. For this reason extra mixing can only be induced in intermediate-mass asymptotic giant branch (AGB) stars when the convective envelope deepens at the base of the AGB. While these scenarios explain the necessary internal conditions required to produce and transport Li to the photosphere, they do not speculate on the actual mechanism that drives the mixing [however see @Charbonnel_Zahn_2007]. @Palacios_2006 have shown that rotation alone is insufficient to produce the observed Li abundances, implying that an additional mechanism is required to induce the extra mixing. Thermohaline mixing has been proposed as a mechanism to drive extra mixing at the bump in the giant branch luminosity function. In addition to removing any existing molecular weight gradient, an inversion in the molecular weight gradient is produced, which drives thermohaline mixing [@Eggleton_2006]. In contrast, @Denissenkov_2003 incorporate diffusion and shear-driven mixing to facilitate extra-mixing in low-mass RGB stars. Their prescription relies on main-sequence stars (as the precursors of upper RGB stars) to possess rapidly rotating radiative cores. Instead of encouraging interactions between different mass shells [e.g., @Charbonnel_2000], @Denissenkov_2003 require the specific angular momentum to be conserved in each shell during the star’s evolution. This situation would therefore permit a reservoir of angular momentum which could later induce deep mixing. @Palacios_2001 proposed that internal instabilities occurring near the luminosity bump were sufficient to produce additional Li. Specifically, internally-produced $^7$Be could be transported to a nearby convective region where $^7$Li is produced, but immediately destroyed by proton capture. In effect, a thin burning layer of Li is created, where $^7{\rm Li}(p,\alpha)\alpha$ becomes the dominant reaction, increasing the local temperature and the level of meridional circulation. The molecular weight gradient is eventually destroyed, allowing for deep mixing to occur. While promising, this scenario requires an arbitrary and substantially large change in diffusion rates. A significant amount of mass-loss is expected as a consequence of this scenario, as well as an excess in infrared colours. Given extensive investigations into the (lack of) association between far infrared excesses and Li-rich giants, it would appear this scenario may be unlikely, unless the infrared excess phase is short [@Rebull_2015; @de_la_Reza_2015]. The extra mixing required may be induced by external phenomena. The ingestion of a massive planet or brown dwarf would contribute significant angular momentum to the system, producing additional Li before it is destroyed by convection [@Alexander_1967; @Siess_Livio_1999a; @Siess_Livio_1999b; @Denissenkov_Weiss_2000; @Denissenkov_Herwig_2004; @Carlberg_2010]. In this scenario the planet is assumed to be dissipated at the base of the convective envelope of a giant star, causing the star to substantially expand in size. If the accretion rate is high, HBB can be triggered. 
The predicted observational signatures vary depending on the accretion rate and the ingestion angle of the planet/dwarf. However, the predicted observables include increased mass loss and/or the ejection of a shell (and therefore a subsequent phase of infrared emission), an increase in the $^7$Li surface abundance, potential stellar metallicity enrichment, possibly increased rotational velocity due to the transfer of angular momentum, and less discernible effects such as the generation of magnetic fields [however see @Lebre_2009] or changes to the morphology of the horizontal branch. @Siess_Livio_1999a [@Siess_Livio_1999b] argue that the planet/dwarf star accretion scenario is not limited to a single evolutionary stage, allowing for Li-rich giants to exist on the red giant and the asymptotic giant branch. It can also advantageously explain stars with either high or low rotational velocities, depending on the extent that magnetic braking has influenced spin-down. However, there has been no discussion in the literature on how this scenario alone relates to why Li-rich giants tend to appear more frequently just below the RGB bump (e.g., see Figure \[fig:literature\]). Similarly, there has been no discussion of links between Li-rich giant stars and the properties or occurrence rates of exoplanet host stars. @Martin_1994 propose a novel external mechanism to reconcile observations of Li-rich giant stars. High Li abundances detected in the secondaries of a stellar-mass black hole [@Martin_1992] and neutron star [@Martin_1994] candidates led to the postulation that Li could be produced during a supernova explosion [see also @Tajitsu_2015], or through $\alpha$-$\alpha$ reactions during strong outbursts from a transient X-ray binary system. These conditions could be sufficiently energetic to induce cosmic-ray spallation and produce Li [@Walker_1985]. Li would presumably be accreted to the edge of the convective envelope of the secondary thereby producing a Li-rich giant star, potentially at any stage across the RGB, with low rotational velocities. A consequence of Li spallation is that beryllium and boron would also be created. To date, no Li-rich giant star has been found to have Be enhancement [@De_Medeiros_1997; @Castilho_1999; @Melo_2005]. Finally, although no long-term radial velocity studies have been conducted, the non-detection of a white dwarf companion in the vicinity of a present-day Li-rich giant star weakens this idea. Observations have been key to guiding models that can explain Li-rich giant stars. Unfortunately most Li-rich giant stars are not distinguishable by their photometric colours, therefore they cannot be efficiently selected solely on the basis of photometry. Early observations of far-infrared colours showed that many Li-rich stars show far-infrared excesses [@de_La_Reza_1996; @de_La_Reza_1997], suggesting that the Li-rich phase was associated with a mass-loss event. However later K-giant selections based on far-infrared colour excesses did not reveal any new Li-rich stars [@Fekel_Watson_1998; @Jasniewicz_1999]. @Rebull_2015 studied this phenomenon extensively and revealed that the largest infrared excesses do indeed appear in Li-rich K giants [typically with fast rotation, see also @Fekel_Balachandran_1993], although very few Li-rich K giants show any infrared excess. @Kumar_2015 came to the same conclusion from a study of $\sim 2000$ K giants. 
Therefore, if mass-loss or dust shell production is a regular consequence of the Li-enrichment mechanism, the infrared excess phase must be short [@de_la_Reza_2015]. Discoveries of Li-rich giant stars have been slow relative to advances in modelling. Their sparsity is partly to blame: only 1% of slow-rotating K giant stars are Li-rich [although $\sim$50% of rapidly rotating K giants are Li-rich, see @Drake_2002; @Lebre_2006]. For this reason most discoveries have been reported individually, although they cover all major components of the Galaxy: towards the bulge [@McWilliam_1994; @Uttenthaler_2007; @Gonzalez_2009], the disk [@Monaco_2011], as well as plenty in the field[^2]. Li-rich giant stars have also been found in dwarf galaxies [@Kirby_2016], where the most metal-poor (${\rm [Fe/H]} \approx -2.8$) Li-rich giant star known has been found [@Kirby_2012]. Interestingly, despite large observational programs dedicated to obtaining high-quality spectra in clusters, fewer than ten Li-rich giants have been discovered in globular clusters [2 in NGC 362, M3-IV101, M5-V42, M68-A96, etc.; @Smith_1999; @DOrazi_2015; @Kraft_1999; @Carney_1998; @Ruchti_2011; @Kirby_2016], and just five in open clusters [NGC 7789-K301, Berkeley 21, M 67, Trumpler 5, and NGC 6819; @Pilachowski_1986; @Hill_1999; @Canto_Martins_2006; @Monaco_2014; @Twarog_2013; @Carlberg_2015 respectively][^3]. Because the mixing mechanisms required to produce Li-rich giant stars are sensitive to the evolutionary stage, asteroseismology is a promising field to distinguish proposed mixing scenarios. To date five Li-rich giant stars have been discovered in the [[*Kepler*]{}]{} field [@Martell_2013; @Silva_Aguirre_2014; @Jofre_2015; @Twarog_2013; @Carlberg_2015]. However only two have benefited from seismic information. One Li-rich giant star has been shown to host a He-burning core, suggesting that Li production may have occurred through non-canonical mixing at the RGB tip [@Kumar_2011], possibly during the helium flash [see also @Cassisi_2016]. In contrast, seismic data for the Li-rich star KIC 9821622 have shown that it does *not* host a He-burning core, and sits just before the luminosity bump on the giant branch [@Jofre_2015]. Clearly, a larger sample of Li-rich giant stars with detectable solar-like oscillations is needed. Large-scale spectroscopic surveys are ideal vehicles for increasing the sample of known Li-rich giant stars. In this [[*Article*]{}]{} we report the serendipitous discovery of 20 previously unknown Li-rich giants in the [[*Gaia-ESO Survey*]{}]{}. Four were observed with the UVES spectrograph, and the remainder using GIRAFFE. This constitutes one of the largest samples of Li-rich giant stars ever discovered. This [[*Article*]{}]{} is organised in the following manner. In Section \[sec:data\] we describe the data and analysis. We discuss the evolutionary stage and associated environments for all stars in our sample in Section \[sec:discussion\], before commenting on the likelihood of different Li production mechanisms. We conclude in Section \[sec:conclusions\]. Data & Analysis {#sec:data} =============== The [[*Gaia-ESO Survey*]{}]{} [@Gilmore_2012; @Randich_2013 ESO programs 188.B-3002 and 193.B-0936] is a $\sim$300-night program that simultaneously uses the UVES and GIRAFFE spectrographs [@Dekker_2000; @Pasquini_2000] on the Very Large Telescope in Chile to obtain high-resolution optical spectra for $>$100,000 stars in the Galaxy. Targets from all stellar populations are observed.
We searched the fourth internal data release (iDR4) of the [[*Gaia-ESO Survey*]{}]{} for giant stars with peculiarly high lithium abundances. We restricted our search to K-type giant stars with Li measurements (i.e., not upper limits) where A(Li LTE) $\gtrsim 2$. Our search revealed 4 bona fide Li-rich giant stars observed with UVES, and 16 observed with GIRAFFE. A cross-match of the *Survey* observing logs reveals these spectra were obtained in good seeing (0.6–0.9 arcsec) throughout 2013–2014. Standard data reduction procedures were employed, as detailed in @Sacco_2014 and @Lewis_2016. The S/N of the spectra ranges from $\approx$30 to $\approx$100. The [[*Gaia-ESO Survey*]{}]{} employs multiple analysis pipelines to produce a robust ensemble measurement of the stellar parameters (Table \[tab:stellar-parameters\]: $T_{\rm eff}$, $\log{g}$, ${\rm [M/H]}$) and detailed chemical abundances. The analysis of FGK-type stars within the *Survey* is split between different working groups (WGs): WG10 analyses FGK-type stars observed with GIRAFFE, WG11 analyses FGK-type stars observed with UVES [@Smiljanic_2014], and WG12 analyses pre-main-sequence candidates [@Lanzafame_2015] – irrespective of whether they were observed with GIRAFFE or UVES. Within each WG there are multiple analysis nodes. A node consists of a sufficiently distinct pipeline and expert spectroscopists who are familiar with the pipeline employed. All nodes provide estimates of the stellar parameters and/or detailed chemical abundances. For the [[*Gaia-ESO Survey*]{}]{} iDR4, there are up to six nodes for WG10, and eleven for WG11. There are some commonalities between the nodes. The MARCS 1D model atmospheres [@Gustafsson_2008] are used by all nodes, the same atomic line data [@Ruffoni_2014; @Heiter_2015] and solar abundances [@Grevesse_2007] are employed, and where relevant, the same grid of synthetic spectra is used. The WG10/GIRAFFE nodes are provided initial guesses of the stellar parameters from a pre-processing pipeline. The data reduction procedure also produces normalised spectra for all nodes; however, some nodes opted to repeat or redo the normalisation. The spectral analysis is performed in two consecutive stages. The stellar parameters reported by each node are homogenised to produce an ensemble measurement of stellar parameters for a given star. Those homogenised measurements are then returned to the nodes, at which point the detailed chemical abundances are calculated using the homogenised stellar parameters. Appropriate data are accounted for during the abundance determination of each line or element (e.g., hyperfine structure, the Fe 6707.4 Å blend for Li abundances, etc.). Individual abundances are subsequently homogenised, producing a single set of abundance measurements for all co-investigators of the *Survey* to use. In both stages (stellar parameters, chemical abundances), the homogenisation procedure identifies erroneous node measurements, accounts for the covariance between sources of measurements, and quantifies or minimises systematics present in the data. Most critically, the top-level homogenisation (performed by WG15) ensures that results from multiple WGs are on a consistent, comparable scale. Details of the analysis nodes, work structure and homogenisation procedure for the previous WG11 data release are presented in @Smiljanic_2014. A full description of the homogenisation procedure for UVES iDR4 data will be presented in @Casey_2016.
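To make the selection described at the start of this section concrete, the following is a minimal sketch of the kind of cut involved, assuming an iDR4-like results table read with `astropy`. The file and column names (`TEFF`, `LOGG`, `LI1`, `LIM_LI1`) and the upper-limit flag convention are hypothetical, not the actual *Survey* data model; the thresholds follow the text, with the giant criteria ($T_{\rm eff} < 5200$ K, $\log{g} < 3$) quoted later in Section \[sec:frequency-discussion\].

```python
# Minimal sketch of the Li-rich giant selection; file and column names are hypothetical.
from astropy.table import Table

idr4 = Table.read("gaia_eso_idr4_results.fits")  # hypothetical results table

is_k_giant = (idr4["TEFF"] < 5200.0) & (idr4["LOGG"] < 3.0)  # K-type giants
is_measured = idr4["LIM_LI1"] == 0                           # assumed flag: 0 = detection, 1 = upper limit
is_li_rich = idr4["LI1"] >= 2.0                              # A(Li, LTE) >~ 2

candidates = idr4[is_k_giant & is_measured & is_li_rich]
print(f"{len(candidates)} candidate Li-rich giants")
```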
Characterisation and Evolutionary Status of Li-Rich Stars --------------------------------------------------------- Our sample of bona fide Li-rich giant stars includes targets analysed by WG10, WG11, and WG12. While the WG12 group includes experts on the analysis of pre-main-sequence stars, they are also specialists in standard FGK-type star analyses. This is important to note, as not all stars targeted by WG12 are later found to be pre-main-sequence stars; some stars targeted by WG12 are actually standard FGK-type stars. Half (10) of our Li-rich giant stars were analysed by WG10 or WG11. The remainder were targeted as pre-main-sequence candidates towards young clusters, but were later found to be giant stars that are likely non-members of those clusters (see below). Their evolved nature is indicated by their stellar parameters, the empirical $\gamma$-index [we required $\gamma > 1.01$; @Damiani_2014], and the lack of H-$\alpha$ emission (a youth indicator for pre-main-sequence stars). Most stars in our sample lie below the RGB bump (Figure \[fig:stellar-params\]), consistent with previous studies of Li-rich giant stars with near-solar metallicities (Figure \[fig:literature\]). Some stars are exceptions: 18033785–3009201 was observed with UVES and lies just above the RGB bump, near the clump. 19230935+0123293 has a similar surface gravity, but is hotter and more consistent with being a red clump (RC) or AGB star. 19301883–0004175 is the coolest and most metal-poor (${\rm [Fe/H]} = -0.52$) Li-rich giant star in our sample. Our stellar parameters place 19301883–0004175 slightly red-ward of (below) the isochrone. Given this star is in the [[*CoRoT*]{}]{} field, combining asteroseismic oscillations with the high-quality [[*Gaia-ESO Survey*]{}]{} spectra would be advantageous to firmly establish the evolutionary state of this highly evolved Li-rich giant star. The [[*Gaia-ESO Survey*]{}]{} reports individual chemical abundances for up to 45 species in iDR4: 34 elements at different ionisation stages. These range from $Z = 3$–$63$ (Li to Eu) and include odd-Z, $\alpha$-, Fe-peak, as well as neutron-capture ($s$- and $r$-process) elements. The resolution, wavelength coverage, and S/N of the GIRAFFE sample are inferior to those of the UVES sample, therefore only a maximum of 15 species are available from GIRAFFE spectra. Given the S/N and spectral type of our Li-rich giant sample, for some stars we report abundances for only a few (or no) elements. Tables \[tab:abundances-giraffe\]–\[tab:lithium\] contain the detailed abundances for all Li-rich giants in our sample. We find no obvious anomalous pattern in the detailed chemical abundances of our Li-rich stars (Figure \[fig:uves-abundances\]). This confirms findings from other studies that conclude Li seems to be the only element of difference [e.g., @Ruchti_2011; @Martell_2013]. For completeness, we have calculated non-LTE lithium abundances using the grid of corrections from @Lind_2009a. These measurements are listed in Table \[tab:lithium\], but throughout this text all abundances refer to those calculated in LTE. There is little doubt that these stars are indeed Li-rich. In Figure \[fig:uves-spectra\] we show the spectra surrounding the Li resonance doublet at 6707 Å and the subordinate line at 6103 Å for the Li-rich stars observed with UVES. A comparison giant star of similar stellar parameters is shown in each panel, highlighting the difference in Li. The 6707 Å line is strong in all four stars and saturates in the bulge star 18033785–3009201.
The 6103 Å line is also visible. Similarly, we show the 6707 Å line for all Li-rich stars observed with GIRAFFE in Figure \[fig:giraffe-spectra\], confirming their high Li abundances. The 6103 Å line is not covered by the GIRAFFE setups employed. We find only one Li-rich giant star in our sample to be a fast rotator ($v\sin{i} \gtrsim 20$ km s$^{-1}$): 11000515-7623259, the star towards Chameleon 1. We find no evidence of binarity in our sample: no significant secondary peak is seen in the cross-correlation function, and no spectral lines are repeated. However this does not preclude the possibility of a faint binary companion. Repeat radial velocity measurements over a long baseline may be required to infer the presence of any companion. We searched for indications of significant mass-loss in our sample of Li-rich stars. We cross-matched our sample with the Wide-Field Infrared Survey Explorer [hereafter WISE, @Wright_2010] and the 2 Micron All Sky Survey [2MASS, @Skrutskie_2006] catalogues to search for infrared excesses that may be attributable to ejected shells or dust-loss. All stars had entries in 2MASS and WISE. We investigated all possible combinations of near- and mid-infrared colours and found no significant difference in the colours (or magnitudes) of our Li-rich stars. Two stars exhibited mild excesses in WISE colours, but there are indications that the reported excess is due to source confusion and high background levels. If the Li-rich stars in our sample are experiencing significant mass-loss as dust, that signature may only be observable in the far infrared. Because these stars are (relatively) faint (see Table \[tab:stellar-parameters\]), they may not be visible in the far infrared even if a substantial relative excess exists due to the presence of a shell. Giant stars experiencing significant mass-loss as gas often show blue-ward asymmetry in their H-$\alpha$ profile [e.g., @Meszaros_2009]. Figure \[fig:halpha-spectra\] shows spectra for all Li-rich giant stars around the H-$\alpha$ line. No obvious asymmetry is present for the UVES sample. There is some suggestion of asymmetry in some of the GIRAFFE Li-rich giants, most notably 08102116-4740125 and 11000515-7623259. However for most Li-rich stars in our sample, there is weak evidence for any recent and significant mass loss, either in the form of gas, dust, or shells. Discussion {#sec:discussion} ========== The key to understanding the nature of the Li production and preservation mechanisms in giant stars is to accurately know their evolutionary stage and the surrounding environment. Although some of our Li-rich stars have evolved past the RGB bump, the majority of our Li-rich giants lie just below the RGB bump. This is consistent with other studies of Li-rich giants of solar-metallicity [e.g., @Martell_2013 and Figure \[fig:literature\]], whereas most metal-poor Li-rich giants have been found at more evolved stages: either slightly past the RGB bump [e.g., @DOrazi_2015], towards the RGB tip, red clump, or on the AGB [e.g., @Kumar_2011; @Ruchti_2011]. The fact that many of our stars lie before the RGB bump is a genuine problem, because this is before the discontinuity in mean molecular weight can be destroyed, irrespective of mass. An alternative scenario is that these stars have simply been mis-classified as pre-bump stars [e.g., @da_Silva_2006], and they are more likely past the luminosity bump or are red clump stars. 
Below we discuss the observational signatures, the evolutionary stage, environment and membership thereof for all Li-rich giant stars in our sample, before commenting on the plausibility of the proposed scenarios. Environment & Evolution ----------------------- ### Li-rich giants towards clusters Half of our Li-rich stars are in the direction of open clusters. This is due to an observational bias: the GIRAFFE instrument setups used for the [[*Gaia-ESO Survey*]{}]{} Milky Way fields do not include the Li line. Additional setups are used for clusters and special fields (e.g., the [[*CoRoT*]{}]{} fields), which include Li. The clusters surrounding each Li-rich star are shown in Table \[tab:stellar-parameters\]. Below we discuss why these Li-rich giant stars are unlikely to be bonafide cluster members. However, we stress that our conclusions are not conditional on (non-)membership for any of the Li-rich giant stars. While cluster membership clearly has an influence on the frequency of Li-rich giant stars in the field and clusters (Section \[sec:frequency-discussion\]), these inferences are similarly complicated by the absence of quantifiable selection functions for other Li-rich giant studies. We find two Li-rich giants towards the young open cluster gamma2 Velorum, neither of which are likely members. 08102116-4740125 has a radial velocity that is inconsistent with the cluster, and 08095783-4701385 has a velocity near the maximum cluster value (26 km s$^{-1}$). More crucially, any giants towards *any young cluster* like gamma2 Velorum (5-10 Myr) are extremely unlikely to be cluster members given the cluster age. This reasoning extends to 08395152-5315159 towards IC 2391 (53 Myr), the Li-rich giants towards NGC 2547 (35 Myr) and Chamaeleon 1 (2 Myr), and the four Li-rich giant stars towards IC 2602 (32 Myr). This argument does not extend to NGC 6802, which is substantially older (1 Gyr). Nevertheless, the UVES Li-rich star towards NGC 6802 is also unlikely to be a bonafide member. @Janes_Hoq_2011 classify it as a likely non-member in their detailed cluster study, and @Dias_2014 estimate a 66% membership probability based on proper motions. The radial velocity is mildly ($\sim{}2\sigma$) inconsistent with the distribution of cluster velocities. Finally, the metallicity places 19304281+2016107 a full 0.2 dex lower than the cluster mean, significantly away from the otherwise small dispersion in metallicity seen for this cluster. ### 18033785-3009201, the Li-rich bulge star {#sec:discussion-bulge} The discovery of 18033785–3009201 at $(l, b) = (1^\circ, -4^\circ)$ makes it the most Li-rich giant star known towards the bulge [@McWilliam_1994; @Gonzalez_2009]. Its radial velocity ($-70$ km s$^{-1}$) is consistent with bulge membership for stars at this location [@Ness_2013]. The detailed chemical abundances we derive are in excellent agreement with the literature. @Bensby_2013 report detailed chemical abundances from 58 microlensed dwarf and sub-giant stars in the bulge. A comparison of their work with respect to 18033785–3009201 is shown in Figure \[fig:bulge\]. Although we find slightly higher \[Na/Fe\] and \[Al/Fe\] ratios than @Bensby_2013, our abundances are consistent with other bulge studies focusing on giant stars [e.g., @Fulbright_2007]. 18033785–3009201 exhibits a noteworthy deficiency in the classical $s$-process elemental abundances: Ba, La, Ce, Pr, and Nd. 
Although the uncertainty on Pr II is quite large ($\sim$0.5 dex), on average we find 18033785–3009201 to be depleted in $s$-process elements relative to iron, by $\sim$0.3 dex. This signature is not seen in the classical $r$-process element Eu, where we find ${\rm [Eu/Fe]} = 0.05 \pm 0.10$ dex. Low \[$s$-process/Fe\] abundance ratios are generally consistent with an ancient population (e.g., dwarf galaxies, although there are exceptions), and the depletion in these elements firmly rules out any scenarios where the increased surface Li abundance is associated with mass transfer from a nearby companion, which would result in an increase of \[$s$/Fe\] abundance ratios. The stellar parameters for 18033785–3009201 place it near the RGB bump. Given the uncertainty in $\log{g}$, we cannot rule out whether this star is on the RGB or is actually a red clump star. The measured \[C/O\] ratio of 0.03 is near-solar, and while this is only weak evidence, it suggests the star has not completed first dredge up, as a decrease in C abundances would be expected [e.g., @Karakas_2014]. A better understanding of the evolutionary state would be useful to constrain the details of any internal mixing. However we note that detecting asteroseismic oscillations from 18033785–3009201 is not likely in the foreseeable future, as its position lies 2$^\circ$ from the closest planned *K2* field[^4] towards the bulge. ### Li-rich giants in the CoRoT field {#sec:discussion-corot} Our sample contains the first Li-rich giant stars discovered towards any [[*CoRoT*]{}]{} fields. One star was observed with UVES, and the remaining eight using GIRAFFE. Most of the [[*CoRoT*]{}]{} Li-rich giant stars have approximately solar metallicity, with a higher frequency of stars observed just below the RGB bump. However, at least two, perhaps three, stars are consistent with being more evolved. 19301883–0004175 is the coolest and most metal-poor Li-rich star in our sample ($T_{\rm eff} = 4070$ K, ${\rm [Fe/H]} = -0.52$). In contrast to observations where most Li-rich giant stars are found below the RGB bump, 19301883–0004175 adds to the small sample of Li-rich stars at more evolved stages. Li-rich giant stars past the RGB bump are preferentially more metal-poor, consistent with 19301883–0004175. Given the stellar parameters, 19230935+0123293 is consistent with being a red clump star. The uncertainties in stellar parameters for 19253819+0031094 are relatively large, so its exact evolutionary stage is uncertain. Given the uncertainties in stellar parameters and the tendency of solar-metallicity Li-rich giants to occur more frequently around the RGB bump, it is perhaps likely that 19230935+0123293 is indeed located near the RGB bump, as indicated by the reported stellar parameters. The ambiguity in evolutionary stage for these stars would be easily resolved if asteroseismic oscillations were detectable for these objects. However, at this stage, it would appear these stars are slightly too faint for the evolutionary stage to be derived from [[*CoRoT*]{}]{} light curves. Explaining the Li-rich giant phenomena {#sec:discussion-explaining} -------------------------------------- Here we discuss the plausibility of internal and external mechanisms proposed to reconcile observed properties of Li-rich giants. We note that our data are inadequate to comment on external mechanisms involving supernovae or transient X-ray binaries, therefore we do not consider this hypothesis further.
The internal scenarios that we have previously outlined describe the deep mixing conditions required to produce an increased surface Li-abundance. However – other than thermohaline mixing – these models lack any description for why a given star begins to experience deep mixing, or why the frequency of stars undergoing deep mixing is so low. Therefore, while the Li production mechanism and the conditions required for it to occur are well-understood, there still exists a *missing link* in exactly what causes the extra mixing. ### Are Li-rich K-type giants likely due to planet ingestion? {#sec:discussion-planet-ingestion} The increasing number of stars known to host close-in giant planets (“hot Jupiters”) provides a potential solution to the Li-rich giant problem. In this framework two factors actually contribute towards the increase in surface Li abundance: (1) the injection of a large planet provides a reservoir of primordial (unburnt) levels of lithium, and (2) deep mixing that is induced as the planet is dissipated throughout the convective envelope, bringing freshly produced Li to the surface. @Siess_Livio_1999a [@Siess_Livio_1999b] first explored this scenario theoretically and showed that while the results are sensitive to the accretion rate and structure of the star, the accretion of a planet or brown dwarf star can produce the requisite surface Li abundance and explain their frequency. However, this mechanism was invoked to reconcile the existence of Li-rich giants across the RGB and the AGB, which is not commensurate with the properties of close-in hot Jupiters or their occurrence rates. Exoplanet occurrence rates are correlated with the host star. For example, close-in giant planets form preferentially around metal-rich stars [e.g., @Santos_2004; @Fischer_2005]. Indeed, the frequency of metal-rich giant planets is well-represented as a log-linear function of the host star metallicity [e.g., @Fischer_2005]. For FGK stars with near-solar metallicity, the fraction of stars hosting close-in giant planets is approximately 8%, and decreases to 0.6% for stars of ${\rm [Fe/H]} = -0.5$ [@Schlaufman_2014]. The occurrence rate of close-in giant planets also appears to be a function of the evolutionary state of the host star. It is well-established that sub-giant stars have systematically higher giant planet occurrence rates when all orbital periods are considered. However, sub-giant stars are also found to have fewer close-in hot Jupiters than main-sequence stars of the same metallicity [@Bowler_2010; @Johnson_2010]. There has been considerable debate to explain the differing occurrence rates of close-in hot Jupiters, including suggestions that stellar mass differences between the two populations is sufficient to explain the discrepancy [@Burkert_Ida_2007; @Pasquini_2007; @Kennedy_Kenyon_2008a; @Kennedy_Kenyon_2008b]. If the sub-giant stellar masses were considerably larger than those of main-sequence stars at the same metallicity, then one could imagine changes in the proto-planetary disk or dissipation timescales (due to increased radiative pressure) that could hamper the formation of close-in giant planets and reconcile the observations [@Kennedy_Kenyon_2009]. The alternative scenario is that close-in giant planets become tidally destroyed as stars leave the main-sequence and the convective envelope increases. 
It would be difficult to unambiguously resolve these two possibilities (difference in stellar masses or tidal destruction of hot Jupiters) using models of stellar evolution and planet formation, given the number of unknown variables. @Schlaufman_Winn_2013 employed a novel approach to untangle this mystery using precise Galactic space motions. Their sample comprised main-sequence and sub-giant F- and G-type stars in the thin disk. Thin disk stars form with a very cold velocity distribution because they grow from dense, turbulent gas in a highly dissipative process. Over time the velocity dispersion of a thin disk stellar population increases due to interactions between stars, molecular clouds and spiral waves. Because massive stars spend very little time on the main-sequence, there is only a short period for interactions to kinematically heat a population of massive stars. In contrast, solar-mass stars spend a long time on the main-sequence, allowing for plenty of interactions to kinematically heat the population. For these reasons one would expect the space velocity dispersion of thin disk stars to decrease with increasing stellar mass. This logic extends to evolved stars, since they spend only a small fraction of their lifetime as a sub-giant or giant relative to their main-sequence lifetime. Using precise parallaxes and proper motions from *Hipparcos* [@van_Leeuwen_2007], @Schlaufman_Winn_2013 find that the distribution of Galactic space motions of planet-hosting sub-giant stars is, on average, equal to that of planet-hosting main-sequence stars. For this reason, the populations of planet-hosting sub-giant and main-sequence stars can only differ in age (or radius, as expected from the increasing stellar envelope), but not in mass. Moreover, the orbital eccentricities of Jupiters around sub-giants are systematically lower than those of main-sequence stars [e.g., @Jones_2014], indicating that some level of angular momentum transfer and orbital circularisation has occurred. Because the main-sequence and sub-giant planet-host stars are likely to only differ in age, they provide insight into what happens to close-in giant planets when a star’s convective envelope deepens at the base of the giant branch. Therefore the lack of close-in giant planets orbiting sub-giant stars provides clear evidence for their destruction [e.g., @Rasio_1996; @Villaver_Livio_2009; @Lloyd_2011; @Schlaufman_2014]. Given this empirical evidence for tidal destruction of close-in hot Jupiters as a star begins its ascent on the giant branch, it is intriguing to consider what impact the planet accretion would have on the host star. @Siess_Livio_1999b show that while the extent of observable signatures is sensitive to the mass of the planet and the accretion rate, the engulfment of a close-in giant planet can significantly increase the photospheric Li abundance. Recall that two factors contribute to this signature. Firstly, the accreted mass of the giant planet – where no Li burning has occurred – can produce a net increase in photospheric Li. The second effect allows for Li *production* within the star: the spiralling infall of a giant planet and the associated angular momentum transfer is sufficient to induce deep mixing, bringing freshly produced $^7$Li to the surface before it is destroyed. If the additional Li reservoir were the only effect contributing to the net increase in photospheric Li, then an order-of-magnitude estimate of the requisite planetary mass suggests a brown dwarf is required.
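To see why dilution alone points to a brown-dwarf-mass companion, consider the back-of-the-envelope sketch below. The input numbers (a $0.5\,{\rm M}_\odot$ convective envelope with a post-dredge-up abundance of A(Li) $\approx 1.5$, an accreted body with undepleted A(Li) $\approx 3.3$, and a target abundance of A(Li) $= 2$) are illustrative assumptions for this estimate, not values derived in this work.

```python
# Order-of-magnitude dilution estimate: how massive must an accreted body be to
# raise the envelope Li abundance by mixing alone (no induced Li production)?
# All numbers below are illustrative assumptions, not measurements from this work.
import numpy as np

M_env = 0.5      # convective envelope mass [M_sun], assumed
A_env = 1.5      # post-dredge-up surface abundance A(Li), assumed
A_body = 3.3     # undepleted (meteoritic-like) A(Li) of the accreted body, assumed
A_target = 2.0   # abundance defining "Li-rich"

# A(Li) = log10(n_Li / n_H) + 12; with a similar hydrogen fraction in both
# reservoirs, the mixed abundance is a mass-weighted average of 10**A.
def mixed_abundance(m_body):
    return np.log10((M_env * 10**A_env + m_body * 10**A_body) / (M_env + m_body))

# Solve for the body mass that reaches the target abundance.
m_body = M_env * (10**A_target - 10**A_env) / (10**A_body - 10**A_target)
print(f"required mass ~ {m_body:.3f} M_sun ~ {m_body * 1047:.0f} M_Jup")
print(f"check: mixed A(Li) = {mixed_abundance(m_body):.2f}")
```

Under these assumptions the required mass is roughly $0.02\,{\rm M}_\odot$, i.e. about 19 Jupiter masses; repeating the arithmetic with a single Jupiter mass raises the surface abundance by only $\sim$0.05 dex. This is the point made below: the accreted Li reservoir by itself is insufficient, so the mixing induced by the infall must do most of the work.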
However a brown dwarf will have a fully convective envelope, and will therefore have depleted some of its primordial Li abundance. Moreover, the lack of brown dwarfs found within 3–5 AU around solar-mass stars [$<$1%; the *‘brown dwarf desert’*, see @Grether_2006] indicates that brown dwarfs are not frequent enough to later produce the higher frequency of Li-rich giant stars. For these reasons Li-rich giant stars are unlikely to be primarily produced from the ingestion of a brown dwarf, implying that the deep mixing induced by angular momentum transfer is crucial to produce high photospheric Li abundances. Moreover, without any additional mixing (and just a reservoir of unburnt Li) we would expect a similar increase in Be, which has not been detected in Li-rich giant stars to date [@De_Medeiros_1997; @Castilho_1999; @Melo_2005; @Monaco_2014]. Indeed, if we simply take the models of @Siess_Livio_1999a [@Siess_Livio_1999b] at face value and assume that *some* conditions of accretion rate can produce a net increase in photospheric Li (through a fresh reservoir of Li, induced deep mixing, or both), then the observed occurrence rates of close-in giant planets *predict a population of Li-rich giant stars before the RGB bump*. The occurrence rate of close-in giant planets at solar metallicity [$\approx$8%, or more conservatively $\approx$1%, e.g., @Santerne_2015] is commensurate with the idea that some accretion conditions could produce a population of Li-rich giant stars with a frequency of $\approx$1%. If this scenario were true, the correlation between the occurrence rate of close-in giant planets and the host stellar metallicity suggests that we should expect to see more Li-rich giant stars before the RGB bump with higher metallicities. Although the lack of reproducible selection functions for studies of Li-rich giant stars prevents us from commenting on the fraction of Li-rich giants at a given metallicity, the observations are consistent with our expectations. Indeed, like @Martell_2013, we find that most of our Li-rich giant stars have near-solar metallicities. However this observation may be complicated by the [[*Gaia-ESO Survey*]{}]{} selection function, as the metallicity distribution function of [[*Gaia-ESO Survey*]{}]{} stars peaks near solar metallicity for the UVES sample in iDR4. Contrary to the original motivation in @Siess_Livio_1999a, the planet engulfment model is actually less likely to produce Li-rich stars all across the RGB and AGB, because close-in giant planets are likely to be destroyed as soon as the convective envelope increases. Although planets are found more frequently around sub-giant stars, those planets are preferentially found on long orbital periods. Moreover, the timescale of Li-depletion suggests that our proposed scenario is unlikely to account for highly evolved stars with increased Li. As the planet is destroyed the subsequent Li enhancement will be depleted over the next $\sim$0.2–1 Myr. Because low-mass stars spend such a short time evolving from the main-sequence to the sub-giant phase, we should expect any Li enhancement to be depleted by the time they have ascended even moderately up the giant branch. Alternatively, if a giant planet is formed sufficiently far from the host star it may be unaffected by the initial expansion of the convective envelope. In this scenario it may be accreted at a subsequent time, ultimately being destroyed when the star is more evolved.
However, the circularisation and long orbital periods of giant planets around sub-giant stars suggest that the long-timescale engulfment scenario is somewhat improbable [@Jones_2014; @Schlaufman_2014]. On the other hand, one could imagine a somewhat unusual scenario where the planet is not fully dissolved, and orbits within the stellar photosphere without any large transfer of angular momentum. In principle, this kind of scenario may explain Li-rich giant stars at more evolved stages. Our assertion that the majority of Li-rich stars are a consequence of the tidal destruction of close-in giant planets is unlikely to fully explain the existence of very metal-poor Li-rich giants. The occurrence rate of close-in giant planets for stars with low metallicity (${\rm [Fe/H]} = -0.5$) is a mere $\sim$1%, and decreases further with decreasing metallicity. Therefore, a very metal-poor star (e.g., ${\rm [Fe/H]} < -2$) is quite unlikely to host any planet (including a close-in giant planet), and therefore planet accretion is an improbable explanation for the increased surface Li. However, the Li-rich stars that are also metal-poor are almost ubiquitously found to be highly evolved (e.g., AGB, RGB tip, red clump), and are thus explainable through a host of internal mechanisms. Dynamical interactions also suggest that our proposed link between close-in giant planets and Li-rich giants implies that a lower fraction of Li-rich giant stars should be found in dense stellar environments. Three-body interactions in a dense cluster can sufficiently perturb a close-in hot Jupiter before a star leaves the main-sequence [@Sigurdsson_1992; @Hurley_Shara_2002]. While the evidence is weak, this appears to be consistent with the observations of Li-rich giants (see Section \[sec:frequency-discussion\]). ### Has the evolutionary stage been mis-estimated? {#sec:discussion-evolutionary-stage} An alternative scenario is that spectroscopic studies of Li-rich giants are systematically biased in their determination of surface gravities. Indeed, if the majority of Li-rich giant stars are actually red clump stars that have been misclassified as stars below the bump, there may be little or no requirement for an external mechanism to induce additional mixing. In their low-resolution study of $\sim 2,000$ low-mass giant stars, @Kumar_2011 identified fifteen new Li-rich stars and noted a concentration of them at the red clump, or on the RGB. Either evolutionary state was plausible, as it is difficult to unambiguously determine the precise evolutionary state directly from spectroscopy. Because the lifetime for clump stars is much longer than that of stars at the bump, it is reasonable to expect that many field stars identified to be near the luminosity bump are indeed clump stars. Moreover, stellar evolution models suggest that Li can be synthesised during the He-core flash [@Eggleton_2008; @Kumar_2011], suggesting that most Li-rich giants may actually be red clump stars, and have been mis-identified as being near the luminosity bump. @Silva_Aguirre_2014 used asteroseismic data from the [[*Kepler*]{}]{} space telescope and came to this conclusion for their metal-poor (${\rm [Fe/H]} = -0.29$) Li-rich star. Although stellar parameters derived from spectroscopy alone were unable to confidently place their star on the RGB or at the clump, the internal oscillations for a star with or without a He-burning core show small differences [@Bedding_2011; @Mosser_2011].
However, @Jofre_2015 showed from solar-like oscillations that KIC 9821622 (another Li-rich giant star) does not have a He-burning core, firmly placing its evolutionary stage below the luminosity bump on the giant branch. Our sample constitutes the largest number of Li-rich giant stars identified in a field observed by a space telescope capable of detecting asteroseismic oscillations. Although our stellar parameters are more consistent with the majority of these stars being on the RGB at or below the luminosity bump, they are each *individually* consistent with being red clump stars: the red clump position (in $T_{\rm eff}$ and $\log{g}$) is within 1–2 times the quoted uncertainty for each *individual* star. However, as a coherent sample, the *population* significance depends on how correlated these measurements are. For these reasons, employing asteroseismic data from [[*CoRoT*]{}]{} may reveal whether these stars are indeed red clump stars, or associated with the luminosity bump. If indeed it is the former, an external planet ingestion scenario becomes unlikely, which would provide strong direction on where to focus modelling efforts. We encourage follow-up work to distinguish these possibilities. Frequency of Li-rich K giants {#sec:frequency-discussion} ----------------------------- The selection function and observing strategy employed for the [[*Gaia-ESO Survey*]{}]{} preclude us from robustly commenting on the frequency of Li-rich giants for the Milky Way field population. All UVES spectra include the 6707 Å Li line, but the standard GIRAFFE settings used for Milky Way *Survey* fields (HR10 and HR21) do not span this region. [[*CoRoT*]{}]{} observations within the [[*Gaia-ESO Survey*]{}]{} are a unique subset of high scientific interest, which is why the HR15N setup (covering Li) was employed for these stars. Therefore we can only comment on the frequency of Li-enhanced (A(Li) $\gtrsim 2$) K-giant stars identified in the [[*CoRoT*]{}]{} field, or the fraction observed in the larger UVES sample. At first glance the discovery of 9 Li-rich giant stars in the [[*CoRoT*]{}]{} field may appear to be a statistically high number, suggesting there may be something special about the location of the [[*CoRoT*]{}]{} field, or the distribution of stellar masses within it. The [[*Gaia-ESO Survey*]{}]{} iDR4 contains 1,175 giant stars that match our selection criteria ($\log{g} < 3$ and $T_{\rm eff} < 5200$ K) where the abundance of Li is reported. We identify 9 Li-rich giants, resulting in an observed frequency of slow-rotating Li-rich K giants of $\sim$1%, consistent with previous studies [e.g., @Drake_2002]. The frequency in the total UVES sample from the [[*Gaia-ESO Survey*]{}]{} iDR4 is even smaller. The sample contains 992 giants that match our selection criteria, of which 845 have Li abundance measurements or upper limits. Four of these are Li-rich, implying a frequency of just 0.4%. These are small-number statistics that may be strongly impacted by the *Survey* selection function. For example, the UVES sample contains a considerable fraction (27%) of cluster stars. Only about 50% of the sample are Milky Way fields, with the remainder composed of bulge fields, benchmark stars, and radial velocity standards. The UVES cluster sample (open and globular) contains 256 stars, of which two are Li-rich. It is of interest to speculate whether the occurrence rate of slow-rotating Li-rich K giants differs between clusters and the field.
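Because these are small-number counts, it is worth attaching rough uncertainties to the quoted fractions. The following is a minimal sketch using the counts quoted above; the choice of a Jeffreys (Beta) interval is ours for illustration and is not part of the *Survey* analysis.

```python
# Observed Li-rich fractions quoted above, with approximate 68% Jeffreys intervals
# to emphasise the small-number statistics. The interval choice is illustrative only.
from scipy.stats import beta

samples = {
    "iDR4 giants with reported Li": (9, 1175),
    "UVES giants with Li":          (4, 845),
    "UVES cluster giants":          (2, 256),
}

for label, (k, n) in samples.items():
    lo, hi = beta.ppf([0.16, 0.84], k + 0.5, n - k + 0.5)  # Jeffreys posterior quantiles
    print(f"{label}: {k}/{n} = {100 * k / n:.2f}% "
          f"(68% interval {100 * lo:.2f}-{100 * hi:.2f}%)")
```

On this rough estimate the three fractions (roughly 0.8%, 0.5%, and 0.8%) have overlapping intervals, underlining why the cluster-versus-field comparison discussed next is necessarily tentative.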
While it is difficult for us to make robust inferences on the field frequency based on the literature or the iDR4 *Survey* data set, it is important to note that the vast majority ($\gtrsim90$%) of Li-rich giant stars *have* been discovered in the field. After accounting for the fact that star clusters have been extensively observed with multi-object spectroscopic instruments for over a decade, it seems curious that less than ten Li-rich giant stars have been detected in globular clusters to date. However, we stress that standard instrumental setups do not always include the Li line, so this line of argument is further complicated by observational (or scientific) biases. Conclusions {#sec:conclusions} =========== We have presented one of the largest samples of Li-rich K-giant stars. Our sample of Li-rich giant stars includes the most Li-rich giant known towards the bulge, and the first sample of Li-rich giants towards the [[*CoRoT*]{}]{} fields. Most stars have stellar parameters and abundances that are consistent with being just below the luminosity bump on the red giant branch. Given that about half of our sample is towards the [[*CoRoT*]{}]{} fields, accurately knowing the evolutionary stage of this sample could confirm their position below the luminosity bump. The ensemble properties of Li-rich giant stars in the literature suggest two sub-classes, which may point towards their formation mechanism(s). The first is comprised of near-solar (${\rm [Fe/H]} \gtrsim -0.5$) metallicity stars, which are preferentially found slightly before or near the luminosity bump. The second class of Li-rich giants are found in later evolutionary stages and are usually more metal-poor. We argue that Li-rich giant stars before or near the luminosity bump are a consequence of planet/brown dwarf engulfment when the stellar photosphere expands at the sub-giant stage. Our assertion is supported by recent evidence on the occurrence rates of close-in giant planets, which demonstrate that hot Jupiters are accreted onto the host star as they begin to ascend the giant branch. If we take planet accretion models at face value and trust that *some* conditions of accretion rate can produce a net positive abundance of Li by amassing unburnt Li *and* inducing deep mixing by angular momentum transfer, then these two lines of evidence actually *predict the existence of Li-rich giant stars*. This scenario would predict an increasing frequency of Li-rich giant stars with increasing metallicity, and the Li-depletion timescales would suggest that these stars should be preferentially found below the RGB bump. Moreover, it would imply a lower fraction of Li-rich giant stars in dense stellar environments (e.g., clusters) due to three body interactions. The majority of Li-rich giant stars are consistent with these predictions. The remainder are mostly Li-rich giant stars at late evolutionary stages, a fact that is reconcilable with internal mixing prescriptions, late-time engulfment, or mass-transfer from a binary companion. Acknowledgments {#acknowledgments .unnumbered} =============== We thank the anonymous referee for a constructive report which improved this paper. We thank Ross Church, Ghina Halabi, Jarrod Hurley, Benoit Mosser, Kevin Schlaufman, as well as the *Gaia/Stars* and *Stellar Interiors* research groups at the Institute of Astronomy, University of Cambridge for helpful discussions on this work. 
Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 188.B-3002. These data products have been processed by the Cambridge Astronomy Survey Unit (CASU) at the Institute of Astronomy, University of Cambridge, and by the FLAMES/UVES reduction team at INAF/Osservatorio Astrofisico di Arcetri. These data have been obtained from the Gaia-ESO Survey Data Archive, prepared and hosted by the Wide Field Astronomy Unit, Institute for Astronomy, University of Edinburgh, which is funded by the UK Science and Technology Facilities Council. This work was partly supported by the European Union FP7 programme through ERC grant number 320360 and by the Leverhulme Trust through grant RPG-2012-541. We acknowledge the support from INAF and the Ministero dell’ Istruzione, dell’ Università’ e della Ricerca (MIUR) in the form of the grant “Premiale VLT 2012” and “The Chemical and Dynamical Evolution of the Milky Way and Local Group Galaxies” (prot. 2010LY5N2T). The results presented here benefit from discussions held during the Gaia-ESO workshops and conferences supported by the ESF (European Science Foundation) through the GREAT Research Network Programme. G. R. acknowledges support from the project grant “The New Milky Way" from the Knut and Alice Wallenberg Foundation. G. M. K is supported by the Royal Society as a Royal Society University Research Fellow. L. S. acknowledges support provided by the Chilean Ministry of Economy, Development, and Tourism’s Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics, MAS. Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation. The research is supported by the ASTERISK project (ASTERoseismic Investigations with SONG and Kepler) funded by the European Research Council (Grant agreement no.: 267864). V. S. A. acknowledges support from VILLUM FONDEN (research grant 10118). S. L. M acknowledges support from the Australian Research Council through DECRA fellowship DE140100598. T. M. acknowledges financial support from Belspo for contract PRODEX *Gaia*-DPAC. J. M. acknowledges the support from the European Research Council Consolidator Grant funding scheme (*project STARKEY*, G.A. n. 615604). This research has made use of the ExoDat Database, operated at LAM-OAMP, Marseille, France, on behalf of the CoRoT/Exoplanet program. This research made use of Astropy, a community-developed core Python package for Astronomy [@astropy]. This research has made use of NASA’s Astrophysics Data System Bibliographic Services. [99]{} Adam[ó]{}w, M., Niedzielski, A., Villaver, E., Wolszczan, A., & Nowak, G. 2014, , 569, A55 Adam[ó]{}w, M., Niedzielski, A., Villaver, E., et al. 2015, , 581, A94 Alcal[á]{}, J. M., Biazzo, K., Covino, E., Frasca, A., & Bedin, L. R. 2011, , 531, L12 Alexander, J. B. 1967, The Observatory, 87, 238 Andrievsky, S. M., Gorlova, N. I., Klochkova, V. G., Kovtyukh, V. V., & Panchuk, V. E. 1999, Astronomische Nachrichten, 320, 35 Anthony-Twarog, B. J., Deliyannis, C. P., Rich, E., & Twarog, B. A. 2013, , 767, L19 Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, Astronomy & Astrophysics, 558, AA33 Balachandran, S. C., Fekel, F. C., Henry, G. W., & Uitenbroek, H. 2000, , 542, 978 Barrado y Navascues, D., de Castro, E., Fernandez-Figueroa, M. J., Cornide, M., & Garcia Lopez, R. J. 1998, , 337, 739 Bedding, T. R., Mosser, B., Huber, D., et al. 2011, , 471, 608 Bensby, T., Yee, J. 
C., Feltzing, S., et al. 2013, , 549, A147 Bonsack, W. K. 1959, , 130, 843 Bowler, B. P., Johnson, J. A., Marcy, G. W., et al. 2010, , 709, 396 Bressan, A., Marigo, P., Girardi, L., et al. 2012, , 427, 127 Brown, J. A., Sneden, C., Lambert, D. L., & Dutchover, E., Jr. 1989, , 71, 293 Burkert, A., & Ida, S. 2007, , 660, 845 Cameron, A. G. W., & Fowler, W. A. 1971, , 164, 111 Canto Martins, B. L., L[è]{}bre, A., de Laverny, P., et al. 2006, , 451, 993 Carlberg, J. K., Smith, V. V., Cunha, K., Majewski, S. R., & Rood, R. T. 2010, , 723, L103 Carlberg, J. K., Smith, V. V., Cunha, K., et al. 2015, , 802, 7 Carney, B. W., Fry, A. M., & Gonzalez, G. 1998, , 116, 2984 Casey, A. R., et al. 2016, in preparation Cassisi, S., Salaris, M., & Pietrinferni, A. 2016, , 585, A124 Castilho, B. V., Spite, F., Barbuy, B., et al. 1999, , 345, 249 Chanam[é]{}, J., Pinsonneault, M., & Terndrup, D. M. 2005, , 631, 540 Charbonnel, C. 1994, , 282, 811 Charbonnel, C. 1995, , 453, L41 Charbonnel, C., & Balachandran, S. C. 2000, , 359, 563 Charbonnel, C. 2006, EAS Publications Series, 19, 125 Charbonnel, C., & Lagarde, N. 2010, , 522, A10 Charbonnel, C., & Zahn, J.-P. 2007, , 467, L15 Cottrell, P. L., & Da Costa, G. S. 1981, , 245, L79 Damiani, F., Prisinzano, L., Micela, G., et al. 2014, , 566, A50 da Silva, L., Girardi, L., Pasquini, L., et al. 2006, , 458, 609 Delgado Mena, E., Tsantaki, M., Sousa, S. G., et al. 2015, arXiv:1512.05296 Dias, W. S., Monteiro, H., Caetano, T. C., et al. 2014, , 564, A79 Denissenkov, P. A., & Herwig, F. 2004, , 612, 1081 Denissenkov, P. A., & VandenBerg, D. A. 2003, , 593, 509 Denissenkov, P. A., & Weiss, A. 2000, , 358, L49 de La Reza, R., Drake, N. A., & da Silva, L. 1996, , 456, L115 de La Reza, R., Drake, N. A., da Silva, L., Torres, C. A. O., & Martin, E. L. 1997, , 482, L77 de la Reza, R., Drake, N. A., Oliveira, I., & Rengaswamy, S. 2015, , 806, 86 de Medeiros, J. R., Lebre, A., de Garcia Maia, M. R., & Monier, R. 1997, , 321, L37 Dekker, H., D’Odorico, S., Kaufer, A., Delabre, B., & Kotzlowski, H. 2000, , 4008, 534 D’Orazi, V., Gratton, R. G., Angelou, G. C., et al. 2015, , 801, L32 Drake, N. A., de la Reza, R., da Silva, L., & Lambert, D. L. 2002, , 123, 2703 Eggleton, P. P., Dearborn, D. S. P., & Lattanzio, J. C. 2006, Science, 314, 1580 Eggleton, P. P., Dearborn, D. S. P., & Lattanzio, J. C. 2008, , 677, 581 Fekel, F. C., & Balachandran, S. 1993, , 403, 708 Fekel, F. C., & Watson, L. C. 1998, , 116, 2466 Fischer, D. A., & Valenti, J. 2005, , 622, 1102 Fulbright, J. P., McWilliam, A., & Rich, R. M. 2007, , 661, 1152 Gilmore, G., Randich, S., Asplund, M., et al. 2012, The Messenger, 147, 25 Gonzalez, O. A., Zoccali, M., Monaco, L., et al. 2009, , 508, 289 Gratton, R. G., & D’Antona, F. 1989, , 215, 66 Gratton, R. G., Sneden, C., Carretta, E., & Bragaglia, A. 2000, , 354, 169 Grether, D., & Lineweaver, C. H. 2006, , 640, 1051 Grevesse, N., Asplund, M., & Sauval, A. J. 2007, , 130, 105 Gustafsson, B., Edvardsson, B., Eriksson, K., et al. 2008, , 486, 951 Hanni, L. 1984, Soviet Astronomy Letters, 10, 51 Heiter, U., Lind, K., Asplund, M., et al. 2015, Physica Scripta, 90, 054010 Hill, V., & Pasquini, L. 1999, , 348, L21 Hurley, J. R., & Shara, M. M. 2002, , 565, 1251 Iben, I., Jr. 1964, , 140, 1631 Iben, I., Jr. 1967, , 147, 624 Iben, I., Jr. 1967, , 147, 650 Janes, K. A., & Hoq, S. 2011, , 141, 92 Jasniewicz, G., Parthasarathy, M., de Laverny, P., & Th[é]{}venin, F. 1999, , 342, 831 Jofr[é]{}, E., Petrucci, R., Garc[í]{}a, L., & G[ó]{}mez, M. 2015, , 584, L3 Johnson, J. 
A., Howard, A. W., Bowler, B. P., et al. 2010, , 122, 701 Jones, M. I., Jenkins, J. S., Bluhm, P., Rojo, P., & Melo, C. H. F. 2014, , 566, A113 Karakas, A. I. 2010, , 403, 1413 Karakas, A. I., & Lattanzio, J. C. 2014, , 31, e030 Kennedy, G. M., & Kenyon, S. J. 2008, , 673, 502 Kennedy, G. M., & Kenyon, S. J. 2008, , 682, 1264 Kennedy, G. M., & Kenyon, S. J. 2009, , 695, 1210 Kirby, E. N., Fu, X., Guhathakurta, P., & Deng, L. 2012, , 752, L16 Kirby, E. N., Guhathakurta, P., Zhang, A. J., et al. 2016, arXiv:1601.01315 K[ő]{}v[á]{}ri, Z., Korhonen, H., Strassmeier, K. G., et al. 2013, , 551, A2 Kraft, R. P., Peterson, R. C., Guhathakurta, P., et al. 1999, , 518, L53 Kumar, Y. B., Reddy, B. E., & Lambert, D. L. 2011, , 730, L12 Kumar, Y. B., Reddy, B. E., Muthumariappan, C., & Zhao, G. 2015, , 577, A10 Lagarde, N., Decressin, T., Charbonnel, C., et al. 2012, , 543, A108 Lambert, D. L., Dominy, J. F., & Sivertsen, S. 1980, , 235, 114 Lanzafame, A. C., Frasca, A., Damiani, F., et al. 2015, , 576, A80 Lattanzio, J. C., Siess, L., Church, R. P., et al. 2015, , 446, 2673 L[è]{}bre, A., de Laverny, P., Do Nascimento, J. D., Jr., & de Medeiros, J. R. 2006, , 450, 1173 L[è]{}bre, A., Palacios, A., Do Nascimento, J. D., Jr., et al. 2009, , 504, 1011 Lewis, J, et al. 2016, in preparation Lind, K., Asplund, M., & Barklem, P. S. 2009, , 503, 541 Lind, K., Primas, F., Charbonnel, C., Grundahl, F., & Asplund, M. 2009, , 503, 545 Liu, Y. J., Tan, K. F., Wang, L., et al. 2014, , 785, 94 Lloyd, J. P. 2011, , 739, L49 Luck, R. E. 1982, , 94, 811 Martell, S. L., & Shetrone, M. D. 2013, ,430, 611 Martin, E. L., Rebolo, R., Casares, J., & Charles, P. A. 1992, , 358, 129 Martin, E. L., Rebolo, R., Casares, J., & Charles, P. A. 1994, , 435, 791 Melo, C. H. F., de Laverny, P., Santos, N. C., et al. 2005, , 439, 227 M[é]{}sz[á]{}ros, S., Avrett, E. H., & Dupree, A. K. 2009, , 138, 615 Mosser, B., Barban, C., Montalb[á]{}n, J., et al. 2011, , 532, A86 Mucciarelli, A., Salaris, M., & Bonifacio, P. 2012, , 419, 2195 McWilliam, A., & Rich, R. M. 1994, , 91, 749 Monaco, L., Villanova, S., Moni Bidin, C., et al. 2011, , 529, A90 Monaco, L., Boffin, H. M. J., Bonifacio, P., et al. 2014, , 564, L6 Ness, M., Freeman, K., Athanassoula, E., et al. 2013, , 430, 836 Netopil, M., Paunzen, E., Maitzen, H. M., et al. 2007, , 462, 591 Palacios, A., Charbonnel, C., & Forestini, M. 2001, , 375, L9 Palacios, A., Charbonnel, C., Talon, S., & Siess, L. 2006, , 453, 261 Pasquini, L., Avila, G., Allaert, E., et al. 2000, , 4008, 129 Pasquini, L., D[ö]{}llinger, M. P., Weiss, A., et al. 2007, , 473, 979 Pilachowski, C. 1986, , 300, 289 Randich, S., Gilmore, G., & Gaia-ESO Consortium 2013, The Messenger, 154, 47 Rasio, F. A., & Ford, E. B. 1996, Science, 274, 954 Rebull, L. M., Carlberg, J. K., Gibbs, J. C., et al. 2015, , 150, 123 Reyniers, M., & Van Winckel, H. 2001, , 365, 465 Ruchti, G. R., Fulbright, J. P., Wyse, R. F. G., et al. 2011, , 743, 107 Ruffoni, M. P., Den Hartog, E. A., Lawler, J. E., et al. 2014, , 441, 3127 Sacco, G. G., Morbidelli, L., Franciosini, E., et al. 2014, , 565, A113 Sackmann, I.-J., & Boothroyd, A. I. 1999, , 510, 217 Santerne, A., Moutou, C., Tsantaki, M., et al. 2015, arXiv:1511.00643 Santos, N. C., Israelian, G., & Mayor, M. 2004, , 415, 1153 Schlaufman, K. C., & Winn, J. N. 2013, , 772, 143 Schlaufman, K. C. 2014, , 790, 91 Siess, L., & Livio, M. 1999, , 304, 925 Siess, L., & Livio, M. 1999, , 308, 1133 Sigurdsson, S. 1992, , 399, L95 Silva Aguirre, V., Ruchti, G. R., Hekker, S., et al. 
2014, , 784, L16 Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, , 131, 1163 Smiljanic, R., Korn, A. J., Bergemann, M., et al. 2014, , 570, A122 Smith, V. V., Shetrone, M. D., & Keane, M. J. 1999, , 516, L73 Spite, F., & Spite, M. 1982, , 115, 357 Sweigart, A. V., & Mengel, J. G. 1979, , 229, 624 Tajitsu, A., Sadakane, K., Naito, H., Arai, A., & Aoki, W. 2015, , 518, 381 Tautvai[š]{}ien[ė]{}, G., Barisevi[č]{}ius, G., Chorniy, Y., Ilyin, I., & Puzeras, E. 2013, , 430, 621 Uttenthaler, S., Lebzelter, T., Palmerini, S., et al. 2007, , 471, L41 van Leeuwen, F. 2007, , 474, 653 Villaver, E., & Livio, M. 2009, , 705, L81 Walker, T. P., Viola, V. E., & Mathews, G. J. 1985, , 299, 745 Wallerstein, G., & Sneden, C. 1982, , 255, 577 Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, , 140, 1868 ![image](literature.pdf){width="\textwidth"} ![image](stellar-parameters.pdf){width="\textwidth"} ![image](uves-abundances-x-h.pdf){width="\textwidth"} ![image](uves-spectrum-comparison.pdf){width="\textwidth"} [0.45]{} ![image](giraffe-spectra-1.pdf) [0.45]{} ![image](giraffe-spectra-2.pdf) ![image](halpha-spectra.pdf){width="\textwidth"} ![Detailed chemical abundances of the Li-rich bulge star, 18033785$-$3009201, compared to the microlensed bulge dwarf and sub-giant sample of @Bensby_2013. The uncertainty in a given \[X/Fe\] abundance ratio for 18033785$-$3009201 is taken as the quadrature sum of \[X/H\] and \[X/Fe\].[]{data-label="fig:bulge"}](Bensby-comparison.pdf){width="8.3cm"} Object $T_{\rm eff}$ $\log{g}$ \[Fe/H\] A(Li) $v\sin{i}$ Year Reference ---- -------------------- --------------- ----------- -------------- ------- -------------- ------ ---------------------------- 1 HD172365 5500 2.1 -0.6 2.49 70 1982 @Luck_1982 2 HD 174104 5750 0.9 -0.3 3.46 50 1982 @Luck_1982 3 HD 112127 4750 2.6 0.3 3.2 [$\cdots$]{} 1982 @Wallerstein_Sneden_1982 4 9 Bootis (BS 5247) 4000 2.0 0.1 2.5 [$\cdots$]{} 1984 @Hanni_1984 5 NGC7789-443 5600 3.1 [$\cdots$]{} 2.4 44 1986 @Pilachowski_1986 6 NGC7789-1238 5800 3.1 [$\cdots$]{} 2.4 $<$10 1986 @Pilachowski_1986 7 NGC7789-308 6350 3.3 [$\cdots$]{} 3.0 80 1986 @Pilachowski_1986 8 NGC7789-268 6450 3.4 [$\cdots$]{} 3.3 30 1986 @Pilachowski_1986 9 HD 183492 4700 2.4 0.08 2.0 [$\cdots$]{} 1989 @Brown_1989 10 HD 126868 5440 3.2 -0.25 2.3 [$\cdots$]{} 1989 @Brown_1989 11 HD 112127 4340 2.1 0.31 2.7 [$\cdots$]{} 1989 @Brown_1989 12 HD 108471 4980 2.8 -0.02 2.0 [$\cdots$]{} 1989 @Brown_1989 13 HD 148293 4640 2.5 0.23 2.0 [$\cdots$]{} 1989 @Brown_1989 14 HD 9746 4420 2.3 -0.13 2.7 [$\cdots$]{} 1989 @Brown_1989 15 HD 39853 3900 1.16 -0.5 2.8 [$\cdots$]{} 1989 @Gratton_1989 16 Be21-T33 4600 2.0 -0.58 3.0 [$\cdots$]{} 1999 @Hill_1999 17 HD 219025 4500 2.3 -0.1 3.0 23 1999 @Jasniewicz_1999 18 HDE 233517 4475 2.25 -0.37 3.85 17.6 2000 @Balachandran_2000 19 HD 9746$^a$ 4400 2.3 [$\cdots$]{} 3.44 9 2000 @Balachandran_2000 20 HD 172481 7250 1.5 -0.55 3.57 14 2001 @Reyniers_Van_Winckel_2001 -------------------- ------------------- ------------- --------------- ------ ------ ----------------- --------------- ----------- ---------- Star Field $\alpha$ $\delta$ $J$ $K$ $V_{rad}$ $T_{\rm eff}$ $\log{g}$ \[Fe/H\] (J2000) (J2000) (km s$^{-1}$) (K) 08095783$-$4701385 $\gamma$2 Velorum 08:09:57.83 $-$47:01:38.5 10.5 9.8 $25.6 \pm 0.2$ 4964 2.43 $-0.15$ 18033785$-$3009201 Bulge 18:03:37.85 $-$30:09:20.1 11.4 10.6 $-70.0 \pm 0.6$ 4467 2.34 $0.07$ 19242472$+$0044106 [[*CoRoT*]{}]{}  19:24:24.73 $+$00:44:10.5 11.3 10.5 $77.7 \pm 0.1$ 4740 2.70 $0.08$ 19304281$+$2016107 NGC 
6802 19:30:42.81 $+$20:16:10.7 11.7 10.7 $17.4 \pm 0.6$ 4766 2.63 $-0.10$ 08102116$-$4740125 $\gamma$2 Velorum 08:10:21.16 $-$47:40:12.5 11.5 10.6 $71.0 \pm 0.2$ 4591 2.27 $-0.12$ 08110403$-$4852137 NGC 2547 08:11:04.03 $-$48:52:13.7 12.4 11.6 $54.1 \pm 0.2$ 4762 2.59 $-0.12$ 08395152$-$5315159 IC2391 08:39:51.52 $-$53:15:15.9 12.4 11.5 $27.0 \pm 0.2$ 4726 2.55 $0.01$ 10300194$-$6321203 IC2602 10:30:01.94 $-$63:21:20.3 11.4 10.6 $-10.2 \pm 0.2$ 4612 2.37 $-0.06$ 10323205$-$6324012 IC2602 10:32:32.05 $-$63:24:01.2 11.3 10.6 $13.3 \pm 0.2$ 4607 2.53 $0.13$ 10495719$-$6341212 IC2602 10:49:57.19 $-$63:41:21.2 11.1 10.3 $13.8 \pm 0.2$ 4789 2.55 $0.03$ 10503631$-$6512237 IC2602 10:50:36.31 $-$65:12:23.7 11.7 10.8 $-34.1 \pm 0.2$ 4708 2.49 $-0.05$ 11000515$-$7623259 Chameleon 1 11:00:05.15 $-$76:23:25.9 10.1 9.1 $-15.9 \pm 0.2$ 4505 2.22 $0.06$ 19230935$+$0123293 [[*CoRoT*]{}]{}  19:23:09.35 $+$01:23:29.3 13.1 12.3 $11.9 \pm 0.2$ 4845 2.37 $-0.12$ 19252571$+$0031444 [[*CoRoT*]{}]{}  19:25:25.71 $+$00:31:44.4 12.7 11.9 $-38.6 \pm 0.3$ 4825 2.87 $-0.10$ 19252758$+$0153065 [[*CoRoT*]{}]{}  19:25:27.58 $+$01:53:06.5 11.3 10.5 $28.2 \pm 0.1$ 4617 2.80 $0.21$ 19252837$+$0027037 [[*CoRoT*]{}]{}  19:25:28.37 $+$00:27:03.7 13.4 12.6 $0.3 \pm 0.3$ 4731 2.91 $0.18$ 19253819$+$0031094 [[*CoRoT*]{}]{}  19:25:38.19 $+$00:31:09.4 13.0 12.1 $26.6 \pm 0.3$ 4655 2.51 $-0.25$ 19261007$-$0010200 [[*CoRoT*]{}]{}  19:26:10.07 $-$00:10:20.0 11.8 11.1 $-21.6 \pm 0.2$ 4752 2.84 $0.12$ 19264038$-$0019575 [[*CoRoT*]{}]{}  19:26:40.38 $-$00:19:57.5 13.0 12.2 $41.8 \pm 0.3$ 4782 2.91 $0.02$ 19301883$-$0004175 [[*CoRoT*]{}]{}  19:30:18.83 $-$00:04:17.5 11.6 10.5 $57.3 \pm 0.1$ 4070 1.63 $-0.52$ -------------------- ------------------- ------------- --------------- ------ ------ ----------------- --------------- ----------- ---------- [llrcrr]{} Element & Ion & A(X) & $\sigma$ & \[X/H\] & \[X/Fe\]\ \ Ti & 1 & $4.82$ & [$\cdots$]{}& $-0.08$ & $0.04$\ Co & 1 & $4.82$ & [$\cdots$]{}& $-0.10$ & $0.02$\ \ Al & 1 & $6.48$ & 0.05 & $0.11$ & $0.23$\ Si & 1 & $7.61$ & 0.04 & $0.10$ & $0.22$\ Ca & 1 & $6.12$ & 0.10 & $-0.19$ & $-0.07$\ Ti & 1 & $4.94$ & [$\cdots$]{}& $0.04$ & $0.16$\ Co & 1 & $4.77$ & [$\cdots$]{}& $-0.15$ & $-0.03$\ Ni & 1 & $6.19$ & 0.03 & $-0.04$ & $0.08$\ Ba & 2 & $1.81$ & 0.25 & $-0.36$ & $-0.24$\ \ Al & 1 & $6.29$ & 0.12 & $-0.08$ & $0.02$\ Si & 1 & $7.36$ & 0.02 & $-0.15$ & $-0.05$\ Ca & 1 & $5.97$ & 0.01 & $-0.34$ & $-0.24$\ Ti & 1 & $4.71$ & [$\cdots$]{}& $-0.19$ & $-0.09$\ Co & 1 & $4.75$ & [$\cdots$]{}& $-0.17$ & $-0.07$\ Ni & 1 & $5.82$ & 0.02 & $-0.41$ & $-0.31$\ Ba & 2 & $2.20$ & 0.08 & $0.03$ & $0.13$\ \ Mg & 1 & $7.84$ & 0.01 & $0.31$ & $0.10$\ Al & 1 & $6.68$ & 0.03 & $0.31$ & $0.10$\ Si & 1 & $7.73$ & 0.03 & $0.22$ & $0.01$\ Ti & 1 & $5.10$ & [$\cdots$]{}& $0.20$ & $-0.01$\ Mn & 1 & $5.48$ & 0.04 & $0.09$ & $-0.12$\ Fe & 1 & $7.71$ & 0.02 & $0.26$ & $0.05$\ Co & 1 & $4.92$ & [$\cdots$]{}& $0.00$ & $-0.21$\ \ Ti & 1 & $4.86$ & [$\cdots$]{}& $-0.04$ & $-0.22$\ Co & 1 & $4.93$ & [$\cdots$]{}& $0.01$ & $-0.17$\ \ Ti & 1 & $4.69$ & [$\cdots$]{}& $-0.21$ & $0.04$\ Co & 1 & $4.52$ & [$\cdots$]{}& $-0.40$ & $-0.15$\ Ba & 2 & $1.84$ & 0.11 & $-0.33$ & $-0.08$\ \ Ti & 1 & $4.64$ & [$\cdots$]{}& $-0.26$ & $-0.38$\ Co & 1 & $4.77$ & [$\cdots$]{}& $-0.15$ & $-0.27$\ \ Ti & 1 & $4.65$ & [$\cdots$]{}& $-0.25$ & $-0.27$\ Co & 1 & $4.78$ & [$\cdots$]{}& $-0.14$ & $-0.16$\ \ Mg & 1 & $7.48$ & 0.01 & $-0.05$ & $0.47$\ Al & 1 & $6.23$ & 0.01 & $-0.14$ & $0.38$\ Si & 1 & $7.13$ & 0.07 & $-0.38$ & $0.14$\ Ca & 2 & $6.12$ & 
0.07 & $-0.19$ & $0.33$\ Ti & 1 & $4.68$ & 0.02 & $-0.22$ & $0.30$\ Ti & 2 & $4.81$ & 0.06 & $-0.09$ & $0.43$\ Cr & 1 & $5.13$ & 0.06 & $-0.51$ & $0.01$\ Mn & 1 & $4.77$ & 0.19 & $-0.62$ & $-0.10$\ Fe & 1 & $7.02$ & 0.02 & $-0.43$ & $0.09$\ Co & 1 & $4.47$ & 0.02 & $-0.45$ & $0.07$\ A(X) $\sigma$ \[X/H\] \[X/Fe\] A(X) $\sigma$ \[X/H\] \[X/Fe\] ---- ------ -------------- -------------- -------------- -------------- ------ -------------- -------------- -------------- -------------- -- -- -- -- -- C 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $8.50$ 0.18 $0.11$ $0.04$ O 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $8.74$ 0.12 $0.08$ $0.01$ Na 1 $6.12$ 0.03 $-0.05$ $0.10$ $6.53$ 0.05 $0.36$ $0.29$ Mg 1 $7.49$ 0.04 $-0.04$ $0.11$ $7.91$ 0.12 $0.38$ $0.31$ Al 1 $6.25$ 0.01 $-0.12$ $0.03$ $6.68$ 0.07 $0.31$ $0.24$ Si 1 $7.30$ 0.01 $-0.21$ $-0.06$ $7.52$ 0.07 $0.01$ $-0.06$ S 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $7.48$ 0.07 $0.34$ $0.27$ Ca 1 $6.07$ 0.01 $-0.24$ $-0.09$ $6.35$ 0.08 $0.04$ $-0.03$ Sc 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $3.00$ 0.09 $-0.17$ $-0.24$ Sc 2 $3.08$ 0.02 $-0.09$ $0.06$ $3.29$ 0.08 $0.12$ $0.05$ Ti 1 $4.74$ 0.06 $-0.16$ $-0.01$ $4.95$ 0.08 $0.05$ $-0.02$ Ti 2 $4.73$ 0.02 $-0.17$ $-0.02$ $5.01$ 0.09 $0.11$ $0.04$ V 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $4.12$ 0.08 $0.12$ $0.05$ Cr 1 $5.33$ 0.03 $-0.31$ $-0.16$ $5.65$ 0.11 $0.01$ $-0.06$ Cr 2 $5.35$ 0.14 $-0.29$ $-0.14$ $5.88$ 0.12 $0.24$ $0.17$ Mn 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $5.42$ 0.16 $0.03$ $-0.04$ Fe 1 $7.20$ 0.02 $-0.25$ $-0.10$ $7.52$ 0.10 $0.07$ $0.00$ Fe 2 $7.17$ 0.06 $-0.28$ $-0.13$ $7.56$ 0.09 $0.11$ $0.04$ Co 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $5.04$ 0.10 $0.12$ $0.05$ Ni 1 $5.97$ 0.02 $-0.26$ $-0.11$ $6.47$ 0.11 $0.24$ $0.17$ Cu 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $4.15$ 0.12 $-0.06$ $-0.13$ Zn 1 $4.44$ 0.12 $-0.16$ $-0.01$ $4.29$ 0.13 $-0.31$ $-0.38$ Sr 1 $3.27$ 0.03 $0.35$ $0.50$ $3.08$ 0.21 $0.16$ $0.09$ Y 2 $1.82$ 0.06 $-0.39$ $-0.24$ $1.96$ 0.12 $-0.25$ $-0.32$ Zr 1 $2.29$ 0.02 $-0.29$ $-0.14$ $2.48$ 0.13 $-0.10$ $-0.17$ Zr 2 $2.37$ 0.06 $-0.21$ $-0.06$ $2.55$ 0.17 $-0.03$ $-0.10$ Mo 1 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $1.78$ 0.15 $-0.14$ $-0.21$ Ba 2 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $2.06$ 0.14 $-0.11$ $-0.18$ La 2 $0.73$ 0.03 $-0.40$ $-0.25$ $0.91$ 0.15 $-0.22$ $-0.29$ Ce 2 $1.40$ 0.08 $-0.30$ $-0.15$ $1.42$ 0.14 $-0.28$ $-0.35$ Pr 2 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $-0.19$ 0.46 $-0.77$ $-0.84$ Nd 2 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $1.37$ 0.18 $-0.08$ $-0.15$ Eu 2 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $0.64$ 0.10 $0.12$ $0.05$ C 1 $8.47$ 0.13 $0.08$ $0.00$ $8.20$ 0.13 $-0.19$ $-0.09$ N (CN) $8.26$ 0.10 $0.48$ $0.40$ [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} O 1 $8.94$ 0.12 $0.28$ $0.20$ $8.76$ 0.20 $0.10$ $0.20$ Na 1 $6.51$ 0.05 $0.34$ $0.26$ $6.25$ 0.05 $0.08$ $0.18$ Mg 1 $7.69$ 0.12 $0.16$ $0.08$ $7.55$ 0.12 $0.02$ $0.12$ Al 1 $6.63$ 0.07 $0.26$ $0.18$ $6.42$ 0.07 $0.05$ $0.15$ Si 1 $7.52$ 0.07 $0.01$ $-0.07$ $7.41$ 0.07 $-0.10$ $0.00$ S 1 $7.21$ 0.07 $0.07$ $-0.01$ $7.06$ 0.07 $-0.08$ $0.02$ Ca 1 $6.32$ 0.08 $0.01$ $-0.07$ $6.19$ 0.07 $-0.12$ $-0.02$ Sc 1 $3.36$ 0.09 $0.19$ $0.11$ $2.98$ 0.07 $-0.19$ $-0.09$ Sc 2 $3.22$ 0.06 $0.05$ $-0.03$ $3.18$ 0.07 $0.01$ $0.11$ Ti 1 $5.05$ 0.08 $0.15$ $0.07$ $4.76$ 0.08 $-0.14$ $-0.04$ Ti 2 $5.04$ 0.07 $0.14$ $0.06$ $4.88$ 0.09 $-0.02$ 
$0.08$ V 1 $4.11$ 0.08 $0.11$ $0.03$ $3.78$ 0.08 $-0.22$ $-0.12$ Cr 1 $5.72$ 0.10 $0.08$ $0.00$ $5.40$ 0.13 $-0.24$ $-0.14$ Cr 2 $5.70$ 0.14 $0.06$ $-0.02$ $5.72$ 0.15 $0.08$ $0.18$ Mn 1 $5.41$ 0.11 $0.02$ $-0.06$ $5.23$ 0.16 $-0.16$ $-0.06$ Fe 1 $7.52$ 0.09 $0.07$ $-0.01$ $7.37$ 0.10 $-0.08$ $0.02$ Fe 2 $7.47$ 0.09 $0.02$ $-0.06$ $7.48$ 0.08 $0.03$ $0.13$ Co 1 $5.12$ 0.10 $0.20$ $0.12$ $4.80$ 0.10 $-0.12$ $-0.02$ Ni 1 $6.36$ 0.10 $0.13$ $0.05$ $6.17$ 0.10 $-0.06$ $0.04$ Cu 1 $4.30$ 0.14 $0.09$ $0.01$ $4.05$ 0.14 $-0.16$ $-0.06$ Zn 1 $3.95$ 0.13 $-0.65$ $-0.73$ $4.78$ 0.13 $0.18$ $0.28$ Sr 1 $3.53$ 0.20 $0.61$ $0.53$ $3.20$ 0.21 $0.28$ $0.38$ Y 2 $2.17$ 0.11 $-0.04$ $-0.12$ $2.14$ 0.12 $-0.07$ $0.03$ Zr 1 $2.55$ 0.15 $-0.03$ $-0.11$ $2.56$ 0.14 $-0.02$ $0.08$ Zr 2 $2.99$ 0.10 $0.41$ $0.33$ $2.90$ 0.10 $0.32$ $0.42$ Mo 1 $2.15$ 0.10 $0.23$ $0.15$ $1.72$ 0.10 $-0.20$ $-0.10$ Ba 2 $2.26$ 0.14 $0.09$ $0.01$ $2.20$ 0.14 $0.03$ $0.13$ La 2 $1.12$ 0.15 $-0.01$ $-0.09$ $0.90$ 0.16 $-0.23$ $-0.13$ Ce 2 $1.70$ 0.13 $0.00$ $-0.08$ $1.59$ 0.15 $-0.11$ $-0.01$ Pr 2 [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} [$\cdots$]{} $0.72$ 0.29 $0.14$ $0.24$ Nd 2 $1.83$ 0.15 $0.38$ $0.30$ $1.37$ 0.19 $-0.08$ $0.02$ Eu 2 $0.84$ 0.13 $0.32$ $0.24$ $0.58$ 0.10 $0.06$ $0.16$ Star A(Li, LTE) A(Li, nLTE) -------------------- ------------ ------------- 08095783$-$4701385 3.51 3.21 18033785$-$3009201 3.19 3.11 19242472$+$0044106 2.74 2.72 19304281$+$2016107 2.60 2.60 08102116$-$4740125 3.52 3.33 08110403$-$4852137 3.51 3.25 08395152$-$5315159 2.15 2.28 10300194$-$6321203 2.96 2.88 10323205$-$6324012 3.07 2.98 10495719$-$6341212 3.05 2.94 10503631$-$6512237 2.59 2.61 11000515$-$7623259 2.59 2.64 19230935$+$0123293 2.80 2.75 19252571$+$0031444 2.10 2.22 19252758$+$0153065 2.99 2.92 19252837$+$0027037 2.86 2.82 19253819$+$0031094 2.99 2.85 19261007$-$0010200 2.95 2.88 19264038$-$0019575 3.35 3.13 19301883$-$0004175 2.52 2.43 : Non-LTE Li calculated using corrections from @Lind_2009a. Stars observed using UVES and GIRAFFE are separated by the horizontal line. For these calculations we adopted ${\xi = 1.5}$ km s$^{-1}$ for the GIRAFFE spectra as $\xi$ measurements were unavailable.[]{data-label="tab:lithium"} $^1$Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK\ $^2$Lund Observatory, Department of Astronomy and Theoretical Physics, Box 43, SE-221 00 Lund, Sweden\ $^3$INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, 50125, Florence, Italy\ $^4$Max-Planck Institut für Astronomie, Königstuhl 17, 69117 Heidelberg, Germany\ $^5$Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20 Uppsala, Sweden\ $^6$Astrophysics Group, Keele University, Keele, Staffordshire ST5 5BG, United Kingdom\ $^7$INAF - Padova Observatory, Vicolo dell’Osservatorio 5, 35122 Padova, Italy\ $^8$INAF - Osservatorio Astronomico di Bologna, via Ranzani 1, 40127, Bologna, Italy\ $^9$INAF - Osservatorio Astronomico di Palermo, Piazza del Parlamento 1, 90134, Palermo, Italy\ $^{10}$GEPI, Observatoire de Paris, CNRS, Université Paris Diderot, 5 Place Jules Janssen, 92190 Meudon, France\ $^{11}$Dipartimento di Fisica e Astronomia, Sezione Astrofisica, Universitá di Catania, via S. 
Sofia 78, 95123, Catania, Italy\ $^{12}$ASI Science Data Center, Via del Politecnico SNC, 00133 Roma, Italy\ $^{13}$Laboratoire Lagrange (UMR7293), Université de Nice Sophia Antipolis, CNRS,Observatoire de la Côte d’Azur, CS 34229,F-06304\ Nice cedex 4, France\ $^{14}$Department for Astrophysics, Nicolaus Copernicus Astronomical Center, ul. Rabiańska 8, 87-100 Toruń, Poland\ $^{15}$European Southern Observatory, Alonso de Cordova 3107 Vitacura, Santiago de Chile, Chile\ $^{16}$Instituto de Astrofísica de Andalucía-CSIC, Apdo. 3004, 18080 Granada, Spain\ $^{17}$Università di Bologna, Dipartimento di Fisica e Astronomia, viale Berti Pichat 6/2, 40127 Bologna, Italy\ $^{18}$INAF - Osservatorio Astrofisico di Catania, via S. Sofia 78, 95123, Catania, Italy\ $^{19}$Astrophysics Research Institute, Liverpool John Moores University, 146 Brownlow Hill, Liverpool L3 5RF, United Kingdom\ $^{20}$Departamento de Ciencias Fisicas, Universidad Andres Bello, Republica 220, Santiago, Chile\ $^{21}$Millennium Institute of Astrophysics, Chile\ $^{22}$Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, 782-0436 Macul, Santiago, Chile\ $^{23}$Instituto de Astrofísica e Ciências do Espaço, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal\ $^{24}$Institute of Theoretical Physics and Astronomy, Vilnius University, A. Gostauto 12, LT-01108 Vilnius\ $^{25}$Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, 1000, Ljubljana, Slovenia\ $^{26}$School of Physics, University of New South Wales, Sydney NSW 2052, Australia\ $^{27}$Stellar Astrophysics Centre, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C, Denmark\ $^{28}$School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham B15 2TT, United Kingdom\ $^{29}$Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany\ $^{30}$Department of Physics and Astronomy G. Galilei, University of Padova, Vicolo dell’Osservatorio 3, I-35122 Padova, Italy\ $^{31}$Institut d’Astrophysique et de Géophysique, Université de Liège, Quartier Agora, Bât. B5c, Allée du 6 Août, 19c 4000 Liège, Belgium\ \[lastpage\] [^1]: E-mail: [email protected] [^2]: For example, see @Wallerstein_Sneden_1982 [@Luck_1982; @Hanni_1984; @Andrievsky_1999; @Balachandran_2000; @Reyniers_Van_Winckel_2001; @Lebre_2009; @Alcala_2011; @Kumar_2011; @Ruchti_2011; @Kovari_2013; @Liu_2014; @Adamow_2015]. [^3]: See also @Delgado_Mena_2015. [^4]: http://keplerscience.arc.nasa.gov/
---
abstract: 'Radar pulse streams exhibit increasingly complex temporal patterns, and emitter classification can no longer rely on a purely value-based analysis of the pulse attributes. In this paper, we employ Recurrent Neural Networks (RNNs) to efficiently model and exploit the temporal dependencies present inside pulse streams. With the purpose of enhancing the network prediction capability, we introduce two novel techniques: a per-sequence normalization, able to mine useful temporal patterns; and attribute-specific RNN processing, capable of processing the extracted information effectively. The new techniques are evaluated with an ablation study and the proposed solution is compared to previous Deep Learning (DL) approaches. Finally, a comparative study on the robustness of the same approaches is conducted and its results are presented.'
address: |
    $^1$ Computer Aided Medical Procedures, Technical University of Munich, Germany\
    $^2$ Airbus Defence and Space, Manching, Germany\
    $^3$ Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, USA
bibliography:
- 'refs.bib'
title: |
    Radar Emitter Classification with\
    Attribute-specific Recurrent Neural Networks
---

Emitter Classification, Deep Learning, Recurrent Neural Networks, Radar Signals.

Introduction {#sec:intro}
============

Emitter classification of radar pulse sequences is a critical task in Electronic Warfare (EW) disciplines such as Electronic Support Measures (ESM), where the correct identification of a target is crucial to determine adequate countermeasures for the protection of sensitive units and other defence purposes [@adamy; @spezio]. The increasing complexity of the electromagnetic environment, due to more sophisticated radar characteristics and higher emitter density, has rendered classification of pulse streams an increasingly difficult task [@spezio]. Traditional approaches to tackle this problem are based on categorization of pulses through statistical measures of different pulse attributes [@wiley]. Commonly exploited pulse features are, in this sense, the Pulse Width (PW) and the Radio Frequency (RF). During a process known as deinterleaving, incoming pulses are clustered by emitter [@adamy; @deinterleaving] so that time-variant parameters, like the pulse repetition interval (PRI), can be computed and further utilized for classification [@pri-classification]. Other intrapulse features are occasionally used, although they are frequently discarded in real-time operations to avoid storage overload.

More recent is the application of Machine Learning and especially Deep Learning (DL) to radar pulse stream classification, for which proposed solutions include Support Vector Machines (SVMs) [@svm], Multilayer Perceptrons (MLPs) [@petrov; @shieh] and Convolutional Neural Networks (CNNs) [@cnn-hong; @cnn-cain; @cnn-sun]. However, these approaches have several shortcomings. Firstly, a sufficient number of pulses needs to be acquired before taking a prediction step, which has repercussions on their real-time applicability. Secondly, temporal patterns and dependencies inside pulse streams are not modelled efficiently, because the pulse order inside the sequence is either not taken into account or, in some cases, the entire series of values is summarized by the average or the domain interval of the attributes [@petrov].
To overcome the aforementioned shortcomings, a recent work [@radar-rnn] introduced Recurrent Neural Networks (RNNs) in the ESM domain as a method to efficiently process pulse streams, due to their proven effectiveness in several sequence processing problems, such as neural machine translation, time series forecasting and classification [@nmt; @forecasting; @tsc-survey]. In this paper, we propose to utilize attribute-specific RNNs in combination with a novel normalization scheme for the challenging task of emitter classification. Towards this end, our contribution is two-fold:

1) we introduce a new type of normalization, here called per-sequence normalization, and we apply it in parallel to the more commonly used min-max normalization, concatenating the output of the two transformations along the feature axis to obtain $2M$ channels, where $M$ is the number of attributes extracted from each pulse.

2) we leverage attribute-specific Long Short Term Memory (LSTM) layers [@lstm] for each feature after the normalization process to compute an intermediate representation useful for classification purposes (see Fig. \[fig:architecture\]).

Methodology {#sec:methodology}
===========

Our method utilizes attributes that are extracted from Pulse Descriptor Words (PDWs) to construct sequences of pulses. These sequences are of the form $\boldsymbol{S} = [\boldsymbol{s_1}, \boldsymbol{s_2}, ..., \boldsymbol{s_t}, ..., \boldsymbol{s_T}]$, $T$ being the length of the sequence, where the individual pulse is represented as a tuple $\boldsymbol{s_t} = (s^1_t, s^2_t, ..., s^j_t, ..., s^M_t) \in \mathbb{R}^M$, $M$ being the number of pulse attributes extracted from the PDW. Moreover, we define $S^j = [s^j_1, s^j_2, ..., s^j_t, ..., s^j_T]$ as the $j$th-attribute sequence of $\boldsymbol{S}$. Finally, each sequence $\boldsymbol{S}$ is associated with a class $y$, $1\leq y \leq C$, to form a dataset $\mathcal{D} =\{(\boldsymbol{S}_1, y_1), (\boldsymbol{S}_2, y_2), ..., (\boldsymbol{S}_i, y_i), ..., (\boldsymbol{S}_N, y_N)\}$ of $N$ samples divided into $C$ classes.

For a classifier $f_w(\bullet)$ with output prediction $\hat{y}=f_w(\boldsymbol{S})$, the goal is to maximize the classification accuracy, so that $\hat{y}_i = y_i$ for as many $(\boldsymbol{S}_i, y_i) \in \mathcal{D}$ as possible. The classifier $f_w$ is parametrized by its weights $w$, which are subject to optimization and trained via first-order methods based on gradient descent. Finally, the model employs LSTM [@lstm] cells, which have been shown to be able to learn long-term dependencies (by means of the internal cell state) while handling the vanishing and exploding gradient problems typically encountered with standard RNN cells [@rnn-gradient].

Normalization scheme
--------------------

The model $f_w$ incorporates a normalization scheme that maps the original sequence $\boldsymbol{S}$ to $\overline{\boldsymbol{S}}$. Differently from [@radar-rnn], the input is not digitized but normalized according to two different techniques. The first one is min-max normalization, for which the sequence values are linearly mapped into the range $[-1, 1]$ according to:

$$\overline{S}^j = 2\cdot\frac{S^j - MIN_j(\mathcal{D})}{MAX_j(\mathcal{D}) - MIN_j(\mathcal{D})} - 1,\;\forall j$$

The attribute domains $[MIN_j(\mathcal{D}), MAX_j(\mathcal{D})]$ are estimated from the whole training data distribution and the normalization is applied attribute-wise to all the sequences of the dataset.
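As a concrete illustration of this first transformation, a minimal sketch is given below. It is our own illustration, not part of any released implementation, and assumes the sequences are stored as NumPy arrays of shape $T \times M$:

```python
import numpy as np

def fit_minmax(dataset):
    """Estimate the per-attribute domains [MIN_j, MAX_j] over the whole training set.
    `dataset` is a list of arrays, each of shape (T_i, M)."""
    pulses = np.concatenate(dataset, axis=0)           # stack all pulses of all sequences
    return pulses.min(axis=0), pulses.max(axis=0)      # two arrays of shape (M,)

def minmax_normalize(sequence, att_min, att_max):
    """Map each attribute sequence linearly into [-1, 1] using the dataset-wide domains."""
    return 2.0 * (sequence - att_min) / (att_max - att_min) - 1.0
```

The per-sequence variant described next applies the same mapping, but with the minimum and maximum taken over the time axis of the individual sequence.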
The second transformation applied is a per-sequence normalization, i.e. sequence attributes are normalized based on the values inside their respective attribute sequence only. This differs from min-max normalization, where the entire dataset distribution of values plays a role in defining the normalization. As $S^j$ defines the values of attribute $j$ in sequence $\boldsymbol{S}$, we have:

$$\overline{S}^j = 2\cdot\frac{S^j - \min_t(S^j)}{\max_t(S^j) - \min_t(S^j)} - 1,$$

This normalization is again applied independently for every attribute $1\leq j \leq M$ and every sequence $1\leq i \leq N$ in the dataset, mapping the observed attribute-sequence domain to the range $[-1, 1]$. Doing so enables the network to isolate temporal patterns inside the sequence with more precision, due to the restricted domain of values taken by the attribute along the time axis. The two normalizations are performed in parallel and the outputs are concatenated along the feature axis to obtain a $T \times 2M$ input. Compared to the discretization proposed in [@radar-rnn], this normalization scheme allows for reduced model complexity and increased inference speed, by eliminating the need for embeddings and reducing the input dimension of the remaining network.

Attribute-specific LSTMs
------------------------

Other differences with respect to [@radar-rnn] are the choice and the architectural layout of the RNN layers. In our approach, a dedicated RNN network is assigned to each sequence channel, so that temporal dependencies can be extracted while the values of different attribute sequences are processed separately (Fig. \[fig:architecture\]). This is motivated by the fact that joint-attribute patterns are oftentimes spurious, since attribute value changes occur independently. These RNN blocks consist of $L$ stacked LSTM [@lstm] layers, each block producing an output feature map of size $T \times h_{LSTM}$. After the RNN layers have processed the normalized input sequences, the outputs of the $2M$ LSTM blocks are concatenated along the hidden-size dimension to obtain a $T \times 2\,h_{LSTM}\,M$ output tensor for a $T$-long input sequence of $M$ pulse attributes. In our experiments we set $L=2$ and $h_{LSTM}=64$.

Model training
--------------

The architecture is completed with a Fully Connected (FC) layer, mapping the hidden features from the LSTMs to the prediction class scores. The softmax function is then applied on these prediction scores to obtain the final class probabilities $\boldsymbol{\hat{y}_i} \in \mathbb{R}^C$, which represent the input of the loss function. The loss function employed for training is the weighted Cross Entropy loss [@weighted-cross-entropy], defined as

$$L = -\frac{1}{W}\sum_{i=1}^N w_{y_i} \log(\hat{y}_{i,y_i}),\;W = \sum_{i=1}^N\frac{1}{w_{y_i}}$$

where the weights $w_c$, $1\leq c\leq C$, are estimated on the training set through median frequency balancing [@mfb]. Finally, dropout [@dropout] (with $p=0.5$) is used between the stacked LSTM layers and before the final FC layer to prevent overfitting and improve generalization.
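To make the overall layout concrete before turning to the experiments, the following PyTorch sketch assembles the pieces described in this section. It is our illustration rather than the authors' released code: the sizes follow the values quoted above ($L=2$, $h_{LSTM}=64$, dropout $p=0.5$), while details the text leaves open — for instance, how the $T \times 2\,h_{LSTM}\,M$ feature map is reduced before the FC layer (here, by taking the last time step) — are assumptions.

```python
import torch
import torch.nn as nn

class AttributeSpecificRNN(nn.Module):
    """One stacked LSTM per normalized input channel; hidden features are concatenated
    and mapped to class scores by a fully connected layer (illustrative sketch)."""

    def __init__(self, n_attributes=3, n_classes=17, hidden=64, layers=2, p_drop=0.5):
        super().__init__()
        self.n_channels = 2 * n_attributes              # min-max + per-sequence channels
        self.lstms = nn.ModuleList([
            nn.LSTM(input_size=1, hidden_size=hidden, num_layers=layers,
                    batch_first=True, dropout=p_drop)   # dropout between stacked layers
            for _ in range(self.n_channels)
        ])
        self.drop = nn.Dropout(p_drop)                  # dropout before the FC layer
        self.fc = nn.Linear(self.n_channels * hidden, n_classes)

    def forward(self, x):
        # x: (batch, T, 2*M) -- each channel is processed by its own LSTM block
        feats = []
        for c, lstm in enumerate(self.lstms):
            out, _ = lstm(x[:, :, c:c + 1])             # (batch, T, hidden)
            feats.append(out[:, -1, :])                 # last time step (assumption)
        h = self.drop(torch.cat(feats, dim=-1))         # (batch, 2*M*hidden)
        return self.fc(h)                               # class scores (softmax in the loss)

# usage sketch: weighted cross-entropy with class weights w_c (e.g. median frequency balancing)
model = AttributeSpecificRNN()
criterion = nn.CrossEntropyLoss(weight=torch.ones(17))  # replace with the estimated w_c
logits = model(torch.randn(8, 128, 6))                  # 8 sequences, T=128, 2*M=6 channels
loss = criterion(logits, torch.randint(0, 17, (8,)))
```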
Experimental Setup {#sec:experimental_setup}
==================

To test the described supervised approach, a dataset of labeled pulse sequences was produced. It was obtained by means of radar simulation software, in which different radar characteristics were defined and where, resembling a real-world scenario, signals were programmed to be detected by the receiving antenna of an external agent. Afterwards, detected pulses are clustered by emitter and sorted by time, similar to a real operating threat-detection system, to produce the final pulse sequences. The dataset consists of 60910 training samples and 17382 test samples, divided into 17 classes of emitters, taken from different operating environments (aircraft, marine or ground-based). To depict a more realistic case, classes are not equally represented inside the dataset, making the classification task more challenging. Moreover, the pulse sequences present in the dataset have variable length, ranging from a few pulses (7-8) to an upper limit of 512. In our experiments, the set of pulse attributes consists of PRI, PW and RF.

The class imbalance was taken into consideration in terms of evaluation metrics as well. To emphasize correct classification for all the classes, regardless of their relative frequency, macro-averaged classification accuracy ($M$) was measured during all the experiments. Macro-averaged metrics are obtained by computing the metric independently for each class and then taking the average, i.e. $M = \frac{1}{C}\sum_{c=1}^{C}{ACC_c}$, where $ACC_c=\frac{1}{N_c}\sum_{i=1}^N\mathbbm{1}(y^{pred}_i=y_i=c)$. Accuracies have been measured on the test set, consisting of unseen examples, in order to test the generalization power of the models.

This evaluation metric was used to perform an ablation study of the different techniques introduced in the paper. First, a model was tested without any input normalization. Then the same model was tested with min-max input normalization only. Finally, the same evaluation was performed on a model with the proposed normalization scheme. Results are then compared. According to the same principle, the same model was tested with and without attribute-specific LSTM layers to showcase their effectiveness.

Moreover, a baseline comparison of different DL-based emitter classification approaches was carried out on our dataset. Other tested models include:

- The already mentioned Liu et al. [@radar-rnn], who apply an RNN architecture based on Gated Recurrent Units (GRUs) [@gru] with discretization of the two input features PRI and PW, without employing RF;

- The work of Petrov et al. [@petrov], who employ an MLP on statistics computed from attribute sequences, more specifically minimum and maximum observed values for PRI, PW, and RF;

- A ResNet18 [@resnet] model, which is a state-of-the-art CNN architecture. Even though it was initially designed for image classification on 2D inputs, it has been shown to work effectively on time series classification as well [@tsc-survey; @tsc-resnet].

To ensure a more objective baseline for comparison, the same set of attributes, consisting of PRI, PW and RF, was utilized. Therefore, methods that explicitly mention using only a subset of these three pulse features were tested both in the configuration described in the original paper and in the configuration of this paper. Finally, where a paper proposed more than one normalization, all of the proposals were included in our baselines.

Results and Discussion {#sec:results_discussion}
======================

Table \[tab:accuracies-ablative-sfp\] summarizes the results of the ablation study regarding the prediction accuracy when applying attribute-specific LSTMs, compared to the standard joint RNN processing case. In all cases, the introduction of attribute-specific LSTMs outperforms the joint RNNs by 2-14%.
Additionally, Table \[tab:accuracies-ablative-sfp\] clearly highlights the improvement in accuracy of attribute-specific LSTMs brought about by the proposed normalization scheme, rendering the combination of these two components the most favorable option for radar emitter classification.

In Table \[tab:accuracies-comparison\] the accuracy of our method is compared with a variety of approaches which have previously been deployed for emitter classification. The superiority of the proposed method is clear, with an improvement across the board ranging between 2% and 19%. Specifically, compared with Liu et al. [@radar-rnn], the improvement brought by our method is 19% when using the configuration described explicitly in [@radar-rnn] and 10% when we also incorporate the RF information for fairness. The increase in performance can be attributed to the fact that the proposed normalization is more suitable for this domain than discretization. Furthermore, we achieved a 2%-12% improvement in comparison to Petrov et al. [@petrov]. Utilizing temporal information and per-sequence normalization, which isolates temporal patterns inside the sequences efficiently, provides a tailored approach for emitter classification. Finally, a state-of-the-art ResNet18 [@resnet] was also outperformed by our method by 14%, highlighting the importance of the temporal information stored by the LSTMs.

In Figure \[fig:cf\_comparison\], we compare the confusion matrices of Liu et al. [@radar-rnn] and ResNet18 [@resnet] with the proposed method, in order to showcase the performance achieved per class and the main sources of ambiguity that increased the difficulty of the task at hand. Neighboring classes usually represent similar types of emitters, such as air-based or ground-based. We can clearly see that all methods are prone to confusing classes originating from similar emitter types. However, the proposed method achieves the lowest number of misclassifications and, as can be seen, the majority of its predictions lie on the diagonal, overlapping with the ground-truth classes.

Finally, we performed a robustness evaluation experiment by adding Gaussian noise to the radar signals in order to showcase the resistance of our method to noise perturbations. In our experimental setup, additive Gaussian noise was applied in increasing quantities to the pulse sequences, reaching up to 10% of the signal magnitude. This value corresponds to a Signal-to-Noise Ratio (SNR) of 20 dB (since $20\log_{10}(1/0.1)=20$ dB), a threshold in the proximity of the minimum SNR requirement for most EW systems [@adamy]. Fig. \[fig:robustness-noise\] shows that all the methods, except for Liu et al. [@radar-rnn], maintained relatively stable performance in the presence of noise. However, the proposed approach still outperformed the other baselines by an increasing margin as the noise level grew, ranging from a 2% improvement with no noise to 6% at an SNR of 20 dB in comparison to Petrov et al. [@petrov]. ResNet18 [@resnet] shows high robustness to noise; however, the proposed method combines resilience to noise with an accuracy improvement of 15%.

Conclusion {#sec:conclusion}
==========

In this paper we introduced a novel method for the complex task of radar emitter classification. Our approach comprises an application-specific normalization scheme to address the large variability of values within signal attributes, and feature-specific RNNs, which not only incorporate temporal dependencies in the method but also improve the processing of individual features.
Through thorough ablation testing of the individual components of the proposed technique and comparison with previous state-of-the-art methods, we showcased the superiority of our method in terms of accuracy. Furthermore, our robustness evaluation with additive Gaussian noise showed that our approach is more stable in the presence of noise than the other baselines. Future work may include the application of the proposed method across domains to other pulsed signals, such as LIDAR signal classification in autonomous driving. Other future applications extend to tasks in the medical domain, such as pulse irregularity detection in ECGs or MRIs and EEG signal classification.
---
abstract: 'The purpose of this paper is to establish $L^p$ error estimates, a Bernstein inequality, and inverse theorems for approximation by a space comprising spherical basis functions located at scattered sites on the unit $n$-sphere. In particular, the Bernstein inequality estimates $L^p$ Bessel-potential Sobolev norms of functions in this space in terms of the minimal separation and the $L^p$ norm of the function itself. An important step in its proof involves measuring the $L^p$ stability of functions in the approximating space in terms of the $\ell^p$ norm of the coefficients involved. As an application of the Bernstein inequality, we derive inverse theorems for SBF approximation in the $L^p$ norm. Finally, we give a new characterization of Besov spaces on the $n$-sphere in terms of spaces of SBFs.'
author:
- 'H. N. Mhaskar[^1], F. J. Narcowich[^2], J. Prestin[^3], J. D. Ward[^4]'
title: '$L^p$ Bernstein Estimates and Approximation by Spherical Basis Functions [^5] [^6]'
---

Introduction
============

Various applications in meteorology, cosmology, and geophysics require modeling of functions based on *scattered data* collected on (or near) a sphere; i.e., when one does not have any control over where the data sites are located [@Freeden-etal-97-1; @Freeden-etal-98-1; @Freeden-etal-04-1]. On ${\mathbb{S}}^n$, the unit sphere in ${\mathbb{R}}^{n+1}$, $n\ge 1$, a popular method is to construct the required approximation from spaces of spherical basis functions (SBFs), which are kernels located at points in a discrete set $X=\{\xi_j\}_{j=1}^N\subset {\mathbb{S}}^n$, the set of *centers* or *nodes*. A function $\phi : [-1,1]\to{\mathbb{R}}$ is an SBF on ${\mathbb{S}}^n$ if, in its expansion in ultraspherical polynomials $\ultra {\lambda_n} \ell$, $\lambda_n=\frac{n-1}{2}$, the Fourier-Legendre coefficients $\{\hat\phi(\ell)\}$ of $\phi$ are all positive; see section \[SBFs\] for details. These $\phi$ are to be used as kernels of the form $\phi(x\cdot y)$, $x,y\in {\mathbb{S}}^n$, $x\cdot y$ being the usual “dot” product. The approximation space here is the span $$\calg_{\phi,X}:= {\operatorname{span}}\{\phi(x\cdot\xi)\}_{\xi\in X}.$$ Following usage common in the neural network community, we will say that a function $g\in \calg_{\phi,X}$ is an *SBF network* associated with $\phi$. The SBF $\phi$ is sometimes called an *activation function* or a *neuron*, but we will not use these terms here.

Such $\phi$ may have singular behavior. This is the case for certain thin-plate splines; $(1-x\cdot y)^{-1/2}$ is an SBF on ${\mathbb{S}}^n$, $n\ge 2$, for instance. However, when they are continuous, they are positive definite in Schoenberg’s sense [@Schoenberg-42-1]. In that case the interpolation matrix $[\phi(\xi_i\cdot\xi_j)]$ is positive definite, and it is possible to use SBFs to interpolate data given at the points in $X$.

The focus of this paper is approximation. To handle noisy data, both least squares and quasi-interpolants have been used for many years. More recently, the issue in many meshless numerical methods for solving PDEs is how well a network approximates a solution to the PDE. Singular SBFs should prove useful in probing for a corresponding singularity in solutions. To be effective, though, such methods require knowing the degree of approximation in various spaces, especially $L^p$, $1\le p\le \infty$.
The $L^2$ case for SBFs $\phi$ with $\hat\phi(\ell) \sim (\ell+1)^{-\beta}$, $\beta>n/2$, has recently been investigated in [@Narcowich-etal-07-1], with nearly optimal rates being attained by interpolatory networks. The known estimates on the degree of approximation in the case of $L^p$, $p\ne 2$, provided by interpolatory networks are not asymptotically optimal. This has led to the development of other approximation tools [@Mhaskar-etal-99-1; @Mhaskar-06-1; @Narcowich-etal-06-1], involving SBFs or spherical harmonics, in $L^p$, $1\le p\le \infty$. A central step in obtaining approximation rates in $L^2$ was establishing a Bernstein estimate, which was then used to get an inverse approximation theorem.

The paper has three main goals. The first is to derive an $L^p$ Bernstein inequality, for $1\le p\le \infty$; namely, $\|g\|_{H^p_\gamma} \le Cq^{-\gamma}\|g\|_p$, $0<\gamma<c_\phi$. Here $H^p_\gamma$ is a Bessel-potential Sobolev space [@Strichartz-83-1; @Triebel-86-1]; it measures derivatives of $g$ (cf. section \[bessel-sobolev\]). The quantity $q$ is half the minimal separation of points in $X$; $q^{-1}$ plays the role of a Nyquist frequency. The second is to obtain $L^p$ error estimates, $1\le p\le \infty$, for approximating a function by networks in $\calg_{\phi, X}$. We combine these direct (Favard-Jackson) estimates with the Bernstein inequalities to provide new characterizations of Besov spaces on ${\mathbb{S}}^n$, characterizations that use rates of approximation from the $\calg_{\phi,X}$. The Bernstein estimates are then used to establish inverse theorems and obtain nearly optimal rates of approximation. The third is to show that the results obtained here apply to nearly all of the SBFs of interest. In particular, they apply to various RBFs restricted to the sphere – the thin-plate splines and Wendland functions, whose Fourier-Legendre coefficients have algebraic decay, and also Gaussians and multiquadrics, whose coefficients decay faster than algebraically. SBFs in the latter class are well known to be difficult to treat.

The paper is organized this way. Section \[background\] reviews various geometric quantities, such as the set of centers, mesh norm, and so on. It also discusses spherical harmonics and the Bessel-potential Sobolev spaces. Section \[SBFs\] discusses SBFs, their Fourier-Legendre expansions, and deals in detail with the SBFs mentioned earlier, along with ones corresponding to certain Green’s functions that play a significant role in the paper. It is here that we will show that nearly all of the SBFs of interest have the properties necessary for our results to hold. We also mention that we obtain precise asymptotic expressions for the Fourier-Legendre coefficients in the case of the Wendland functions. The strategy for establishing the Bernstein inequality, which will be detailed below, consists of two key components: $L^p$ approximation results for functions in $\calg_{\phi,X}$ by means of spherical polynomials, and $L^p$ stability estimates; these are developed in sections \[approximation\] and \[stability\], respectively. The approximation results are based on Marcinkiewicz-Zygmund inequalities developed in [@Mhaskar-etal-01-1; @Mhaskar-etal-01-2; @Narcowich-etal-06-1], as well as frame results from [@Narcowich-etal-06-1]. The stability results, which are of interest in their own right, are for all $L^p$ – not just for interpolation with continuous SBFs.
To obtain them, we introduce a *stability ratio*, which provides some measure of the extent to which a finite set in $L^p$ is linearly independent. In section \[bernstein\_inverse\_theorems\], the results of the previous two sections are combined to yield $L^p$ Bernstein inequalities (section \[bernstein\_inequalities\]), direct theorems for approximation by networks in $\calg_{\phi,X}$ (section \[direct\_thms\]), characterizations of Besov spaces on ${\mathbb{S}}^n$ (section \[besov\_spaces\]), and inverse theorems for $L^p$ functions approximated at given rates by SBF networks (section \[inverse\_theorems\]).

#### Strategy

Let $g$ be an SBF network in $\calg_{\phi,X} \subset H^p_\gamma({\mathbb{S}}^n)$, so that it has the form $$g(\bfx)=\sum_{\xi\in X}a_\xi \phi(\bfx \cdot \xi).$$ One of our main goals is to obtain an $L^p$ Bernstein inequality for such networks; that is, a bound of the form $\|g\|_{H^p_\gamma} \le Cq^{-\gamma}\|g\|_p$, where the norms are those appropriate for ${\mathbb{S}}^n$ and $\gamma>0$ is bounded above by a constant depending on $\phi$ and $p$. Our strategy involves approximating $g$ by degree $L$ spherical polynomials on ${\mathbb{S}}^n$, where $L\sim q^{-1}$. Now, for fixed $L$ and any spherical polynomial $S$ of degree at most $L$, there is a Bernstein inequality, $\|S\|_{H^p_\gamma}\le CL^\gamma \|S\|_p$, which is found in Theorem \[bernstein-nikolskii-poly-ineqs\]. Using it and manipulations involving the triangle inequality, one has that $$\|g\|_{H^p_\gamma} \le \|S\|_{H^p_\gamma}+\|g-S\|_{H^p_\gamma}\le CL^\gamma \|S\|_p+\|g-S\|_{H^p_\gamma},$$ which holds for given $L$ and any such $S$. Obtaining an appropriate polynomial $S$ is crucial to the argument. To do that, we will use the frame operators introduced in [@Narcowich-etal-06-1] and discussed in more detail in section \[frames\] below. In particular, we need reconstruction operators $\sfb_J$, with $J\sim \log_2 L$. These rotationally invariant operators have other very useful approximation properties, which are given in Proposition \[B\_J\_approx\_thm\]. They take $L^p$ spaces and the space of continuous functions boundedly into spherical polynomials having degree $\calo(2^J)$. Consequently, with $S=\sfb_J g$, we have $\|S\|_p\le C\|g\|_p$, and also $$\|g\|_{H^p_\gamma} \le C2^{\gamma J} \|g\|_p+\|g-\sfb_J g\|_{H^p_\gamma} = C2^{\gamma J} \|g\|_p + \frac{|a|_p}{\|g\|_p} \cdot \frac{\|g-\sfb_J g\|_{H^p_\gamma}}{|a|_p} \cdot \|g\|_p$$ where $|a|_p = \left(\sum_{\xi \in X} |a_\xi|^p \right)^{1/p}$ is the $p$-norm of $a=\{a_\xi\}_{\xi\in X}$. The functions $\{ \phi((\cdot)\cdot \xi)\}_{\xi\in X}$ are linearly independent and form a basis for $\calg$, and so the pairing $a\leftrightarrow g$ is bijective. Since $\calg$ has finite dimension $| \calg |$, the ratio $$\label{s-ratio} \maxg_{\calg,\, p}:= \max_{\calg \ni g \neq 0}\frac{|a|_p}{\|g\|_p}$$ is finite; it will be called the $p$-norm *stability ratio* of the network $\calg=\calg_{\phi,X}$. This ratio is similar to a condition number in interpolation, but for $L^p$. With it, the inequality directly above becomes $$\label{basic_ineq} \|g\|_{H^p_\gamma} \le \left( C2^{\gamma J} + C'\maxg_{\calg,p} \left(\frac{\|(I-\sfb_J)g\|_{H^p_\gamma}}{|a|_p} \right) \right)\|g\|_p.$$ To obtain the desired Bernstein inequality, we require two bounds: the first on $\|(I-\sfb_J)g\|_{H^p_\gamma}/|a|_p$ and the second on $\maxg_{\calg,p}$. The first bound relies only on approximation results; these we cover in section \[approximation\]. The second is a bound on the stability ratio.
This bound requires a more detailed analysis involving both the geometry of $X$ and properties of $\phi$. It is carried out in section \[stability\]. An interesting point is that the two bounds make different demands on the properties required for $\phi$. This makes the analysis of both bounds subtle. Fortunately, the common demands are satisfied by large classes of SBFs, including restrictions to ${\mathbb{S}}^n$ of the most common RBFs – the thin-plate splines, Wendland functions, Gaussians, Hardy multiquadrics, and others.

Background
==========

Background and notation for ${\mathbb{S}}^n$ {#notation}
--------------------------------------------

#### Centers and decompositions of ${\mathbb{S}}^n$.

Let $X$ be a finite set of distinct points in ${\mathbb{S}}^n$; we will call these the *centers*. For $X$, we define these quantities: the *mesh norm*, $h_X=\sup_{y\in {\mathbb{S}}^n} \inf_{\xi\in X} d(\xi,y)$, where $d(\cdot,\cdot)$ is the geodesic distance between points on the sphere; the *separation radius*, $q_X=\frac12 \min_{\xi\neq\xi'}\, d(\xi,\xi')\,$; and the *mesh ratio*, $\rho_X:=h_X/q_X\ge 1$. For $\rho\ge 1$, define $\calf_\rho=\calf_\rho({\mathbb{S}}^n)$ to be the family of all sets of centers $X$ with $\rho_X\le \rho\,$. We say that $X$ is *$\rho$-uniform* if $X\in \calf_\rho$. For every $\rho\ge 2$, $\calf_\rho({\mathbb{S}}^n)$ is not only nonempty, but it contains *nested* sequences of sets of centers for which $h_X$ becomes arbitrarily small; precisely, the result is this:

\[nesting\_Xs\] Let $\rho\ge 2$ and let $\calf_\rho$ be the corresponding $\rho$-uniform family. Then, there exists a sequence of sets $X_k\in \calf_\rho$, $k=0,1,\ldots$, such that the sequence is nested, $X_k\subset X_{k+1}$, and such that at each step the mesh norms satisfy $\frac14 h_{X_k} < h_{X_{k+1}}\le \frac12 h_{X_k}$.

We will need to consider a decomposition of ${\mathbb{S}}^n$ into a finite number of non-overlapping, connected regions $R_\xi$, each containing an interior point $\xi$ that will serve for function evaluations as well as labeling. For example, if $\calx$ is the Voronoi tessellation for a set of centers $X$, then we may take $R_\xi$ to be the region associated with $\xi\in X$. In any case, we will let $X$ be the set of the $\xi$’s used for labels and $\calx=\{R_\xi\subset {\mathbb{S}}^n\,|\, \xi\in X\}$. In addition, let $\|\calx\|=\max_{\xi\in X} \{\mbox{diam}(R_\xi)\}$.

Spherical harmonics
-------------------

Let $n\ge 2$. Let $d\mu$ be the standard measure on the $n$-sphere, and let the spaces $L^p({\mathbb{S}}^n)$, $1\le p\le \infty$, have their usual meanings. In addition, let $\Delta_{{\mathbb{S}}^n}$ denote the Laplace-Beltrami operator on ${\mathbb{S}}^n$. The eigenvalues of $\Delta_{{\mathbb{S}}^n}$ are $-\ell(\ell +n-1)$, $\ell=0,1,\ldots$. For $n\ge 2$ and $\ell$ fixed, the dimension of the eigenspace is $$\label{H_dimension} d_\ell^n =\frac{\ell+\lambda_n}{\lambda_n} \binom{\ell+n-2}{\ell} \ \stackrel{\ell\to\infty}{\sim}\ \frac{\ell^{n-1}}{\lambda_n (n-2)!},\ \lambda_n:=\frac{n-1}{2}.$$ For $n=1$, the case of the circle, $d_0^1=1$ and $d_\ell^1=2$, $\ell\ge1$. A spherical harmonic $Y_{\ell,m}$ is an eigenfunction of $\Delta_{{\mathbb{S}}^n}$ corresponding to the eigenvalue $-\ell(\ell +n-1)$ [@Mueller-66-1; @Stein-Weiss-71-1], where $m=1\ldots d_\ell^n$. The set $\{Y_{\ell,m}\;\colon \ell=0,1, \ldots, m=1\ldots d_\ell^n\}$ is orthonormal in $L^2({\mathbb{S}}^n)$.
Denote by $\calh_\ell$ the span of the spherical harmonics with fixed order $\ell$, and let $\Pi_L=\bigoplus_{\ell=0}^L \calh_\ell$ be the span of all spherical harmonics of order at most $L$. The orthogonal projection $\sfp_\ell$ onto $\calh_\ell$ is given by $$\label{proj_L} \sfp_\ell f = \sum_{m=1}^{d_\ell^n} \langle f,Y_{\ell,m} \rangle Y_{\ell,m}\,.$$ We regard the sphere ${\mathbb{S}}^n$ as being the unit sphere in ${\mathbb{R}}^{n+1}$, and we let the quantity $\xi\cdot \eta$ denote the usual “dot” product for ${\mathbb{R}}^{n+1}$. Using the addition formula for spherical harmonics, when $n\ge 2$, one can write the kernel for this projection as $$\label{kernel_proj_L} P_\ell(\xi\cdot\eta) = \sum_{m=1}^{d_\ell^n} Y_{\ell,m}(\xi) \overline{ Y_{\ell,m}(\eta)} = \frac{\ell+\lambda_n}{\lambda_n\omega_n}\ultra {\lambda_n} \ell (\xi\cdot \eta), \ \lambda_n := \frac{n-1}{2},$$ where $\ultra {\lambda_n} \ell (\cdot)$ is the ultraspherical polynomial of order $\lambda_n$ and degree $\ell$. Also, we have that $\|\ultra {\lambda_n} \ell \|_\infty \le \ultra {\lambda_n} \ell (1)=\frac{d_\ell^n \lambda_n}{\ell+\lambda_n}$. We will briefly discuss these polynomials in section \[SBFs\], in connection with spherical basis functions. For $n=1$, $\lambda_1=0$. In that case, the kernel for $\sfp_\ell$ has the form $$\label{kernel_proj_L_n=1} P_\ell(\xi\cdot\eta) = \left\{ \begin{array}{ll} \frac{1}{2\pi}, &\ell=0\\[5pt] \frac{1}{\pi}T_\ell(\xi\cdot \eta ), &\ell \ge 1, \end{array}\right.$$ where $T_\ell(\cdot)$ is the degree-$\ell$ Chebyshev polynomial of the first kind, which is a limiting case of the ultraspherical polynomials [@Szego-75-1 Section 4.7]. We will also need to consider operators of the form $\sum_{\ell=0}^\infty c_\ell \sfp_\ell$. The kernels for the projections $\sfp_\ell$ then provide us with kernels $\sum_{\ell=0}^\infty c_\ell P_\ell(\xi\cdot\eta)$, which may be distributional. Bessel-potential Sobolev spaces {#bessel-sobolev} ------------------------------- The spherical harmonic $Y_{\ell,m}$ is an eigenfunction corresponding to the eigenvalue $-\ell(\ell +n-1)=\lambda_n^2-(\ell+\lambda_n)^2$ for Laplace-Beltrami operator $\Delta_{{\mathbb{S}}^n}$ on ${\mathbb{S}}^n$. It follows that $\ell+\lambda_n$ is an eigenvalue corresponding to the eigenfunctions $Y_{\ell,m}\,, m=1\ldots d^n_\ell$, of the pseudo-differential operator $$\label{psiDO_Ln} \sfl_n:=\sqrt{\lambda_n^2-\Delta_{{\mathbb{S}}^n}} = \sum_{\ell=0}^\infty (\ell+\lambda_n)\sfp_\ell.$$ Let $\gamma$ be real, $1\le p \le \infty$ and $n\ge 2$. If $f$ is a distribution on ${\mathbb{S}}^n$, define the Bessel-potential Sobolev spaces $H^p_\gamma({\mathbb{S}}^n)$ [@Strichartz-83-1; @Triebel-86-1] to be all $f$ such that $$\label{def-H} \|f\|_{H^p_\gamma}:= \Big\|\sum_{\ell=0}^\infty (\ell+\lambda_n)^{\gamma} \sfp_\ell f \Big\|_{L^p} < \infty,$$ where $\sfp_\ell$ is from (\[proj\_L\]). The notation we use here is that of Triebel [@Triebel-86-1]. Strichartz [@Strichartz-83-1] defined these spaces on a complete Riemannian manifold, using the equivalent operator $(1-\Delta_{{\mathbb{S}}^n})^{\gamma/2}$ to do so. One further fact will be needed: \[dom\_L\] The space $H^2_\gamma({\mathbb{S}}^n)$ is the domain of $\sfl_n^\gamma$ [[@Strichartz-83-1 Theorem 4.4]]{}, which implies that $H^2_\gamma({\mathbb{S}}^n)$ is norm equivalent to the usual Sobolev space $W^\gamma_2({\mathbb{S}}^n)$.
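Before turning to the basis functions themselves, we note, as a numerical aside, that for $p=2$ the norm (\[def-H\]) is a weighted $\ell^2$ norm of spherical-harmonic coefficients, so the polynomial Bernstein inequality used in the strategy outlined earlier is immediate when $p=2$. The sketch below (illustrative only; it assumes $n=2$, where $d_\ell^2=2\ell+1$) simply makes that coefficient-space computation concrete.

```python
# Sketch: for p = 2, ||f||_{H^2_gamma}^2 = sum_l (l + lambda_n)^{2 gamma} * sum_m |<f, Y_{l,m}>|^2,
# so ||S||_{H^2_gamma} <= (L + lambda_n)^gamma ||S||_2 for any spherical polynomial S of
# degree L.  We verify this on random coefficient vectors for n = 2.  Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, gamma, L = 2, 1.5, 20
lam = (n - 1) / 2.0
dims = [2 * ell + 1 for ell in range(L + 1)]      # d_ell^2 = 2*ell + 1

for _ in range(100):
    coeffs = [rng.normal(size=d) + 1j * rng.normal(size=d) for d in dims]
    l2_sq = sum(np.sum(np.abs(c) ** 2) for c in coeffs)                    # ||S||_2^2
    sob_sq = sum((ell + lam) ** (2 * gamma) * np.sum(np.abs(c) ** 2)
                 for ell, c in enumerate(coeffs))                          # ||S||_{H^2_gamma}^2
    assert np.sqrt(sob_sq) <= (L + lam) ** gamma * np.sqrt(l2_sq) + 1e-12
print("L^2 Bernstein bound holds on all samples")
```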
Spherical basis functions {#SBFs} ========================= For any real $\lambda>0$, not just $\lambda_n=\frac{n-1}2$, the ultraspherical polynomials satisfy the orthogonality relation, $$\label{orthog_rel} \int_{-1}^1\ultra {\lambda} \ell (x) \ultra {\lambda} k (x)(1-x^2)^{\lambda- \frac12}dx = \frac{2^{1-\lambda}\pi \Gamma(\ell+2\lambda)}{(\ell+\lambda)\Gamma^2( \lambda) \Gamma(\ell+1)} \delta_{k,\ell}.$$ For the circle, we have $\lambda_1=0$. With $\ell\ge 1$, as $\lambda \to 0$, the ratio $ \ultra {\lambda} \ell (\cdot)/\lambda$ converges to $(2/\ell)T_\ell(\cdot)$, the degree-$\ell$ Chebyshev polynomial of the first kind [@Szego-75-1 Section 4.7]. Consider a function $\phi$ in $L^p$ or $C$. We will assume that $\phi$ has the following expansion in the orthogonal set of ultraspherical polynomials: $$\label{sbf_def} \phi(\underbrace{\xi\cdot \eta}_{\cos\theta}) := \left\{ \begin{array}{ll} \frac{1}{2\pi}\hat\phi(0) + \frac{1}{\pi}\sum_{\ell=1}^\infty \hat\phi(\ell)\cos \ell \theta, &n=1,\\ [5pt] \sum_{\ell=0}^\infty \hat\phi(\ell) \frac{\ell+\lambda_n}{\lambda_n\omega_n}\ultra {\lambda_n} \ell (\cos \theta ), &n\ge 2. \end{array}\right.$$ where $\omega_n := \frac{2\pi^{\frac{n+1}{2}}}{\Gamma(\frac{n+1}{2})}$ is the volume of ${\mathbb{S}}^n$. Functions of this form are called *zonal*. We will assume that the series converges in at least a distributional sense. The coefficients in the expansion are obtained via the orthogonality relations in (\[orthog\_rel\]). These are given below. $$\frac{\ell+\lambda_n}{\lambda_n\omega_n}\hat\phi(\ell)= \frac{(\ell+\lambda_n)\Gamma^2(\lambda_n)\Gamma(\ell+1)} {2^{1-\lambda_n}\pi \Gamma(\ell+2\lambda_n)} \int_{-1}^1\phi(x)\ultra {\lambda_n} \ell (x) (1-x^2)^{\lambda_n-\frac12}dx.$$ Using Rodrigues’ formula [@Szego-75-1 Eqn. (4.7.12)] for $\ultra {\lambda_n} \ell (x)$ in the equation above and employing the duplication formula and other standard properties of the Gamma function, one can obtain this expression: $$\hat\phi(\ell)=\frac{(-1)^\ell \omega_n \Gamma(\lambda_n+1)}{2^\ell\sqrt{\pi} \Gamma(\ell +\lambda_n +\frac12)}\int_{-1}^1\phi(x)\frac{d^\ell}{dx^\ell} \left\{(1-x^2)^{\ell+\lambda_n-\frac12}\right\}dx,$$ which holds for all $\ell$, even when $n=1$ – i.e., $\lambda_1=0$. Schoenberg [@Schoenberg-42-1] defined $\phi$ to be positive definite if for every set of centers $X$ the matrix $[\phi(\xi_j\cdot\xi_k)]$ is positive semidefinite. He showed that $\phi$ is positive definite if and only if the coefficients satisfy $\hat\phi(\ell)\ge 0$ for all $\ell$ and $\sum_{\ell=0}^\infty \hat\phi(\ell)d_\ell^n <\infty$. If in addition $\hat\phi(\ell)>0$ for all $\ell$, then $[\phi(\xi_j\cdot\xi_k)]$ is a positive definite matrix and one can use shifts of $\phi$ to interpolate any function $f\in C({\mathbb{S}}^n)$ on $X$. We will say that $\phi$ is a *spherical basis function* (SBF) in this case. One usually makes the assumption that the sum $\sum_{\ell=0}^\infty \hat\phi(\ell)d_\ell^n <\infty$, for then $\phi$ is continuous and $\phi(1)=\|\phi\|_{L^\infty}$. This is essential if we are doing standard interpolation of a function from its values on $X$. However, we are more interested in approximation than interpolation, and so we will *not* make this assumption here. Indeed, we will say that any distribution $\phi$ for which $\hat\phi(\ell)>0$ for all $\ell$ is a spherical basis function. In general, we will be interested in SBFs in $L^p$. Zonal functions that satisfy $\hat\phi(\ell)>0$ for $\ell\ge L>0$ are said to be *conditionally positive definite* SBFs.
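The following sketch is a purely numerical illustration of the coefficients and of Schoenberg's criterion. It assumes $n=2$, where the ultraspherical polynomials reduce to Legendre polynomials and the weight $(1-x^2)^{\lambda_n-\frac12}$ is identically $1$, so that $\hat\phi(\ell)=2\pi\int_{-1}^1\phi(x)P_\ell(x)\,dx$. The zonal function used, the restriction of a Gaussian RBF, is only a convenient example.

```python
# Sketch (n = 2 only): recover hat{phi}(ell) by Gauss-Legendre quadrature and observe
# Schoenberg's criterion on a small matrix [phi(xi_j . xi_k)].  Illustrative only.
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

sigma = 3.0
phi = lambda t: np.exp(-2.0 * sigma * (1.0 - t))   # a Gaussian RBF restricted to S^2

# hat{phi}(ell) = 2*pi * int_{-1}^{1} phi(x) P_ell(x) dx   (the n = 2 normalization)
x, w = leggauss(200)
coeffs = np.array([2 * np.pi * np.sum(w * phi(x) * Legendre.basis(ell)(x))
                   for ell in range(20)])
print(coeffs[:6])
assert np.all(coeffs > 0)        # all strictly positive for this example

# Positive coefficients imply [phi(xi_j . xi_k)] is positive definite
rng = np.random.default_rng(1)
pts = rng.normal(size=(40, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)  # 40 centers on S^2
A = phi(pts @ pts.T)
print(np.linalg.eigvalsh(A).min())   # positive (up to round-off), as predicted
```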
In the RBF theory on Euclidean space, the difference between strictly positive definite RBFs and conditionally strictly positive definite RBFs is significant. On ${\mathbb{S}}^n$, this difference is less important: a conditionally positive definite SBF differs from an SBF by a polynomial of degree $L-1$. This does play a role in interpolation, but is much less significant in approximation problems. That being the case, unless there is a genuine need to distinguish between the two, we will refer to both as simply SBFs. Below we will list Fourier-Legendre expansion coefficients for some of the more significant SBFs. Apart from certain Green’s functions, which we treat first, these SBFs are restrictions of Euclidean RBFs in ${\mathbb{R}}^{n+1}$ to ${\mathbb{S}}^n$, which are themselves SBFs [@Narcowich-Ward-02-1 Corollary 4.3]. These include Gaussians, multiquadrics, thin-plate splines, and Wendland functions. Such SBFs are RBFs expressed in terms of the Euclidean distance between $\xi$ and $\eta$ or its square, $\|\xi -\eta\|^2= 2 - 2\xi\cdot \eta$, and, with $t=\xi\cdot \eta$, these give rise to functions of $1-t$. #### Green’s functions Let $\beta>0$. The Green’s function solution to $\sfl_n^\beta G_\beta =\delta$ is a kernel with an expansion in spherical harmonics having coefficients $\widehat G_\beta(\ell,m)=(\ell+\lambda_n)^{-\beta}$. Properties of Green’s functions are discussed in more detail in Proposition \[perturbation\_approx\_prop\]. We simply remark that the kernel $G_\beta$ is an SBF that is in $L^1({\mathbb{S}}^n)$ for all $\beta>0$. For us, $G_\beta$ will play a significant role. The SBFs we consider will generally be of two types: $\phi=G_\beta+G_\beta\ast \psi$, where $\psi$ is an $L^1$ zonal function, or $\phi$ will be in $C^\infty$. The first type includes the thin-plate splines and Wendland functions, and the second, the Gaussians and multiquadrics. #### Thin-plate splines The thin-plate splines are defined in [@Wendland-05-1 Section 8.3]; their Fourier-Legendre coefficients are found in [@Narcowich-etal-07-2 §4.2]. These are given below. $$\label{TPS} \left. \begin{array}{l} \displaystyle{ \phi_s(t)= \left\{ \begin{array}{cc} (-1)^{\lceil (s)_+\rceil}(1-t)^s, & s > - \frac{n}{2}, \ s \not\in \NN\\[5pt] (-1)^{s+1}(1-t)^s\log(1-t), & s\in\NN. \end{array} \right.}\\[18pt] \hat\phi_s(\ell)= C_{s,n}\frac{\Gamma(\ell-s)}{\Gamma(\ell+s+n)}. \end{array} \right\}$$ where the factor $C_{s,n}$ is given by $$C_{s,n}:=2^{s+n}\pi^{\frac{n}{2}}\Gamma(s+1)\Gamma(s+\frac{n}{2}) \left\{ \begin{array}{ll} \frac{\sin(\pi s)}{\pi} & s > - \frac{n}{2}, \ s \not\in \NN\\[5pt] 1, & s\in\NN. \end{array} \right.$$ Let $\nu=\ell+\lambda_n$. For large $\nu$, the Fourier-Legendre coefficients $\hat\phi_s(\ell)$ for the thin-plate splines have the asymptotic form $$\label{tps_asymtotics} \hat\phi_s(\ell)=C_{s,n}\nu^{-2s-n}\left(1+\sum_{j=1}^{p-1} G_{j}(n,s)\nu^{-j}+R_p(n,s,\nu)\right),$$ where $R_p(n,s,\nu)=\calo(\nu^{-p})$ and $G_{j}(n,s)$ are defined in [@Olver-74-1 p. 119]. Two remarks. First, we have made use of $G_0(n,s)=1$ in the expansion from [@Olver-74-1 p. 119]. Second, when $s$ is an integer or half-integer, $\hat\phi_s(\ell)$ is a rational function of $\ell$, and, hence, of $\nu$. In that case, it follows that the series for $\hat\phi_s(\ell)$ is actually a convergent power series in $\nu^{-1}$. For other $s$, the expansion is only asymptotic.
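As a quick numerical illustration of (\[tps\_asymtotics\]), and of why $\nu=\ell+\lambda_n$ is the natural variable, the sketch below compares the exact Gamma-function ratio with its leading-order approximation for $n=2$ and a non-integer $s$; the constant $C_{s,n}$ cancels and plays no role. It is illustrative only and not used in the sequel.

```python
# Sketch: exact thin-plate-spline coefficient ratio Gamma(l - s)/Gamma(l + s + n)
# versus the leading asymptotic term nu^{-2s-n}, nu = l + lambda_n, for n = 2.
# Centering at nu (rather than at l) visibly removes the first-order correction.
import numpy as np
from scipy.special import gammaln

n, s = 2, 1.5
lam = (n - 1) / 2.0
ell = np.arange(5, 401, dtype=float)
nu = ell + lam

exact = np.exp(gammaln(ell - s) - gammaln(ell + s + n))   # stable Gamma ratio
ratio_nu = exact * nu ** (2 * s + n)        # -> 1 rapidly
ratio_ell = exact * ell ** (2 * s + n)      # -> 1 only like O(1/l)

for idx in (0, 95, 395):
    print(f"l = {int(ell[idx]):4d}   ratio in nu: {ratio_nu[idx]:.8f}   "
          f"ratio in l: {ratio_ell[idx]:.8f}")
```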
From the structure of the expansions above and the properties of Green’s functions listed in Proposition \[perturbation\_approx\_prop\], we see that any finite linear combination of thin-plate splines $$\label{lin_comb_tps} \phi= \sum_{j=1}^m A_j \phi_{s_j}, \ - \frac{n}{2} < s_1<s_2 <\cdots <s_m,$$ has the form $$\label{lin_comb_tps_greens_fnc} \phi=A_1(G_{2s_1+n}+G_{2s_1+n}\ast \psi), \ \psi\in L^1.$$ #### Wendland functions All of the SBFs we have discussed so far are related to RBFs stemming from completely monotonic functions. These RBFs have the property that they are strictly positive definite or conditionally positive definite in ${\mathbb{R}}^n$ for all $n$. The corresponding SBFs are also positive definite in ${\mathbb{S}}^n$, again for all $n$. These RBFs are *not* compactly supported, however. This can be remedied, but there is a price: we must give up positive definiteness beyond a certain dimension. Wendland (cf. [@Wendland-05-1 Section 9.4]) constructed families of RBFs that are compactly supported on $0\le r\le R$, strictly positive definite in Euclidean spaces of dimension $d$ or less, have smoothness $C^{2k}$, and, within their supports, are polynomials of degree $\lfloor \frac{d}{2} \rfloor + 3k+1$. The quantities $d$, $k$, and $R$ are parameters and may be adjusted as needed. Restricting the Wendland functions to ${\mathbb{S}}^n$ just requires setting $r=\sqrt{2(1 - t)}$ and $R=\sqrt{2(1 - t_0)}$, where $-1<t_0\le t \le1$. We will denote these functions by $\phi_{d,k}(t)$. The support of $\phi_{d,k}$ on ${\mathbb{S}}^n$ is then $0\le \theta\le \cos^{-1}(t_0)<\pi$. From [@Wendland-05-1 Theorems 9.12 & 9.13], if $t>t_0$, then these functions are polynomials in $\sqrt{1-t}$ that may be put into the form, $$p_{d,k}(t) = e_1(1-t) + (1-t)^{k+\frac12}e_2(1-t),$$ where $e_1$ and $e_2$ are polynomials with $\deg e_1=\lfloor\frac12( \lfloor \frac{d}{2} \rfloor + 3k+1)\rfloor$ and $\deg e_2 = \lfloor \frac12(\lfloor \frac{d}{2} \rfloor + k)\rfloor$. Outside of this interval, the $\phi_{d,k}$ are identically $0$. Using a power series argument, we have that, near $t\gtrapprox t_0$, $\phi_{d,k}(t) = A(t-t_0)^{\lfloor \frac{d}{2} \rfloor + 2k+1}\big(1+ \calo(t-t_0)\big)$, from which it follows that $\phi_{d,k}(t) $ is piecewise $C^{\lfloor \frac{d}{2} \rfloor + 2k+1}$ near $t_0$. In addition, it follows that $\psi_{d,k}(t) := \phi_{d,k}(t) - p_{d,k}(t) $ is piecewise $C^{\lfloor \frac{d}{2} \rfloor + 2k+1}$ on the whole interval $[-1,1]$. Putting all of this together, we conclude that $$\label{WendSBF-structure} \phi_{d,k}(t) = e_1(1-t) + (1-t)^{k+\frac12}e_2(1-t) + \psi_{d,k}(t).$$ Our aim is to use this decomposition to obtain large $\ell$ asymptotics for the Fourier-Legendre coefficients $\hat\phi_{d,k}(\ell)$ in ${\mathbb{S}}^n$. This we now do. \[WendSBF-coef\] Let $m=\lfloor \frac{d}{2} \rfloor + 2k+1$. If $\ell>\deg e_1$, then $$\hat \phi_{d,k}(\ell) = (\ell+\lambda_n)^{-(2k+1+n)}\left(A_0+\frac{A_1}{\ell+\lambda_n}+ \calo(\ell+\lambda_n)^{-2} \right)+ \frac{\widehat{\sfl^m\psi_{d,k}}(\ell)}{(\ell+\lambda_n)^m}.$$ Moreover, if we choose $\lfloor \frac{d}{2} \rfloor>n$, then the $\phi_{d,k}$ have the structure $$\phi_{d,k} = \mbox{\rm polynomial } + A_0 \left(G_{2k+1+n}+G_{2k+1+n}\ast \tilde\psi\right), \ \tilde\psi \in L^1.$$ The polynomial term $e_1(1-t)$ doesn’t contribute to coefficients with $\ell>\deg e_1$. The term $(1-t)^{k+\frac12}e_2(1-t)$ is a linear combination of thin-plate splines, starting with $s=k+\frac12$. Thus it contributes the first term on the right above.
By Remark \[dom\_L\], the function $\psi_{d,k}$ is in $H^2_m$, so it can be written as $\psi_{d,k}=\sfl_n^{-m}\sfl_n^m \psi_{d,k}$. The second term on the right follows directly from this fact. Finally, the form of the $\hat \phi_{d,k}(\ell)$’s leads to the second statement. Before leaving the topic, we point out that, when $\lfloor \frac{d}{2} \rfloor>n$, we have determined the precise asymptotics of the Fourier-Legendre coefficients for the Wendland functions. Heretofore only upper and lower bounds were known. #### Gaussians The Fourier-Legendre coefficients for the Gaussians, which are given below, may be found in [@Whittaker-Watson-65-1 Ex. 37, p.383], [@Mhaskar-etal-99-1 Example 5.2], and [@Narcowich-etal-07-2 §4.3]. $$\label{gaussians} \left. \begin{array}{l} \gamma_\sigma(t)=e^{-2\sigma(1-t)}, \ \sigma>0, \\[6pt] \hat \gamma_\sigma(\ell) = 2\pi\left(\frac{\pi}{\sigma}\right)^{\lambda_n} e^{-2\sigma} I_{\lambda_n+\ell}(2\sigma), \end{array} \right\}$$ where $I_{\lambda_n+\ell}$ is an order $\lambda_n+\ell$ modified Bessel function of the first kind. For all $\ell\ge 0$, the coefficient $\hat \gamma_\sigma(\ell)$ satisfies the following bound [@Narcowich-etal-07-2 Proposition 4.3]: $$\label{gaussian_bounds} \frac{2 \sigma^\ell e^{-2\sigma}\pi^{\frac{n+1}2} }{\Gamma(\ell+\frac{n+1}2)}\le \hat\gamma_\sigma(\ell) \le \frac{2\sigma^\ell \pi^{\frac{n+1}2}}{\Gamma(\ell+\frac{n+1}2)}.$$ #### Multiquadrics The Hardy multiquadrics are treated in [@Narcowich-etal-07-2 §5]. The results are: $$\label{multiquadric} \left. \begin{array}{l} {\mathrm{mq}}_\delta(t)=-\sqrt{\delta^2+2(1-t)},\ \delta>0. \\[12pt] \begin{aligned} \widehat{{\mathrm{mq}}}_\delta(\ell) =&\frac{ \pi^{\lambda_n} \Gamma(\ell-1/2)}{(\delta^2+2)^{\ell-1/2} \Gamma(\ell+\lambda_n+1)} \times \\ &{\,{}_{2\!}F_{1}}\left(\frac{\ell-1/2}{2}, \frac{\ell+1/2}{2}; \ell+\lambda_n+1; \frac{4}{(\delta^2+2)^2}\right). \end{aligned} \end{array}\right\}$$ Here, ${\,{}_{2\!}F_{1}}$ is the usual hypergeometric function. Expressions for Fourier-Legendre coefficients for generalized multiquadrics may be found in [@Narcowich-etal-07-2 §5]. This time, for $\ell$ sufficiently large, the coefficient $\widehat{{\mathrm{mq}}}_\delta(\ell)$ satisfies the following bounds [@Narcowich-etal-07-2 Proposition 5.1]: $$\label{multiquadric_bounds} C_1 \ell^{-\frac{n}{2}-1} \left( \frac{1}{\delta^2+2} \right)^{\ell- \frac12} <\widehat{{\mathrm{mq}}}_\delta(\ell) <C_2 \ell^{-1-n} \left( \frac{2}{\delta^2+2} \right)^{\ell-\frac12}.$$ #### Ultraspherical generating functions For $n\ge 2$, the ultraspherical polynomials $\ultra {\lambda_n} \ell$ are frequently defined in terms of the generating function [@Szego-75-1 Equation (4.7.23)] below: $$\label{ultrasph_gen_fns} \left. \begin{array}{l} u_{\lambda_n,w}(t) = (1 - 2tw+w^2)^{-\lambda_n},\ 1>w>0,\ n\ge 2 \\[6pt] \hat u_{\lambda_n,w}(\ell) = w^\ell \end{array} \right\}$$ When $n=1$, $\lambda_1=0$, the expansion is in terms of the $T_\ell(t)$’s, the Chebyshev polynomials of the first kind. In this case, the generating function is simply the Poisson kernel. $$\label{poisson_kernel} \left. \begin{array}{l} P_w(t) = \frac{1-w^2}{1 - 2tw+w^2},\ 1>w>0, \\[6pt] \widehat P_w(\ell) = \left\{ \begin{array}{l} 1, \ \ell=0,\\ 2w^\ell, \ \ell\ge 1 \end{array} \right. \end{array} \right\}$$ Approximation ============= The approximation part of the analysis makes use of kernels and of frames built from them.
These were studied in [@Dai-07-1; @Mhaskar-05-2; @Mhaskar-etal-00-1; @Mhaskar-Prestine-05-1; @Narcowich-etal-06-1] and further developed in [@Petrushev-Xu-2008-1]; we review them here, along with a number of other results important to attaining the goals of this paper. First, we will develop various types of Marcinkiewicz-Zygmund inequalities for the sphere. Although some of these were previously derived [@Mhaskar-etal-01-1; @Mhaskar-etal-01-2; @Narcowich-etal-06-1], those pertinent to both the approximation and stability analysis are new. Second, using frames we establish a Bernstein inequality for spherical polynomials. Again, using frames we establish various distance estimates for $\phi\in H^1_\beta$ and we discuss Green’s function solutions to $\sfl_n^\beta G_\beta=\delta$. As we have mentioned earlier, these form a very important class of SBFs. Finally, at the end of the section we will complete the approximation part of the analysis. Kernels ------- Let $\kappa(t)\in C^k({\mathbb{R}})$, with $k\ge \max\{2,n-1\}$, be even, not identically 0, and satisfy $$\label{kappa_condits} |\kappa^{(r)}(t)| \le C_\kappa (1+|t|)^{r-\alpha}\ \mbox{for all } t\in {\mathbb{R}},\ r=0,\ldots,k,$$ where $\alpha>n+k$ and $C_\kappa>0$ are fixed constants. We remark that all compactly supported, $C^k$ functions that are even satisfy (\[kappa\_condits\]). Functions in the Schwartz-class $\cals({\mathbb{R}})$ that are even satisfy (\[kappa\_condits\]) for arbitrarily large $k$ and $\alpha$. Given such a $\kappa$, define the family of operators $$\sfk_{\varepsilon,n} := \kappa(\varepsilon \sfl_n) = \sum_{\ell=0}^\infty \kappa(\varepsilon (\ell+\lambda_n))\sfp_\ell, \ 0<\varepsilon\le 1,$$ along with the associated family of kernels $$\label{K_kernel_def} K_{\varepsilon,n}(\underbrace{\xi\cdot \eta}_{\cos\theta}) := \left\{ \begin{array}{ll} \frac{1}{2\pi}\kappa(0) + \frac{1}{\pi}\sum_{\ell=1}^\infty \kappa(\varepsilon\ell)\cos \ell \theta, &n=1,\\ [5pt] \sum_{\ell=0}^\infty \kappa(\varepsilon (\ell+\lambda_n)) \frac{\ell+\lambda_n}{\lambda_n\omega_n}\ultra {\lambda_n} \ell (\cos \theta ), &n\ge 2, \end{array}\right.$$ where $\cos \theta =\xi\cdot \eta$ and $0<\varepsilon\le 1$. It is worthwhile noting that $\kappa(t) = e^{-t^2}$ satisfies (\[kappa\_condits\]) and that the corresponding kernel is essentially the heat kernel for ${\mathbb{S}}^n$. We will need several results concerning these kernels and operators. First of all, we require the estimates on the $L^p$ norms for the kernels. Material closely connected to the theorem below appeared in [@Mhaskar-05-2 Proposition 4.1]. \[K\_kernel\_main\_bnd\] Let $\kappa$ satisfy [(\[kappa\_condits\])]{}, with $k\ge \max\{2,n-1\}$. If $0\le \theta\le \pi$, then there is a constant $\beta_{n,k,\kappa}>0$ such that the kernel $K_{\varepsilon,n}$ satisfies the bound $$\label{K_kernel_bnd_final} |K_{\varepsilon,n}(\cos\theta)|\le \frac{\beta_{n,k,\kappa}} {1+(\frac{\theta}{\varepsilon})^k} \varepsilon^{-n}.$$ Moreover, we have that $$\label{K_kernel_Lp_bound} \|K_{\varepsilon,n}\|_p:=\|K_{\varepsilon,n}(\cos\theta)\|_{L^p({\mathbb{S}}^n)} \le C_{n,k,\kappa}\varepsilon^{-n/p'}.$$ These operators can be applied to functions in $L^p({\mathbb{S}}^n)$ or even distributions in $ \cald'({\mathbb{S}}^n)$, provided $\kappa$ decays fast enough – compact support will certainly work. As the result below shows, all of them are bounded operators taking $L^p({\mathbb{S}}^n)\to L^q({\mathbb{S}}^n)$.
\[K\_kernel\_op\_bnd\] If $\kappa$ satisfies (\[kappa\_condits\]), with $k>\max\{2,n\}$, then, for all $1\le p\le \infty$ and $1\le q\le \infty$, the operator $\sfk_{\varepsilon,n}\colon L^p({\mathbb{S}}^n)\to L^q({\mathbb{S}}^n)$ is bounded and its norm satisfies $$\|\sfk_{\varepsilon,n}\|_{p,q} \le C_{n,k,\kappa}(4\omega_{n-1} \varepsilon^n)^{-(\frac{1}{p}-\frac{1}{q})_+}\,,$$ where $C_{n,k,\kappa}$ is a constant that depends only on $n, k, \kappa$, and where $(x)_+=x$ for $x>0$ and $(x)_+= 0$ otherwise. We point out that more can be said when $\kappa$ has restrictions on its support. The result below follows from the spherical harmonics of degree $L\sim1/\varepsilon$ or less being in the kernel of $\sfk_{\varepsilon,n}$ when $\kappa(t) = 0$ near $t=0$. \[kernel\_support\_condit\] If $\kappa(t)=0$ for $|t|\le 1$, then every spherical polynomial $P\in \Pi_{L_\varepsilon}$, where $L_\varepsilon = \lfloor \varepsilon^{-1} - \lambda_n \rfloor \sim \varepsilon^{-1}$, is annihilated by $\sfk_{\varepsilon,n}$. Consequently, for any such $P$ we have $g_\varepsilon:=\sfk_{\varepsilon,n}g = \sfk_{\varepsilon,n}(g-P)$, and hence $\| g_\varepsilon\|_q \le \|\sfk_{\varepsilon,n}\|_{p,q} E_{L_\varepsilon }(g)_p$. Another important result for $\kappa$ supported away from $t=0$ and having fast decay is the one below, which follows directly from Theorems \[K\_kernel\_main\_bnd\] and \[K\_kernel\_op\_bnd\]. To simplify matters, we will assume that $\kappa$ is also compactly supported. \[kernel\_scaling\_prop\] Let $k>\max\{2,n\}$. If the support of $\kappa$ is compact and does *not* include $t=0$, then, for every fixed $\gamma$ in $\CC$, the function $\tilde\kappa(t):=|t|^\gamma \kappa(t)$ is also an even $C^k$ function that satisfies (\[kappa\_condits\]). Moreover, $\sfl^\gamma \sfk_{\varepsilon,n} =\varepsilon^{-\gamma} \tilde \sfk_{\varepsilon,n}.$ Finally, for real $\gamma$, we have the two bounds below: $$\begin{aligned} \|\sfl^\gamma \sfk_{\varepsilon,n}\|_{p,q} &\le C_{n,k,\tilde\kappa} (4\omega_{n-1})^{-(\frac{1}{p}-\frac{1}{q})_+} \varepsilon^{-\gamma -n(\frac{1}{p}-\frac{1}{q})_+} \\ \|\sfl^\gamma \sfk_{\varepsilon,n}\delta\|_p &\le C_{n,k,\tilde\kappa}\varepsilon^{-\gamma-n/p'} ,\end{aligned}$$ where $\delta$ is the Dirac distribution and thus $\sfl^\gamma \sfk_{\varepsilon,n}\delta$ is the kernel for $\sfl^\gamma \sfk_{\varepsilon,n}$. Marcinkiewicz-Zygmund inequalities ---------------------------------- Marcinkiewicz-Zygmund (MZ) inequalities provide equivalences between norms defined through integrals and ones defined through discrete sums. For ${\mathbb{S}}^n$, these were developed in [@Mhaskar-etal-01-1; @Mhaskar-etal-01-2; @Narcowich-etal-06-1]. We will need to adapt these MZ inequalities to estimate certain sums. Let $X\subset {\mathbb{S}}^n$ be the set of centers; also, let $q=q_X$, $h=h_X$, and $\rho=\rho_X:=h/q$ be the separation radius, mesh norm, and mesh ratio, respectively. We will need a decomposition of the sphere into a finite number of non-overlapping regions. The Voronoi tessellation corresponding to $X$ will serve our purpose here, although many other decompositions will work as well. Let $R_\xi$ be the Voronoi region containing $\xi$. Denote the collection of these regions by $\calx =\{R_\xi\subset {\mathbb{S}}^n\,|\, \xi\in X\}$ and its partition norm by $\|\calx\|=\max_{\xi\in X}\{\mbox{diam}(R_\xi)\}$. It is easy to show that the following geometric inequalities hold: $$\label{q_h_calx} h\le \|\calx\| \le 2h \ \mbox{and } \min_{\xi\in X}\mu(R_\xi) \ge c_n q^n.$$ Here $c_n$ is a constant related to the volume of ${\mathbb{S}}^n$.
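To make these geometric quantities concrete, the following sketch (illustrative only, and in no way part of the proofs) estimates $q_X$, $h_X$, $\rho_X$, the Voronoi areas $\mu(R_\xi)$, and the partition norm $\|\calx\|$ for a random set of centers on ${\mathbb{S}}^2$. It assumes $n=2$ and a SciPy version providing `scipy.spatial.SphericalVoronoi` with its `calculate_areas` method; the mesh norm, being a supremum over the sphere, is only approximated by Monte Carlo probing.

```python
# Illustrative sketch (n = 2): separation radius q_X, Monte-Carlo estimate of the mesh
# norm h_X, mesh ratio rho_X, Voronoi areas mu(R_xi), and partition norm ||calX||,
# compared with the geometric inequalities quoted above.
import numpy as np
from scipy.spatial import SphericalVoronoi

rng = np.random.default_rng(2)

def random_sphere(m):
    p = rng.normal(size=(m, 3))
    return p / np.linalg.norm(p, axis=1, keepdims=True)

def geodesic(A, B):
    return np.arccos(np.clip(A @ B.T, -1.0, 1.0))

X = random_sphere(200)                                   # the centers
D = geodesic(X, X)
q = 0.5 * D[np.triu_indices_from(D, k=1)].min()          # separation radius q_X

probe = random_sphere(20_000)                            # h_X is a sup over S^2, so this
h = geodesic(probe, X).min(axis=1).max()                 # probe set only underestimates it
rho = h / q                                              # mesh ratio rho_X

sv = SphericalVoronoi(X)
sv.sort_vertices_of_regions()
areas = sv.calculate_areas()                             # mu(R_xi); they sum to 4*pi

# diameter of each (small, convex) Voronoi cell, taken over its vertices
part_norm = max(geodesic(sv.vertices[r], sv.vertices[r]).max() for r in sv.regions)

print(f"q_X = {q:.4f},  h_X ~ {h:.4f},  rho_X ~ {rho:.2f}")
print(f"sum of areas / (4 pi)    = {areas.sum() / (4 * np.pi):.6f}")
print(f"min mu(R_xi) / q_X^2     = {areas.min() / q**2:.3f}   (bounded below by c_2)")
print(f"h_X <= ||calX|| <= 2 h_X : {h <= part_norm <= 2 * h}")
```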
We will need these inequalities later. For a sequence space version of results below, see [@Mhaskar-06-1 Proposition 4.1]. \[MZ\_kernel\] Fix $\zeta\in{\mathbb{S}}^n$ and $k\ge n+2$. Let $K_\varepsilon(\eta):=K_{\varepsilon,n}(\eta\cdot\zeta)$. Then, there is a constant $C=C_{n,\kappa,k}$ for which $$\label{kernel_norm_est} \bigg| \|K_{\varepsilon}\|_1 - \sum_{\xi\in X}\mu(R_\xi)|K_\varepsilon(\xi)| \bigg| \le C\left\{ \begin{array}{cc} \| \calx\|/\varepsilon& \|\calx\| \le \varepsilon \\[5pt] \left(\|\calx\|/\varepsilon\right)^n & \|\calx\| \ge \varepsilon. \end{array} \right.$$ Moreover, if $\zeta\in X$, then $$\label{kernel_region_est} \bigg| \int_{{\mathbb{S}}^n-R_\zeta}|K_\varepsilon(\eta)|d\mu(\eta) - \sum_{X\ni\xi\ne \zeta}\mu(R_\xi)|K_\varepsilon(\xi)| \bigg| \le C_{n,\kappa,k} \left\{ \begin{array}{cc} \| \calx\|/\varepsilon& \|\calx\| \le \varepsilon \\[5pt] \left(\varepsilon/\|\calx\| \right)^{k-n-2} & \|\calx\| \ge \varepsilon. \end{array} \right.$$ The proof follows along the lines of the one for [@Narcowich-etal-06-1 Proposition 4.1]. Therefore, we will only sketch it here, referring the reader to [@Narcowich-etal-06-1] for the technical details. The inequalities in both (\[kernel\_norm\_est\]) and (\[kernel\_region\_est\]) involve bounding sums of contributions from each $R_\xi$ having the form $$D_\xi:=\bigg|\int_{R_\xi}|K_\varepsilon(\eta)|d\mu(\eta) - \mu(R_\xi)|K_\varepsilon(\xi)| \bigg| \le \int_{R_\xi}|K_\varepsilon(\eta)-K_\varepsilon(\xi)|d\mu(\eta).$$ Take $\zeta$ to be the north pole of the sphere and $\theta$ to be the co-latitude. Divide the sphere into $M \sim \pi/\| \calx\| $ bands, $B_m$, in which $(m-1)\pi/M \le \theta \le m\pi/M$, $m=1,\ldots, M$. Each $R_\xi$ can have non-trivial intersection with at most two adjacent bands, because $\mbox{diam}(R_\xi) \le \|\calx\| \sim \pi/M$. Thus, if $R_\xi\subset B_m\cup B_{m+1}$, then its lowest and highest co-latitudes satisfy $(m-1)\pi/M\le \theta_\xi^- \le \theta_\xi^+\le (m+1)\pi/M$. As is shown in [@Narcowich-etal-06-1], for $m=2,\ldots,M-1$ the sum of the $D_\xi$ from all $R_\xi \subset B_m\cup B_{m+1}$ is bounded above by the quantity, $$\label{D_non_cap_bnd} \sum_{R_\xi \subset B_m\cup B_{m+1}} D_\xi \le \frac{C_{n,\kappa,k}}{M\varepsilon} \int_{\frac{m-1}{M\varepsilon}\pi}^{\frac{m+1}{M\varepsilon}\pi} \frac{t^n}{1+t^k} dt .$$ If $R_\xi \ni \zeta$, then dealing with the corresponding $D_\xi$ can be done by estimating the integral that bounds the contribution from the region $R_\xi$ in the cap $0\le \theta\le 2\pi/M$, $$\label{D_cap_bnd} D_\xi \le C'_{n,\kappa,k}(M\varepsilon)^{-n} \int_0^{\frac{2\pi}{M\varepsilon} }\frac{ t dt } {1+t^k}\le \frac{C''_{n,\kappa,k}}{(M\varepsilon)^n} \left\{ \begin{array}{cc} (M\varepsilon)^{-2} & M\varepsilon\ge 1 \\ [5pt] 1 & M\varepsilon \le 1 . \end{array} \right.$$ Now, let $M=\lfloor \pi/\|\calx\|\rfloor$, precisely. Adding up the $D_\xi$ for all $\xi\in X$ yields the bound in (\[kernel\_norm\_est\]), which was implicit in the proof of [@Narcowich-etal-06-1 Proposition 4.1]. To get (\[kernel\_region\_est\]), we need to adjust $M$ so that all $R_\xi \not\ni \zeta$ are contained in the bands $B_m\cup B_{m+1}$, $m=2,\ldots,M-1$. This is easy to do. Just take $M=\lfloor (\pi-q)/\|\calx\|\rfloor$.
Summing the $D_\xi$ bounded in (\[D\_non\_cap\_bnd\]) and taking care of some double counting yields $$\begin{aligned} \bigg| \int_{{\mathbb{S}}^n-R_\zeta}|K_\varepsilon(\eta)|d\mu(\eta) - \sum_{X\ni\xi\ne \zeta}\mu(R_\xi)|K_\varepsilon(\xi)| \bigg| &\le& \frac{C_{n,\kappa,k}}{M\varepsilon} \int_{\frac{\pi}{M\varepsilon}}^{\frac{\pi}{\varepsilon}} \frac{t^n}{1+t^k} dt \\ &\le& \frac{C_{n,\kappa,k}}{M\varepsilon} \int_{\frac{\pi}{M\varepsilon}}^{\infty} \frac{t^n}{1+t^k} dt \\ &\le & C_{n,\kappa,k}\left\{ \begin{array}{cc} (M\varepsilon)^{-1} & M\varepsilon\ge 1 \\ [5pt] (M\varepsilon)^{k-n-2} & M\varepsilon \le 1 , \end{array} \right. \end{aligned}$$ from which (\[kernel\_region\_est\]) follows easily. Let $f\in L^1({\mathbb{S}}^n)$ and set $f_\varepsilon := K_{\varepsilon,n}*f$; the function $f$ is *not* assumed to be zonal. We wish to estimate the difference $ E_\calx:=\big|\|f_\varepsilon\|_1 - \sum_{\xi\in X}|f_\varepsilon(\xi)|\mu(R_\xi)\big| $. It is straightforward to show that $$E_\calx \le \sum_{\xi\in X}\int_{R_\xi}|f_\varepsilon(\eta) - f_\varepsilon(\xi)|d\mu(\eta) \le \sup_{\zeta\in{\mathbb{S}}^n} F_{\varepsilon,\calx}(\zeta) \|f\|_1\,,$$ where $F_{\varepsilon,\,\calx}(\zeta):= \sum_{\xi\in X}\int_{R_\xi}\big| K_{\varepsilon,n}(\eta\cdot\zeta) - K_{\varepsilon,n} (\xi\cdot\zeta)\big|d\mu(\eta)$, which is the quantity estimated in Proposition \[MZ\_kernel\]. Applying that proposition and Remark \[kernel\_support\_condit\], we obtain the desired estimate below. \[MZ\_function\_cor\] Let $\kappa$ satisfy (\[kappa\_condits\]), with $k\ge n+2$, and, for $f\in L^1({\mathbb{S}}^n)$, let $f_\varepsilon=K_{\varepsilon,n}*f$. If $\calx$ is the decomposition of ${\mathbb{S}}^n$ described above, $\|\calx\|\ge \varepsilon$ and $L_\varepsilon = \lfloor \varepsilon^{-1} - \lambda_n \rfloor \sim \varepsilon^{-1}$, then $$\label{MZ_function_est} \bigg|\|f_\varepsilon\|_1 - \sum_{\xi\in X}|f_\varepsilon(\xi)|\mu(R_\xi)\bigg| \le C_{n,\kappa,k} \left(\|\calx\|/\varepsilon\right)^n \left\{ \begin{array}{cc} E_{L_\varepsilon}(f)_1,& \kappa(t) = 0, \, |t|\le 1, \\[4pt] \|f\|_1, & \mbox{otherwise}. \end{array} \right.$$ \[MZ\_zonal\_fnct\_est\] If $f$ is zonal, i.e. $f(\xi) = \psi(\xi\cdot \zeta)$, then the right side of (\[MZ\_function\_est\]) is independent of the variable $\zeta$. Also, the requirement $\|\calx\|\ge \varepsilon$ isn’t absolutely necessary; the results still hold when $\|\calx\|$ and $\varepsilon$ are comparable. For the most part, we will use these results to bound the sums $\big|\sum_{\xi\in X}a_\xi f_\varepsilon(\xi)\big|$, under the assumption that $ \|\calx\| \ge \varepsilon$. Using Corollary \[MZ\_function\_cor\] for that case, we see that $$\begin{aligned} \bigg|\sum_{\xi\in X}a_\xi f_\varepsilon(\xi)\bigg| &\le& \frac{|a|_\infty}{\min_{\xi\in X}\mu(R_\xi)}\sum_{\xi\in X} \mu(R_\xi)|f_\varepsilon(\xi)| \nonumber \\ & \le& \frac{|a|_\infty}{\min_{\xi\in X}\mu(R_\xi)}\left(\|f_\varepsilon \|_{L^1} + C_{n,\kappa,k}\left(\|\calx\|/\varepsilon\right)^n \|f\|_{L^1} \right) \nonumber\end{aligned}$$ From Theorem \[K\_kernel\_op\_bnd\], (\[q\_h\_calx\]), and $h=\rho q$, with $L_\varepsilon \sim \varepsilon^{-1}$ and $\rho q \approx \|\calx\|\ge \varepsilon$,
we have that $$\label{SUM_function_est} \bigg|\sum_{\xi\in X}a_\xi f_\varepsilon(\xi)\bigg| \le C \rho^n \varepsilon^{-n} |a|_\infty \left\{ \begin{array}{cc} E_{L_\varepsilon}(f)_1,& \mbox{if } \kappa(t) = 0, \, |t|\le 1, \\[4pt] \|f\|_1, & \mbox{otherwise}. \end{array} \right.$$ If $f$ is a zonal function, then, by Remark \[MZ\_zonal\_fnct\_est\], we may use the $\|\cdot\|_\infty$ norm on the left above. We want to make the same kind of estimate, but with $f$ replaced by $\delta_\zeta$, the usual Dirac delta function. Thus $f_\varepsilon$ is replaced by $K_\varepsilon(\cdot):=K_{\varepsilon,n}*\delta(\cdot)=K_{\varepsilon,n}((\cdot)\cdot\zeta)$. A nearly identical argument to the one used above, coupled with (\[kernel\_norm\_est\]) for $\|\calx\|\ge \varepsilon$ and the bound on $\|K_\varepsilon\|_1$ from Theorem \[K\_kernel\_main\_bnd\], results in $$\label{LC_kernel_bnd_pointwise} \bigg|\sum_{\xi\in X}a_\xi K_{\varepsilon}((\cdot)\cdot \xi)\bigg| \le C\rho^n\varepsilon^{-n} |a|_\infty .$$ The constants on the right above hold uniformly, so we have $$\label{LC_kernel_bnd} \bigg\|\sum_{\xi\in X}a_\xi K_{\varepsilon}((\cdot)\cdot \xi)\bigg\|_\infty \le C\rho^n\varepsilon^{-n} |a|_\infty.$$ The two bounds above are very similar and can be used in combination. They will be needed to complete the approximation part of the analysis. There is another bound, somewhat different from these two, that we will need in section \[stability\]: If $\rho q \sim \| \calx \| \ge \varepsilon>0$ and if $k\ge n+2$, then $$\label{SUM_kernel_est} \max_{\zeta\in X}\sum_{X\ni\xi\ne \zeta} |K_{\varepsilon,n}(\xi\cdot \zeta)| \le C_{n,\kappa,k} q^{-n}.$$ Using equation (\[kernel\_region\_est\]) of Proposition \[MZ\_kernel\], again for $\|\calx\|\ge \varepsilon$, an argument similar to the ones used above gives us $$\sum_{X\ni\xi\ne \zeta} |K_{\varepsilon,n}(\xi\cdot \zeta)| \le C'_{n,\kappa,k} q^{-n} \int_{{\mathbb{S}}^n-R_\zeta}|K_{\varepsilon,n}(\eta\cdot\zeta)|d\mu(\eta) + C''_{n,\kappa,k}q^{-n}\left(\frac{\varepsilon}{\|\calx\|} \right)^{k-n-2}$$ Using $\int_{{\mathbb{S}}^n-R_\zeta}|K_{\varepsilon,n}(\eta\cdot\zeta)|d\mu(\eta)\le \|K_{\varepsilon,n}\|_1\le C_{n,\kappa,k}$, $\frac{\varepsilon}{\|\calx\|}\le 1$, and maximizing over $\zeta\in X$, we obtain (\[SUM\_kernel\_est\]). This estimate is more delicate than (\[LC\_kernel\_bnd\]), because the term missing from the sum is $K_{\varepsilon,n}(\zeta \cdot \zeta) = K_{\varepsilon,n}(1)$, which turns out to be $\calo(\varepsilon^{-n})$. For $\varepsilon/q$ small enough, the sum (\[SUM\_kernel\_est\]) will be majorized by $ K_{\varepsilon,n}(1)$. This is needed as part of a diagonal dominance argument. Frames ------ We now address the question of the frame decomposition mentioned previously. Our approach follows the one in [@Narcowich-etal-06-1]. As mentioned earlier, others are certainly possible. For this, we need a function $a \in C^k({\mathbb{R}})$, which we may assume is even, with support in $[-2,-\frac12]\cup [\frac12,2]$, and satisfying $|a(t)|^2 + |a(2t)|^2 \equiv 1$ on $[\frac12,1]$. Such a function can be easily constructed out of an *orthogonal wavelet mask* $m_0$ [@Daubechies-92-1 §8.3]. In fact, if $m_0(\xi)\in C^{k+1}$, then $a(t) := m_0(\pi \log_2(|t|))$ on $[-2,-\frac12]\cup [\frac12,2]$, and $0$ otherwise, is a $C^k$ function that satisfies the appropriate criteria. Define $b\in C^k({\mathbb{R}})$ by $$\label{b_def} b(t):= \left\{ \begin{array}{cc} 1 & |t|\le 1 \\[3pt] |a(t)|^2 & |t|>1.
\end{array}\right.$$ Using the properties of $a$ we see that $ \sum_{j=-\infty}^J |a(t/2^j)|^2 = b(t/2^J)$ if $t>0$. In the sum on the left, only terms with $j\ge \lfloor \log_2(t) \rfloor$ contribute. Terms with $j<\lfloor \log_2(t) \rfloor$ are identically 0. The quantity $\lfloor \log_2(t) \rfloor$ is obviously important. On ${\mathbb{S}}^n$, the integer that corresponds to it is this: $$\label{j_n_def} j_n:= \left\{ \begin{array}{cc} 0&n=1,\\ \lfloor \log_2(\lambda_n) \rfloor &n\ge 2. \end{array}\right.$$ The integer $j_n$ helps us in defining our frame operators, which we now do. Let $\sfa_j := a(2^{-j-j_n}\sfl_n)$ and $\sfb_j := b(2^{-j-j_n}\sfl_n)$. Taking into account the support of $a$, we have $\sfb_J = \sum_{j=0}^J \sfa_j \sfa_j^\ast$ for $n\ge 2$. For $n=1$, a projection $\sfp_0$ onto the constant function enters, and $\sfb_J = \sfp_0+\sum_{j=0}^J \sfa_j \sfa_j^\ast$. We will need the following approximation result concerning these operators. \[B\_J\_approx\_thm\] Let $k>\max\{n,2\}$, and let $b$ be defined by (\[b\_def\]), with $a\in C^k({\mathbb{R}})$. If $f\in L^p({\mathbb{S}}^n)$, $1\le p\le \infty$, and if $L>0$ is an integer such that $2^{-J-j_n}\le(L+\lambda_n)^{-1}$, then $$\label{B_J_approx_id} \|f-\sfb_J f\|_p\le C_{b,k,n} E_L(f)_p\,,\ E_L(f)_p:={\operatorname{dist}}_{L^p}(f,\Pi_L).$$ Also, for $1\le p<\infty$ or, if $p=\infty$, for $f\in C({\mathbb{S}}^n)$, we have $\label{B_J__approx_id_lim} \lim_{J\to\infty} \sfb_J f = f$. #### Bernstein/Nikolskii inequalities. There are several inequalities that follow easily using frames. We will give a Nikolskii-type inequality, which is a well-known inequality ([@Mhaskar-etal-99-1 Proposition 2.1] and [@Narcowich-etal-06-1 §3.5]). From our point of view, the most important inequality derived here is a Bernstein theorem for spherical polynomials [@Rustamov-92-1 Theorem 2 (Eng. transl.)]. An independent proof is given in [@LeGia-Mhaskar-06-1 Proposition 4.3]. For the convenience of the reader, short proofs for both are given below. \[bernstein-nikolskii-poly-ineqs\] Let $S\in \Pi_L$. Then, for $1\le p,q \le \infty$ and for $\gamma>0$, we have $$\begin{aligned} \label{nikolskii_poly_est} \mbox{\rm (Nikolskii) }\ \| S\|_q &\le& C_{p,q,n}L^{n(\frac{1}{p}-\frac{1}{q})_+}\|S\|_p\\[5pt] \label{bernstein_poly_est} \mbox{\rm (Bernstein) }\ \|S\|_{H^p_\gamma} &\le& C_{n,\gamma} L^\gamma \|S\|_p\end{aligned}$$ Let $\gamma>0$ and suppose $L+\lambda_n\le 2^{J+j_n}$. From the definition of $\sfb_J$, it is easy to see that $\sfb_J$ reproduces $\Pi_L$, and so $\sfb_JS=S$ for all $S\in \Pi_L$. By Theorem \[K\_kernel\_op\_bnd\], with $\kappa=b$ and $\varepsilon=2^{-J-j_n}\sim L^{-1}$, we see that $\| S\|_q \le C_{p,q,n}L^{n(\frac{1}{p}-\frac{1}{q})_+}\|S\|_p, \ S\in \Pi_L$. Dependence of the constants on $b$ and $k$ disappears upon taking the infimum over these two quantities, yielding (\[nikolskii\_poly\_est\]). We now establish the Bernstein inequality. If $S\in \Pi_L$, then so is $\sfl^\gamma S$, and we have that $\sfb_J \sfl_n^\gamma S=\sfl_n^\gamma S$, provided $L+\lambda_n\le 2^{J+j_n}$. Using the expansion $\sfb_J = \sum_{j=0}^J \sfa_j \sfa_j^\ast$, we see that $$\sfl^\gamma S = \sum_{j=0}^J \sfa_j \sfa_j^\ast \sfl^\gamma S = \sum_{j=0}^J \sfl^\gamma\sfa_j \sfa_j^\ast S.$$ Consequently, we have that $\|S\|_{H^p_\gamma} = \|\sfl^\gamma S\|_p \le \sum_{j=0}^J \|\sfl^\gamma\sfa_j \sfa_j^\ast \|_{p,p} \|S\|_p$.
Applying Corollary \[kernel\_scaling\_prop\], with $\kappa(t)=|a(t)|^2 $ and $\varepsilon = 2^{-j-j_n}$ for each $j$, then yields this: $$\begin{aligned} \|S\|_{H^p_\gamma} &\le& \bigg( \sum_{j=0}^J 2^{(j+j_n)\gamma}\bigg) C_{a,n,\gamma} \| S\|_p \\ &\le& \frac{2^{(J+j_n+1)\gamma}-2^{j_n\gamma}}{2^\gamma-1} C_{a,n,\gamma} \| S\|_p \le L^\gamma C_{a,n,\gamma}\|S\|_p\,,\end{aligned}$$ where again $L\sim 2^{J+j_n}$. In the last inequality of the chain above, we can take the infimum over all $a$ satisfying the requisite conditions. This yields (\[bernstein\_poly\_est\]). #### Distance estimates. Frames can be used to estimate the distance in $L^p({\mathbb{S}}^n)$ from the polynomials to a function in a smoother space. If $f\in L^p$, let $E_L(f)_p:={\operatorname{dist}}_{L^p}(f,\Pi_L)$. Because $\sfb_Jf$ is a spherical polynomial in $\Pi_{2^{J+j_n+1}}$, we have $$E_L(f)_p \le \|f-\sfb_Jf\|_p, \ L+\lambda_n\le 2^{J+j_n+1}.$$ And because $\sfb_Jf$ converges to $f$ in all $L^p$, $1\le p <\infty$ and $p=\infty$ if $f\in C({\mathbb{S}}^n)$, we also have that $$E_L(f)_p \le \|f-\sfb_Jf\|_p \le \sum_{j=J+1}^\infty \|\sfa_j\sfa_j^* f \|_p,$$ where the right side above may be infinite. Now, suppose that $f=\sfl_n^\gamma h$, $h \in H^q_\beta({\mathbb{S}}^n)$. In that case, we have $\sfa_j\sfa_j^* \sfl_n^\gamma h = \sfl_n^{-(\beta-\gamma)}\sfa_j\sfa_j^* \sfl_n^\beta h$. From this and Corollary \[kernel\_scaling\_prop\], with $p\leftrightarrow q$, we arrive at $$\|\sfa_j\sfa_j^* \sfl_n^\gamma h \|_p = \| \sfl_n^{-(\beta-\gamma)}\sfa_j\sfa_j^* \sfl_n^\beta h\|_p \le 2^{-(\beta-\gamma -n(\frac{1}{q}-\frac{1}{p})_+)(j+j_n)}C_{n,k,a} \|h\|_{H^q_\beta}$$ Insert this in the equation above, sum the appropriate geometric series, and take $L\sim 2^{J+j_n}$ to get $$E_{2^{J+j_n}}(\sfl_n^\gamma h)_p \le C'_{\beta-\gamma,a,k,n}2^{-(\beta-\gamma -n(\frac{1}{q}-\frac{1}{p})_+)(J+j_n)}\|h\|_{H^q_\beta}\,,$$ which was essentially obtained by Kamzolov [@Kamzolov-82-1]. Now, since the left side above is unchanged if we replace $h$ by $h -S$, $S\in \Pi_{2^{J+j_n}}$, we can replace $\|h\|_{H^q_\beta}$ by $E_{2^{J+j_n}}(\sfl_n^\beta h)_q$. Collecting these results yields the proposition below. \[H\^p\_gamma\_H\^q\_beta\] Let $\gamma\ge 0$, and $\beta>\gamma+n(\frac{1}{q}-\frac{1}{p})_+$, where $1\le p,q \le\infty$. If $h\in H^q_\beta$, then there is a constant $C=C_{n,\beta, \gamma,a}$ such that $$E_{2^{J+j_n}}(\sfl_n^\gamma h)_p \le \|(I-\sfb_J)h\|_{H^p_\gamma} \le C_{n,\beta,\gamma, a}2^{-(\beta-\gamma-n(\frac{1}{q}-\frac{1}{p})_+)(J+j_n)} E_{2^{J+j_n}}(\sfl_n^\beta h)_q \,.$$ #### Green’s functions and their properties. Let $\beta>n/p'$. Recall that the Green’s function solution to $\sfl_n^\beta G_\beta =\delta$ is a kernel with an expansion in spherical harmonics having coefficients $\widehat G_\beta(\ell,m)=(\ell+\lambda_n)^{-\beta}$. Properties of Green’s functions (pseudo-differential operator kernels, really) on manifolds have been studied extensively (cf. [@Hormander-87-1]). Our aim here is to use frames to obtain, quickly and in a self-contained way, the properties and distance estimates that we need for SBFs of the form $\phi_\beta = G_\beta+G_\beta\ast \psi$, where $\psi\in L^1$. Because the $\phi_\beta$’s are not in any of the Bessel-Sobolev spaces $H^p_\beta$, they have to be treated separately from the class in Proposition \[H\^p\_gamma\_H\^q\_beta\] above. We will begin with Green’s functions themselves. Note that $\sfa_j \sfa_j^*G_\beta= \sfl_n^{-\beta} \sfa_j \sfa_j^*\delta$.
Since $ \sfa_j \sfa_j^*=|a|^2(2^{-j-j_n}\sfl_n)$, where both $a$ and, of course, $|a|^2$, have compact support that excludes $t=0$, we may apply Corollary \[kernel\_scaling\_prop\], with $\varepsilon_j:=2^{-(j+j_n)}$. $$\label{p-j-est-green_fnct} \|\sfa_j \sfa_j^*G_\beta \|_p \le C_{n,\beta,a} \varepsilon_j^{\beta - n/p'}= C_{n,\beta,a} 2^{-(\beta-n/p') (j+j_n)}.$$ Thus, for $\beta>n/p'$, the terms in $\sum_{j=0}^\infty \sfa_j \sfa_j^*G_\beta $ are bounded by a geometric series, and so the Weierstrass $M$ test implies that the series converges in $L^p$. That is, we have shown that when $\beta>n/p'$ the limit $ \lim_{J\to\infty} \sfb_JG_\beta$ is in $L^p$. A simple duality argument then shows that the kernel $G_\beta = \lim_{J\to\infty} \sfb_JG_\beta$ in $L^p({\mathbb{S}}^n)$. Summing the geometric series in (\[p-j-est-green\_fnct\]) yields $\|G_\beta -\sfb_J G_\beta\|_p \le C\, 2^{-(\beta-n/p') (J+j_n)}$. These results also give us error bounds in $H^p_\gamma({\mathbb{S}}^n)$. If $\gamma\ge 0$, then $\sfl^\gamma G_\beta = G_{\beta-\gamma}$ and $\sfl^\gamma \sfb_J G_\beta = \sfb_J G_{\beta - \gamma}$. This and the estimate above imply that if in addition $\beta>\gamma+n/p'$, then $$\label{G-beta-H-gamma-dist-est} \|G_\beta -\sfb_J G_\beta\|_{H^p_\gamma} = \|G_{\beta-\gamma} -\sfb_J G_{\beta-\gamma}\|_p \le C\, 2^{-(\beta-\gamma-n/p')(J+j_n)}.$$ Perturbations of $G_\beta$ can be dealt with, too. Let $\psi$ be in $L^1$. By Theorem \[K\_kernel\_op\_bnd\], (\[p-j-est-green\_fnct\]) and Remark \[kernel\_support\_condit\], we have that, for all $j\ge J$, $$\|\sfa_j \sfa_j^*G_\beta\ast \psi \|_p \le \|\sfa_j \sfa_j^*G_\beta\|_{1, p\,}E_{2^{j+j_n}}(\psi)_1 \le C 2^{-(\beta-\gamma)(j+j_n)} E_{2^{J+j_n}}(\psi)_1.$$ Summing a geometric series and using (\[G-beta-H-gamma-dist-est\]), we arrive at the following bound. \[perturbation\_approx\_prop\] Let $\gamma\ge 0$, $\beta>\gamma+n/p'$, $\varepsilon_j = 2^{-(j+j_n)}$, and let $\psi\in L^1$ be a zonal function. If $\phi_\beta=G_\beta + G_\beta\ast \psi$, then $\phi_\beta \in H^p_\gamma$ and there is a constant $C=C_{n,\beta,\gamma,a}$, which depends only on $n$, $\beta$, $\gamma$, and the function $a$, such that $$\label{perturbed_approx_est} E_{2^{J+j_n}}(\sfl^\gamma\phi_\beta)_p \le \|(I - \sfb_J )\phi_\beta \|_{H^p_\gamma} \le C_{n,\beta,\gamma,a}\bigg(1+\varepsilon_J^{n/p'}E_{2^{J+j_n}}(\psi)_1 \bigg) \varepsilon_J^{\beta-\gamma-n/p'}.$$ Approximation analysis {#approx_analysis} ---------------------- The task at hand is to estimate the norms $\|(I-\sfb_J)g\|_{H^p_\gamma}/|a|_p$, where $g\in \calg_{X,\phi}$. Our approach will be to carry this out for $p=1$ and $p=\infty$, then use the Riesz-Thorin theorem to obtain the result for all intermediate values of $p$. The easier of the two cases is $p=1$. Since $g\in \calg_{X,\phi}$, we have $g=\sum_{\xi\in X}a_\xi \phi((\cdot)\cdot \xi)$. Again, let $\varepsilon_j = 2^{-(j+j_n)}$. From the triangle inequality, the rotational invariance of the norms involved, and Propositions \[H\^p\_gamma\_H\^q\_beta\] and \[perturbation\_approx\_prop\], it follows that $$\begin{aligned} \|(I-\sfb_J)g\|_{H^1_\gamma}&\le |a|_1 \| (I-\sfb_J)\phi \|_{ H^1_\gamma} \\ &\le C \varepsilon_J^{\beta-\gamma}|a|_1 \left\{ \begin{array}{cl} E_{2^{J+j_n}}(\sfl_n^\beta \phi)_1 & \phi\in H^1_\beta\,,\\ (1+E_{2^{J+j_n}}(\psi)_1 ) & \phi=G_\beta + G_\beta\ast \psi\,. \end{array}\right.\end{aligned}$$ The $p=\infty$ case requires using frames.
Again, we have that $$\|(I-\sfb_J)g\|_{H^\infty_\gamma}\le \sum_{j=J+1}^\infty \|\sfa_j\sfa_j^*\sfl_n^\gamma g\|_\infty \,,$$ where $ \sfa_j\sfa_j^*\sfl_n^\gamma g = \sum_{\xi\in X}a_\xi \sfa_j\sfa_j^*\sfl_n^\gamma \phi((\cdot)\cdot \xi)$. By equation (\[SUM\_function\_est\]), with $f=\sfl_n^\gamma \phi$, $K_{\varepsilon_j,n}$ corresponding to $\kappa(t)=|a(t)|^2$, $h \ge \varepsilon_J\ge \varepsilon_j$, all $j\ge J$, and $L_\varepsilon\sim 2^{j+j_n}$, we have $$\|\sfa_j\sfa_j^*\sfl_n^\gamma g\|_\infty = \bigg\|\sum_{\xi\in X}a_\xi f_\varepsilon((\cdot)\cdot \xi) \bigg\|_\infty \le C \rho^n \varepsilon_j^{-n}|a|_\infty \,E_{2^{j+j_n}}(\sfl_n^\gamma \phi)_1\,.$$ By Proposition \[H\^p\_gamma\_H\^q\_beta\] and Proposition \[perturbation\_approx\_prop\], with $J$ there replaced by $j$ and $p=\infty$, we have $$\|\sfa_j\sfa_j^*\sfl_n^\gamma g\|_\infty \le C |a|_\infty \rho^n \varepsilon_j^{\beta-\gamma-n} \left\{ \begin{array}{cl} E_{2^{j+j_n}}(\sfl_n^\beta \phi)_1 & \phi\in H^1_\beta\,,\\ (1+\varepsilon_j^n E_{2^{j+j_n}}(\psi)_1 ) & \phi=G_\beta + G_\beta\ast \psi\,. \end{array}\right.$$ Since $E_{2^{j+j_n}}(f)_1\le E_{2^{J+j_n}}(f)_1$ when $j\ge J$, in the inequality above we may replace the distances with respect to $2^{j+j_n}$ with ones with respect to $2^{J+j_n}$. Doing so and again summing a geometric series, we obtain $$\|(I-\sfb_J)g\|_{H^\infty_\gamma}\le C |a|_\infty \rho^n \varepsilon_J^{\beta-\gamma-n} \left\{ \begin{array}{cl} E_{2^{J+j_n}}(\sfl_n^\beta \phi)_1 & \phi\in H^1_\beta\,,\\ (1+\varepsilon_J^n E_{2^{J+j_n}}(\psi)_1 ) & \phi=G_\beta + G_\beta\ast \psi\,. \end{array}\right.$$ Applying the Riesz-Thorin theorem in conjunction with the bounds above, we complete the approximation part of the problem: \[main\_approximation\_thm\] Let $\gamma\ge 0$, $1\le p\le \infty$, $\beta>\gamma+n/p'$, $\varepsilon_J = 2^{-(J+j_n)}$. If $h_X\ge \varepsilon_J$ and if $g\in \calg_{X,\phi}$, then $$\label{final_approx_est} \frac{\|(I-\sfb_J)g\|_{H^p_\gamma}}{|a|_p} \le C \rho^{n/p'} \varepsilon_J^{\beta-\gamma-n/p'} \left\{ \begin{array}{cl} E_{2^{J+j_n}}(\sfl_n^\beta \phi)_1 & \phi\in H^1_\beta\,,\\ (1+E_{2^{J+j_n}}(\psi)_1 ) & \phi=G_\beta + G_\beta\ast \psi\,. \end{array}\right.$$ Stability ========= The problem that we address here is estimating the norm $|a|_p$ in terms of the $L^p({\mathbb{S}}^n)$ norm of $g$, where $g(\bfx)=\sum_{\xi\in X}a_\xi \phi(\bfx \cdot \xi)$ and $\phi\in L^p$ is an SBF. Specifically, we wish to estimate the $p$-norm stability ratio $$\maxg_{\calg,\,p}:= \max_{\calg \ni g \neq 0}\frac{|a|_p}{\|g\|_p}$$ which we defined in (\[s-ratio\]). This quantity exists and is finite because the set $\{\phi(\bfx\cdot \xi)\}_{\xi\in X}$ is a linearly independent, finite set of functions. The quantity $\maxg_{\calg,\,p}$ provides a measure of the linear independence of the set, albeit one that scales with the norm of $\phi$. Once $\phi$ is fixed, it depends completely on the geometry of $X$. For a continuous SBF $\phi$, this is related to the stability of the interpolation matrix for $\phi$ and $X$. However, we are only assuming that $\phi$ is in $L^p$, and thus evaluating $\phi$ on $X$ is meaningless. Even so, using a smoothed version of $\phi$ allows us to connect the two concepts. Stability ratios and interpolation matrices ------------------------------------------- Let $\kappa \ge 0$ be in $C^k({\mathbb{R}})$, $k\ge n+2$, and let it satisfy (\[kappa\_condits\]). Of course, since $\kappa$ is not identically 0, we also have that there is some open interval on which $\kappa>0$.
Consider the corresponding operator $\sfk_{\varepsilon,n}=\kappa(\varepsilon \sfl_n)$ and its kernel $K_{\varepsilon,n}$. To smooth $g(\bfx)=\sum_{\xi\in X}a_\xi \phi(\bfx \cdot \xi)$, apply $\sfk_{\varepsilon,n}$ to both sides. Doing this yields $$\label{g_eps_coef_vec} g_\varepsilon(\bfx)=\sfk_{\varepsilon,n}g(\bfx) = \sum_{\xi\in X}a_\xi \underbrace{\sfk_{\varepsilon,n} \phi(\bfx \cdot \xi)}_{\displaystyle{\phi_\varepsilon(\bfx \cdot \xi)}}$$ We want to relate $\maxg_{\calg,\,p}$ to quantities in a standard SBF interpolation problem on $X$ involving $\phi_\varepsilon$. The function $\phi_\varepsilon$ is a spherical polynomial, with nonnegative Fourier-Legendre coefficients, whose degree depends on the support of $\kappa$. It is thus a positive definite function on ${\mathbb{S}}^n$, but not an SBF. The interpolation matrix corresponding to $\phi_\varepsilon$ is $$A_\varepsilon=[\phi_\varepsilon(\eta\cdot \xi)]_{\xi,\eta\in X}.$$ Later, as a by-product of our analysis, we will establish the invertibility of $A_\varepsilon$, provided $\varepsilon$ satisfies certain conditions. When $\varepsilon$ is sufficiently small, one can also establish it by using a result of Ron and Sun [@Ron-Sun-96-1 Theorem 6.4]: Let $X\subset {\mathbb{S}}^n$ be fixed and let $\psi$ be a positive definite function, but not necessarily an SBF (i.e., some of the coefficients $\hat\psi(\ell)$ may vanish). Then, there is an integer $j_{X,n}$ such that the interpolation matrix $A_\psi$ will be positive definite if the set of integers on which $\hat \psi(\ell)>0$ contains at least $j_{X,n}$ consecutive even integers and $j_{X,n}$ consecutive odd integers. With our assumptions on $\kappa$ – in particular, that $\kappa$ is not identically 0 – it is clear that for sufficiently small $\varepsilon$ there are arbitrarily large sets of consecutive integers for which $\hat \phi_\varepsilon(\ell)>0$. Thus $A_\varepsilon$ is (strictly) positive definite, and hence invertible, for all such $\varepsilon$. Our approach will again be to use the Riesz-Thorin theorem. Let $y_\varepsilon := g_\varepsilon |_X$, the restriction of $g_\varepsilon$ to $X$. Using (\[g\_eps\_coef\_vec\]), we can interpolate $g_\varepsilon$ on $X$: $$y_\varepsilon = A_\varepsilon a\,, \ A_\varepsilon=[\phi_\varepsilon(\eta\cdot \xi)]_{\xi,\eta\in X}.$$ Solving and taking the $\ell^1$ norm, we see that $$|a|_1\le \|A_\varepsilon^{-1}\|_1 |y_\varepsilon|_1\,, \ |y_\varepsilon|_1 = \sum_{\xi \in X} |g_\varepsilon(\xi)|.$$ By our assumptions on $\kappa$ and by (\[SUM\_function\_est\]), we have that $ \ |y_\varepsilon|_1 \le C_{n,\kappa,k} \rho^n \varepsilon^{-n} \|g\|_{L^1}$. Consequently, for $\phi\in L^1$ we have that $$\maxg_{\calg,\,1} \le C_{\kappa,n,k}\rho^n \varepsilon^{-n} \|A_\varepsilon^{-1}\|_1\,.$$ Similarly, working with $p=\infty$ we obtain $$|a|_\infty\le \|A_\varepsilon^{-1}\|_\infty |y_\varepsilon|_\infty\,, \ |y_\varepsilon|_\infty = \max_{\xi\in X}\{|g_\varepsilon(\xi)|\}\le \|g\|_\infty.$$ Recall that $A_\varepsilon^{-1}$ is a self-adjoint matrix, and that for such matrices the $p=1$ and $p=\infty$ norms are equal: $\|A_\varepsilon^{-1}\|_\infty = \|A_\varepsilon^{-1}\|_1$. Hence, for $\phi\in C$ ($p=\infty$), we obtain $$\maxg_{\calg,\,\infty} \le \|A_\varepsilon^{-1}\|_1\,.$$ Applying the Riesz-Thorin theorem to these bounds yields the following: \[stability\_ratio\_interp\_est\] Let $\varepsilon \le \|\calx\|$ and let $\phi\in L^p$.
Then, $$\maxg_{\calg,\,p} \le C_{\kappa,n,k}^{1/p}\rho^{n/p} \varepsilon^{-n/p} \|A_\varepsilon^{-1}\|_1\,.$$ $\ell^1$ stability estimates for interpolation matrices ------------------------------------------------------- The estimates we need next are for $\|A_\varepsilon^{-1}\|_1$, and the approach we take to get them will depend on $\phi$ and the behavior of the $\hat\phi(\ell)$’s. We will first deal with the Green’s function case, in which $\hat\phi(\ell)$ decays algebraically. After that, we will deal with the case in which $\phi$ is $C^\infty$, and $\hat\phi(\ell)$ has very fast decay. ### SBFs that are perturbations of Green’s functions A straightforward way to estimate the 1-norm of the inverse of a matrix is to use diagonal dominance techniques, if the matrix is amenable to them. To that end, split an $N\times N$ matrix $A$ into its diagonal $D$ and off-diagonal $F$, so $A=D+F$. We then have the following standard norm estimate, whose proof we omit. \[diag\_dom\_lem\] If $D$ is invertible and $\|D^{-1}F\|_1<1$, then $A$ is invertible and $\|A^{-1}\|_1<\|D^{-1}\|_1(1- \|D^{-1} F\|_1)^{-1}$. We can apply this to $A_\varepsilon$. The diagonal part is $D=\phi_\varepsilon(1) I$, and so $\| D^{-1}\|_1= \phi_\varepsilon(1)^{-1}$ and $\|D^{-1}F\|_1 = \phi_\varepsilon(1)^{-1}\|F\|_1$. Since the 1-norm of a matrix is the maximum of the 1-norms of its columns, our condition becomes $$\label{diag_dom_condit} \phi_\varepsilon(1)^{-1}\|F\|_1 = \phi_\varepsilon(1)^{-1}\max_{\eta\in X} \sum_{X\ni \xi\neq\eta}| \phi_\varepsilon(\eta\cdot \xi)| <1.$$ We now want to deal with a special $\phi_\varepsilon$, which is not necessarily generated by an SBF $\phi$. Let $\psi$ be a zonal function in $L^1$, so that $$\psi(\xi\cdot\eta)=\sum_{\ell=0}^\infty \hat\psi(\ell) \frac{\ell+\lambda_n}{\lambda_n\omega_n}\ultra {\lambda_n} \ell (\xi\cdot \eta ).$$ We will assume that $1+\hat\psi(\ell)>0$ for all $\ell\ge 0$ and that $\kappa$ has support in $|t|\in[1,\infty)$. Take $\phi_\varepsilon = K_{\varepsilon,n}+K_{\varepsilon,n}\ast \psi$, where $K_{\varepsilon,n}$ is the kernel for the operator $\kappa(\varepsilon \sfl)$. In addition, define $\psi_\varepsilon=K_{\varepsilon,n}\ast \psi$. Since $\hat \phi_{\varepsilon}(\ell) = \kappa(\varepsilon (\ell+\lambda_n))(1+ \hat\psi(\ell)) \ge 0$, we see that $\phi_\varepsilon$ is a positive definite spherical function, but not an SBF. Using (\[SUM\_kernel\_est\]) yields $$\begin{aligned} \sum_{X\ni \xi\neq\eta} |\phi_{\varepsilon}(\eta\cdot \xi)| &\le& \sum_{X\ni \xi\neq\eta} |K_{\varepsilon,n}(\eta\cdot \xi)| + \sum_{X\ni \xi\neq\eta} |\psi_{\varepsilon}(\eta\cdot \xi)| \\ &\le& C_{n,\kappa,k} q^{-n} + \sum_{\xi\in X} |\psi_{\varepsilon} (\eta\cdot \xi)| \end{aligned}$$ Thus, from this and equation (\[SUM\_function\_est\]), with $\kappa(t)=0$, $|t|\le 1$, we have shown that $$\label{SUM_phi_eps_bnd} \sum_{X\ni \xi\neq\eta} |\phi_{\varepsilon}(\eta\cdot \xi)| \le C_{n,\kappa,k} (q^{-n} + \rho^n\varepsilon^{-n} E_{L_\varepsilon}(\psi)_1),\ L_\varepsilon = \lfloor 1/\varepsilon - \lambda_n \rfloor.$$ Thus we have bounded the sum involved in the diagonal dominance condition (\[diag\_dom\_condit\]). Next, we will deal with $\phi_\varepsilon(1)$.
We have the following chain of inequalities: $$\begin{aligned} \phi_\varepsilon(1)&=&K_{\varepsilon,n}(1) +K_{\varepsilon,n}\ast \psi(1) \\ &=&\sum_{\ell=0}^\infty \kappa(\varepsilon(\ell+\lambda_n))(1+\hat\psi(\ell)) d_\ell^n \\ &\ge&c_0 \sum_{\ell=0}^\infty \kappa(\varepsilon(\ell+\lambda_n))d_\ell^n = c_0 K_{\varepsilon,n}(1),\end{aligned}$$ where $c_0=\min_{\ell\ge 0}(1+\hat\psi(\ell))>0$. (This is true because $\psi\in L^1$ implies that $\hat\psi(\ell) \to 0$ as $\ell\to \infty$.) Furthermore, it is easy to see that $$K_{\varepsilon,n}(1) = \sum_{\ell=0}^\infty \kappa(\varepsilon(\ell+\lambda_n)) d_\ell^n \sim \varepsilon^{-n}\underbrace{ \int_1^\infty\kappa(t)t^{n-1}dt}_{>0}.$$ Thus, $\phi_\varepsilon(1)\ge C''_{n,\kappa,k} \varepsilon^{-n}$. From this and (\[SUM\_phi\_eps\_bnd\]), we arrive at the bound below: $$\label{diag_dom_phi_eps_bnd} \|D^{-1}F\|_1\le C_{n,\kappa,k}\left((\varepsilon/q)^n + \rho^n E_{L_\varepsilon}(\psi)_1\right), \ L_\varepsilon = \lfloor 1/\varepsilon - \lambda_n \rfloor.$$ By choosing $\varepsilon \le q$ sufficiently small, we can make $ C_{n,\kappa,k}\rho^n E_{L_\varepsilon}(\psi)_1$ less than $1/4$, since $E_{L_\varepsilon}(\psi)_1 \to 0$ as $L_\varepsilon\to \infty$. At this point, the choice of $\varepsilon$ depends only on $\psi$ and the mesh ratio $\rho$. If necessary, we may then choose $\varepsilon$ smaller still in order to force the first term on the right to be less than $1/4$. With this choice of $\varepsilon$, which depends on $\rho$, $n$, $\kappa$ and $k$, we obtain $\|D^{-1}F\|_1<1/2$. By Lemma \[diag\_dom\_lem\], we get the bound on $\|A_\varepsilon^{-1}\|_1$ below. \[norm\_A\_1\_kernel\_est\] Suppose that $\kappa$ has support in $|t|\in[1,\infty)$. Let $\phi_\varepsilon = K_{\varepsilon,n}+K_{\varepsilon,n}\ast\psi$, where $\psi\in L^1$ is a zonal function satisfying $1+\hat\psi(\ell)>0$ for $\ell\ge 0$. Then there are constants $c$ and $C$, which depend on $\psi$, on $\rho$, $n$, $\kappa$ and $k$, such that whenever $\varepsilon \le c q$ we have $\|A_\varepsilon^{-1}\|_1 \le C \varepsilon^n $. The proof above required conditions on the support of $\kappa$ in order to deal with the perturbation generated by $\psi$. If $\psi$ is $0$, then there is no need for such restrictions. Also, the term involving $\rho$ is gone, and it is no longer involved in determining $c$ and $C$. We collect these observations below. \[norm\_A\_1\_remark\] If $\psi=0$, then Proposition \[norm\_A\_1\_kernel\_est\] holds without restriction on the support of $\kappa$, and neither $c$ nor $C$ depend on $\rho$. We now take an SBF $\phi$ of the form $\phi=G_\beta+G_\beta\ast \psi$, where $G_\beta$ is the Green’s function for $\sfl^{\beta}$ and $\psi\in L^1$. Our aim is to establish a bound on the stability ratio for such $\phi$. \[stability\_ratio\_algebraic\] Consider the SBF $\phi=G_\beta+G_\beta\ast \psi$, where $G_\beta$ is the Green’s function for $\sfl^{\beta}$ and $\psi\in L^1$. Let $X$ be a set of centers with separation radius $q$ and mesh ratio $\rho$. Let $\calg = \calg_{\phi,X}$ be the corresponding SBF network. Then there is a constant $C=C(n,\phi,\beta)$ such that the stability ratio of $\calg$ satisfies $$\label{stability_ratio_algebraic_est} \maxg_{\calg,\,p} \le C\rho^{n/p}q^{n/p'-\beta}$$ Since we are assuming that $\phi$ is an SBF, the coefficients of the $L^1$ function $\psi$ must satisfy $1+\hat\psi (\ell)>0$ for all $\ell\ge 0$. Assume $\kappa$ satisfies (\[kappa\_condits\]) and has support in $|t|\in[1,\infty)$.
The corresponding $\phi_\varepsilon$ is just $\phi_\varepsilon=\sfk_{\varepsilon,n}\phi = \sfk_{\varepsilon,n}(G_\beta + G_\beta\ast\psi)$. By Corollary \[kernel\_scaling\_prop\], we have that $\sfk_{\varepsilon,n}G_\beta=\varepsilon^\beta \widetilde \sfk_{\varepsilon,n}=\tilde\kappa(\varepsilon \sfl)$, where $\tilde\kappa(t)=|t|^{-\beta}\kappa(t)$ satisfies (\[kappa\_condits\]). From this, we have that $\phi_\varepsilon = \varepsilon^\beta \tilde \phi_\varepsilon$. If we let $\tilde A_\varepsilon$ be the interpolation matrix for $\tilde \phi_\varepsilon$, we see that $A_\varepsilon = \varepsilon^\beta \tilde A_\varepsilon$. The function $\tilde \phi_\varepsilon$ satisfies the conditions on the corresponding function in Proposition \[norm\_A\_1\_kernel\_est\]. Thus, by choosing $\varepsilon \le cq$, we have $$\| A_\varepsilon^{-1}\|_1 = \varepsilon^{-\beta}\| \tilde A_\varepsilon^{-1}\|_1 \le C\varepsilon^{n-\beta}.$$ From Proposition \[stability\_ratio\_interp\_est\], we obtain $$\maxg_{\calg,\,p}\le C_{\kappa,n,k}^{1/p}\rho^{n/p} \varepsilon^{-n/p} \|A_\varepsilon^{-1}\|_1\le C' \rho^{n/p} \varepsilon^{n/p'-\beta}.$$ Choosing $\varepsilon$ as large as possible, namely $\varepsilon=cq$, we have $$\maxg_{\calg,\,p}\le C\rho^{n/p} q^{n/p'-\beta},$$ where the constant $C=C(n,\kappa,k,\phi,p,\beta)$. By taking the infimum over all $\kappa$, $p$ and $k$, we reduce the dependency of $C$ to $C=C(n,\phi,\beta)$. This completes the proof. ### Infinitely differentiable SBFs Let $\phi$ be an infinitely differentiable SBF. The fast decay of the Fourier-Legendre coefficients $\hat\phi(\ell)$ requires a different approach to bounding $\maxg_\calg$ than the one used to obtain Theorem \[stability\_ratio\_algebraic\]. As before, we let $A_{\varepsilon}$ be the $N\times N$ interpolation matrix for $\phi_\varepsilon = \sfk_{\varepsilon,n}\phi$. In addition, we will let $A$ be the corresponding matrix for $\phi$. By standard matrix estimates, the norm $\|A_\varepsilon^{-1}\|_1$ satisfies $$\|A_\varepsilon^{-1}\|_1\le N^{1/2} \|A_\varepsilon^{-1}\|_2.$$ Since $A_\varepsilon$ is a positive definite selfadjoint matrix, the norm $\|A_\varepsilon^{-1}\|_2$ is equal to the reciprocal of $\lambda_{\mathrm {min}}(A_\varepsilon)$, the smallest eigenvalue of $A_\varepsilon$; that is, $\|A_\varepsilon^{-1}\|_2 = 1/\lambda_{\mathrm {min}}(A_\varepsilon)$. We will begin by estimating this eigenvalue. In preparation for this, we define the quantity $$\label{hat_phi_min_def} \hat\phi_{\mathrm {min}}(L) :=\min_{\,0\le \ell \le L}\hat\phi(\ell)>0,$$ where the strict positivity follows from $\phi$ being an SBF. \[min\_eval\_prop\] Let $\kappa \ge 0$ be in $C^k({\mathbb{R}})$, $k\ge n+2$, and let it satisfy (\[kappa\_condits\]). In addition, suppose that ${\operatorname{supp}}(\kappa) \subseteq [-2,2]$ and that $\kappa\le 1$. Then, there are constants $c=c_{n,\kappa,k}>0$ and $C=C_{n,\kappa,k}>0$ such that for all $\varepsilon\le c q$, $$\lambda_{\mathrm {min}}(A) \ge \lambda_{\mathrm {min}}(A_\varepsilon) \ge C\hat\phi_{\mathrm {min}}(L_{\varepsilon/2})\varepsilon^{-n},\ L_{\varepsilon/2} := \lfloor 2/\varepsilon - \lambda_n\rfloor.$$ Using the Rayleigh-Ritz principle, we thus have $$\|A_\varepsilon^{-1}\|_2^{-1}=\lambda_{\mathrm {min}}(A_\varepsilon) = \min_{a\in \CC^N,\, \|a\|_2=1} a^\ast A_\varepsilon a,$$ where $A_\varepsilon=[\phi_\varepsilon(\eta\cdot \xi)]_{\xi,\eta\in X}$.
Because $\phi_\varepsilon$ is a (positive definite) zonal function, we can use its expansion in spherical harmonics to represent $\lambda_{\mathrm {min}}(A_\varepsilon)$ via $$\label{lambda_min} \lambda_{\mathrm {min}}(A_\varepsilon)= \min_{a\in \CC^N,\, \|a\|_2=1} \left( \sum_{\ell=0}^\infty \sum_{m=1}^{d_\ell} \kappa((\ell+\lambda_n)\varepsilon) \hat\phi(\ell) \bigg| \sum_{\xi\in X} Y_{\ell,m}(\xi)a_\xi\bigg|^2\right)$$ Since the support of $\kappa$ is $[-2,2]$, the sum above cuts off at $L_{\varepsilon/2} := \lfloor 2/\varepsilon - \lambda_n\rfloor$. Consequently, we can bound $\lambda_{\mathrm min}(A_\varepsilon)$ from below as follows: $$\lambda_{\mathrm {min}}(A_\varepsilon) \ge \hat\phi_{\mathrm min}(L_{\varepsilon/2}) \underbrace{\min_{a\in \CC^N,\, \|a\|_2=1} \left( \sum_{\ell=0}^{L_{\varepsilon/2}} \sum_{m=1}^{d_\ell} \kappa((\ell+\lambda_n)\varepsilon) \bigg| \sum_{\xi\in X} Y_{\ell,m}(\xi)a_\xi\bigg|^2\right)}_{\displaystyle{\lambda_{\mathrm min}([K_{\varepsilon,n}(\xi\cdot \eta)])}}.$$ Note that $\lambda_{\mathrm {min}}([K_{\varepsilon,n}(\xi\cdot \eta)])= \| \, [K_{\varepsilon,n}(\xi\cdot \eta)]^{-1}\|_2^{-1} \ge \| \, [K_{\varepsilon,n}(\xi\cdot \eta)]^{-1}\|_1^{-1}$, because $\|B\|_2\le \|B\|_1$ for all selfadjoint $B$. The existence of $c$ and $C$ and their dependencies, along with $\| \, [K_{\varepsilon,n}(\xi\cdot \eta)]^{-1}\|_1\le C\varepsilon^n$ for $\varepsilon \le cq$, follow from Proposition \[norm\_A\_1\_kernel\_est\] and Remark \[norm\_A\_1\_remark\]. Finally, applying the Rayleigh-Ritz principle, (\[lambda\_min\]), and $0\le \kappa\le 1$, we have that $\lambda_{\mathrm {min}}(A) \ge \lambda_{\mathrm min}(A_\varepsilon)$. This finishes the proof. There are two immediate consequences that follow from Proposition \[min\_eval\_prop\]. The first is a bound on the stability ratio in this case. \[stability\_ratio\_infitely\_dif\] Consider the SBF $\phi$, where $\phi$ is assumed to be infinitely differentiable, and let $X$ be a set of centers with separation radius $q$ and mesh ratio $\rho$. Let $\calg = \calg_{\phi,X}$ be the corresponding SBF network. Then there are positive constants $C=C_{n,\kappa,k}$ and $c=c_{n,\kappa,k}$ such that the stability ratio of $\calg$ satisfies $$\maxg_{\calg,\,p} \le C\rho^{n/p}\frac{q^{n(1/p'-1/2)}}{ \hat\phi_{\mathrm {min}}(L_{cq/2})},\ \text{where } L_{cq/2}= \lfloor 2/(cq) - \lambda_n\rfloor.$$ Since $\|A_\varepsilon^{-1} \|_1\le N^{1/2} \|A_\varepsilon^{-1}\|_2$, Proposition \[min\_eval\_prop\] implies that for $\varepsilon \le cq$, $$\|A_\varepsilon^{-1} \|_1\le C_{n,\kappa,k}\frac{N^{1/2}\varepsilon^n}{\hat\phi_{\mathrm { min}}(L_{\varepsilon/2})}.$$ By Proposition \[stability\_ratio\_interp\_est\], we then have that $$\maxg_{\calg,\, p} \le C_{\kappa,n,k,p}\frac{N^{1/2}\rho^{n/p}\varepsilon^{n/p'}}{\hat\phi_{\mathrm min}(L_{\varepsilon/2})}.$$ Noting that $N\sim q^{-n}$ and choosing $\varepsilon=cq$, which is as large as possible, we obtain the desired inequality. The second consequence is a new stability estimate for interpolation via a $C^\infty$ SBF $\phi$. Again, let $A$ be the interpolation matrix for $\phi$ on the set $X$. By Proposition \[min\_eval\_prop\], $\|A^{-1}\|_2=\lambda_{\mathrm {min}}(A)^{-1} \le C\varepsilon^n/\hat\phi_{\mathrm {min}}(L_{\varepsilon/2})$.
Taking $\varepsilon=cq$, we obtain a new bound on the norm of $A^{-1}$: $$\label{new_norm_inverse_bound} \|A^{-1}\|_2 \le C\frac{q^n}{\hat\phi_{\mathrm {min}}(L_{cq/2})}.$$ Bernstein inequalities and inverse theorems {#bernstein_inverse_theorems} =========================================== In this section, we will discuss both direct and inverse theorems for approximation by SBFs. For an overview of these notions, see [@DeVore-Lorentz-93-1]. Bernstein inequalities {#bernstein_inequalities} ---------------------- Bernstein inequalities are a primary tool in obtaining inverse theorems. In the introduction, we gave a strategy for obtaining Bernstein theorems. We have completed the preparation required to state and prove them. Our first result is for SBFs that are perturbations of Green’s functions. \[bernstein\_greens\_fn\] Consider the SBF $\phi=G_\beta+G_\beta\ast \psi$, where $G_\beta$ is the Green’s function for $\sfl^{\beta}$ and $\psi\in L^1$. Let $X$ be a set of centers with separation radius $q$ and mesh ratio $\rho$, and let $\calg = \calg_{\phi,X}$ be the corresponding SBF network. If $1\le p\le \infty$, $0<\gamma<\beta-n/p'$ and $g\in \calg$, then $$\label{bernstein_algebraic_est} \|g\|_{H^p_\gamma} \le C q^{-\gamma}\|g\|_p.$$ Recall that $\|g\|_{H^p_\gamma} \le \|\sfb_Jg\|_{H^p_\gamma} + \|(I-\sfb_J)g\|_{H^p_\gamma}$, where $\sfb_J$ is the frame reconstruction operator defined in section \[frames\]. Of course, from (\[B\_J\_approx\_id\]), this operator is bounded independently of $J$. From the polynomial version of the Bernstein inequality in (\[bernstein\_poly\_est\]), we have that $\|\sfb_J g\|_{H^p_\gamma} \le C2^{\gamma J}\|\sfb_J g\|_p \le C2^{\gamma J}\|g\|_p$, which implies (\[basic\_ineq\]). Inserting the approximation estimate (\[final\_approx\_est\]) and the stability-ratio estimate (\[stability\_ratio\_algebraic\_est\]) into (\[basic\_ineq\]) yields $$\begin{aligned} \|g\|_{H^p_\gamma} &\le& \left( C2^{\gamma J} + C' 2^{-(\beta-\gamma-n/p')J}q^{n/p'-\beta}(1+E_{2^{J+j_n}}(\psi)_1) \right)\|g\|_p \\ &\le& q^{-\gamma}\left( C(2^Jq)^{\gamma} + C'(2^{-J}q)^{(\beta-\gamma-n/p')}(1+\|\psi\|_1) \right)\|g\|_p.\end{aligned}$$ The integer $J$ is still a free parameter. Choose it to be $J=\lceil-\log_2(q)\rceil$. The Bernstein inequality (\[bernstein\_algebraic\_est\]) then follows on noting that $q\le \pi$, $\beta-\gamma-n/p'>0$, and $\|\psi\|_1$ is finite and fixed. Up to a point, an SBF $\phi\in C^\infty$ is handled in the same way as one related to a Green’s function. In particular, using the argument above, coupled with the approximation estimate (\[final\_approx\_est\]), with $\beta=\gamma+n$, and the stability estimate in Theorem \[stability\_ratio\_infitely\_dif\], we obtain $$\label{bernstein_infty_preliminary} \|g\|_{H^p_\gamma} \le C L^\gamma \left( 1 +C' \rho^n(qL)^{n(1/p'-1/2)} \frac{L^{-(\beta - \frac{n}{2})}E_L(\sfl_n^\beta \phi)_1} { \hat\phi_{\mathrm min}(L_{cq/2})} \right)\|g\|_p, \ L=2^{J+j_n},$$ where $L_{cq/2} =\lfloor 2/(cq) -\lambda_n\rfloor$. Because $\phi\in C^\infty$, it is in $H^p_\beta$ for all $\beta$. The inequality thus holds for all $\beta>\gamma+n/p'$. The object here is to choose $L = \alpha q^{-1}$, where the constant $\alpha$ is independent of $q$, such that the ratio on the right above is bounded. The other terms will be controlled easily in that case. To obtain a simple, applicable condition, we need the following lemma. \[sequence\_condition\] Let $0<\mu(\ell)\le \sigma(\ell)$ be eventually decreasing sequences.
Assume that for every $\alpha>0$ there is an integer $m_1=m_1(\alpha,\sigma)\ge 0$ such that $\ell^{\alpha}\sigma(\ell)\le \sigma(2^{-m_1}\ell)$. If, in addition, there is an integer $m_2=m_2(\alpha,\mu,\sigma)\ge 0$ such that $\sigma(2^{m_2}\ell)\le C_{\mu,\sigma}\mu(\ell)$ for all $\ell$ sufficiently large, then, with $m=m_1+m_2$, $$\frac{1}{\mu(L)}\sum_{\ell=2^m L}^\infty \ell^\alpha \sigma(\ell) \le C_{\mu,\sigma}2^{-m} L^{-1}.$$ Let $m_1=m_1(\alpha+2,\sigma)$. Then $$\sum_{\ell=L}^\infty \ell^\alpha \sigma(\ell)\le \sum_{\ell=L}^\infty \ell^{-2}\ell^{\alpha+2} \sigma(\ell)\le \sigma(2^{-m_1}L)\sum_{\ell=L}^\infty \ell^{-2} \le \frac{\sigma(2^{-m_1}L)}{L}.$$ Replace $L$ by $2^{m} L$ in the inequality above, so that the sum on the left above is bounded by $(2^{m}L)^{-1}\sigma(2^{m_2}L)\le C_{\mu,\sigma}2^{-m}L^{-1}\mu(L)$. Dividing by $\mu(L)$ yields the desired inequality. \[coef\_seq\_condit\] If there are two sequences $\mu(\ell)$ and $\sigma(\ell)$ that satisfy the conditions of Lemma \[sequence\_condition\] and in addition satisfy $\mu(\ell)\le \hat\phi(\ell)\le \sigma(\ell)$, then there is an integer $m=m(\beta,\phi,n)$ such that for all $L$ sufficiently large $$\label{error_bnd_phi_infty_condit} \frac{E_{2^m L}(\sfl_n^\beta \phi)_1} { \hat\phi_{\mathrm {min}}(L)} \le C_{\beta,\phi,n}2^{-m} L^{-1}.$$ Because $\phi$ is a $C^\infty$ SBF, the error $E_L(\sfl_n^\beta \phi)_1$ satisfies $$E_L(\sfl_n^\beta \phi)_1 \le \omega_n E_L(\sfl_n^\beta \phi)_\infty \le \sum_{\ell=L}^\infty \frac{(\ell+\lambda_n)^\beta\ultra {\lambda_n} \ell (1)}{\lambda_n}\hat\phi(\ell) \le \frac{2^{\beta+n}}{\Gamma(n)}\sum_{\ell=L}^\infty \ell^{\beta+n-1} \hat\phi(\ell),$$ where we have estimated factors independent of $\phi$ to get the term on the right. Applying Lemma \[sequence\_condition\] then completes the proof. Putting all these results together leads to this theorem. \[bernstein\_infty\_fn\] Let $\phi$ be a $C^\infty$ SBF. If there are two sequences $\mu(\ell)$ and $\sigma(\ell)$ that satisfy the conditions of Lemma \[sequence\_condition\] and in addition satisfy $\mu(\ell)\le \hat\phi(\ell)\le \sigma(\ell)$, then for every $\gamma>0$ Bernstein’s inequality, $$\|g\|_{H^p_\gamma} \le C_{\phi,\gamma,p} q^{-\gamma} \|g\|_p,$$ holds for all $g\in \calg_{\phi,X}$, $1\le p\le \infty$. In particular, it holds for the Gaussians, multiquadrics, ultraspherical generating functions, and the Poisson kernel. To get the inequality itself, use (\[bernstein\_infty\_preliminary\]) together with Corollary \[coef\_seq\_condit\], taking $\beta = \gamma+n>\gamma+n/p'$. The statement concerning the list of functions may be established by checking that the upper and lower bounds given in section \[SBFs\] for each function satisfy the conditions on $\mu(\ell)$ and $\sigma(\ell)$. Direct theorems {#direct_thms} --------------- In [@Mhaskar-etal-99-1 §4], we used a linear process to estimate the distance ${\operatorname{dist}}_{L^p}(f, \calg_{\phi,X})$, given that $\phi$ is a continuous SBF and $f\in L^p$. In several important cases, including the Gaussian, the process produced a near-best approximant. We will use a similar process here for an SBF of the form $\phi_\beta=G_\beta+G_\beta\ast \psi$, $\psi\in L^1$, again obtaining the corresponding distance estimates. Such SBFs are at least in $L^1$, but they might not be continuous. Our approach also makes use of recently developed positive-weight quadrature formulas for ${\mathbb{S}}^n$, introduced in [@Mhaskar-etal-01-1] and further developed in [@Narcowich-etal-06-1].
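Before turning to the general framework, we note in passing that the stability quantities $\lambda_{\mathrm{min}}(A)$ and $\|A^{-1}\|_1$ estimated in the preceding sections are easy to probe numerically. The sketch below is an illustration only and is not part of the argument: it uses the Poisson kernel on ${\mathbb{S}}^2$, whose Legendre coefficients are proportional to $r^\ell$, as a concrete positive definite zonal kernel, and the node construction, the value of $r$ and the number of centers are arbitrary choices made for the example.

```python
# Illustration only -- not part of the argument above.  The kernel is the
# Poisson kernel on S^2 (Legendre coefficients proportional to r^ell), a
# positive definite zonal function; N, r and the node set are ad hoc choices.
import numpy as np

def fibonacci_sphere(N):
    # Quasi-uniform nodes on S^2 (Fibonacci lattice).
    i = np.arange(N) + 0.5
    polar = np.arccos(1.0 - 2.0 * i / N)
    azim = np.pi * (3.0 - 5.0 ** 0.5) * i
    return np.column_stack((np.sin(polar) * np.cos(azim),
                            np.sin(polar) * np.sin(azim),
                            np.cos(polar)))

def poisson_kernel(t, r):
    # Closed form of sum_ell r^ell (2 ell + 1)/(4 pi) P_ell(t).
    return (1.0 - r ** 2) / (4.0 * np.pi * (1.0 - 2.0 * r * t + r ** 2) ** 1.5)

N, r = 100, 0.7
X = fibonacci_sphere(N)
dots = np.clip(X @ X.T, -1.0, 1.0)
A = poisson_kernel(dots, r)               # interpolation matrix [phi(xi . eta)]

geo = np.arccos(dots)                     # pairwise geodesic distances
np.fill_diagonal(geo, np.inf)
q = 0.5 * geo.min()                       # separation radius

lam_min = np.linalg.eigvalsh(A).min()             # smallest eigenvalue of A
inv_norm1 = np.linalg.norm(np.linalg.inv(A), 1)   # ||A^{-1}||_1
print(f"q = {q:.3f}, lambda_min = {lam_min:.3e}, ||A^-1||_1 = {inv_norm1:.3e}")
```

The printed values are merely meant to give a feel for the sizes of $q$, $\lambda_{\mathrm{min}}(A)$ and $\|A^{-1}\|_1$ for one particular configuration; no sharpness is claimed.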
We remark that a version of Theorem \[phi\_beta\_favard\], with the conditions on $\phi$ given in terms of sequence spaces involving the $\hat\phi(\ell)$’s, was established in [@Mhaskar-06-1 Theorem 3.1]. The general framework is this. Let $\phi$ be an SBF, so that the Fourier-Legendre coefficients $\hat\phi(\ell)$ are positive for all $\ell$. Define $\phi^{-1}$ to be the formal expansion $$\phi^{-1} \sim \sum_{\ell=0}^\infty \frac{\ell+\lambda_n}{\lambda_n\omega_n}\hat\phi(\ell)^{-1}\ultra {\lambda_n} \ell\,.$$ This expansion will converge in a distributional sense if the $\hat\phi(\ell)^{-1}$ grow at most polynomially fast. Otherwise, i.e., for faster growth, the expansion is purely formal. Since we are using it in connection with polynomials of finite degree, this is not a problem. For every spherical polynomial $S\in \Pi_L$, we can use $\phi^{-1}$ to define an inverse for the convolution operator $S \to \phi\ast S \in \Pi_L$; namely, the expression $\phi^{-1}\ast S$, which is defined by the expansion $$\phi^{-1}\ast S= \sum_{\ell=0}^L \sum_{m=1}^{d_\ell^n} \frac{\ell+\lambda_n}{\lambda_n\omega_n}\frac{\hat S(\ell,m)}{\hat\phi(\ell)} Y_{\ell,m},$$ which is just the convolution of $S$ with the polynomial $\sum_{\ell=0}^L\frac{\ell+\lambda_n}{\lambda_n\omega_n}\hat\phi(\ell)^{-1}\ultra {\lambda_n} \ell$. Suppose that $S$ is a spherical polynomial for which $\deg S +\lambda_n \le 2^{J+j_n}$. By Theorem \[bernstein-nikolskii-poly-ineqs\], we have that $\sfb_JS=S$. In addition, $S= \phi\ast\phi^{-1}\ast S$. Combining these two then yields $$S(x) =\sfb_J \phi\ast \phi^{-1}\ast S = \int_{{\mathbb{S}}^n}(\sfb_J\phi)(x\cdot \eta)(\phi^{-1}\ast S)(\eta)d\mu(\eta).$$ The kernel $\sfb_J\phi(x\cdot \eta)$ is a zonal polynomial with degree less than $2^{J+j_n+1}$. In addition, $\phi^{-1}\ast S$ is a spherical polynomial of degree $2^{J+j_n-1}$. Thus, the integrand above is a polynomial of degree less than $2^{J+j_n+1} + 2^{J+j_n-1}<2^{J+j_n+2}$. We will discretize this integral by applying the quadrature formula in [@Narcowich-etal-06-1 §4.2]. Let $X$ be a set of centers, with $q$, $h$, $\rho$, and $\calx$ being the separation radius, mesh norm, mesh ratio, and Voronoi (or similar) decomposition, respectively. Let $L>0$ be an integer. There are positive weights $c_\xi$, $\xi\in X$, and a constant $s_n>0$ (cf. [@Narcowich-etal-06-1 §4.1]) such that $$\label{quad_formula} \int_{{\mathbb{S}}^n}f(\eta)d\mu(\eta) \doteq \sum_{\xi\in X}c_\xi f(\xi)$$ holds exactly for polynomials in $\Pi_L$, provided that $h \le \frac14 s_n^{-1}(L+\lambda_n)^{-1}$. The weights behave like $c_\xi = \calo\left(h^n\right)$, where the constants hidden by “big” $\calo$ are dependent only on the dimension $n$. Applying the quadrature formula to the integral representing $S$ yields $$S(x) = \sum_{\xi\in X}c_\xi (\sfb_J\phi)(x\cdot \xi)(\phi^{-1}\ast S)(\xi).$$ Of course, we are assuming that $h\sim 2^{-J}$. Let $\sfq_\calg : \Pi_L \to \calg_{\phi,X}$ be given via $$\sfq_\calg S(x) := \sum_{\xi\in X}c_\xi \phi(x\cdot \xi)(\phi^{-1}\ast S)(\xi),$$ and let $g=\sfq_\calg S$, where $\sfq$ is used because of the operator’s relationship with quadrature. The difference between $g$ and $S$ is thus $$g-S = \sum_{\xi \in X}c_\xi(I-\sfb_J)\phi((\cdot)\cdot \xi) (\phi^{-1}\ast S)(\xi) = (I-\sfb_J)g.$$ We now want to estimate the $H^p_\gamma$ norm of the difference $g-S=(I-\sfb_J)g$ in terms of $\|\phi^{-1}\ast S\|_p$. It is important to note that the norm $\|\phi^{-1}\ast S\|_p$ depends on the degree of $S$ and on $\phi$. We will deal with it later.
The easiest way to estimate $\|g-S\|_{H^p_\gamma}$ is to employ Theorem \[main\_approximation\_thm\], where the norm ratios $\|(I-\sfb_J)g\|_{H^p_\gamma}/|a|_p$ have been estimated. Thus, the task to be accomplished is to relate $|a|_p$ to $ \|\phi^{-1}\ast S\|_p$. To do this, we will again use the Riesz-Thorin theorem. First of all, we have that $a_\xi$, which is the coefficient of $\phi((\cdot)\cdot \xi)$ in $g$, is given by $a_\xi = c_\xi (\phi^{-1}\ast S)(\xi)$. Thus, $|a|_\infty = \max_{\xi\in X}c_\xi |(\phi^{-1}\ast S)(\xi)|$. Since $c_\xi = \calo(h^n)$, the bound $|a|_\infty \le C h^n \|\phi^{-1}\ast S\|_\infty$ holds. The $p=1$ case requires more work. Now, $|a|_1 = \sum_{\xi \in X}c_\xi | (\phi^{-1}\ast S)(\xi)|$. Since $c_\xi = \calo\left(h^n\right)\le C_n\rho^n q^n\le C''_n \rho^n \min_{\xi\in X}\mu(R_\xi) \le C''_n \rho^n\mu(R_\xi)$, we have $$|a|_1\le C''_n\rho^n \bigg(\sum_{\xi \in X}\mu(R_\xi) |(\phi^{-1}\ast S)(\xi)|\bigg) \le \frac{5 C''_n\rho^n}{4} \|\phi^{-1}\ast S \|_1\,.$$ The right side above follows on applying the polynomial version of the Marcinkiewicz-Zygmund inequality from [@Narcowich-etal-06-1 Theorem 4.2], with $\delta = 1/4$, to bound the sum in the middle by $(5/4)\|\phi^{-1}\ast S\|_1$. The Riesz-Thorin theorem then implies $$|a|_p \le C_{n,p} \rho^{n/p}h^{n/p'} \|\phi^{-1}\ast S\|_p.$$ Combining this with the estimate (\[final\_approx\_est\]), where $h\sim \varepsilon_J=2^{-(J+j_n)}$, and noting that $g-S=(\sfq_\calg - I)S$, we obtain the following result. \[direct\_est\_phi\_poly\] Let $\gamma\ge 0$, $1\le p\le \infty$, $\beta>\gamma+n/p'$, $h\sim 2^{-(J+j_n)}$. If $S$ is a spherical polynomial of degree $2^{J+j_n-1}$ or less, then $$\|(\sfq_\calg - I)S\|_{H^p_\gamma} \le C_{n,p} \rho^n h^{\beta-\gamma}\|\phi^{-1}\ast S\|_p \left\{ \begin{array}{cl} E_{2^{J+j_n}}(\sfl_n^\beta \phi)_1 & \phi\in H^1_\beta\,,\\ (1+E_{2^{J+j_n}}(\psi)_1 ) & \phi=G_\beta + G_\beta\ast \psi\,. \end{array}\right.$$ ##### The $\phi_\beta$ case. We will now focus on the $\phi_\beta$’s. Our immediate concern is estimating $\|\phi_\beta^{-1}\ast S\|_p$. \[norm\_phi\_beta\_inv\_ast\_poly\] Let $1\le p\le \infty$, $\beta>0$, $\psi\in L^1$, and $S\in \Pi_L$. If $\phi_\beta=G_\beta+G_\beta\ast \psi$, then there is a constant $C=C_{n,p,\psi}$, which is independent of $\beta$, $L$, and $S$, such that this holds: $$\label{phi_beta_inv_S} \|\phi_\beta^{-1}\ast S\|_p \le C_{n,p,\psi} \|S\|_{H^p_\beta}.$$ Note that $\phi_\beta^{-1}\ast S = (\sfl_n^\beta \phi_\beta)^{-1}\ast \sfl_n^\beta S$. The kernel $G_\beta$ is a Green’s function for $\sfl_n^\beta$, and so $\sfl_n^\beta \phi_\beta =\delta+\delta\ast \psi=\delta +\psi$, which is to be regarded as a distributional kernel. Finding $(\sfl_n^\beta \phi_\beta)^{-1}\ast\sfl_n^\beta S$ requires solving $\sfl_n^\beta \phi_\beta\ast T =T+\psi\ast T=\sfl_n^\beta S$ for $T$ in $\Pi_L$, which can be done directly, coefficient by coefficient. The solution $T$ is of course unique. There is another way to look at this equation, in an $L^p$ setting. Suppose that we want to solve $Hf:=f+\psi\ast f=h$ in $L^p$, for $1\le p<\infty$ and in $C$ (for $p=\infty$). The operator norm of $f\to \psi\ast f$ is at most $\|\psi\|_1$. By Theorem \[B\_J\_approx\_thm\], we have that $\|\psi-\sfb_J\psi\|_1\to 0$ as $J\to \infty$. It follows that the convolution operator with kernel $\psi$ is the norm limit of finite-rank operators with convolution kernels $\sfb_J\psi$.
The operator $\psi\ast$ is therefore compact on all $L^p$ and $C$; hence, $Hf=f + \psi\ast f$ has closed range on these spaces. Moreover, a simple coefficient argument shows that $\ker(H)=\{0\}$. The Fredholm Alternative [@Dunford-Schwartz-58-I §VII.11] then implies that $\ker(H^\ast)=\{0\}$, so $H^{-1}$ exists and is bounded on all $L^p$ and $C$. Since $\phi_\beta^{-1}\ast S=H^{-1}\sfl_n^\beta S$, we have that $$\label{phi_beta_inv_H_inv} \|\phi_\beta^{-1}\ast S\|_p \le \|H^{-1}\|_p \|S\|_{H^p_\beta}.$$ We emphasize that $\|H^{-1}\|_p$ is *independent* of $\beta$, $L$, and $S$. It depends only on $p$, $n$, and $\psi$. Consequently, $ C_{n,p,\psi}=\|H^{-1}\|_p$, and (\[phi\_beta\_inv\_S\]) holds. These lemmas lead to the following two direct theorems, the first for $S\in \Pi_L$ and the second for $f\in H^p_\beta$. \[poly\_direct\_est\] Let $1\le p\le \infty$, $\gamma\ge 0$, and $\beta>\gamma + n/p'$. If $S$ is a spherical polynomial of degree $2^{J+j_n-1}$ or less and if $h=\rho q \sim 2^{-J-j_n}$, then we have for $\phi=\phi_\beta$, $$\label{phi_beta_poly_direct} {\operatorname{dist}}_{H^p_\gamma}(S, \calg_{\phi_\beta,X}) \le C_{n,\beta,\gamma,p,\psi}\rho^n h^{\beta-\gamma} \|S\|_{H^p_\beta}\,.$$ The two lemmas, when applied to $\phi_\beta$, yield $$\label{norm_est_Q} \|(\sfq_\calg - I)S\|_{H^p_\gamma} \le C_{n,\beta,\gamma,p,\psi}\rho^n h^{\beta-\gamma}\|S\|_{H^p_\beta}.$$ The result follows on observing that ${\operatorname{dist}}_{H^p_\gamma}(S, \calg_{\phi_\beta,X})\le \|(\sfq_\calg - I)S\|_{H^p_\gamma} $. Note that the dependence of $C$ on the particular frame operator disappears on minimizing the constants involved over all functions $a$. \[phi\_beta\_favard\] Let $1\le p\le \infty$, $\gamma\ge 0$, and $\beta>\gamma + n/p'$. If $f\in H^p_\beta$, then for $\phi_\beta=G_\beta+G_\beta\ast\psi$, $\psi\in L^1$, $${\operatorname{dist}}_{H^p_\gamma}(f, \calg_{\phi,X}) \le C_{\beta,\gamma,n,p,\psi} h^{\beta-\gamma} \rho^n \|f\|_{H^p_\beta}\,.$$ Let $2^{-J-j_n}\sim h$ and choose $S$ to be the polynomial $S=\sfb_J f$; note that $\sfq_\calg S\in \calg_{\phi_\beta,X}$. From these choices and (\[norm\_est\_Q\]), it follows that $$\begin{aligned} \|f-\sfq_\calg S\|_{H^p_\gamma} &\le \|f-\sfb_Jf\|_{H^p_\gamma}+\|(\sfq_\calg - I)S\|_{H^p_\gamma} \\ &\le \|f-\sfb_Jf\|_{H^p_\gamma} + h^{\beta-\gamma}\rho^nC_{\beta, n,p}\|\sfb_J f \|_{H^p_\beta}.\end{aligned}$$ By Proposition \[H\^p\_gamma\_H\^q\_beta\], with $p=q$, we have $$\|f-\sfb_Jf\|_{H^p_\gamma}\le C_{\beta,\gamma, n,a}2^{-(\beta-\gamma) (J+j_n)} E_{2^{J+j_n}}(\sfl_n^\beta f)_p\le C_{\beta,\gamma, n,a}h^{\beta - \gamma} \|f\|_{H^p_\beta}.$$ From Proposition \[B\_J\_approx\_thm\], we easily see that $\|\sfb_Jf\|_{H^p_\beta} \le C_{\beta,\gamma,n,a}\|f\|_{H^p_\beta}$. Combining all of these inequalities establishes that $$\label{near_best_approx_H_gamma} \|f-\sfq_\calg S\|_{H^p_\gamma} \le C_{\beta,\gamma, n,a,\psi} \rho^nh^{\beta - \gamma} \|f\|_{H^p_\beta}.$$ Since ${\operatorname{dist}}_{H^p_\gamma}(f, \calg_{\phi,X})\le \|f-\sfq_\calg S\|_{H^p_\gamma}$, and since the distance itself doesn’t depend on the particular frame function, minimizing over the $a$ yields the result, with the constant independent of $a$. ##### The $C^\infty$ case. The case in which the SBF $\phi$ is $C^\infty$ was in large part done in [@Mhaskar-etal-99-1]. However, some adjustments need to be made because the estimates in that paper did not involve $H^p_\gamma$. One difference is in estimating the norm $\|\phi^{-1}\ast S\|_p$.
\[norm\_phi\_infin\_inv\_ast\_poly\] Let $1\le p\le \infty$, $\delta\ge 0$, $L>0$ an integer, and $S\in \Pi_L$. If $\phi\in H^p_\delta$ is an SBF, then there is a constant $C=C_n$, depending only on $n$, such that this holds: $$\label{phi_infin_inv_S} \|\phi^{-1}\ast S\|_p \le C_n \frac{L^{n\left|\frac{1}{2}-\frac{1}{p}\right|}}{ \widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(L)}\|S\|_{H^p_\delta},$$ where $\widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(L) = \min_{\,0\le \ell \le L}(\ell+\lambda_n)^\delta \hat\phi(\ell)$. We begin by estimating $\|\phi^{-1}\ast S\|_p$. The case in which $\phi \in H^p_\delta$ was essentially done in the proof of [@Mhaskar-etal-99-1 Theorem 4.1]; the result, which makes use of the Nikolskii inequality (\[nikolskii\_poly\_est\]), is the following. If $S\in \Pi_L$, then the Nikolskii inequality implies that $$\|\phi^{-1}\ast S\|_p =\|(\sfl_n^\delta \phi)^{-1}\ast \sfl_n^\delta S\|_p \le C_n L^{n (\frac{1}{2}-\frac{1}{p})_+}\|(\sfl_n^\delta \phi)^{-1}\ast \sfl_n^\delta S\|_2\,.$$ At this point, we simply use the $2$-norm estimate done in [@Mhaskar-etal-99-1 Theorem 4.1] and a second application of (\[nikolskii\_poly\_est\]) to get $$\|(\sfl_n^\delta \phi)^{-1}\ast \sfl_n^\delta S\|_2 \le (\widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(L))^{-1} \|\sfl_n^\delta S\|_2 \le C_n L^{n (\frac{1}{p}-\frac{1}{2})_+} (\widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(L))^{-1} \|\sfl_n^\delta S\|_p\,.$$ Putting the two inequalities together completes the proof. Let $\phi\in C^\infty$. We can now estimate the $H^p_\gamma$ distance of $S\in \Pi_L$ to $\calg_{\phi,X}$, in terms of $\|S\|_{H^p_\delta}$, where $\delta>\gamma+n/p'$. In Lemma \[direct\_est\_phi\_poly\], let $\beta=\delta+n/2$. Apply Lemma \[norm\_phi\_infin\_inv\_ast\_poly\], noting that $L\le 2^{J+j_n-1}\le h^{-1}$ implies $L^{n\left|\frac{1}{2}-\frac{1}{p}\right|}\le L^{n/2} \le h^{-n/2}$, to get this: $$\label{dist_poly_norm_est_Q} {\operatorname{dist}}_{H^p_\gamma}(S, \calg_{\phi,X}) \le \|(\sfq_\calg - I)S\|_{H^p_\gamma} \le C_{n,p} \rho^n h^{\delta-\gamma}\frac{E_{2^{J+j_n}}(\sfl_n^{\delta+n/2} \phi)_1}{\widehat{ \sfl_n^\delta \phi}_{\mathrm {min}}(L)} \|S\|_{H^p_\delta}\,.$$ \[phi\_H\^p\_alpha\_\_poly\_favard\] Let $1\le p\le \infty$, $\gamma\ge 0$, $\delta>\gamma + n/p'$, and $\phi\in C^\infty$. If there is an integer $m=m(\delta,\phi)>0$ such that $$\label{phi_H^p_gamma_error_ratio} \sup_{\ell>0}\frac{E_{2^m \ell}(\sfl_n^{\delta+ n/2} \phi)_1}{ \widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(\ell)} \le C_{m,n,\delta,\phi}$$ holds, and if $S\in \Pi_L$, with $L \le 2^{J+j_n-1-m}$ and $h \sim 2^{-J -j_n}$, then $$\label{phi_H^p_delta_poly_direct} {\operatorname{dist}}_{H^p_\gamma}(S, \calg_{\phi,X}) \le C_{m,n,p,\delta,\gamma} h^{\delta-\gamma}\rho^n \|S\|_{H^p_\delta}\,.$$ In addition, for $f\in H^p_\delta$, we have that $$\label{phi_H^p_beta_H^p_gamma_direct} {\operatorname{dist}}_{H^p_\gamma}(f, \calg_{\phi,X}) \le C_{m,n,p,\gamma,\delta,\phi} h^{\delta-\gamma} \rho^n \|f\|_{H^p_\delta}.$$ Finally, these estimates hold for Gaussians, multiquadrics, ultraspherical generating functions and Poisson kernels.
If (\[phi\_H\^p\_gamma\_error\_ratio\]) holds, then, since $\widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(L) \ge \widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(2^{J+j_n-m})$, it follows that $$\frac{E_{2^{J+j_n}}(\sfl_n^{\delta+n/2} \phi)_1}{\widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(L)} \le \frac{E_{2^{J+j_n}}(\sfl_n^{\delta+n/2} \phi)_1}{\widehat{\sfl_n^\delta \phi}_{\mathrm {min}}(2^{J+j_n-m})}\le C_{m,n,\delta,\phi},$$ and (\[phi\_H\^p\_delta\_poly\_direct\]) follows from this and (\[dist\_poly\_norm\_est\_Q\]). One can establish the $H^p_\gamma$ distance estimate (\[phi\_H\^p\_beta\_H\^p\_gamma\_direct\]) using a proof virtually identical to that for Theorem \[phi\_beta\_favard\]. Essentially the same argument used in section \[bernstein\_inequalities\] can be used here to show that Gaussians, multiquadrics, etc. satisfy (\[phi\_H\^p\_gamma\_error\_ratio\]), and so the estimates hold for them, too. Besov spaces {#besov_spaces} ------------- In this section, we review the definitions and basic facts regarding Besov spaces on ${\mathbb{S}}^n$. These spaces, which will interpolate between $L^p({\mathbb{S}}^n)$ and $H^p_\gamma$, are defined in [@Triebel-86-1]. Other, equivalent definitions of Besov spaces on ${\mathbb{S}}^n$ are given in [@Narcowich-etal-06-2]. Below, we will make use of a general construction found in [@DeVore-Lorentz-93-1 Chapter 6] to characterize these spaces in terms of spaces of SBF networks, $\calg_{\phi,X}$. There are two ingredients. First, we need to introduce certain sequence spaces. If $r>0$ and $0<\tau\le \infty$, we define for a sequence ${\bf a}=\{a_n\}_{n=0}^\infty$ of real numbers, $$\label{seqbdef} \|{\bf a}\|_{\tau,r} := \left\{\begin{array}{ll}\displaystyle \left\{\sum_{n=0}^\infty 2^{nr\tau}|a_n|^\tau\right\}^{1/\tau}, & \mbox{if } 0<\tau<\infty,\\ \displaystyle\sup_{n\ge 0} 2^{nr}|a_n|, & \mbox{if } \tau=\infty.\end{array}\right.$$ The space of sequences ${\bf a}$ for which $\|{\bf a}\|_{\tau,r}<\infty$ will be denoted by $\seqb_{\tau,r}$. The other ingredient in the definition of Besov spaces is a $K$–functional [@DeVore-Lorentz-93-1 Chapter 6]. For $\delta, \gamma>0$, $1\le p\le\infty$ and $f\in L^p$, the $K$–functional for $L^p$ and $H^p_\gamma$ is given by $$\label{kfuncdef} \K_\gamma(p,f,\delta):=\inf_{g\in H^p_\gamma}\{\|f-g\|_p +\delta^\gamma (\|g\|_p+\| g\|_{H^p_\gamma})\}.$$ If $r>0$, $0<\tau\le \infty$, $r<\gamma$, we define the class of all $f\in L^p$ for which $$\label{besovnormdef} \|f\|_{r,\gamma,\tau,p}:=\|f\|_p + \|\{\K_\gamma(p,f,2^{-n})\}_{n=0}^\infty\|_{\tau,r} <\infty$$ to be the Besov space $B^r_{\tau,p}$. As we shall see, other than the requirement $r<\gamma$, the $\gamma$ dependence will disappear from the characterization of the space, so it isn’t necessary to keep it in designating the space. An important problem in approximation theory is to characterize Besov spaces using degrees of approximation of functions. We recall the results of [@DeVore-Lorentz-93-1 Theorems 7.5.1 and 7.9.1], as they apply in the context of the present paper.
\[besovprop\] Let $1\le p \le \infty$, $\gamma>0$, and let $\{V_j\}_{j=0}^\infty$, with $V_0=\{0\}$, be a nested sequence of finite dimensional linear subspaces of $L^p$, $p<\infty$, or $C$, $p=\infty$. Suppose that for $j=1,2,\cdots$, one has both the Favard (Jackson) estimate $$\label{genfavard} {\operatorname{dist}}_{L^p}(f,V_j) \le C\,2^{-j\gamma}(\|f\|_p+\|f\|_{H^p_\gamma}),$$ for all $f \in H^p_\gamma$, and the Bernstein inequality $$\label{genbernineq} \|g\|_{H^p_\gamma} \le C 2^{j\gamma}\|g\|_p, \qquad g\in V_j.$$ Then for $0<r<\gamma$, $0<\tau\le\infty$, $f\in B^r_{\tau,p}$ if and only if $\{{\operatorname{dist}}_{L^p}(f,V_j)\}_{j=0}^\infty\in \seqb_{\tau,r}$. This is just [@DeVore-Lorentz-93-1 Theorem 7.5.1], with the sequence of spaces satisfying all requirements listed in [@DeVore-Lorentz-93-1 (5.2), p. 216], except possibly density. This requirement is in fact satisfied if the Favard inequality (\[genfavard\]) is satisfied. To see this, note that $H^p_\gamma$ contains all of the spherical polynomials, which form a dense set in $L^p$, $1\le p<\infty$ and in $C$. The Favard inequality (\[genfavard\]) then implies that $\cup_j V_j$ is dense in $H^p_\gamma$ and therefore in $L^p$, $1\le p<\infty$, or in $C$. In the important case when $V_j=\Pi_{2^j}$, Proposition \[bernstein-nikolskii-poly-ineqs\] gives the Bernstein estimate, while Proposition \[H\^p\_gamma\_H\^q\_beta\] provides the Favard estimate. In addition, since the criterion that $\{{\operatorname{dist}}_{L^p}(f,\Pi_{2^j})\}_{j=0}^\infty\in \seqb_{\tau,r}$ does *not* depend upon $\gamma$, it follows that the Besov spaces $B^r_{\tau,p}$ are independent of the different choices of $\gamma>r$ in their definition. This is why we don’t need to include the parameter $\gamma$ to index these spaces. The polynomial characterization of $B^r_{\tau,p}$ is precisely the one given in [@Narcowich-etal-06-2 Proposition 5.3], so that the “needlet” definition [@Narcowich-etal-06-2 Definition 5.1] is equivalent to the one above. (See also [@Mhaskar-Prestine-05-1].) The needlet definition is itself known to be equivalent (cf. [@Narcowich-etal-06-2]) to that given in [@Triebel-86-1]. It follows that all three are equivalent. Using the proposition above, one can also characterize Besov spaces using a variety of spherical basis functions. To do this, we must first have an appropriate nested sequence of sets of centers. By Proposition \[nesting\_Xs\], we can find a nested sequence $\{X_j\}_{j=0}^\infty \in \calf_\rho$, $\rho\ge 2$, each $X_j$ having mesh norm $h_j:=h_{X_j}$ satisfying $\frac14 h_j < h_{j+1}\le \frac12 h_j\le \frac{1}{2^j}h_0$. If $\phi\in L^p$ is an SBF, then define the $V_j$’s to be $$\label{nested_networks} V_j:=\calg_{\phi,X_j}, \ j=1, 2, \ldots,\ \mbox{and} \ V_0=\{0\}.$$ These spaces have finite dimension equal to the cardinality of $X_j$ and, by virtue of the $X_j$’s being nested, are themselves nested. At issue then are the Favard and Bernstein inequalities. Since any $\phi$ that satisfies both will provide a characterization of the Besov spaces via Proposition \[besovprop\], we have the following result. \[besov\_cor\_phi\] Let $1\le p\le \infty$, $\phi_\beta=G_\beta+G_\beta\ast \psi $, where $\psi\in L^1$ and $0<\beta$. Fix $0<\gamma<\beta-n/p'$ and suppose that $V_j=\calg_{\phi_\beta,X_j}$, with $X_j$ as in (\[nested\_networks\]). For all $0<r<\gamma$ and all $0<\tau\le\infty$, we have that $f\in B^r_{\tau,p}$ if and only if $\{{\operatorname{dist}}_{L^p}(f,V_j)\}_{j=0}^\infty\in \seqb_{\tau,r}$.
The same conclusion holds true, with any $\gamma>0$, for all $\phi$ that simultaneously satisfy (\[error\_bnd\_phi\_infty\_condit\]) and (\[phi\_H\^p\_gamma\_error\_ratio\]), including the Gaussians, multiquadrics, etc. When $V_j=\calg_{\phi_\beta,X_j}$, the result follows immediately from the Bernstein inequality in Theorem \[bernstein\_greens\_fn\] and the Favard inequality in Theorem \[phi\_beta\_favard\]. If $\phi$ satisfies both (\[error\_bnd\_phi\_infty\_condit\]) and (\[phi\_H\^p\_gamma\_error\_ratio\]), then it also satisfies both the Bernstein inequality in Theorem \[bernstein\_infty\_fn\] and the Favard inequality in Theorem \[phi\_H\^p\_alpha\_\_poly\_favard\]. As before, with the same set of $V_j$’s, the same conclusion holds. Inverse theorems {#inverse_theorems} ---------------- Inverse theorems give indications of rates of approximation being best, or nearly best, possible. We now establish inverse theorems for the approximation rates in the previous section and in [@Mhaskar-etal-99-1]. These involve Bessel-potential Sobolev spaces, and in addition Besov spaces. \[inverse\_thm\] Let $1\le p\le \infty$ and let $\phi$ be as in Theorem \[bernstein\_greens\_fn\] or Theorem \[bernstein\_infty\_fn\]. If for $f\in L^p$, $1\le p<\infty$, or $f\in C({\mathbb{S}}^n)$, $p=\infty$, there are constants $0<\mu\le \gamma$, $t \in {\mathbb{R}}$, and $c_f>0$ such that $$\label{dist_f_bnd} {\operatorname{dist}}_{L^p({\mathbb{S}}^n)}(f,\calg_{\phi,X}) \le c_f\frac{ h_X^\mu}{\log_2^t (h_X^{-1})}$$ holds for all $X\in \calf_\rho$, then, for every $0\le \nu <\mu$, $f\in H^p_\nu({\mathbb{S}}^n)$. If (\[dist\_f\_bnd\]) holds with some $t>1$, then one may also take $\nu=\mu$; that is, $f\in H^p_\mu({\mathbb{S}}^n)$. Moreover, if in addition $\phi$ satisfies the conditions in Corollary \[besov\_cor\_phi\], then for any $\tau>t^{-1}>0$ and $0<r\le \mu$, the function $f$ is in the Besov space $B^r_{\tau,p}$. Let the $V_j$’s be as in (\[nested\_networks\]), and set $f_j:= {\operatorname{argmin}}\left({\operatorname{dist}}_{L^p({\mathbb{S}}^n)}(f,V_j)\right)$, which always exists because $V_j$ is finite dimensional. Since the $V_j$’s are nested, we have that $f_j\in V_k$ for all $k\ge j$. We want to show that $f_j$ is a Cauchy sequence in $H^p_\nu$. From the Bernstein estimate in Theorem \[bernstein\_greens\_fn\] – or Theorem \[bernstein\_infty\_fn\] – and the inequality $h_{j+1}/q_{j+1}\le \rho$, we have $$\|f_{j+1}-f_j\|_{H^p_\nu} \le C \rho^\nu h_{j+1}^{-\nu}\|f_{j+1}-f_j\|_p\le C\rho^\nu h_{j+1}^{-\nu}\big(\|f_{j+1}-f\|_p + \|f-f_j\|_p\big).$$ And by (\[dist\_f\_bnd\]), we also have $$\begin{aligned} \|f_{j+1}-f_j\|_{H^p_\nu} &\le Cc_f \rho^\nu h_{j+1}^{-\nu} (h_{j+1}^\mu\log_2^{-t}(h_{j+1}^{-1}) + h_j^\mu\log_2^{-t}(h_j^{-1}))\\ &\le Cc_f \rho^\nu h_0 2^{-(\mu - \nu)(j+1)}\left( (h_0+j+1)^{-t} + 2^\mu (h_0+j)^{-t}\right)\\ &\le C'c_f 2^{-(\mu - \nu)j} j^{-t}\end{aligned}$$ where $C'$ is independent of $j$. Take $k>j$. Using the previous inequality and a standard telescoping-series argument, we arrive at this: $$\|f_j-f_k\|_{H^p_\nu} \le C''(\sum_{m=j}^k 2^{-(\mu - \nu)m} m^{-t}).$$ Letting $j,k\to\infty$, we see $\|f_j-f_k\|_{H^p_\nu} \to 0$ when $\mu>\nu$ and $t \in {\mathbb{R}}$, or when $\mu=\nu$ and $t>1$. Thus, $f_j$ is a Cauchy sequence in $H^p_\nu $ and is therefore convergent to $\tilde f\in H^p_\nu$. Moreover, by (\[dist\_f\_bnd\]) with $X=X_j$, we see that $f_j \to f$ in $L^p$, so $\tilde f = f$ almost everywhere. Hence, we have $f\in H^p_\nu$.
The statement concerning Besov spaces follows from two things: the observation that $a_j:={\operatorname{dist}}_{L^p}(f,V_j)\le c_f 2^{-\mu j}j^{-t}$, so $\| \bfa \|_{\tau,r}<\infty$ whenever $0<r \le \mu$ and $\tau t>1$, and Corollary \[besov\_cor\_phi\]. For the case $\nu=\mu$, $0<t\le 1$, the inverse theorem fails for Bessel-potential Sobolev spaces, but still remains valid for Besov spaces with $\tau>t^{-1}$. Concluding Remarks ================== There are connections between this paper and [@Mhaskar-etal-99-1; @Mhaskar-06-1]. In these papers quasi-interpolatory SBF networks were obtained, yielding near-best approximants for functions in Sobolev classes. The associated quasi-interpolation operators were constructed in the Fourier domain. The paper [@Mhaskar-etal-99-1] focused on sequences corresponding to the $C^{\infty}$ case treated within this paper. The paper [@LeGia-Mhaskar-06-1] dealt with sequences connected to the “perturbations of Green’s functions” case. For example, let $\psi$ be a perturbation of a Green’s function as described in this paper. If the Fourier coefficients of $\psi$ satisfy the “difference condition” stated in [@Mhaskar-06-1], then it is in $L^1$. The examples given in Section 3 satisfy both kinds of conditions. In [@Mhaskar-etal-99-1; @Mhaskar-06-1], the quasi-interpolatory SBF networks were shown to give best results in the sense of $n$-widths. In this paper, using the frame approach, we have shown that the quasi-interpolatory networks are also optimal for approximation of individual functions. Also note that in [@Mhaskar-06-1], Marcinkiewicz-Zygmund measures generalizing the measure that associates $\mu_q(R_\xi)$ with each $\xi$ were introduced. These measures were used to derive [@Mhaskar-06-1 Prop. 4.1 & (4.15)], which have overlap with the current Prop. 4.4, Lemma 4.7 and estimate (5.4). In [@LeGia-Mhaskar-06-1], the quasi-interpolation polynomial operators were further utilized to show that, in the presence of certain singularities, they exhibited better approximation properties than traditional methods. Also [@LeGia-Mhaskar-06-1 Prop. 4.3] is related to Proposition 4.10 given here. Finally, there is material closely connected to Theorem 4.1 appearing in [@Mhaskar-05-2 Proposition 4.1]. Another version of the operator $B_J$ was introduced in [@Mhaskar-etal-00-1]: $\sigma_J(f) = \sum_{l = 0}^{2^J} h(l/2^J)P_l(f),$ where $h:[0, \infty) \rightarrow [0, \infty)$ is a function in $C^k$, equal to 1 on $[0, 1/2]$ and 0 on $[1, \infty)$. An early form of Theorem 4.1 was Theorem 3.4 of [@Mhaskar-etal-00-1]. Frames based on the $\sigma_J(f)$ operator can be constructed as in [@Mhaskar-05-1; @Mhaskar-Prestine-05-1] using $h(t) - h(2t)$ in place of the $\kappa$ used in the construction given here. Finally, we mention that the idea of using minimal separation for converse theorems and Bernstein inequalities goes back to [@Schaback-Wendland-02-1]; see also [@Mhaskar-05-1]. Also, for the neural network community, we note that it is not the number of neurons that is used here as the measure of complexity, but rather the minimal separation of the nodes. [10]{} Characterizations of function spaces on the sphere using frames. , 2 (2007), 567–589 (electronic). . Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 1992. , vol. 303 of [*Grundlehren der Mathematischen Wissenschaften \[Fundamental Principles of Mathematical Sciences\]*]{}. Springer-Verlag, Berlin, 1993. . Wiley-Interscience, New York, 1958. . Clarendon Press, Oxford, 1998. . Birkhäuser, Boston, 2004.
A survey on spherical spline approximation. (1997), 29–85. . Springer, Berlin, 1987. The best approximation of classes of functions ${W}\sp{\alpha }\sb{p}\,(s\sp{n})$ by polynomials in spherical harmonics. (1982), 285–293. (English) 32 (1983), 622–626. Polynomial operators and local approximation of solutions of pseudo-differential equations on the sphere. (2006), 299–322. A [M]{}arkov-[B]{}ernstein inequality for [G]{}aussian networks. In [*Trends and applications in constructive approximation*]{}, vol. 151 of [*Internat. Ser. Numer. Math.*]{} Birkhäuser, Basel, 2005, pp. 165–180. On the representation of smooth functions on the sphere using finitely many bits. , 3 (2005), 215–233. Weighted quadrature formulas and approximation by zonal function networks on the sphere. (2006), 348–370. Polynomial frames on the sphere. (2000), 387–404. Approximation properties of zonal function networks using scattered data on the sphere. (1999), 121–137. Corrigendum to spherical [M]{}arcinkiewicz-[Z]{}ygmund inequalities and positive quadrature. (2001), 453–454. Spherical [M]{}arcinkiewicz-[Z]{}ygmund inequalities and positive quadrature. (2001), 1113–1130. (Corrigendum: *Math. Comp. 71* (2001), 453–454). Polynomial frames: a fast tour. In [*Approximation theory [XI]{}: [G]{}atlinburg 2004*]{}, Mod. Methods Math. Nashboro Press, Brentwood, TN, 2005, pp. 287–318. . Springer, Berlin, 1966. Decomposition of [B]{}esov and [T]{}riebel-[L]{}izorkin spaces on the sphere. , 2 (2006), 530–564. Localized tight frames on spheres. (2006), 574–594. Approximation power of rbfs and their associated sbfs: A connection. (2007), 107–124. Direct and inverse [S]{}obolev error estimates for scattered data interpolation via spherical basis functions. (2007), 369–390. Scattered-data interpolation on spheres: [E]{}rror estimates and locally supported basis functions. (2002), 1393–1410. . Academic Press, New York, 1974. Localized polynomial frames on the ball. , 2 (2008), 121–148. Strictly positive definite functions on spheres in [E]{}uclidean spaces. (1996), 1513–1530. On the equivalence of the [$K$]{}-functional and the modulus of smoothness of functions on a sphere. , 3 (1992), 123–129, 160. English transl. in *Math. Notes*, (1993) 965–970. Inverse and saturation theorems for radial basis function interpolation. (2002), 669–681. Positive definite functions on spheres. (1942), 96–108. . Princeton University Press, Princeton, New Jersey, 1971. Analysis of the laplacian on the complete riemannian manifold. (1983), 48–79. . Amer. Math. Soc., Providence, RI, 1975. Spaces of [B]{}esov-[H]{}ardy-[S]{}obolev type on complete [R]{}iemannian manifolds. , 2 (1986), 299–337. . Cambridge University Press, Cambridge, UK, 2005. . Cambridge University Press, Cambridge, UK, 1965. [^1]: Department of Mathematics, California State University, Los Angeles,CA 90032, USA. Research supported by grant DMS-0605209 from the National Science Foundation and grant W911NF-04-1-0339 from the U.S. Army Research Office. [^2]: Department of Mathematics, Texas A&M University College Station, TX 77843, USA. Research supported by grants DMS-0504353 and DMS-0807033 from the National Science Foundation. [^3]: Institute of Mathematics, University of Lübeck, Wallstrasse 40, 23560, Lübeck, Germany [^4]: Department of Mathematics, Texas A&M University College Station, TX 77843, USA. Research supported by grants DMS-0504353 and DMS-0807033 from the National Science Foundation. 
[^5]: *2000 Mathematics Subject Classification:* 41A17, 41A27, 41A63, 42C15, [^6]: *Key words:* sphere, Bernstein estimates, approximation, spherical basis functions.
--- author: - 'L. Cortese' - 'J. I. Davies' - 'M. Pohlen' - 'M. Baes' - 'G. J. Bendo' - 'S. Bianchi' - 'A. Boselli' - 'I. De Looze' - 'J. Fritz' - 'J. Verstappen' - 'D. J. Bomans' - 'M. Clemens' - 'E. Corbelli' - 'A. Dariush' - 'S. di Serego Alighieri' - 'D. Fadda' - 'D. A. Garcia-Appadoo' - 'G. Gavazzi' - 'C. Giovanardi' - 'M. Grossi' - 'T. M. Hughes' - 'L. K. Hunt' - 'A. P. Jones' - 'S. Madden' - 'D. Pierini' - 'S. Sabatini' - 'M. W. L. Smith' - 'C. Vlahakis' - 'E. M. Xilouris' - 'S. Zibetti' date: 'Submitted to A&A Herschel Special Issue' title: 'The Herschel Virgo Cluster Survey: II. Truncated dust disks in H[i]{}-deficient spirals[^1]' --- Introduction ============ It is now well established that the evolution of spiral galaxies significantly depends on the environment they inhabit. The reduction in the star formation rate (e.g., [@lewis02]) and atomic hydrogen (H[i]{}) content (e.g., [@giova85]) of galaxies when moving from low- to high-density environments indicates that clusters are extremely hostile places for star-forming galaxies. However, a detailed knowledge of the effects of the environment on all the components of the interstellar medium (ISM) is still lacking. Particularly important is our understanding of how the environment is able to affect the dust content of cluster spirals. Dust plays an important role in the process of star formation, since it acts as a catalyzer for the formation of molecular hydrogen (H$_{2}$, from which stars are formed) and prevents its dissociation by the interstellar radiation field. Thus, the stripping of dust might significantly affect the properties of the ISM in infalling cluster spirals. Since dust is generally associated with the cold gas component of the ISM, it is expected that when the H[i]{} is stripped part of the dust will be removed as well, but no definitive evidence of a reduced dust content in cluster galaxies has been found so far. For a fixed morphological type, H[i]{}-deficient galaxies[^2] appear to have higher IRAS $f(100~\mu {\rm m})/f(60~\mu {\rm m})$ flux density ratios (i.e., colder dust temperatures, [@bicay87]) and lower far-infrared (FIR) flux densities per unit optical area [@doyon89] than gas-rich galaxies. However, by using ISO observations of the Virgo cluster [@tuffs02], [@popescu02] find no strong variation with cluster-centric distance in the dust properties of each morphological type. Only the most extreme H[i]{}-deficient galaxies appear to be lacking a cold dust component. More recently, [@review] have revealed an interesting trend of decreasing dust masses per unit of H-band luminosity with decreasing distance from the center of Virgo. Thus, it is still an open issue whether or not dust is removed from infalling cluster spirals. The launch of [*Herschel*]{} [@pilbratt10] has opened a new era in the study of environmental effects on dust. Thanks to its high spatial resolution and sensitivity to all dust components, [*Herschel*]{} will be able to determine if cluster galaxies have lost a significant amount of their dust content. Ideally, this analysis should be done on a large, statistically complete sample, following the same criteria used to define the H[i]{}-deficiency parameter [@haynes]: i.e., by comparing the dust content of galaxies of the same morphological type but in different environments. 
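As a point of reference, the sketch below shows how a deficiency parameter of this kind is evaluated in practice; the coefficients of the expected H[i]{} mass-diameter relation used in the sketch are placeholders only, not the type-dependent calibrations of [@haynes] adopted later in this work.

```python
# Minimal sketch of the HI-deficiency parameter: the difference, in log
# units, between the HI mass expected for an unperturbed galaxy of the same
# optical diameter and the observed HI mass (positive values = gas poor).
# The coefficients a and b below are placeholders, NOT the calibrations of
# Haynes & Giovanelli (1984) or Chung et al. (2009) used in this paper.
import numpy as np

def hi_deficiency(m_hi_obs, d_opt_kpc, a=7.0, b=1.7):
    """def_HI = log10(M_HI expected) - log10(M_HI observed)."""
    log_m_exp = a + b * np.log10(d_opt_kpc)
    return log_m_exp - np.log10(m_hi_obs)

# Toy example: a 30 kpc optical disk with 10^8.6 Msun of HI.
print(hi_deficiency(10 ** 8.6, 30.0))
```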
By observing a significant fraction ($\sim$64 deg$^2$) of the Virgo cluster at 100, 160, 250, 350 and 500 $\mu$m, the Herschel Virgo Cluster Survey (HeViCS, [@davies10], hereafter Paper I; see also http://www.hevics.org) will soon provide the optimal sample for such an investigation. In the meantime, with the first HeViCS data it is possible to use a more indirect approach and compare the extent of the dust disk in gas-rich and gas-poor cluster galaxies. Since previous studies have shown that the H[i]{} stripping is associated with a ‘truncation’[^3] of the gas [@cayatte94] and star-forming disk [@koopmann04; @cati05; @review], if the dust follows the atomic hydrogen we should find a reduction in the extent of the dust disk with increasing H[i]{}-deficiency. ![image](collage_nolabel.epsi){width="17cm"} In this paper we will take advantage of the HeViCS observations, obtained as part of the [*Herschel*]{} Science Demonstration (SD) phase, to investigate the correlation between the dust distribution and gas content in cluster galaxies. Observations and data reduction =============================== A $\sim$245$\times$230 arcmin field in the center of the Virgo cluster has been observed by [*Herschel*]{} using the SPIRE/PACS [@griffin10; @poglitsch10] parallel scan-map mode as part of the SD observations for HeViCS. In this paper we will focus our attention on the three SPIRE bands only. The full widths at half maximum of the SPIRE beams are 18.1, 25.2 and 36.9 arcsec at 250, 350 and 500 $\mu$m, respectively. Details about the observations and data reduction can be found in Paper I. The typical rms noise values across the whole image are $\sim$12, 10 and 12 mJy/beam at 250, 350 and 500 $\mu$m, respectively (i.e., $\sim$2 times higher than the confusion noise). No spatial filtering is applied during the data reduction, making SPIRE maps ideal for investigating extended submillimetre (submm) emission. The uncertainty in the flux calibration is of the order of 15%. In order to investigate how the dust distribution varies with the degree of H[i]{}-deficiency in Virgo spirals, we restricted our analysis to the 15 spiral galaxies in the HeViCS SD field for which H[i]{} surface density profiles are available. The H[i]{} maps are obtained from the recent ‘VLA Imaging of Virgo in Atomic gas’ (VIVA) survey ([@chung09], 13 galaxies: NGC4294, NGC4299, NGC4330, NGC4351, NGC4380, NGC4388, NGC4402, NGC4424, NGC4501, NGC4567, NGC4568, NGC4569, NGC4579), from @cayatte94 [NGC4438] and from @warmels88 [NGC4413]. H[i]{}-deficiencies have been determined following the prescription presented in [@chung09]. This method is slightly different from the original definition presented by [@haynes], as it assumes a mean H[i]{} mass-diameter relation, regardless of the morphological type. Following [@chung09], we use the difference between the type-dependent and type-independent definitions as the uncertainty in the H[i]{}-deficiency parameter. We note that, on average, this value is smaller than the intrinsic scatter of $def_{\rm HI}$ for field galaxies ($\sim$0.27, [@fumagalli09]). Surface brightness profiles in the three SPIRE bands were derived using the IRAF task <span style="font-variant:small-caps;">ellipse</span>. The center was fixed to the galaxy’s optical center (taken from the NASA/IPAC Extragalactic Database[^4]) and the ellipticity and position angle to the same values adopted for the H[i]{} profiles taken from the literature [@chung09; @cayatte94; @warmels88].
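Purely as a schematic stand-in for this kind of fixed-geometry profile extraction (the actual profiles were derived with the IRAF ellipse task), the sketch below averages a map in elliptical annuli, applies a crude face-on correction and locates an isophotal radius; the pixel scale, bin width, inclination correction and surface brightness threshold are placeholder values.

```python
# Schematic stand-in for the profile extraction described above (the real
# profiles come from the IRAF ellipse task).  Geometry is held fixed; the
# bin width, image size, threshold and face-on correction are placeholders.
import numpy as np

def elliptical_profile(img, x0, y0, eps, pa_deg, step=3.0, rmax=150.0):
    """Mean flux in fixed elliptical annuli (semi-major axis in pixels)."""
    y, x = np.indices(img.shape)
    pa = np.deg2rad(pa_deg)
    dx, dy = x - x0, y - y0
    xp = dx * np.cos(pa) + dy * np.sin(pa)        # rotate into ellipse frame
    yp = -dx * np.sin(pa) + dy * np.cos(pa)
    sma = np.sqrt(xp ** 2 + (yp / (1.0 - eps)) ** 2)   # elliptical radius
    edges = np.arange(0.0, rmax, step)
    prof = np.array([img[(sma >= r0) & (sma < r0 + step)].mean()
                     for r0 in edges])
    return edges + 0.5 * step, prof

def isophotal_radius(radius, prof, threshold):
    """Outermost radius at which the profile still exceeds the threshold."""
    above = np.where(prof >= threshold)[0]
    return radius[above[-1]] if above.size else np.nan

# Toy usage on a fake exponential disk; cos(i) is approximated by 1 - eps.
img = np.fromfunction(lambda j, i: np.exp(-np.hypot(i - 100.0, j - 100.0) / 10.0),
                      (200, 200))
rad, prof = elliptical_profile(img, 100.0, 100.0, eps=0.3, pa_deg=45.0)
prof_faceon = prof * (1.0 - 0.3)                  # crude face-on correction
print(isophotal_radius(rad, prof_faceon, threshold=1e-3))
```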
The sky background was determined within rectangular regions around the galaxy and subtracted from the images before performing the ellipse fitting. Each profile was then corrected to the ‘face-on’ value using the inclinations taken from the literature. All the galaxies in our sample are clearly resolved in all three SPIRE bands: e.g., on average $\sim$4-5 beam sizes at 500 $\mu$m. Submm isophotal radii were determined at the 6.7$\times$10$^{-5}$, 3.4$\times$10$^{-5}$ and 1.7$\times$10$^{-5}$ Jy arcsec$^{-2}$ surface brightness levels at 250, 350 and 500 $\mu$m, respectively. These are the average surface brightnesses observed at the optical radius (25 mag arcsec$^{-2}$ in B band, [@rc3]) in the four non H[i]{}-deficient galaxies ($def_{\rm HI}<$0.3) in our sample (NGC4294, 4299, 4351, 4567) and correspond to the $\sim$2-3$\sigma$ noise level. Of course, this choice is rather arbitrary and it has no real physical basis. However, as discussed in § 3, the result does not depend on the way in which the isophotal radii have been defined. Although many of our galaxies show some evidence of nuclear activity, we do not find a single case in which the nuclear submm emission dominates the emission from the disk (see also [@sauvage10]). Thus, the isophotal radius is a fair indication of the extent of the dust disk. Results & discussion ==================== In Fig. \[fc\] we compare the optical, 250 $\mu$m and H[i]{} maps for a subsample of our galaxies with different levels of H[i]{}-deficiency. In highly deficient spirals the 250 $\mu$m emission is significantly less extended than the optical, following remarkably well the observed ‘truncation’ of the H[i]{} disk[^5]. This is confirmed in Fig. \[isoradius\], where we show the ratio of the submm-to-optical isophotal diameters as a function of $def_{HI}$ for the 15 galaxies in our sample. ![The ratio of the submm-to-optical diameters versus H[i]{}-deficiency in the three SPIRE bands. Squares are for Sa-Sab, stars for Sb-Sbc and hexagons for Sc and later types. For comparison, the triangles show the same relation for the H[i]{}-to-optical diameter ratio, where the H[i]{} isophotal diameters are taken at a surface density level of 1 M$_{\odot}$ pc$^{-2}$ [@chung09].[]{data-label="isoradius"}](isoradius_rev.epsi){width="7.25cm"} For all three SPIRE bands we find a strong correlation (Spearman correlation coefficient $r_{s}\sim-$0.87, corresponding to a probability $P(r>r_{s})>$ 99.9% that the two variables are correlated) between the submm-to-optical diameter ratio and $def_{HI}$. Although qualitatively supported by Fig. \[fc\], this correlation alone does not imply a change in the shape of the submm profile. A decrease in the central submm surface brightness of gas-poor galaxies could produce a similar trend without the need to invoke a reduction in the disk scale-length. However, Figs. \[SB\] and \[profiles\] clearly exclude such a scenario. In Fig. \[SB\] we show that, while the 350 $\mu$m flux per unit of 350 $\mu$m area (i.e., the average submm surface brightness) is nearly constant across the whole sample, the 350 $\mu$m flux per unit of optical area significantly decreases with increasing $def_{HI}$. This is even more evident in Fig. \[profiles\], where the average surface brightness profiles in bins of normalized radius for gas-rich and gas-poor galaxies ($def_{HI}>$0.96; i.e., NGC4380, NGC4388, NGC4424, NGC4438, NGC4569) are shown.
While the central surface brightness is approximately the same, the profile of H[i]{}-deficient galaxies is steeper than in normal galaxies and falls below our detection limit at approximately half the optical radius[^6]. We can thus conclude that H[i]{}-deficient galaxies have submm disks significantly less extended than the optical disks, following closely the ‘truncation’ observed in H[i]{}. Interestingly, from Fig. \[isoradius\] and \[SB\] it emerges that the extent of the dust disk is significantly reduced compared to the optical disk only for high H[i]{}-deficiencies ($def_{\rm HI}$[[$ \stackrel{>}{\sim}$]{}]{}0.8-1), i. e. when the atomic hydrogen starts to be stripped from inside the optical radius. ![The 350 $\mu$m flux per unit of 350 $\mu$m area (left) and optical area (right) versus H[i]{}-deficiency. Symbols are as in Fig. \[isoradius\].[]{data-label="SB"}](SB_prof.epsi){width="8cm"} ![Average submm surface-brightness profiles in bins of normalized radius for normal and highly H[i]{}-deficient galaxies.[]{data-label="profiles"}](profiles.ps){width="7.6cm"} We now need to consider whether we are just observing a trend due to a different mix of morphologies between gas-rich and gas-poor galaxies. Although H[i]{}-deficient systems are of earlier type than gas-rich spirals, our result does not change if we focus our attention on Sa-Sbc galaxies only (i.e., 80% of our sample). Since in this range the average 850 $\mu$m scale-length-to-optical radius [@thomas04] and H[i]{}-to-optical radius [@cayatte94] ratios do not vary significantly (i.e., less than 1$\sigma$) with galaxy type, morphology alone cannot be responsible for the correlations shown in Fig. \[isoradius\] and \[SB\]. Moreover, all the highly H[i]{}-deficient galaxies in our sample are well known perturbed Virgo spirals, on which the influence of the cluster environment has already been proven (e.g., [@vollmer09]). So, the difference in the dust distribution between gas-poor and gas-rich spirals observed here is likely due to the effect of the cluster environment and is not just related to the intrinsic properties of each galaxy. Future analysis of a larger and more complete sample will allow us to further disentangle the role of environment from morphology on the dust distribution in nearby spirals. A ‘truncation’ in the surface brightness profile of NGC4569 (the most H[i]{}-deficient galaxy in our sample) has already been observed at [*Spitzer*]{} 24 and 70 $\mu$m by [@n4569]. However, while a reduction in the 24 and 70 $\mu$m surface brightness might just be a direct consequence of the quenching of the star formation in gas-poor galaxies, this scenario is not valid in our case. For $\lambda$[[$ \stackrel{>}{\sim}$]{}]{}100-200 $\mu$m, the dust emission does not come predominantly from grains directly heated by photons associated with star formation activity, but from a colder component heated by photons of the diffuse interstellar radiation field (e.g., [@chini86; @draine07; @bendo10]). Since this colder component dominates the dust mass budget in galaxies, the trends here observed are likely not due to a reduction in the intensity of the ultraviolet radiation field, but they imply that in H[i]{}-deficient galaxies the dust surface density in the outer parts of the disk is significantly lower than in normal spirals. An alternative way to compare the properties of normal and gas-poor Virgo spirals is to look at their submm-to-near-infrared colours. 
Since the K-band is an ideal proxy for the stellar mass and the SPIRE fluxes provide an indication of the total dust mass, it is interesting to investigate how the $f(250)/f(K)$ and $f(500)/f(K)$ flux density ratios vary with $def_{HI}$. We find that highly H[i]{}-deficient galaxies have $f(250)/f(K)$ and $f(500)/f(K)$ ratios a factor $\sim$2-3 lower than normal galaxies (Fig. \[colours\]). This provides additional support to a scenario in which gas-poor galaxies have also lost a significant fraction of their original dust content. By comparing the dust mass per unit of H-band luminosity for a sample of late-type galaxies in the Coma-Abell1367 supercluster, [@contursi01] find no significant difference in the dust content of normal and H[i]{}-deficient spirals, apparently in contrast with our results. However, such a difference is due (at least in part) to the fact that the sample used by [@contursi01] does not include any galaxy with $def_{\rm HI}>$0.87. It is easy to see in Fig. \[colours\] that, for $def_{\rm HI}<$0.87, almost no trend is observed between the $f(500)/f(K)$ (or $f(250)/f(K)$) flux density ratio and $def_{HI}$. In fact, the Spearman correlation coefficient drops from $r_{s}\sim-$0.78 to $\sim-$0.28 and $\sim-$0.11 (i.e., a drop in the probability that two variables are correlated to 80 and 40%) for the $f(250)/f(K)$ and $f(500)/f(K)$ ratios, respectively. This implies that the two variables are no longer significantly correlated, highlighting once more that substantial dust stripping is observed only if the ISM is removed from well within the optical radius. ![The 250 $\mu$m-to-K-band (left) and 500 $\mu$m-to-K-band flux density ratios versus H[i]{}-deficiency. Symbols are as in Fig. \[isoradius\].[]{data-label="colours"}](colours_rev.epsi){width="8cm"} Conclusions =========== In this paper, we have shown that in H[i]{}-deficient galaxies the dust disk is significantly less extended than in gas-rich systems. This result, combined with the evidence that H[i]{}-deficient objects show a reduction in their submm-to-K-band flux density ratios, suggests that when the atomic hydrogen is stripped part of the dust is removed as well. However, the dust stripping appears efficient only when very gas-poor spirals are considered, implying that in order to be significant the stripping has to occur well within the optical radius. This is consistent with [@thomas04] who found that the 850 $\mu$m scale-length of nearby galaxies is smaller than the H[i]{}, suggesting that outside the optical radius the gas-to-dust ratio is higher than in the inner parts. Our analysis provides evidence that the cluster environment is able to significantly alter the dust properties of infalling spirals. We note that this has only been possible thanks to the unique spatial resolution and high sensitivity in detecting cold dust provided by the [*Herschel*]{}-SPIRE instrument and to the wide range of H[i]{}-deficiencies covered by our sample. Once combined with the direct detection of stripped dust presented by [@cortese10b] and [@gomez10], our results highlight dust stripping by environmental effects as an important mechanism for injecting dust grains into the intra-cluster medium, thus contributing to its metal enrichment. This is consistent with numerical simulations which predict that ram pressure alone can already contribute $\sim$10% of the enrichment of the ICM in clusters [@domainko]. Interestingly, the stripped grains should survive in the hot ICM long enough to be observed [@popescu00; @clemens10]. 
Once completed, HeViCS will allow a search for additional evidence of dust stripping and place important constraints on the amount of intra-cluster dust present in Virgo. Moreover, in combination with the Herschel Reference Survey [@hrs], it will be eventually possible to accurately quantify the degree of dust-deficiency in Virgo spirals. We thank the referee, Richard Tuffs, for useful comments which improved the clarity of this manuscript. We thank all the people involved in the construction and launch of [*Herschel*]{}. In particular, the [*Herschel*]{} Project Scientist G. Pilbratt, and the PACS and SPIRE instrument teams. natexlab\#1[\#1]{} , G., [et al.]{} 2010, A&A, this volume , M. D. & [Giovanelli]{}, R. 1987, , 321, 645 , A., [Boissier]{}, S., [Cortese]{}, L., [et al.]{} 2006, , 651, 811 , A., [et al.]{} 2010, A&A, this volume , A., [Eales]{}, S., [Cortese]{}, L., [et al.]{} 2010, PASP, 122, 261 , A. & [Gavazzi]{}, G. 2006, , 118, 517 , B., [Haynes]{}, M. P., & [Giovanelli]{}, R. 2005, , 130, 1037 , V., [Kotanyi]{}, C., [Balkowski]{}, C., & [van Gorkom]{}, J. H. 1994, , 107, 1003 , R., [Kruegel]{}, E., [Kreysa]{}, E. 1986, , 167, 315 , A., [van Gorkom]{}, J. H., [Kenney]{}, J. D. P., [Crowl]{}, H., & [Vollmer]{}, B. 2009, , 138, 1741 , M., [et al.]{} 2010, A&A, this volume , A., [Boselli]{}, A., [Gavazzi]{}, G., [et al.]{} 2001, , 365, 11 , L., [et al.]{} 2010, A&A, this volume , J., [et al.]{} 2010, A&A, this volume , G., [de Vaucouleurs]{}, A., [Corwin]{}, Jr., H. G., [et al.]{} 1991, [Third Reference Catalogue of Bright Galaxies]{}, ed. d. V. e. a. Roman, de Vaucouleurs , W., [Mair]{}, M., [Kapferer]{}, W., [et al.]{} 2006, , 452, 795 , R. & [Joseph]{}, R. D. 1989, , 239, 347 , B. T., [Dale]{}, D. A., [Bendo]{}, G. , [et al.]{} 2007, , 663, 866 , M., [Krumholz]{}, M. R., [Prochaska]{}, J. X., [Gavazzi]{}, G., & [Boselli]{}, A. 2009, , 697, 1811 , R. & [Haynes]{}, M. P. 1985, , 292, 404 , H., [et al.]{} 2010, A&A, this volume , M., [et al.]{} 2010, A&A, this volume , M. P. & [Giovanelli]{}, R. 1984, , 89, 758 , R. A. & [Kenney]{}, J. D. P. 2004, , 613, 866 , I., [Balogh]{}, M., [De Propris]{}, R., [et al.]{} 2002, , 334, 673 , G., [et al.]{} 2010, A&A, this volume , A., [et al.]{} 2010, A&A, this volume , M., [et al.]{} 2010, A&A, this volume , C. C., [Tuffs]{}, R. J., [Fischera]{}, J., & [V[ö]{}lk]{}, H. 2000, , 354, 480 , C. C., [Tuffs]{}, R. J., [V[ö]{}lk]{}, H., [Pierini]{}, D., [Madore]{}, B. F., 2002, , 567, 221 , M., [et al.]{} 2010, A&A, this volume , H. C., Alexander, P., Clemens, M. S., Green. D. A., Dunne, L., Eales, S. 2004, , 351, 362 , R. J., [Popescu]{}, C. C., [Pierini]{}, D., [et al.]{} 2002, , 139, 37 , B. 2009, , 502, 427 , R. H. 1988, , 72, 427 [^1]: [*Herschel*]{} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA. [^2]: The H[i]{}-deficiency ($def_{\rm HI}$) is defined as the difference, in logarithmic units, between the observed H[i]{} mass and the value expected from an isolated galaxy with the same morphological type and optical diameter [@haynes]. [^3]: The term ‘truncation’ is used here to indicate either an abrupt steepening of the surface-brightness profile or, more simply, a significant reduction in the disk scale-length compared to the optical one. [^4]: http://nedwww.ipac.caltech.edu/ [^5]: See also [@pohlen10] for an analysis of the two grand design Virgo spirals NGC4254 ($def_{\rm HI}\sim-$0.10) and NGC4321 ($def_{\rm HI}\sim$0.35). 
[^6]: This also confirms that the correlation shown in Fig. \[isoradius\] is not qualitatively affected by the definition of submm isophotal radius adopted here.
---
abstract: |
    The variational iteration method is used to solve nonlinear Volterra integral equations. Two approaches are presented, distinguished by the way the Lagrange multiplier is computed.

    **Keywords:** variational iteration method, nonlinear Volterra integral equation, successive approximation method

    **AMS subject classification:** 65R20, 45J99
author:
- 'E. Scheiber[^1]'
title: On the Variational Iteration Method for the Nonlinear Volterra Integral Equation
---

Introduction
============

Ji-Huan He's Variational Iteration Method (VIM) has been applied to a wide range of differential and integral equation problems [@1]. The main ingredient of the VIM is the Lagrange multiplier used to improve an approximation of the solution of the problem to be solved [@2].

In this paper the VIM for solving a nonlinear Volterra integral equation is revisited. As specified in [@30], the Volterra integral equation must first be transformed into an ordinary differential equation or a nonlinear Volterra integro-differential equation by differentiating both sides. The solution of Volterra integral equations by VIM has been exemplified in a series of papers [@34], [@33], [@31], [@32]. When applying the VIM, an initial value problem is derived in order to obtain the Lagrange multiplier. The trick is to choose the variation so that the solution of the resulting initial value problem can be found analytically.

We analyze two variants for computing the Lagrange multiplier. The first variant was used in [@30] and [@31]; we observe that this approach reduces to the successive approximation method. Finally, two numerical examples are given showing that the second approach yields a more rapidly converging version of VIM.

The nonlinear Volterra integral equation\
of the second kind
=========================================

The nonlinear Volterra integral equation of the second kind is [@30]
$$\label{vim1et1}
y(x)=f(x)+\int_0^xK(x,t)F(y(t))\mathrm{d}t,\qquad x\in[0,x_f],$$
where

-   $f(x)$ and $K(x,t)$ are continuously differentiable functions;

-   $F(y)$ has a continuous second-order derivative.

These conditions ensure the existence and uniqueness of the solution to the nonlinear Volterra integral equation (\[vim1et1\]). As a consequence, the solution can be computed using the successive approximation method (SAM)
$$\begin{aligned}
u_{k+1}(x)&=&f(x)+\int_0^xK(x,t)F(u_k(t))\mathrm{d}t,\label{vim1et2}\\
u_0(x)&=& f(x) \nonumber \end{aligned}$$
and $y(x)=\lim_{k\rightarrow\infty}u_k(x).$
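For illustration, a minimal NumPy sketch of the iteration (\[vim1et2\]) on an equidistant mesh is given below; the integral is approximated with the trapezoidal rule, mirroring the implementation details reported in the numerical-results section. The function names, the tolerance, and the iteration cap are our own choices for this sketch, not part of the method itself.

```python
import numpy as np

def sam_solve(f, K, F, xf, n, tol=1e-5, max_iter=100):
    """Successive approximations u_{k+1}(x) = f(x) + int_0^x K(x,t) F(u_k(t)) dt."""
    x = np.linspace(0.0, xf, n + 1)
    h = xf / n
    u = f(x)                                      # u_0(x) = f(x)
    for _ in range(max_iter):
        u_new = np.empty_like(u)
        u_new[0] = f(x[0])
        for i in range(1, n + 1):
            w = np.full(i + 1, h)                 # trapezoidal weights on [0, x_i]
            w[0] = w[-1] = 0.5 * h
            u_new[i] = f(x[i]) + w @ (K(x[i], x[:i + 1]) * F(u[:i + 1]))
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

# First worked example from the numerical-results section: y(x) = cos(x), K = 1, F(y) = y^2
f = lambda s: np.cos(s) - 0.25 * np.sin(2.0 * s) - 0.5 * s
y = sam_solve(f, lambda xi, t: 1.0, lambda u: u ** 2, np.pi, 30)
print(np.max(np.abs(y - np.cos(np.linspace(0.0, np.pi, 31)))))
```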
The variational iteration method
================================

The main ingredient of the VIM is the Lagrange multiplier used to improve the computed approximations relative to a given iterative method.

The initial approach {#the-initial-approach .unnumbered}
--------------------

We recall the VIM applied to the nonlinear Volterra integral equation as it was presented in [@30], [@31]. In this approach the variation of the unknown function in the nonlinear term is neglected, resulting in an easily solvable initial value problem for the Lagrange multiplier.

The derivative of (\[vim1et1\]) is used in VIM for the Volterra integral equation
$$\label{vim1et3}
u_{k+1}(x)=u_k(x)+\int_0^x\Lambda(t)\left(u'_k(t)-f'(t)-\frac{\mathrm{d}}{\mathrm{d}t}\int_0^tK(t,s)F(u_k(s))\mathrm{d}s\right)\mathrm{d}t.$$
The variation is not applied to the nonlinear term $F(u_k(s))$ and the above equality is rewritten as
$$\label{vim1et4}
u_{k+1}(x)=u_k(x)+\int_0^x\Lambda(t)\left(u'_k(t)-f'(t)-\frac{\mathrm{d}}{\mathrm{d}t}\int_0^tK(t,s)F(\tilde{u}_k(s))\mathrm{d}s\right)\mathrm{d}t.$$
If $u_{k+1}(x)=y(x)+\delta u_{k+1}(x),$ $u_k(x)=y(x)+\delta u_k(x),$ but $\tilde{u}_k(x)=y(x),$ then it follows that
$$\label{vim1et5}
\delta u_{k+1}(x)=\delta u_k(x)+\int_0^x\Lambda(t)\delta u'_k(t)\mathrm{d}t.$$
After an integration by parts, and using $\delta u_k(0)=0,$ we obtain
$$\delta u_{k+1}(x)=(1+\Lambda(x))\delta u_k(x)-\int_0^x\Lambda'(t)\delta u_k(t)\mathrm{d}t.$$
In order for $u_{k+1}$ to be a better approximation than $u_k,$ it is required that $\Lambda$ be the solution of the following initial value problem
$$\begin{aligned}
\Lambda'(t)&=&0,\qquad t\in[0,x], \label{vim1et6}\\
\Lambda(x)&=&-1.\label{vim1et7}\end{aligned}$$
The solution of this initial value problem is $\Lambda(t)=-1.$ Substituting this solution into (\[vim1et3\]) we get
$$u_{k+1}(x)=u_k(x)-\int_0^x\left(u'_k(t)-f'(t)-\frac{\mathrm{d}}{\mathrm{d}t}\int_0^tK(t,s)F(u_k(s))\mathrm{d}s\right)\mathrm{d}t=$$
$$=f(x)+\int_0^xK(x,s)F(u_k(s))\mathrm{d}s+\left(u_k(0)-f(0)\right)=$$
$$=f(x)+\int_0^xK(x,s)F(u_k(s))\mathrm{d}s.$$
Thus we recover the successive approximation method (\[vim1et2\]).

Another approach {#another-approach .unnumbered}
----------------

We now take into account, partially, the variation of the nonlinear term containing the unknown function. In (\[vim1et3\]) we apply Leibniz's rule of differentiation under the integral sign
$$\label{vim1et12}
u_{k+1}(x)=u_k(x)+$$
$$+\int_0^x\Lambda(t)\left(u'_k(t)-f'(t)-K(t,t)F(u_k(t))-\int_0^t\frac{\partial K(t,s)}{\partial t}F(u_k(s))\mathrm{d}s\right)\mathrm{d}t.$$
We require that the variation does not affect the integral term, i.e.,
$$u_{k+1}(x)=u_k(x)+$$
$$+\int_0^x\Lambda(t)\left(u'_k(t)-f'(t)-K(t,t)F(u_k(t))-\int_0^t\frac{\partial K(t,s)}{\partial t}F(\tilde{u}_k(s))\mathrm{d}s\right)\mathrm{d}t.$$
It follows that
$$\label{vim1et8}
\delta u_{k+1}(x)=\delta u_k(x)+$$
$$+\int_0^x\Lambda(t) \left(\delta u'_k(t)-K(t,t)\left(F(u_k(t))-F(y(t))\right)\right)\mathrm{d}t=$$
$$=\delta u_k(x)+\int_0^x\Lambda(t) \left(\delta u'_k(t)-K(t,t)F'(y(t))\delta u_k(t)\right)\mathrm{d}t+O((\delta u_k)^2)=$$
$$=(1+\Lambda(x))\delta u_k(x)-\int_0^x\left(\Lambda'(t)+\Lambda(t)K(t,t)F'(y(t))\right)\delta u_k(t)\mathrm{d}t+O((\delta u_k)^2).$$
Again, in order for $u_{k+1}$ to be a better approximation than $u_k,$ it is required that $\Lambda$ be the solution of the following initial value problem, for $t\in[0,x],$
$$\begin{aligned}
& \Lambda'(t)+\Lambda(t)K(t,t) F'(y(t))=0, \label{vim1et9}\\
&\Lambda(x)=-1.\label{vim1et10}\end{aligned}$$
The solution of this initial value problem (\[vim1et9\])-(\[vim1et10\]) is given by
$$\label{vim1et11}
\Lambda(t)=-e^{\int_t^xK(s,s)F'(y(s))\mathrm{d}s}.$$
Because $y(s)$ is an unknown function, the following problem is considered instead of (\[vim1et9\])-(\[vim1et10\])
$$\begin{aligned}
&\Lambda'(t)+\Lambda(t)K(t,t) F'(u_k(t))=0, \label{vim1et90}\\
&\Lambda(x)=-1\label{vim1et91}\end{aligned}$$
with the solution denoted by $\Lambda_k(t),$
$$\Lambda_k(t)=-e^{\int_t^xK(s,s)F'(u_k(s))\mathrm{d}s}.$$
Our next goal is to find a convenient form to implement (\[vim1et12\]), which is equivalent to (\[vim1et3\]). Denoting
$$v_{k+1}(t)=f(t)+\int_0^tK(t,s)F(u_k(s))\mathrm{d}s,$$
(\[vim1et3\]) may be rewritten as
$$u_{k+1}(x)=u_k(x)+\int_0^x\Lambda_k(t)u'_k(t)\mathrm{d}t-$$
$$-\int_0^x\Lambda_k(t)\left(f'(t)+\frac{\mathrm{d}}{\mathrm{d}t}\int_0^tK(t,s)F(u_k(s))\mathrm{d}s\right)\mathrm{d}t.$$
After integration by parts in the two integrals we obtain
$$u_{k+1}(x)=(1+\Lambda_k(x))u_k(x)-\Lambda_k(0)u_k(0)-\int_0^x\Lambda_k'(t)u_k(t)\mathrm{d}t-$$
$$-\Lambda_k(x)v_{k+1}(x)+\Lambda_k(0)f(0)+\int_0^x\Lambda_k'(t)v_{k+1}(t)\mathrm{d}t.$$
Considering (\[vim1et10\]) and that $u_k(0)=f(0),$ the above equality becomes
$$\label{vim1et13}
u_{k+1}(x)=v_{k+1}(x)+\int_0^x\Lambda_k'(t)\left(v_{k+1}(t)-u_k(t)\right)\mathrm{d}t,$$
where
$$\Lambda_k'(t)=-\Lambda_k(t)K(t,t)F'(u_k(t))=e^{\int_t^xK(s,s)F'(u_k(s))\mathrm{d}s}K(t,t)F'(u_k(t)).$$
Comparing SAM with this version of VIM, we have found experimentally that VIM needs a smaller number of iterations to satisfy a stopping condition, e.g., that the absolute error between two successive approximations is less than a given tolerance.

Numerical results
=================

On an equidistant mesh $x_i=i h,\ i\in\{0,1,\ldots,n\},$ with $h=\frac{x_f}{n},$ in the interval $[0,x_f],$ the numerical solution is $(u_i^{(k)})_{0\le i\le n},$ where
$$u_i^{(k)}\approx u_k(x_i).$$
Implementation details:

-   The initial approximations were chosen as $u_i^{(0)}=f(x_i),\ i\in\{0,1,\ldots,n\};$

-   $u_0^{(k)}=f(x_0),$ for any $k\in\mathbb{N};$

-   The component $u_i^{(k+1)}$ is computed using (\[vim1et13\]), for $x=x_i$. The integrals were computed using the trapezoidal rule of numerical integration;

-   The stopping condition was $\max_{0\le i\le n}|u_i^{(k+1)}-u_i^{(k)}|<\varepsilon.$

([@30], p. 241, Example 3)
$$\label{vim1ex3}
y(x)=\cos{x}-\frac{1}{4}\sin{2x}-\frac{1}{2}x+\int_0^xy^2(t)\mathrm{d}t$$
with the solution $y(x)=\cos{x}.$ The evolution of the computations for SAM and VIM is given in the next table.

| Iteration | SAM error | VIM error |
|----------:|----------:|----------:|
| 1         | 5.200451  | 0.986121  |
| 2         | 3.280284  | 0.486573  |
| 3         | 1.028282  | 0.092114  |
| 4         | 1.167517  | 0.005001  |
| 5         | 0.839565  | 0.000094  |
| 6         | 0.493593  | 0.000001  |
| 7         | 0.198224  |           |
| 8         | 0.075078  |           |
| 9         | 0.023424  |           |
| 10        | 0.006654  |           |
| 11        | 0.001705  |           |
| 12        | 0.000403  |           |
| 13        | 0.000089  |           |
| 14        | 0.000018  |           |
| 15        | 0.000004  |           |

In the above table, the error reported at iteration $k$ is $\max_{0\le i\le n}|u_i^{(k)}-u_i^{(k-1)}|,$ computed separately for SAM and for VIM. The parameters used to obtain the above results were: $x_f=\pi,\ n=30,\ \varepsilon=10^{-5}.$ The maximum absolute error between the computed numerical solution and the exact solution was 0.003538. The plots of the numerical solution and of the exact solution of (\[vim1ex3\]) are given in Fig. \[vim1pex3\].

![Plot of the numerical solution and the exact solution.[]{data-label="vim1pex3"}](ex3.pdf){width="10cm" height="8cm"}
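For concreteness, a minimal NumPy sketch of one VIM sweep (\[vim1et13\]) on this mesh is shown below; it uses the same trapezoidal quadrature and the expression for $\Lambda_k'(t)$ above. This is an illustrative reimplementation with our own helper names, not the code used to produce the tables, and a fixed number of sweeps stands in for the $\varepsilon$-based stopping test.

```python
import numpy as np

def int_from(g, h, i):
    """Trapezoidal approximations of int_{x_j}^{x_i} g(s) ds for j = 0, ..., i."""
    out = np.zeros(i + 1)
    for j in range(i - 1, -1, -1):
        out[j] = out[j + 1] + 0.5 * h * (g[j] + g[j + 1])
    return out

def vim_sweep(u, x, h, f, K, F, dF):
    """One application of (vim1et13): u_{k+1} from u_k on the mesh x."""
    n = len(x) - 1
    v = np.empty_like(u)
    u_new = np.empty_like(u)
    v[0] = u_new[0] = f(x[0])
    for i in range(1, n + 1):
        w = np.full(i + 1, h)
        w[0] = w[-1] = 0.5 * h
        v[i] = f(x[i]) + w @ (K(x[i], x[:i + 1]) * F(u[:i + 1]))     # SAM update v_{k+1}
    g = K(x, x) * dF(u)                                              # K(s,s) F'(u_k(s))
    for i in range(1, n + 1):
        lam_prime = np.exp(int_from(g, h, i)) * g[:i + 1]            # Lambda_k'(t) at x_0..x_i
        w = np.full(i + 1, h)
        w[0] = w[-1] = 0.5 * h
        u_new[i] = v[i] + w @ (lam_prime * (v[:i + 1] - u[:i + 1]))
    return u_new

# Example 3 above: K(x,t) = 1, F(y) = y^2, x_f = pi, n = 30
f = lambda s: np.cos(s) - 0.25 * np.sin(2.0 * s) - 0.5 * s
K = lambda xi, t: np.ones_like(np.asarray(t, dtype=float))
F, dF = (lambda y: y ** 2), (lambda y: 2.0 * y)
x = np.linspace(0.0, np.pi, 31)
u = f(x)
for _ in range(6):
    u = vim_sweep(u, x, x[1] - x[0], f, K, F, dF)
print(np.max(np.abs(u - np.cos(x))))      # comparable to the error reported above
```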
([@30], p. 240, Example 2)
$$\label{vim1ex2}
y(x)=e^x-\frac{1}{3}xe^{3x}+\frac{1}{3}x+\int_0^xxy^3(t)\mathrm{d}t$$
with the solution $y(x)=e^x.$ The corresponding results are given in the next table and in Fig. \[vim1pex2\].

| Iteration | SAM error | VIM error |
|----------:|----------:|----------:|
| 1         | 1.314327  | 4.085689  |
| 2         | 0.990523  | 1.844170  |
| 3         | 1.791381  | 1.202564  |
| 4         | 1.733644  | 1.073406  |
| 5         | 0.732878  | 1.111831  |
| 6         | 0.464864  | 0.700407  |
| 7         | 0.424189  | 0.13054   |
| 8         | 0.409519  | 0.004592  |
| 9         | 0.398283  | 0.00009   |
| 10        | 0.381286  | 0.000002  |
| 11        | 0.352116  |           |
| 12        | 0.307541  |           |
| ⋮         | ⋮         |           |
| 25        | 0.000011  |           |
| 26        | 0.000003  |           |

The parameters used to obtain these results were: $x_f=1.0,\ n=101,\ \varepsilon=10^{-5}.$ The maximum absolute error between the computed numerical solution and the exact solution was 0.056763.

![Plot of the numerical solution and the exact solution.[]{data-label="vim1pex2"}](ex2.pdf){width="10cm" height="8cm"}

Conclusions
===========

1.  The VIM as presented in [@30], [@31], with the Lagrange multiplier $\Lambda(t)=-1$ for every $t,$ reduces to the successive approximation method.

2.  Comparing SAM with VIM, we have found experimentally that VIM needs a smaller number of iterations to satisfy a stopping condition, e.g., that the absolute error between two successive approximations is less than a given tolerance.

[999]{}

Inokuti M., Sekine H., Mura T., 1980, *General Use of the Lagrange Multiplier in Nonlinear Mathematical Physics.* In *Variational Methods in Mechanics and Solids,* ed. Nemat-Nasser S., Pergamon Press, 156-162.

He J.H., 2007, *Variational iteration method - Some recent results and new interpretations.* J. Comput. Appl. Math., 207, 3-17.

Porshokouhi M.G., Ghambari B., Rashidi M., 2011, *Variational Iteration Method for Solving Volterra and Fredholm Integral Equations of Second Kind.* Gen. Math. Notes, **2**, no. 1, 143-148.

Bani issa M.Sh., Hamoud A.A., Giniswamy, Ghadle K.P., 2019, *Solving nonlinear Volterra integral equations by using numerical techniques.* Int. J. Adv. Appl. Math. and Mech., **6**(4), 50-54.

Shakeri S., Saadati R., Vaezpour S.M., Vahidi J., 2009, *Variational Iteration Method for Solving Integral Equations.* J. of Applied Sciences, **9** (4), 799-800.

Wazwaz A-M., 2015, *A First Course in Integral Equations.* World Scientific, New Jersey.

Xu L., 2007, *Variational iteration method for solving integral equations.* Computers & Mathematics with Applications, **54**, 1071-1078.

[^1]: e-mail: [email protected]
--- abstract: 'We introduce a new class of arbitrary-order exponential time differencing methods based on spectral deferred correction (ETDSDC) and describe a simple procedure for initializing the requisite matrix functions. We compare the stability and accuracy properties of our ETDSDC methods to those of an existing implicit-explicit spectral deferred correction scheme (IMEXSDC). We find that ETDSDC methods have larger accuracy regions and comparable stability regions. We conduct numerical experiments to compare ETD and IMEX spectral deferred correction schemes against a competing fourth-order ETD Runge-Kutta scheme. We find that high-order ETDSDC schemes are the most efficient in terms of function evaluations and overall speed when solving partial differential equations to high accuracy. Our results suggest that high-order ETDSDC schemes are well-suited to work in conjunction with spectral spatial methods or other high-order spatial discritizations. Additionally, ETDSDC schemes appear to be immune to severe order reduction, a problem which affects other ETD and IMEX schemes, including IMEXSDC.' author: - 'Tommaso Buvoli [^1]' bibliography: - 'references\_etd.bib' - 'references\_sdc.bib' - 'references\_other.bib' date: 'May 28, 2014' title: 'A Class of Exponential Integrators based on Spectral Deferred Correction [^2] ' --- [**Keywords:**]{} Spectral deferred correction, exponential time differencing, implicit-explicit, high-order, stiff-systems, spectral methods. Introduction ============ In this paper we present a new class of arbitrary-order exponential time differencing (ETD) methods for solving nonlinear evolution equations of the form $$\phi_t = \Lambda \phi + \mathcal{N}(t,\phi)$$ where $\Lambda$ is a stiff linear operator and $\mathcal{N}$ is a nonlinear operator. Such systems commonly arise when discretizing nonlinear wave equations including Burgers’, nonlinear Schrödinger, Korteweg-de Vries, Kuramoto, Navier-Stokes, and the quasigeostrophic equation. ETD Adams methods [@beylkin1998ELP; @hochbruck2010exponentialreview], ETD Runge-Kutta methods [@cox2002ETDRK4; @KassamTrefethen05ETDRK4; @krogstad2005IF; @hochbruckostermann2005ETDRKSTIFFA; @hochbruckostermann2005ETDRKSTIFFB; @koikari2005rooted; @hochbruck2010exponentialreview], and ETD general linear methods [@ostermann2006general; @hochbruck2010exponentialreview] are well-understood, and many of these schemes perform competitively when integrating nonlinear evolution equations [@grooms2011IMEXETDCOMP; @KassamTrefethen05ETDRK4; @loffeld2013comparative]. Despite these advances, no practical high-order exponential integrators have been developed. High-order ETD Adams methods are largely unusable due to their small stability regions, and there are no ETD Runge-Kutta schemes of order greater than five. Nevertheless, high-order exponential integrators could prove useful if paired with spatial spectral discretizations, especially on periodic domains. Spectral methods exhibit exceptional accuracy and have been shown to be remarkably successful when applied to nonlinear wave equations [@fornberg1998PSM; @trefethen2000SMM; @boyd2013CFSM]. When applying spectral methods on PDEs with smooth solutions, the time integrator often limits the overall order of accuracy. The development of stable, high-order integrators will allow for more accurate numerical simulations at reduced computational costs and will better balance spatial and temporal accuracy. 
In order to develop high-order ETD schemes, we turn our attention to spectral deferred correction methods (SDC), originally developed by Dutt, Greengard, and Rokhlin [@Dutt2000SDC]. SDC methods are a class of high-order, self-starting time integrators for solving ordinary differential equations. By pairing Euler’s method with a Picard integral equation, SDC methods achieve an arbitrary order of accuracy and favorable stability properties. Remarkably, they are simple to implement, even at high order. In the past decade, there has been a continuing effort to analyze and improve these methods [@speck2013multi; @tang2013high; @hansen2011order; @christlieb2009spectral; @huang2006accelerating; @liu2008strong; @layton2005implications; @guttel2013efficient]. In particular, Minion introduced implicit-explicit spectral deferred correction schemes (IMEXSDC) for integrating stiff semilinear systems [@Minion2003IMEX]. To date, these methods remain the only practical arbitrary-order IMEX integrators.

In this paper, we present a new exponential integrator based on spectral deferred correction methods. Our new integrator, which we call ETDSDC, allows for an arbitrary order of accuracy, has favorable stability properties, and outperforms state-of-the-art ETD schemes when low error tolerances are required. In Section \[sec:sdc\], we provide a brief introduction to spectral deferred correction methods before deriving our ETDSDC method and discussing IMEXSDC. In Section \[sec:stability\_accuracy\], we analyze and compare the stability and accuracy regions of these two methods. In Section \[sec:etd\_coefficients\], we discuss two techniques for accurately initializing the coefficients for our ETDSDC method. Finally, in Section \[sec:numerical\_experiments\], we perform numerical experiments comparing our ETDSDC method against IMEXSDC and ETDRK4, a well-known fourth-order exponential integrator [@cox2002ETDRK4].

Spectral Deferred Correction Methods {#sec:sdc}
====================================

In this section, we provide a review of Euler-based spectral deferred correction methods [@Dutt2000SDC], before deriving our ETDSDC method in Section \[subsec:etdsdc\] and the IMEXSDC method [@Minion2003IMEX] in Section \[subsec:imexsdc\]. To introduce SDC methods, we consider a first-order initial value problem of the form
$$\begin{split} & \phi'(t) = F(t,\phi) \\ & \phi(a) = \phi_a \end{split} \label{eq:model_ode_vanilla}$$
where $\phi\in\mathbb{C}^d$ and $F(t,\phi)$ is $\nu$ times differentiable for $\nu \gg 1$. We then shift our attention to a semi-linear first-order initial value problem of the form
$$\begin{aligned} \begin{split} &\phi'(t) = \Lambda \phi + \mathcal{N}(t,\phi) \\ & \phi(a) = \phi_a \end{split} \label{eq:model_ode_semilinear}\end{aligned}$$
where again $\phi\in\mathbb{C}^d$, $\mathcal{N}\in C^\nu$, and $\Lambda$ is a $d\times d$ matrix (not necessarily diagonal). The continuity conditions on $\mathcal{N}(t,\phi)$ and $F(t,\phi)$ are stronger than the Lipschitz continuity required for existence and uniqueness, but they ensure that high-order methods can be applied successfully.

Preliminaries {#sec:preliminaries}
-------------

Spectral deferred correction schemes iteratively improve the accuracy of an approximate solution by repeatedly solving an integral equation that governs the error. This integral equation is of the form
$$y(t) = y(a) + \int^t_a g(s,y(s))ds + r(t), \label{eq:picard_like_eqn}$$
where $r(a)=0$. As first proposed by Dutt et al.
[@Dutt2000SDC], we can approximate the solution to at points $t_0$, $t_1$, $\ldots$, $t_m$ using the implicit ($\ell=1$) or explicit ($\ell=0$) Euler-like method $$y(t_{n+1}) = y(t_n) + h_n g(t_{n+\ell},y(t_{n+\ell})) + r(t_{n+1}), \label{eq:el_method}$$ where $h_n=t_{n+1} - t_n$. To arrive at the error equation of the form (\[eq:picard\_like\_eqn\]), we let $\phi^k(t)$ be an approximate solution to , and let the error be ${E(t) = \phi(t) - \phi^k(t)}$. By considering the integral form of , one arrives at $$\phi(t) = \phi(a) + \int_{a}^{t} F(s,\phi(s)) ds. \label{eq:sdc_vanilla_intergral}$$ Substituting $\phi(t) = \phi^{k}(t) + E(t)$ leads to the integral equation $$E(t) = - \phi^k(t) + \phi^k(a) + E(a) + \int^{t}_{a} F(s,\phi^k(s) + E(s))ds. \label{eq:sdc_vanilla_error}$$ Introducing the residual $$R(t,a,\phi^k) = \left[ \phi^k(a) + \int^{t}_{a} F(s,\phi^k(s)) ds \right] -\phi^k(t) \label{eq:sdc_vanilla_residual_equation}$$ allows us to rewrite as $$E(t) = E(a) + \int^t_a G(s,E(s)) ds + R(t,a,\phi^k), \label{eq:sdc_vanilla_correction_equation} \\$$ where $$G(s,E(s)) = F(s,\phi^k(s) + E(s)) - F(s,\phi^k(s)). \label{eq:G_definition}$$ Rewriting in this manner isolates the residual and the error terms and leads to an equation of the form (\[eq:picard\_like\_eqn\]). The residual $R(t,a,\phi^k)$ depends only on known quantities and can be approximated to arbitrary accuracy via numerical quadrature of the function $F(t,\phi^{k}(t))$. If we consider a single timestep of method (\[eq:el\_method\]) applied to , and suppose that $\phi^k(t)$ is a sufficiently good approximation so that $$\sup_{t\in[t_{n+1},t_n]}\|E(t)\| = O(h^m) \hspace{1em} \text{for} \hspace{1em} h=t_{n+1} - t_n \text{ and } m\in\mathbb{N},$$ then, since $F(t,\phi)$ is Lipchitz continuous in $\phi$, we have that $$\|h G(s,E(s))\| = h\| F(s,\phi^k(s) + E(s)) - F(s,\phi^{k}(s)) \| = O(h^{m+1}).$$ Thus, the Euler-like method (\[eq:el\_method\]) is sufficient for estimating $E(t)$ to in the interval $[t_n,t_{n+1}]$. This approximate error, which we denote by $E^k(t)$, can be used to obtain an $O(h^{n+1})$ accurate solution . This process can be repeated $M$ times to obtain a sequence of increasingly accurate approximations to . To implement this strategy numerically, Dutt et al. proposed to divide each timestep $[t_n,t_{n+1}]$ into $N$ substeps or quadrature nodes which we denote via ${t_{n,1}, \ldots, t_{n,N}}$ [@Dutt2000SDC]. This enables us to represent the approximate solution $\phi^k(t)$ as an interpolating polynomial which passes through the quadrature points. We can then calculate a provisional solution $\phi^1(t)$ at each node using either forward or backward Euler, and obtain a sequence of higher-order approximations $\phi^{k}(t_{n,j}) = \phi^{k-1}(t_{n,j}) + E^{k-1}(t_{n,j})$ by repeatedly approximating the error $E(t)$ at each quadrature node using (\[eq:el\_method\]). The choice of the nodes $t_{n,1},\ldots,t_{n,N}$ affects the quality of the quadrature approximation used to determine . Dutt et al. use Gauss-Legendre points, and Minion has studied the implications of using different quadrature nodes [@layton2005implications]. After $M$ correction sweeps, the order of accuracy at each node is $\min(N,M+1)$, regardless of the choice of quadrature nodes [@hansen2011order; @tang2013high]. To simplify our discussion, we consider only a single timestep of spectral deferred correction from $t_n=0$ to $t_{n+1}$. 
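Before introducing notation for the general case, the following minimal Python/NumPy sketch illustrates one such timestep for the explicit (forward-Euler) variant applied to $\phi' = F(t,\phi)$: a provisional forward-Euler pass over the quadrature nodes followed by $M$ correction sweeps, with the residual integral evaluated by integrating the polynomial interpolant of $F$ exactly. The helper names and node choice are ours and are for illustration only.

```python
import numpy as np

def quad_weights(t_nodes):
    """S[i, j]: weight of F(t_j) in the exact integral of the interpolant over [t_i, t_{i+1}]."""
    N = len(t_nodes)
    S = np.zeros((N - 1, N))
    for j in range(N):
        c = np.array([1.0])                       # Lagrange basis polynomial L_j(t)
        for l in range(N):
            if l != j:
                c = np.polymul(c, np.array([1.0, -t_nodes[l]]) / (t_nodes[j] - t_nodes[l]))
        C = np.polyint(c)
        for i in range(N - 1):
            S[i, j] = np.polyval(C, t_nodes[i + 1]) - np.polyval(C, t_nodes[i])
    return S

def sdc_step(F, t_nodes, phi0, M):
    """One explicit-Euler SDC step from t_nodes[0] to t_nodes[-1]; order is min(len(t_nodes), M+1)."""
    N = len(t_nodes)
    S = quad_weights(t_nodes)
    phi = np.empty(N)
    phi[0] = phi0
    for i in range(N - 1):                        # provisional solution: forward Euler
        phi[i + 1] = phi[i] + (t_nodes[i + 1] - t_nodes[i]) * F(t_nodes[i], phi[i])
    for k in range(M):                            # correction sweeps
        Fk = np.array([F(t, p) for t, p in zip(t_nodes, phi)])
        new = np.empty(N)
        new[0] = phi0
        for i in range(N - 1):
            h_i = t_nodes[i + 1] - t_nodes[i]
            # forward-Euler sweep for the error equation, written directly for the corrected solution
            new[i + 1] = new[i] + h_i * (F(t_nodes[i], new[i]) - Fk[i]) + S[i, :] @ Fk
        phi = new
    return phi[-1]

# e.g. phi' = -phi on [0, 0.1] with 5 Chebyshev nodes and 4 sweeps; exact value exp(-0.1)
t = 0.1 * 0.5 * (1.0 - np.cos(np.pi * np.arange(5) / 4))
print(sdc_step(lambda t, y: -y, t, 1.0, M=4), np.exp(-0.1))
```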
We find it most convenient to describe SDC methods in terms of normalized quadrature points which reduce to the quadrature points if the stepsize $h=1$. Throughout the rest of this paper we will make extensive use of the following definitions: ----------- ----------------------------- ---------------------- ---------------------- Stepsize: $h = t_{n+1} - t_{n}$ Normalized nodes: $\tau_i = t_{n,i}/h$ Substeps: $h_i = t_{n,i+1} - t_{n,i}$ Normalized substeps: $\eta_i = h_i/h$ ----------- ----------------------------- ---------------------- ---------------------- We will use the notation SDC$_N^M$ to denote a spectral deferred correction method which uses the quadrature points $\{\tau_i\}_{i=1}^N$, and performs $M$ correction sweeps. For brevity we also use the variables $\phi_i^k = \phi^k(t_{n,i})$ and $E_i^k = E^k(t_{n,i})$ to denote the approximate solution and the error at the $i$th quadrature node after $k$ correction sweeps. Euler-Based Spectral Deferred Correction Methods {#subsec:sdc_intro} ------------------------------------------------ We now describe Euler-based spectral deferred correction methods in detail. Implicit and Explicit SDC methods use Implicit or Explicit Euler respectively to determine the provisional solution $\phi^1(t)$ at the quadrature points $h\tau_i$. Applying the Euler-like method (\[eq:el\_method\]) to one obtains an approximation of the error $E(t)$ at each of the quadrature points. Every step of this Euler-like method requires approximating the residual term; we describe this process below. : During the $k^{\text{th}}$ correction sweep, $\phi^{k}(t)$ is known at the quadrature points. The residual term (\[eq:sdc\_vanilla\_residual\_equation\]) can be approximated for $t=h\tau_{i+1}$ and $a=h\tau_i$ at the cost of $N$ function evaluations $F(h\tau_i,\phi_i^k)$ via $$\hat{R}(h\tau_{i+1},h\tau_{i},\phi^k) = \phi^k(h\tau_i) -\phi^k(h\tau_{i+1}) + I^{i+1}_{i}(\phi^k)$$ where $I_{i}^{i+1}(\phi^k)$ denotes the $N$th order numerical quadrature approximation to $$\int^{h\tau_{i+1}}_{h\tau_{i}} F(s, \phi^{k}(s)) ds. \label{eq:sdc_vanilla_quadrature}$$ The coefficients for this numerical quadrature can be obtained for general quadrature points using an algorithm which we propose in Section \[sec:etd\_coefficients\]. For Chebyshev quadrature points, a fast $O(N\log(N))$ matrix-free algorithm exists for computing (\[eq:sdc\_vanilla\_quadrature\]) [@norris1999spectral]. Given the initial condition $\phi_1^1 = \phi(a)$, we can express a single timestep of an SDC$_N^M$ method algorithmically: [|X|]{} Note: $E_1^k$ = 0\ [* Initial Solution (Euler):*]{} $\phi^1_{i+1} = \phi^1_{i} + h_{i} F(h\tau_{i+\ell}, \phi^1_{i+\ell})$ $E_{i+1}^k = E_{i}^k + h_i G(h\tau_{i+\ell}, E^k, \phi^k) + \hat{R}(h\tau_{i+1},h\tau_{i},\phi^k)$ $\phi^{k+1}_{i+1} = \phi^{k}_{i+1} + E^k_{i+1}$\ By substituting the expression for $E^k_{i+1}$ into the update formula for $\phi^{k+1}_{i+1}$, noting that $\phi^{k+1}_i = \phi^k_i + E^k_i$, and using , one arrives at the following direct update formula: $$\begin{aligned} \phi^{k+1}_{i+1} = \phi^{k+1}_{i} + h_i\left[F(h\tau_{i+\ell},\phi_{i+\ell}^{k+1}) - F(h\tau_{i+\ell}, \phi_{i+\ell}^k) \right] + I_i^{i+1}(\phi^k). \end{aligned}$$ This compact form for spectral deferred correction methods was first mentioned in [@Minion2003IMEX] but was not recommended due to potential numerical rounding errors. However, in our numerical experiments, we find that this compact formula leads to simpler codes and equally accurate results. 
We therefore make use of this compact update formula in all of our codes.

ETD Spectral Deferred Correction Methods {#subsec:etdsdc}
----------------------------------------

We now introduce a new class of exponential integrators based on spectral deferred correction for solving (\[eq:model\_ode\_semilinear\]), which we repeat here for convenience:
$$\begin{split} &\phi'(t) = \Lambda \phi + \mathcal{N}(t,\phi), \\ & \phi(a) = \phi_a. \end{split}$$
To derive ETD spectral deferred correction schemes, we seek an error equation of the form
$$y(t) = y(a)e^{\Lambda(t-a)} + \int^t_a e^{\Lambda(t-s)} g(s,y(s))ds + r(t). \label{eq:etd_picard_like_eqn}$$
We propose to approximate the solution to (\[eq:etd\_picard\_like\_eqn\]) by replacing $g(s,y(s))$ with a one-point approximation, leading to the explicit ($\ell=0$) or implicit ($\ell = 1$) ETD Euler-like method
$$y(t_{n+1}) = y(t_n) e^{h \Lambda} + \Lambda^{-1} \left[ e^{h \Lambda}-I \right] g(t_{n+\ell},y(t_{n+\ell})) + r(t_{n+1}). \label{eq:etd_el_method}$$
To arrive at an error equation of the form (\[eq:etd\_picard\_like\_eqn\]), we let $\phi^k(t)$ be an approximate solution of (\[eq:model\_ode\_semilinear\]), and define the error to be $E(t) = \phi(t) - \phi^k(t)$. Applying variation of constants, we obtain the integral form of (\[eq:model\_ode\_semilinear\]),
$$\phi(t) = \phi(a) e^{\Lambda (t-a)} + \int_{a}^{t} e^{\Lambda (t-s)} \mathcal{N}(s,\phi(s)) ds. \label{eq:sdc_etd_intergral}$$
Substituting $\phi(t) = \phi^{k}(t) + E(t)$ leads to the integral equation
$$E(t) = -\phi^{k}(t) + \left(\phi^{k}(a) + E(a)\right) e^{\Lambda(t-a)} + \int_{a}^t e^{\Lambda (t-s)} \mathcal{N}(s, \phi^{k}(s)+E(s)) ds. \label{eq:sdc_etd_error}$$
Introducing the residual
$$R_{e}(t,a,\phi^k) = \left[ \phi^k(a)e^{ \Lambda (t-a)} + \int_{a}^t e^{\Lambda (t-s)} \mathcal{N}(s, \phi^k(s)) ds \right] -\phi^k(t) \label{eq:sdc_etd_residual_equation}$$
allows us to rewrite (\[eq:sdc\_etd\_error\]) as
$$E(t) = E(a)e^{ \Lambda (t-a)} + \int_{a}^t e^{\Lambda (t-s)} H(s,E(s)) ds + R_{e}(t,a,\phi^k), \label{eq:sdc_etd_correction_equation}$$
where
$$H(s,E(s)) = \mathcal{N}(s, \phi^{k}(s)+E(s)) - \mathcal{N}(s, \phi^{k}(s)). \label{eq:H_definition}$$
Now that we have obtained an error equation of the form (\[eq:etd\_picard\_like\_eqn\]), we are free to proceed in the same manner as for Euler-based spectral deferred correction. The provisional solution $\phi^1(t)$ is calculated at the quadrature points using either implicit or explicit ETD Euler, and the error at each quadrature point is estimated using (\[eq:etd\_el\_method\]). As before, we describe the computation of the residual term. During the $k^{\text{th}}$ correction sweep, $\phi^{k}(t)$ is known at the quadrature points. The residual (\[eq:sdc\_etd\_residual\_equation\]) can be approximated for ${t=h\tau_{i+1}}$ and ${a=h\tau_i}$ at the cost of $N$ function evaluations via
$$\hat{R}_e(h\tau_{i+1},h\tau_{i},\phi^k(t)) = \phi^{k}(h\tau_i) e^{ h_i \Lambda} -\phi^k(h\tau_{i+1}) + W^{i+1}_{i}(\phi^k)$$
where $W_{i}^{i+1}(\phi^k)$ denotes the weighted $N$-point numerical quadrature approximation to
$$\int_{h\tau_i}^{h\tau_{i+1}} e^{\Lambda (h\tau_{i+1}-s)} \mathcal{N}(s, \phi^{k}(s)) ds \label{eq:etd_quad_approx}$$
with weight function $w(s) = e^{\Lambda (h\tau_{i+1}-s)}$. We describe in detail how to obtain the coefficients for this weighted quadrature in Section \[sec:etd\_coefficients\]. We use ETDSDC$_N^M$ to denote an ETD spectral deferred correction method which performs $M$ correction sweeps on the quadrature points $\{\tau_i\}_{i=1}^N$.
Given the initial condition $\phi_1^1=\phi(a)$, we can express a single timestep of an ETDSDC$_N^M$ method algorithmically: [|X|]{} Note: $E_1^k$ = 0\ [* Initial Solution (ETD Euler):*]{} $\displaystyle \phi^1_{i+1} = \phi^1_{i} e^{h_i \Lambda} + \Lambda^{-1}\left[ e^{h_i \Lambda}-I \right] \mathcal{N}(h\tau_{i+\ell}, \phi^1_{i+\ell})$ $\displaystyle E_{i+1}^k = E_{i}^k e^{h_i \Lambda} + \Lambda^{-1} \left[ e^{h_i \Lambda}-I \right] H(h\tau_{i+\ell}, E^k, \phi^k) + \hat{R}_e(h\tau_{i+1},h\tau_{i},\phi^k)$ $\phi^{k+1}_{i+1} = \phi^{k}_{i+1} + E^k_{i+1}$\ By substituting the expression for $E^k_{i+1}$ into the update formula for $\phi^{k+1}_{i+1}$, noting that $\phi^{k+1}_i = \phi^k_i + E^k_i$, and using , one arrives at the following direct update formula: $$\begin{aligned} \phi^{k+1}_{i+1} = \phi^{k+1}_{i} e^{h_i \Lambda} + \Lambda^{-1} \left[e^{h_i \Lambda}-1\right] \left[\mathcal{N}(h\tau_{i+\ell},\phi_{i+\ell}^{k+1}) - \mathcal{N}(h\tau_{i+\ell}, \phi_{i+\ell}^k) \right] + W_i^{i+1}(\phi^k). \label{eq:etdsdc_compact} \end{aligned}$$ Though we have derived both an implicit and explicit exponential integrator, we will be solely considering the explicit exponential integrator throughout the rest of this paper. IMEX Spectral Deferred Correction {#subsec:imexsdc} --------------------------------- We now briefly discuss Minion’s IMEXSDC$^M_N$ method for solving [@Minion2003IMEX]. The provisional solution $\phi^1(t)$ is calculated using IMEX Euler. The error and residual equations can be derived by repeating the procedure outlined in Section \[sec:preliminaries\] with ${F(t,y) = \Lambda y + \mathcal{N}(t,y)}$. This leads to $$\begin{aligned} & E(t) = E(a) + \int_{a}^{t} \left[\Lambda E(s) + G(s,E(s)) \right]ds + R(t,a,\phi^k), \label{eq:sdc_imex_correction_equation} ~~~\\ & H(s,E(s)) = \mathcal{N}(s,E(s) + \phi^{k}(s)) - \mathcal{N}(s,\phi^{k}(s)) \\ & R(t,a,\phi^k) = \left[ \phi^{k}(a) + \int_{a}^{t} \left[ \Lambda \phi^{k}(s) + \mathcal{N}(s,\phi^{k}(s)) \right] ds \right]-\phi^{k}(t). \label{eq:sdc_imex_residual_equation} \end{aligned}$$ Notice that is of the form $$y(t) = y(a) + \int^t_a \left[ \Lambda y(s) + g(s,y(s)) \right] ds + r(t). \label{eq:imex_picard_like_eqn}$$ We can approximate by treating the linear term implicitly and the nonlinear term explicitly, yielding the IMEX Euler-like scheme $$y(t_{n+1}) = (I - h\Lambda)^{-1} \left[y(t_n) + h g(t_{n+\ell},y(t_{n+\ell})) + r(t_{n+1})\right]. \label{eq:imex_el_method}$$ The residual term (\[eq:sdc\_imex\_residual\_equation\]) is approximated exactly as described in Section \[subsec:sdc\_intro\], except the integrand in is now ${\Lambda \phi^{k}(s) + N(s,\phi^{k}(s))}$. We denote the quadrature approximation to the residual for IMEXSDC by $\tilde{R}(t,a,\phi)$. 
Given the initial condition $\phi_1^1=\phi(a)$, we can express a single timestep of an IMEXSDC$_N^M$ method algorithmically: [|X|]{} Note: $E_1^k$ = 0\ [* Initial Solution (IMEX Euler):*]{} $\displaystyle \phi^1_{i+1} = \left[I-h_i \Lambda \right]^{-1}\left[ \phi^{1}_{i} + h_i N(h\tau_i,\phi^{1}_i)\right] $ $\displaystyle E_{i+1}^k = \left[I-h_i \Lambda \right]^{-1} \left[E^k_i + h_i H(h\tau_i,E, \phi^{k}) + \tilde{R}(h\tau_{i+1},h\tau_i,\phi^k) \right]$ $\phi^{k+1}_{i+1} = \phi^{k}_{i+1} + E^k_{i+1}$\ By rewriting the error formula implicitly so that $$E_{i+1}^k = \left[E^k_i + h_i(\Lambda E^k_{i+1} + H(h\tau_i,E, \phi^{k})) + \tilde{R}(h\tau_{i+1},h\tau_i,\phi^k) \right],$$ substituting this expression into the update formula for $\phi^{k+1}_{i+1}$, and noting that $$E_{i+1}^k = \phi^{k+1}_{i+1} - \phi^{k}_{i+1}, \hspace{2em} \phi^{k+1}_i = \phi^k_i + E^k_i$$ one arrives at the following direct update formula: $$\phi^{k+1}_{i+1} = \left[I-h_i \Lambda \right]^{-1} \left[\phi_{i}^{k+1} - (h_i \Lambda) \phi_{i+1}^{k} + h_i(\mathcal{N}(h\tau_i,\phi^{k+1}_i) - \mathcal{N}(h\tau_i,\phi_{i}^{k}) ) + \tilde{I}_i^{i+1}(\phi^k)\right]$$ where $\tilde{I}^{i+1}_i(\phi^k)$ denotes the numerical quadrature approximation to $$\int^{h\tau_{i+1}}_{h\tau_{i}} \Lambda \phi^k(s) + \mathcal{N}(s, \phi^{k}(s)) ds.$$ Stability and Accuracy {#sec:stability_accuracy} ====================== Determining the stability properties of IMEX and ETD integrators is non-trivial. A commonly used approach is to consider the model problem $$\begin{split} & \phi' = \mu \phi + \lambda \phi \\ & \phi(0) = 1 \end{split} \label{eq:stability_model_problem}$$ where $\mu,\lambda \in \mathbb{C}$ and the terms $\mu \phi$, $\lambda \phi$ act as the linear and nonlinear term respectively. This model problem highlights stability for when it is possible to simultaneously diagonalize both the linear and nonlinear operators around a fixed point. Though this analysis does not extend to general linear systems, it has proven useful for predicting stability properties of IMEX and ETD methods on a variety of partial differential equations [@grooms2011IMEXETDCOMP]. Applying an ETDSDC$_{N}^{M}$ or IMEXSDC$_{N}^{M}$ method on leads to a recursion relation of the form $$\phi(t_{n+1}) = \psi^M_N(r,z) \phi(t_n)$$ where $r=\mu h$, $z=\lambda h$, and $h$ denotes the timestep. As with all one-step methods, the stability region is defined as $$\mathcal{S} = \{(r,z)\in \mathbb{C}^2, |\psi^M_N(r,z)|\le 1 \}.$$ We list the stability functions $\psi^M_N(r,z)$ for ETDSDC$^M_N$ and IMEXSDC$^M_N$ schemes in Table \[tab:stab\_funs\]. [X]{} [**ETDSDC Stability Functions $\psi_1^k(r,z) = 1$**]{}\ $$\renewcommand{{1.5}}{3} \begin{array}{l c l} \psi_{i+1}^{1} &=& \displaystyle e^{r \eta_i} \psi^{1}_{i} + \frac{e^{r \eta_i} - 1}{r} z \psi_{i}^{1} \\ \psi_{i+1}^{k+1} &=& \displaystyle e^{r \eta_i} \psi_{i}^{k+1} + \frac{e^{r \eta_i} - 1}{r} z (\psi_{i}^{k+1} - \psi_{i}^{k}) + z\sum_{j=1}^N \mathbf{W}_{i,j} \psi_{j}^{k} \hspace{1em} \\ \text{where} & & \displaystyle \mathbf{W}_{ij} = \int^{\tau_{i+1}}_{\tau_i} e^{r(\tau_{i+1}-s)} L_j(s) ds, \hspace{2em} L_j(s) = \prod^{N}_{\substack{l=1\\l \neq j}} \frac{(s-\tau_l)}{(\tau_j-\tau_l)}. 
\end{array} \label{eq:etdsdc_stability_function}$$\ [X]{} [**IMEXSDC Stability Functions $\psi_1^k(r,z) = 1$**]{}\ $$\renewcommand{{1.5}}{3} \begin{array}{l c l} \psi^1_{i+1} &=& \displaystyle \left(\frac{1 + z \eta_i}{1-r\eta_i} \right) \psi_{i}^{1} \\ \psi^{k+1}_{i+1} &=& \displaystyle \left(\frac{\psi_{i}^{k+1} + \eta_iz \left( \psi^{k+1}_{i} - \psi^{k}_{i} \right) - r\eta_i \psi_{i+1}^{k} + (r + z) \sum_{j=1}^N \mathbf{I}_{i,j} \psi_{j}^{k}}{1-r\eta_i} \right) \\ \text{where} & & \displaystyle \mathbf{I}_{ij} = \int^{\tau_{i+1}}_{\tau_i} L_j(s) ds, \hspace{2em} L_j(s) = \prod^{N}_{\substack{l=1\\l \neq j}} \frac{(s-\tau_l)}{(\tau_j-\tau_l)}. \end{array} \label{eq:imexsdc_stability_function}$$ We choose to analyze stability for PDEs with linear dispersion and dissipation; thus, $r = h\mu$ and $z = h\lambda$ are complex-valued. Several strategies have been proposed for effectively visualizing the resulting four-dimensional stability region. As in [@beylkin1998ELP; @cox2002ETDRK4; @krogstad2005IF], we choose to overlay two-dimensional slices of the stability regions, each corresponding to a fixed $r$ value. For the sake of brevity, we focus our attention on 8th order methods where ${N=8},~{M=7}$ and on 16th order methods where ${N=16},~{M=15}$. For all methods, we select the Chebyshev quadrature nodes $$\tau_i = \frac{1}{2}\left(1 - \cos\left(\frac{\pi(i-1)}{N-1}\right)\right)\hspace{2em} i=1,\ldots,N.$$ We pick a range of real, imaginary, and complex $r$ values to simulate nonlinear PDEs with varying degrees of linear dispersion and dissipation. We plot stability regions pertaining to $$r \in -1 \cdot [0,30], \hspace{2em} r \in 1i \cdot [0,30], \text{ and} \hspace{2em} r \in \exp(3\pi i/4) \cdot [0,30] \label{eq:r_value_range}$$ in Figure \[fig:stability\]. For these three $r$ ranges, we find that the stability regions of all methods grow as $|r|$ increases. For imaginary $r$, the stability regions for ETDSDC methods temporarily decrease before growing. Though all methods exhibit satisfactory stability properties, IMEXSDC methods allow for coarser timesteps on a wider range of $(r,z)$. Overall, our results suggest that both IMEXSDC methods and ETDSDC methods exhibit good stability properties on a wide range of stiff nonlinear evolution equations. When analyzing spectral deferred correction methods, it is also common to plot accuracy regions. Accuracy regions highlight the restrictions on the stepsize $h$ so that error after one timestep is smaller than $\epsilon>0$. They are simply defined as $$\mathcal{A}_\epsilon = \{(r,z)\in \mathbb{C}^2, |\psi^M_N(r,z) - \exp(r+z)|\le \epsilon \}.$$ They were introduced in [@Dutt2000SDC] for comparing the efficiency of high-order methods, and provide a more detailed picture than stability regions which solely differentiate between convergent and divergent $(r,z)$ pairs. We find that as $|r|$ increases, the accuracy region containing $z=0$ decreases rapidly for ETDSDC$^M_N$ methods and vanishes entirely for IMEXSDC$^M_N$ methods. This behavior can be understood from and . For the ETDSDC$^M_N$ methods it follows that ${\psi_N^M(r,0) = \exp(r h)}$; moreover, since the stability function $\psi_N^M(r,z)$ is continuous, then for any $\epsilon > 0$, there exists a nontrivial accuracy region surrounding $z=0$. The same cannot be said for IMEXSDC schemes since satisfies the weaker relation ${\psi(r,0) = \exp(rh) + O(rh)}$; hence, as $r$ becomes sufficiently large, there need not exist an accuracy region around $z=0$. 
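These stability and accuracy functions are straightforward to evaluate numerically. The following is a rough NumPy/SciPy sketch (our own helper names and indexing, for illustration only) that evaluates the ETDSDC recursion (\[eq:etdsdc\_stability\_function\]) for scalar $r$ and $z$, computing the weights $\mathbf{W}_{i,j}$ by adaptive quadrature rather than by the $\varphi$-function procedure of Section \[sec:etd\_coefficients\]. Testing whether $|\psi^M_N(r,z)|\le 1$ then reproduces individual points of the stability regions, and comparing against $e^{r+z}$ gives points of the accuracy regions.

```python
import numpy as np
from numpy.polynomial import polynomial as P
from scipy.integrate import quad

def chebyshev_nodes(N):
    i = np.arange(1, N + 1)
    return 0.5 * (1.0 - np.cos(np.pi * (i - 1) / (N - 1)))

def lagrange_coeffs(tau, j):
    """Monomial coefficients (lowest order first) of the Lagrange basis polynomial L_j(s)."""
    c = np.array([1.0])
    for l, tl in enumerate(tau):
        if l != j:
            c = P.polymul(c, np.array([-tl, 1.0]) / (tau[j] - tl))
    return c

def etdsdc_psi(r, z, N=8, M=7):
    """psi(r, z) for an ETDSDC method with N Chebyshev nodes and M correction sweeps."""
    tau = chebyshev_nodes(N)
    eta = np.diff(tau)
    phi1 = lambda x: (np.exp(x) - 1.0) / x if x != 0 else 1.0
    # W[i, j] = int_{tau_i}^{tau_{i+1}} exp(r (tau_{i+1} - s)) L_j(s) ds
    W = np.zeros((N - 1, N), dtype=complex)
    for i in range(N - 1):
        for j in range(N):
            c = lagrange_coeffs(tau, j)
            f = lambda s: np.exp(r * (tau[i + 1] - s)) * P.polyval(s, c)
            W[i, j] = (quad(lambda s: np.real(f(s)), tau[i], tau[i + 1])[0]
                       + 1j * quad(lambda s: np.imag(f(s)), tau[i], tau[i + 1])[0])
    psi_old = np.ones(N, dtype=complex)
    for i in range(N - 1):                        # provisional (explicit ETD Euler) sweep
        psi_old[i + 1] = np.exp(r * eta[i]) * psi_old[i] + eta[i] * phi1(r * eta[i]) * z * psi_old[i]
    for k in range(M):                            # correction sweeps
        psi_new = np.ones(N, dtype=complex)
        for i in range(N - 1):
            psi_new[i + 1] = (np.exp(r * eta[i]) * psi_new[i]
                              + eta[i] * phi1(r * eta[i]) * z * (psi_new[i] - psi_old[i])
                              + z * (W[i, :] @ psi_old))
        psi_old = psi_new
    return psi_old[-1]

# A point on the dissipative model problem with r = -30: stable if the printed value is <= 1.
print(abs(etdsdc_psi(-30.0, -2.0 + 4.0j)))
```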
We present accuracy regions for ${\epsilon = 1\times10^{-8}}$ in Figure \[fig:accuracy\]. We consider the three ranges of $r$ values in (\[eq:r\_value\_range\]), but due to rapidly shrinking accuracy regions, we are only able to visualize different subsets of $r$ values for each numerical method. ETDSDC$^M_N$ schemes outperform IMEXSDC$^M_N$ schemes for all tested values. Accuracy regions for the ETD methods decrease more slowly, and the non-vanishing accuracy regions around $z=0$ guarantee accuracy for any $r$ so long as $z$ is chosen sufficiently small. The MATLAB code used to generate these figures can be found in [@BuvoliETDZenodo] and can be easily modified to generate stability and accuracy plots for other ETDSDC or IMEXSDC methods. [Stability Region Plots]{} ------------------------------------------------ [ $r = 0$ ]{} [ $r = R_0/2$ ]{} [ $r = R_0$]{} ------------------------------------------------ [ ETDSDC$_8^7$]{} [ IMEXSDC$_{8}^{7}$]{} [ ETDSDC$_{16}^{15}$]{} [ IMEXSDC$_{16}^{15}$]{} [**Dissipative Model Problem: $r \in -1 \cdot [0, 30]$ and $R_0 = -30$** ]{} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/ETD_8_7_dissipative.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/IMEX_8_7_dissipative.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/ETD_16_15_dissipative.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. 
For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/IMEX_16_15_dissipative.pdf){width="1\linewidth"} [**Dispersive Model Problem: $r \in 1i \cdot [0, 30]$ and $R_0 = 30i$** ]{} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/ETD_8_7_dispersive.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/IMEX_8_7_dispersive.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/ETD_16_15_dispersive.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/IMEX_16_15_dispersive.pdf){width="1\linewidth"} [**Dissipative/Dispersive Model Problem: $r \in \exp(3\pi i/4) \cdot [0, 30]$ and $R_0 = 30e^{3\pi i/4}$** ]{} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/ETD_8_7_both.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. 
We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/IMEX_8_7_both.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/ETD_16_15_both.pdf){width="1\linewidth"} ![Stability regions for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in the legend. We plot an additional black contour for the ETDSDC$^{15}_{16}$ method on the dispersive model problem to show that stability regions eventually grow for sufficiently large imaginary $r$. For large $|r|$, increasing the order of the ETD and IMEX methods does not lead to significantly larger stability regions.[]{data-label="fig:stability"}](figures/stability/IMEX_16_15_both.pdf){width="1\linewidth"} [Accuracy Region Plots]{} ------------------------------------------------ [ $r = 0$ ]{} [ $r = R_0/2$ ]{} [ $r = R_0$]{} ------------------------------------------------ [ ETDSDC$_8^7$]{} [ IMEXSDC$_{8}^{7}$]{} [ ETDSDC$_{16}^{15}$]{} [ IMEXSDC$_{16}^{15}$]{} [**Dissipative Model Problem: $r \in -1 \cdot [0, 30]$** ]{} [$R_0 = -5$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. As expected, 16th order methods possess larger accuracy regions for a wider range of $r$ than 8th order methods.[]{data-label="fig:accuracy"}](figures/accuracy/ETD_8_7_dissipative.pdf "fig:"){width="1\linewidth"} [$R_0 = -2$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. As expected, 16th order methods possess larger accuracy regions for a wider range of $r$ than 8th order methods.[]{data-label="fig:accuracy"}](figures/accuracy/IMEX_8_7_dissipative.pdf "fig:"){width="1\linewidth"} [$R_0 = -30$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. 
As expected, 16th order methods possess larger accuracy regions for a wider range of $r$ than 8th order methods.[]{data-label="fig:accuracy"}](figures/accuracy/ETD_16_15_dissipative.pdf "fig:"){width="1\linewidth"} [$R_0 = -20$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. As expected, 16th order methods possess larger accuracy regions for a wider range of $r$ than 8th order methods.[]{data-label="fig:accuracy"}](figures/accuracy/IMEX_16_15_dissipative.pdf "fig:"){width="1\linewidth"} [**Dispersive Model Problem: $r \in 1i \cdot [0, 30]$** ]{} [$R_0 = 5i$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. As expected, 16th order methods possess larger accuracy regions for a wider range of $r$ than 8th order methods.[]{data-label="fig:accuracy"}](figures/accuracy/ETD_8_7_dispersive.pdf "fig:"){width="1\linewidth"} [$R_0 = 2i$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. As expected, 16th order methods possess larger accuracy regions for a wider range of $r$ than 8th order methods.[]{data-label="fig:accuracy"}](figures/accuracy/IMEX_8_7_dispersive.pdf "fig:"){width="1\linewidth"} [$R_0 = 15i$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. As expected, 16th order methods possess larger accuracy regions for a wider range of $r$ than 8th order methods.[]{data-label="fig:accuracy"}](figures/accuracy/ETD_16_15_dispersive.pdf "fig:"){width="1\linewidth"} [$R_0 = 9i$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. As expected, 16th order methods possess larger accuracy regions for a wider range of $r$ than 8th order methods.[]{data-label="fig:accuracy"}](figures/accuracy/IMEX_16_15_dispersive.pdf "fig:"){width="1\linewidth"} [**Dissipative/Dispersive Model Problem: $r \in \exp(3\pi i/4) \cdot [0, 30]$** ]{} [$R_0 = 5e^{3i\pi /4}$]{} ![Accuracy regions corresponding to $\epsilon = 1\times 10^{-8}$ for 8th order and 16th order methods with Chebyshev quadrature nodes. Colored contours correspond to different $r$ values as described in legend. We choose $R_0$ in each figure so that the contour marks a near vanishing accuracy region around $z=0$. 
Calculating $W^{i+1}_{i}(\phi^k)$ {#sec:etd_coefficients}
=================================

Every iteration of an ETDSDC$^M_N$ method requires computing $W^{i+1}_{i}(\phi^k)$, which denotes the weighted quadrature approximation to $$\int_{h\tau_i}^{h\tau_{i+1}} e^{\Lambda (h\tau_{i+1}-s)} \mathcal{N}(s, \phi^{k}(s)) ds.$$ To arrive at a formula for $W_{i}^{i+1}(\phi^k)$, we let $\mathbf{N}_l(\phi) = \mathcal{N}(h\tau_l, \phi(h\tau_l))$ and replace $\mathcal{N}(s,\phi^k(s))$ with the Lagrange interpolating polynomial $L(s)$ that passes through the quadrature points $\{(h\tau_{l}, \mathbf{N}_l(\phi^k)) \}_{l=1}^N$, so that $$W^{i+1}_i(\phi^k) = \int^{h\tau_{i+1}}_{h\tau_{i}} e^{\Lambda(h\tau_{i+1} - s)} L(s) ds = \sum_{l=1}^N w_{i,l} \hspace{.1em} \mathbf{N}_l(\phi^k). \label{eq:W_lagrange}$$ For low-order methods, explicit formulae for $w_{i,l}$ can be derived by forming $L(s)$ and repeatedly applying integration by parts. Unfortunately, this direct calculation leads to increasingly involved formulae for large $N$. We therefore seek a general procedure for determining $w_{i,l}$ for any $N$. We propose to express the weights $w_{i,l}$ in terms of the well-known functions $$\varphi_n(z) = \frac{1}{(n-1)!} \int^1_0 e^{z(1-\sigma)}\sigma^{n-1} d\sigma,$$ using a stable algorithm developed by Fornberg for determining finite difference coefficients [@fornberg1988]. We describe our algorithm in Section \[sec:w\_algorithm\], before discussing $\varphi$ functions and two well-known methods for initializing them in Section \[sec:phif\].
Proposed Algorithm {#sec:w_algorithm}
------------------

To arrive at a convenient expression for $W^{i+1}_{i}(\phi^k)$, we propose to apply the change of variables $$s = h\left[(\tau_{i+1} - \tau_i) \sigma + \tau_i \right] = h_i \sigma + h \tau_i, \label{eq:w_cov}$$ to the integral term in (\[eq:W\_lagrange\]), expand the Lagrange interpolating polynomial $L(s(\sigma))$ as a Taylor polynomial, and rewrite the result in terms of $\varphi$ functions. Applying the change of variables (\[eq:w\_cov\]) leads to $$h_i \int^{1}_{0} e^{h_i \Lambda (1-\sigma)} L(s(\sigma)) d\sigma = h_i \int^{1}_{0} e^{h_i \Lambda (1-\sigma)} P_i(\sigma) d\sigma,$$ where $P_i(\sigma)$ is the Lagrange interpolating polynomial that passes through the points $\{(q_{i,l}, \mathbf{N}_l(\phi^k)) \}_{l=1}^N$, with $$q_{i,l} = (\tau_l - \tau_i)/(\tau_{i+1} - \tau_{i})$$ denoting the scaled, translated quadrature nodes $h\tau_l$ under the transformation ($\ref{eq:w_cov}$). Next, we define the finite difference coefficients $a^{(i)}_{j,l}$ so that $$\left. \frac{d^j}{d\sigma^j} P_i(\sigma) \right|_{\sigma=0} = \sum_{l=1}^N a^{(i)}_{j,l} \hspace{.05em} \mathbf{N}_l(\phi^k).$$ Expanding $P_i(\sigma)$ as a Taylor polynomial we obtain $$W^{i+1}_{i}(\phi^k) = h_i \int^{1}_{0} e^{h_i \Lambda (1-\sigma)} \sum_{j=0}^{N-1} \left[ \frac{\sigma^j}{j!} \sum_{l=1}^{N} a^{(i)}_{j,l} \hspace{0.1em} \mathbf{N}_l(\phi^k) \right] d\sigma.$$ Reordering terms we arrive at $$\begin{aligned} W^{i+1}_{i}(\phi^k) &=& h_i \sum_{l=1}^N \left[ \mathbf{N}_l(\phi^k) \sum_{j=0}^{N-1} \left[ \frac{a^{(i)}_{j,l}}{j!} \int^{1}_{0} e^{h_i \Lambda (1-\sigma)} \sigma^j d\sigma \right] \right] \\ &=& h_i \sum_{l=1}^N \left[ \mathbf{N}_l(\phi^k) \sum_{j=0}^{N-1} \left[ a^{(i)}_{j,l} \varphi_{j+1}(h_i \Lambda) \right] \right]. \end{aligned}$$ By defining the functions $$w_{i,l}(z) = h_i \sum_{j=0}^{N-1} a^{(i)}_{j,l} \varphi_{j+1}(z), \label{eq:w_function}$$ we obtain a convenient expression for the weighted quadrature rule: $$W^{i+1}_{i}(\phi^k) = \sum_{l=1}^N w_{i,l}(h_i\Lambda) \mathbf{N}_l(\phi^k).$$ To successfully implement this procedure, we must determine the finite difference coefficients $a_{j,l}^{(i)}$ and the matrix functions $\varphi_n(h_i\Lambda)$. The coefficients $a^{(i)}_{j,l}$ can be rapidly obtained using the stable algorithm presented in [@fornberg1988]. We define the functions:

- $weights(z_0,[q_1,\ldots,q_n],m)$: returns a finite difference matrix $\mathbf{a}$ for computing $m$ derivatives at $z_0$, assuming $q_j$ are the quadrature points. This calling sequence is consistent with the implementation in [@fornberg1998classroom].

- $initPhi(z,n)$: returns the functions $\varphi_i(z)$ for $i=0,\ldots,n$. We discuss two possible implementations in Section \[sec:phif\].

The algorithm for computing $w_{i,l}(z)$ for an ETDSDC$_N^M$ method can then be written as follows. For each substep $i = 1, \ldots, N-1$:

1. $[\varphi_0(h_i \Lambda), \ldots, \varphi_N(h_i \Lambda)] = \text{initPhi}(h_i\Lambda,N)$;
2. $q_j = (\tau_j - \tau_i)/(\tau_{i+1} - \tau_{i})$ for $j = 1, \ldots, N$;
3. $a^{(i)} = \text{weights}(0,[q_1, \ldots, q_N],N-1)$;
4. $w_{i,l}(h_i\Lambda) = h_i \sum_{j=0}^{N-1} a^{(i)}_{j,l} \varphi_{j+1}(h_i \Lambda)$ for $l = 1, \ldots, N$.

When computing $w_{i,l}(h_i\Lambda)$, it is convenient to save $\varphi_{0}(h_i\Lambda)$ and $\varphi_{1}(h_i\Lambda)$ since both are required for the ETD Euler method.
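To make the procedure above concrete, the following MATLAB sketch assembles the weight functions $w_{i,l}(h_i\Lambda)$ for a *scalar* $\Lambda$. It is an illustrative sketch only, not part of the released MATLAB/Fortran package: the function name and interface are ours, the coefficients $a^{(i)}_{j,l}$ are obtained from the Taylor coefficients of the Lagrange basis polynomials (mathematically equivalent to, but less stable than, Fornberg's recursion for large $N$), and the $\varphi$ functions are evaluated with a plain truncated Taylor series, which is only adequate for small $|h_i\Lambda|$.

```matlab
function w = etdsdc_weights(tau, i, lambda, h)
% ETDSDC_WEIGHTS  Illustrative computation of w_{i,l}(h_i*lambda) for a
% SCALAR linear coefficient lambda, following eq. (w_function):
%   w_{i,l}(z) = h_i * sum_{j=0}^{N-1} a_{j,l} * phi_{j+1}(z).
% tau    : quadrature nodes (vector of length N, scaled to [0,1])
% i      : substep index (1 <= i <= N-1)
% lambda : scalar linear coefficient;  h : full step size

N  = numel(tau);
hi = h*(tau(i+1) - tau(i));                  % substep size h_i
q  = (tau - tau(i))/(tau(i+1) - tau(i));     % scaled, translated nodes q_{i,l}
z  = hi*lambda;

% phi_1(z), ..., phi_N(z) via a truncated Taylor series (small |z| only)
phi = zeros(1, N);
for n = 1:N
    k = 0:30;
    phi(n) = sum(z.^k ./ factorial(k + n));
end

% Weights w_{i,l}(z), l = 1,...,N
w = zeros(1, N);
for l = 1:N
    % Coefficients of the l-th Lagrange basis polynomial (descending powers)
    others = q([1:l-1, l+1:N]);
    c = poly(others) / prod(q(l) - others);
    c = fliplr(c);                           % ascending: c(j+1) = coeff of sigma^j
    % a_{j,l} = j! * c(j+1); combine with phi_{j+1}(z)
    s = 0;
    for j = 0:N-1
        s = s + factorial(j)*c(j+1)*phi(j+1);
    end
    w(l) = hi*s;
end
end
```

In a production implementation one would of course replace the two shortcuts (Lagrange-coefficient differentiation and truncated Taylor series) by the Fornberg weights routine and one of the $\varphi$ initialization strategies described in the next section.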
$\varphi$ Functions {#sec:phif}
-------------------

The coefficients of all exponential integrators can be expressed in terms of $\varphi$ functions [@hochbruck2010exponentialreview; @berland2007expint; @minchev2005; @koikari2007error]. The $n$th $\varphi$ function can be defined in the following ways: $$\begin{aligned} & \text{{\it Integral Form}:} & \varphi_n(z) & = \begin{dcases} e^{z} & n = 0 \\ \frac{1}{(n-1)!} \int^1_0 e^{z(1-s)} s^{n-1} ds & n> 0 \end{dcases} \\ & \text{{\it Series Form:}} & \varphi_n(z) & = \sum_{k=0}^\infty \frac{z^{k}}{(k+n)!} \label{eq:phi_taylor_def} \\ & \text{{\it Recursion Relation:}} & \varphi_n(z) & = \frac{\varphi_{n-1}(z) - \frac{1}{(n-1)!}}{z}, \hspace{1em} \varphi_0(z) = e^z \label{eq:phi_recursion} \end{aligned}$$ The first few $\varphi_n(z)$ are given by $$\varphi_0(z) = e^z, \hspace{1em} \varphi_1(z) = \frac{e^z - 1}{z}, \hspace{1em} \varphi_2(z) = \frac{e^z - 1 - z}{z^2}, \hspace{1em} \varphi_3(z) = \frac{e^z - 1 - z - \tfrac{1}{2}z^2}{z^3}.$$ We can now rewrite the compact update formula (\[eq:etdsdc\_compact\]) as $$\phi^{k+1}_{i+1} = \varphi_0(h_i \Lambda) \phi^{k+1}_{i} + \varphi_1 (h_i \Lambda) \left[N(h\tau_{i+\ell},\phi_{i+\ell}^{k+1}) - N(h\tau_{i+\ell}, \phi_{i+\ell}^k) \right] + W_i^{i+1}(\phi^k).$$ From their series definition, it follows that the functions $\varphi_n(z)$ are entire; nevertheless, it is well known that the explicit formulae for $\varphi_n(z)$ are prone to catastrophic numerical roundoff error for small $|z|$. Various strategies for overcoming this difficulty have been compared extensively [@ashi2009comparison]. We briefly outline a method based on scaling and squaring [@koikari2007error] and a method based on contour integration [@KassamTrefethen05ETDRK4]. Other approaches involve Krylov subspace approximations [@hochbruck1998exponential; @hochbruck1997krylov] and improved contour integrals [@trefethen2007], but we do not consider them in this paper.

### Taylor/Padé Scaling and Squaring Algorithm

The scaling and squaring algorithm for calculating $\varphi$ functions is a generalization of a well-known algorithm for computing matrix exponentials [@higham2009scaling]. For small $|z|$, $\varphi_n(z)$ can be accurately evaluated via the Taylor series (\[eq:phi\_taylor\_def\]) or via the diagonal $(m,m)$ Padé approximation, whose explicit formula is given in [@skaflestad2009scaling]. This initial approximation can be used to obtain $\varphi_n(z)$ for large $|z|$ by repeatedly applying the well-known scaling relation $$\varphi_n(z) = \frac{1}{2^n} \left[ \varphi_0 \left(\tfrac{z}{2} \right) \varphi_n \left(\tfrac{z}{2} \right) + \sum_{i=1}^{n} \frac{\varphi_{i}\left(\tfrac{z}{2} \right)}{(n-i)!} \right]. \label{eq:phi_scaling_relation}$$ We present pseudocode for an $m$-term Taylor series procedure for initializing $\varphi_i(\Lambda)$ in Table \[tab:phi\_methods\]. A MATLAB implementation of the Padé scaling and squaring algorithm is freely available in [@berland2007expint] and can easily be used to initialize $\varphi_n(\Lambda)$ for both scalar and matrix $\Lambda$.

### Contour Integration Algorithm

An alternative algorithm for initializing ETD coefficients was first suggested in [@KassamTrefethen05ETDRK4]. Since the functions $\varphi_n(z)$ are entire, Cauchy’s integral formula can be used to obtain $\varphi_n(z)$ in the problematic regions near $z = 0$. We highlight this procedure for both scalar and matrix $\Lambda$ in Table \[tab:phi\_methods\], assuming that the explicit formula for $\varphi_n(z)$ is known.
If this is not the case, then it is convenient to combine the recursion relation (\[eq:phi\_recursion\]) with the discretized contour integral so that $$\varphi_n(\Lambda) = \frac{1}{P} \sum_{j=0}^{P-1} \frac{\varphi_{n-1}(\Lambda + Re^{i\theta_j}) - 1/(n-1)!}{\Lambda + Re^{i\theta_j}}. \label{eq:phi_contour_recursion}$$ This allows one to progressively evaluate $\varphi_n(\Lambda)$ for $n=1,\ldots,N$. For scalar $\Lambda$ we use the contour-based evaluation when $|\Lambda| < 1$ and the recursion relation (\[eq:phi\_recursion\]) when $|\Lambda|\ge 1$. For matrix $\Lambda$ we find that the technique based on scaling and squaring is faster and more accurate, especially for matrices with large norm.

**Taylor scaling and squaring initialization of $\varphi_0(\Lambda), \ldots, \varphi_N(\Lambda)$:**

1.  *Select scaling factor:* choose $s\in\mathbb{N}$ so that $\|\Lambda/2^s\|_\infty < \delta(m)$ and set $\mathbf{A} = \Lambda/2^s$; see Appendix \[ap:A1\] or [@koikari2007error] for choosing $\delta(m)$.

2.  *Initialize $\varphi_i(\mathbf{A})$ via Horner’s method:* for $i=0$ to $N$, set $P_i = \frac{\mathbf{A}}{(m+i)!} + \frac{\mathbf{I}}{(m+i-1)!}$, then for $k=0$ to $m-2$ update $P_i \leftarrow \mathbf{A} P_i + \frac{\mathbf{I}}{(m+i-2-k)!}$, and finally set $\varphi_i(\mathbf{A}) = P_i$.

3.  *Undo the scaling:* for $i=1$ to $s$ and $n=0$ to $N$, apply the scaling relation $$\varphi_n(\Lambda/2^{s-i}) = \frac{1}{2^n} \left[ \varphi_0 \left(\Lambda/2^{s-i+1} \right) \varphi_n \left(\Lambda/2^{s-i+1} \right) + \sum_{\ell=1}^{n} \frac{\varphi_{\ell}\left(\Lambda/2^{s-i+1} \right)}{(n-\ell)!} \right].$$

**Contour integral initialization:**

-   *Scalar $\Lambda$ with $|\Lambda| < 1$:* integrate over a circle of radius $R$ centered at $\Lambda$, $$\varphi_n(\Lambda) = \frac{1}{2\pi} \int^{2\pi}_0 \varphi_n(Re^{i\theta} + \Lambda)\, d\theta,$$ and discretize with the trapezoidal rule at the nodes $\theta_j = 2\pi j/P$.

-   *Matrix $\Lambda$:* apply the Cauchy integral formula $$\varphi_{n}(\Lambda) = \frac{1}{2\pi i} \oint_\Gamma \varphi_{n}(z)(z \mathbf{I} - \Lambda)^{-1} dz,$$ where the contour $\Gamma = R e^{i\theta} + z_0$, $\theta\in[0,2\pi]$, must be chosen so that it encloses the spectrum of $\Lambda$. Discretizing via the trapezoidal rule with $\theta_j = 2\pi j/P$ and $\gamma_j = R \exp(i\theta_j) + z_0$ gives, for $P$ sufficiently large, $$\varphi_{n}(\Lambda) \approx \frac{1}{P} \sum_{j=0}^{P-1} \varphi_{n}(\gamma_j) \left(\mathbf{I} + \frac{z_0 \mathbf{I} - \Lambda}{Re^{i\theta_j}}\right)^{-1},$$ where $\varphi_n(\gamma_j)$ is initialized as in the scalar case.

*Methods for initializing $\varphi$ functions for scalar and matrix $\Lambda$.*[]{data-label="tab:phi_methods"}

Numerical Experiments {#sec:numerical_experiments}
=====================

In this section, we numerically solve four partial differential equations in order to compare ETDSDC$^M_N$ and IMEXSDC$^M_N$ methods of orders $4,8,16,32$ against the fourth-order Runge-Kutta method (ETDRK4) developed in [@cox2002ETDRK4]. We have chosen to include ETDRK4 in our tests since it was shown to perform competitively [@grooms2011IMEXETDCOMP; @KassamTrefethen05ETDRK4], and it provides a good reference for comparing SDC-based schemes to existing ETD and IMEX methods. We provide our MATLAB and Fortran implementation of ETDSDC$^M_N$, IMEXSDC$^M_N$ and ETDRK4 in [@BuvoliETDZenodo] along with code for reproducing our numerical experiments. In all our numerical experiments, we apply a fine spectral spatial discretization so that error is primarily due to the time integrator. In our first three experiments we impose periodic boundary conditions and solve the PDEs in Fourier space. This is convenient since it leads to an evolution equation of the form (\[eq:model\_ode\_semilinear\]) where the matrix $\Lambda$ is diagonal. In our final experiment we consider a more challenging example where $\Lambda$ is a dense matrix.
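For the diagonal (Fourier-space) case, the basic time-stepping building block can be sketched in a few lines of MATLAB. The snippet below is our own illustration, not part of the released package: it implements only the first-order ETD Euler step referred to earlier, whereas the experiments themselves use the full ETDSDC$^M_N$ and IMEXSDC$^M_N$ schemes, and it evaluates $\varphi_1$ with the naive formula rather than one of the stable strategies above.

```matlab
% Minimal sketch: advance a diagonal semilinear system u' = lambda.*u + N(t,u)
% with the first-order ETD Euler method.
function u = etd_euler(lambda, Nfun, u0, h, nsteps)
% lambda : vector of eigenvalues of the (diagonal) linear operator
% Nfun   : handle Nfun(t,u) returning the nonlinear term in Fourier space
% u0     : initial condition (same size as lambda)
% h      : step size;  nsteps : number of steps
z    = h*lambda;
phi0 = exp(z);
phi1 = (exp(z) - 1)./z;          % NOTE: naive formula; suffers roundoff for
phi1(abs(z) < 1e-8) = 1;         % small |z| -- see the phi-function section
u = u0; t = 0;
for n = 1:nsteps
    u = phi0.*u + h*phi1.*Nfun(t, u);
    t = t + h;
end
end
```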
We base our first three numerical experiments on [@KassamTrefethen05ETDRK4; @grooms2011IMEXETDCOMP] so that our results can be compared with those obtained using other IMEX and ETD schemes. Since we consider methods of varying order, our experiments are based on the number of function evaluations rather than the step size $h$. We compute reference solutions by using four times as many function evaluations as used in the experiment. To avoid biased results, we average the solutions of at least two convergent methods when forming our reference solutions. For each PDE, we present plots of relative error vs. function evaluations, relative error vs. stepsize, and relative error vs. computational time, where the relative error between two solution vectors $\mathbf{x}$ and $\mathbf{y}$ is $\|\mathbf{x} - \mathbf{y} \|_\infty/ \| \mathbf{x}\|_\infty.$ Though we solve equations in Fourier space, we compute relative errors in physical space. We do not count the time required to initialize ETD coefficients in our time plots. We also make no specific efforts to optimize our code; thus timing results only serve as an indication and may vary under different implementations. The results presented in this paper have been run on a 3.5 GHz Intel i7 processor using our double precision Fortran implementation. We describe each of the four problems below.

The [**Kuramoto-Sivashinsky**]{} (KS) equation models reaction-diffusion systems [@kuramoto1976persistent]. As originally presented in [@KassamTrefethen05ETDRK4], we consider the KS equation with periodic boundary conditions: $$\begin{aligned} & u_t = -u_{xx} -u_{xxxx} - \tfrac{1}{2}\left(u^2\right)_x \label{eq:kuramoto}, \\ & u(x,t=0) = \cos\left( \tfrac{x}{16} \right) \left( 1 + \sin\left( \tfrac{x}{16} \right) \right), \hspace{1em} x\in[0, 64\pi]. \nonumber\end{aligned}$$ We numerically integrate using a 1024 point Fourier spectral discretization in $x$ and run the simulation out to $t=60$. The KS equation has a dissipative linear term $\Lambda$ with real eigenvalues given by $\lambda(k) = k^2 - k^4$, where $k$ denotes the Fourier wavenumber. We present our numerical results in Figure \[fig:results\_page1\].

The [**Nikolaevskiy**]{} equation was originally developed for studying seismic waves [@Nikolaevskiy] and now serves as a model for pattern formation in a variety of systems [@simbawa2010nikolaevskiy]. As originally presented in [@grooms2011IMEXETDCOMP], we consider the Nikolaevskiy equation with periodic boundary conditions: $$\begin{aligned} & u_t = \alpha \partial^3_x u + \beta \partial^5_x u -\partial_x^2 \left( r - ( 1 + \partial^2_x)^2 \right) u -\tfrac{1}{2}\left( u^2 \right)_x \label{eq:nikolaevskiy}, \\ & u(x,t=0) = \sin(x) + \epsilon \sin(x/25), \hspace{1em} x\in[-75\pi, 75\pi] \nonumber\end{aligned}$$ where $r=1/4$, $\alpha = 2.1$, $\beta = 0.77$, and $\epsilon = 1/10$. We solve the Nikolaevskiy equation using a 4096 point Fourier spectral discretization in $x$ and run the simulation out to $t=50$. The Nikolaevskiy equation has a dissipative and dispersive linear term with eigenvalues given by ${\lambda(k) = k^2(r - (1-k^2)^2) - i\alpha k^3 + i\beta k^5}$, where $k$ denotes the Fourier wavenumber. We present our numerical results in Figure \[fig:results\_page1\].
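To illustrate how the first three problems are cast into the semilinear Fourier-space form used above, the following sketch sets up the KS equation (\[eq:kuramoto\]). The grid size, domain, and initial condition follow the description above; the code itself (and the reference to the `etd_euler` sketch from the previous section) is our own illustration rather than part of the released implementation.

```matlab
% Illustrative Fourier-space setup for the KS equation (eq:kuramoto):
%   u_t = -u_xx - u_xxxx - (1/2)(u^2)_x   on  x in [0, 64*pi], periodic.
% In Fourier space this is  v' = lambda.*v + N(v)  with diagonal lambda.
Nx = 1024;
L  = 64*pi;
x  = (0:Nx-1)'*(L/Nx);
k  = (2*pi/L)*[0:Nx/2-1, -Nx/2:-1]';        % Fourier wavenumbers
lambda = k.^2 - k.^4;                        % eigenvalues of the linear term
u0 = cos(x/16).*(1 + sin(x/16));             % initial condition
v0 = fft(u0);
% Nonlinear term, evaluated pseudospectrally: -1/2 * d/dx (u^2)
Nfun = @(t, v) -0.5*(1i*k).*fft(real(ifft(v)).^2);
% e.g. advance with the ETD Euler sketch from the previous section:
% v = etd_euler(lambda, Nfun, v0, 1e-3, 1000);  u = real(ifft(v));
```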
As originally presented in [@grooms2011IMEXETDCOMP], we consider the barotropic QG equation on a $\beta$-plane with linear Ekman drag and hyperviscous diffusion of momentum with periodic boundary conditions, $$\begin{aligned} & \partial_t \nabla^2 \psi = -\left[\beta \partial_x \psi + \epsilon \nabla^2 \psi + \nu \nabla^{10} \psi + \mathbf{u} \cdot \nabla (\nabla^2 \psi) \right] \label{eq:quasigeostrophic} \\ & \psi(x,y,t=0) = \frac{1}{8} \exp\left(-8\left(2y^2+x^2/2 - \pi/4 \right)^2 \right), \nonumber \\ & (x,y) \in[-\pi,\pi] \nonumber\end{aligned}$$ where $\psi(x,y)$ is the stream function for two-dimensional velocity ${\mathbf{u} = (-\partial_y \psi, \partial_x \psi)}$, $\epsilon = 1/100$, and $\nu = 10^{-14}$. We run the simulation to time $t=5$ using a $256\times 256$ point Fourier discretization. We consider a different initial condition than the one presented in [@grooms2011IMEXETDCOMP], since $\nabla^2 \psi(x,y)$ was originally chosen to be discontinuous at the point $(0,0)$. We note that (\[eq:quasigeostrophic\]) describes the change in the vorticity $\omega = \nabla^2 \psi$ in terms of the stream function $\psi$. In order to obtain $\psi$ at each timestep, it is necessary to solve Poisson’s equation $\nabla^2 \psi = \omega$. Since we are solving in Fourier space, it follows that $$\hat{\psi}_{k,l} = \begin{cases} 0 & k=l=0 \\ -\frac{\hat{\omega}_{k,l}}{k^2 + l^2} & \text{otherwise} \end{cases}$$ where $k$ and $l$ are the Fourier wave numbers and $\hat{\psi}$, $\hat{\omega}$ denote the discrete Fourier transforms of $\psi$ and $\omega$. The QG equation has a linear term with strong dissipation and mild dispersion with eigenvalues given by $\lambda(k,l) = \frac{-ik - \epsilon k^2}{k^2 + l^2} - \nu(k^8 + l^8)$. We present our numerical results in Figure \[fig:results\_page2\].

[[**Performance Results for Kuramoto-Sivashinsky Equation** ]{}]{}

![Performance results for the Kuramoto-Sivashinsky and Nikolaevskiy equations. Gray dashed lines of increasing steepness in the accuracy vs stepsize plots correspond to $O(h^4)$, $O(h^8)$ and $O(h^{16})$, respectively. IMEXSDC schemes experience significant order reduction on both problems.[]{data-label="fig:results_page1"}](figures/experiments/kuramoto.pdf "fig:"){width="1\linewidth"}

[ [**Performance Results for Nikolaevskiy Equation**]{}]{}

(The second panel of Figure \[fig:results\_page1\], `figures/experiments/Nikolaevskiy.pdf`, shares the caption above.)

[[**Performance Results for Quasigeostrophic Equation**]{}]{}

![Performance results for the Quasigeostrophic and Korteweg-de Vries equations. Dashed lines of increasing steepness in the accuracy vs stepsize plots correspond to $O(h^4)$, $O(h^8)$, $O(h^{16})$, and $O(h^{32})$, respectively. Notice that high-order IMEXSDC schemes are unstable on the KDV equation. Order reduction does not occur for any method on the quasigeostrophic equation, but affects both IMEXSDC and ETDSDC schemes on the KDV equation.[]{data-label="fig:results_page2"}](figures/experiments/Quasigeostrophic.pdf){width="1\linewidth"}

[[**Performance Results for Korteweg-de Vries Equation**]{}]{}

(The second panel of Figure \[fig:results\_page2\], `figures/experiments/KDV.pdf`, shares the caption above.)
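Returning briefly to the QG formulation above: recovering the stream function from the vorticity at each step is a single componentwise division in Fourier space. The sketch below is our own illustrative code (grid size as described above), not part of the released implementation.

```matlab
% Illustrative spectral solve of nabla^2 psi = omega on a 256x256 periodic grid
% (the Poisson inversion used in the quasigeostrophic example above).
Nq = 256;
kq = [0:Nq/2-1, -Nq/2:-1];                % integer wavenumbers on the 2*pi-periodic box
[kk, ll] = meshgrid(kq, kq);
K2 = kk.^2 + ll.^2;
K2(1,1) = 1;                              % avoid 0/0; the mean mode is zeroed below

omega     = randn(Nq);                    % some vorticity field (stand-in for real data)
omega_hat = fft2(omega);
psi_hat   = -omega_hat ./ K2;             % psi_hat(k,l) = -omega_hat(k,l)/(k^2+l^2)
psi_hat(1,1) = 0;                         % k = l = 0 mode
psi = real(ifft2(psi_hat));
```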
The [**Korteweg-de Vries**]{} (KDV) equation describes weakly nonlinear shallow water waves. In 1965 Zabusky and Kruskal observed that smooth initial conditions could give rise to soliton solutions [@zabusky1965interaction]. As in their original numerical experiment, we consider the KDV equation on a periodic domain $$\begin{aligned} & u_t = -\left[ \delta u_{xxx} + \tfrac{1}{2}(u^2)_x \right] \\ & u(x,t=0) = \cos(\pi x), \hspace{1em} x \in [0,2]\end{aligned}$$ where $\delta = 0.022$ and the simulation is run out to time $t = 3.6/\pi$. The eigenvalues of the linear term are given by $\lambda(k) = i\delta k^3$; thus this equation possesses a purely dispersive linear term. Unlike our previous examples, we solve this PDE in physical space, where the resulting differentiation matrix is no longer diagonal. The nondiagonal case is more challenging since the coefficients $w_{i,l}$ are now matrix functions. In practice it would be more efficient to consider a lower-order spatial discretization and apply Krylov space or contour integral techniques that avoid explicitly initializing the requisite ETD matrices. Nevertheless, we consider this example to test the robustness of the scaling and squaring algorithm. For IMEXSDC$^M_N$ schemes it is necessary to repeatedly solve the system $\Lambda x = f$ at each timestep. We perform an initial $LU$ factorization of $\Lambda$ to expedite this process. We present our numerical results for the KDV equation in Figure \[fig:results\_page2\].

Discussion
----------

Our results demonstrate that high-order methods can lead to significant speedup when solving nonlinear wave equations to high accuracy. Methods of order 8 and 16 were able to achieve the smallest error using the fewest function evaluations and the least overall CPU time. Interestingly, the error threshold separating good and bad performance for high and low order methods varied significantly in each experiment. Overall, ETDSDC methods consistently achieved better accuracy than corresponding IMEXSDC methods, and did not suffer from crippling order reduction on any of the problems we tested. Amongst the fourth order methods, ETDRK4 is more efficient than either ETDSDC$_4^3$ or IMEXSDC$_4^3$. Moreover, ETDRK4 is the fastest method for computing solutions if error tolerances are large. Methods of order 32 were generally less competitive than those of 8th or 16th order, and should only be considered in situations where extreme precision is necessary and quad/arbitrary-precision arithmetic allows for relative errors significantly below ${1 \times 10^{-12}}$. Finally, for diagonal $\Lambda$, the time required to initialize the ETD coefficients was insignificant as compared to overall computational time, even for the 32nd order method. High-order ETDSDC methods continued to perform well even in the non-diagonal case, and we found no evidence of catastrophic roundoff error when forming the ETD matrix coefficients $w_{i,l}(h_i \Lambda)$.
For nondiagonal $\Lambda$, high-order ETDSDC$_N^M$ schemes require large amounts of memory and time to initialize and store the $N^2-N$ requisite matrices. Moreover, the expensive matrix multiplications at each timestep reduced their overall competitiveness. To improve the performance of ETDSDC schemes on higher dimensional problems with non-diagonal linear operators, it becomes essential to use techniques that avoid explicitly storing the ETD matrices. High-order IMEXSDC schemes were unstable when solving the KDV equation on fine grids in both physical and Fourier space. Through additional numerical testing we find that IMEXSDC schemes can be unstable when integrating other nonlinear wave equations with dispersive linear terms such as the nonlinear Schrödinger equation. We make several additional comments regarding our numerical experiments. The benefits of using high-order methods are greatly reduced if the initial conditions are not smooth, though in certain situations we found that high-order methods remain no less efficient than their lower-order counterparts. The size of the integration window also affects the difference in performance between high and low-order methods, with the high-order methods generally benefiting on larger time domains. Chaotic equations can cause additional complications, as small perturbations due to rounding errors grow exponentially and contaminate overall accuracy. This was the case for the KS equation, where we were not able to integrate further without damaging the quality of the reference solution.

Conclusion
==========

We have demonstrated that high-order ETD spectral deferred correction schemes possess excellent accuracy/stability properties and outperform existing ETD and IMEX methods when solving nonlinear wave equations to high accuracy. Our proposed methodology for initializing ETD coefficients is robust and can be successfully applied to ETDSDC schemes up to 32nd order accuracy, even for equations with a non-diagonal linear operator $\Lambda$. We have also highlighted the advantages of ETD spectral deferred correction methods as compared with IMEXSDC schemes. Our new ETD schemes consistently outperform their IMEX counterparts, do not appear to suffer from crippling order reduction, and retain stability on equations with dispersive linear terms.

Acknowledgements {#acknowledgements .unnumbered}
================

I would like to thank Randall J. LeVeque for the many useful discussions over the course of this project and for his comments on drafts of this work. I would also like to thank Kristina Callaghan for her help editing the manuscript. This research was supported in part by funding from the Applied Mathematics Department at the University of Washington and NSF grant DMS-1216732.

Choosing $\delta(m)$ {#ap:A1}
====================

We describe a simple choice for $\delta(m)$ from Table \[tab:phi\_methods\]; more sophisticated alternatives are developed in [@koikari2007error]. Let $\Lambda$ be a matrix or scalar and let $\mathbf{A} = \Lambda/2^s$. We seek an integer $s$ so that $\varphi_n(\mathbf{A})$ can be initialized via its $m$th order Taylor series without admitting an error larger than $\epsilon$, assuming exact arithmetic.
$\varphi_n(\mathbf{A})$ can be approximated by the truncated Taylor series $$\varphi_n^{m}(\mathbf{A}) = \sum_{k=0}^m \frac{\mathbf{A}^{k}}{(k+n)!},$$ which carries a truncation error of ${\ensuremath{\operatorname{O}\bigl(\|\mathbf{A}\|^{m+1}\bigr)}}$. The error $E_n(\mathbf{A}) = \| \varphi_n(\mathbf{A}) - \varphi^m_n(\mathbf{A}) \|$ can be expressed as $$E_n(\mathbf{A}) = \left| \left| \sum_{k={m+1}}^\infty \frac{\mathbf{A}^{k}}{(k+n)!} \right| \right| \le \sum_{k={m+1}}^\infty \frac{ \| \mathbf{A} \|^k}{(k+n)!}.$$ For any $n$, $E_n(\mathbf{A})$ can be bounded above by $$\sum_{k=1}^\infty \frac{\|\mathbf{A} \|^{k+m}}{k!(m+1)!} \le \frac{\|\mathbf{A} \|^{m+1} \exp(\|\mathbf{A} \|)}{(m+1)!}.$$ Assuming exact arithmetic, we can therefore guarantee that $E_n(\mathbf{A}) < \epsilon$ for any $s$ such that $\mathbf{A} = \Lambda/2^s$ satisfies $$\frac{\|\mathbf{A} \|^{m+1} \exp(\|\mathbf{A} \|)}{(m+1)!} \le \epsilon.$$ To avoid solving a nonlinear equation for each $m$ and $\epsilon$, we fix $m=20$, $\epsilon=1\times 10^{-16}$ and let $\|\mathbf{A}\|\le \rho$. Solving the corresponding nonlinear equation for $\rho$ leads to $\rho \approx 1.4$, which we round down to $1$. Therefore our condition on $\mathbf{A}$ reduces to $\| \mathbf{A} \| = \| \Lambda/2^s \| \le 1$. In our numerical codes we choose $\| \mathbf{A} \| = \max\{\|\mathbf{A}\|_1,\|\mathbf{A}\|_{\infty}\}$. This leads to the condition $$s = \max \left \{ 0, \left\lceil \log_2 \left( \max\{\|\Lambda\|_1,\|\Lambda\|_{\infty}\} \right) \right\rceil \right \}.$$

[^1]: Department of Applied Mathematics, University of Washington, Lewis Hall, Box 353925, Seattle, WA 98195. ([email protected])

[^2]: This research was supported in part by funding from the Applied Mathematics Department at the University of Washington and NSF grant DMS-1216732.
---
abstract: 'This note investigates a number of scenarios in which unadjusted testing following a blinded sample size re-estimation leads to type I error violations. For superiority testing, this occurs in certain small-sample borderline cases. We discuss a number of alternative approaches that keep the type I error rate. The paper also gives a reason why the type I error inflation in the superiority context might have been missed in previous publications and investigates why it is more marked in case of non-inferiority testing.'
---

**Some Notes on Blinded Sample Size Re-Estimation**

Ekkehard Glimm[^1] and Jürgen Läuter[^2]

Introduction {#sec1}
============

Sample size re-estimation (SSR) in clinical trials has a long history that dates back to Stein (1945). A sample size review at an interim analysis aims at correcting assumptions which were made at the planning stage of the trial, but turn out to be unrealistic. When the sample units are considered to be normally distributed, this typically concerns the initial assumption about the variation of responses. Wittes and Brittain (1990) and Gould and Shih (1992, 1998) among others discussed methods of [*blinded*]{} SSR. In contrast to [*unblinded*]{} SSR, blinded SSR assumes that the actually realized effect size estimate is not disclosed to the decision makers who do the SSR. Wittes et al. (1999) and Zucker et al. (1999) investigated the performance of various blinded and unblinded SSR methods by simulation. They observed some slight type I error violations in cases with small sample size and gave explanations for this phenomenon for some of the unblinded approaches available at that time. Slightly later, Kieser and Friede ([@Kieser03], [@Friede06]) suggested a method of blinded sample size review which is particularly easy to implement. In a trial with normally distributed sample units with the aim of testing for a significant treatment effect (“superiority testing”) at the final analysis, it estimates the variance under the null hypothesis of no treatment effect and then proceeds to an unmodified $t$-test in the final analysis, i.e. a test that ignores the fact that the final sample size was not fixed from the onset of the trial. Kieser and Friede investigated the type I error control of their suggestion by simulation. They conclude that *no additional measures to control the significance level are required in these designs if the study is evaluated with the common t-test and the sample size is recalculated with any of these simple blind variance estimators*. Although Kieser and Friede explicitly stated that they provide no formal proof of type I error control, it seems to us that many statisticians in the pharmaceutical industry are under the impression that such a proof is available. This, however, is not the case. In this paper, we show that in certain situations, the method suggested by Kieser and Friede does not control the type I error. It should be emphasized that asymptotic type I error control with blinded SSR is guaranteed. If the sample size of only one of the two stages tends to infinity, the other stage is obviously irrelevant for the asymptotic value of the final test statistic and thus the method asymptotically keeps $\alpha$. If the sample size in both stages goes to infinity, then the stage-1-estimate of the variance converges to a constant value. Hence, whatever sample size re-estimation rule is used, it implicitly fixes the total sample size in advance (though its precise value is not yet known before the interim).
In any case, asymptotically $\alpha$ is again kept. Govindarajulu (2003) has formalized this thought and extended it to non-normally distributed data. As a consequence, the type I error violations discussed in this note are very small and occur in cases with small samples. We still believe, however, that the statistical community should be made aware of these limitations of blinded sample-size review methodology. While sections \[sec2\]-\[sec4\] focus on the common case of testing for treatment differences in clinical trials, section \[sec5\] briefly discusses the case of testing for non-inferiority of one of the two treatments. It had been noted in another paper by Friede and Kieser [@Friede03] that type I error inflations from SSR can be more marked in this situation. We give an explanation of this phenomenon.

A scenario leading to type I error violation {#sec2}
============================================

In this section we show that in certain cases, a blinded sample size review as suggested by [@Kieser03] leads to a type I error which is larger than the nominal level $\alpha$. In general, blinded sample size review is characterized by the fact that the final sample size of the study may be changed at interim analyses, but that this change depends on the data only via the [*total variance*]{} which is the variance estimate under the null hypothesis of interest. If $x_i, i=1, \ldots, n_1$ are stochastically independent normally distributed observations, this total variance is proportional to $\sum_{i=1}^{n_1}x_i^2$ in the one-sample and to $\sum_{i=1}^{n_1}x_i^2-n_1\bar{x}^2$ in the two-sample case. We consider the one-sample $t$ test of $H_0:\mu=0$ at level $\alpha$ applied to $x_i \sim N(\mu, \sigma^2)$. The reason for this is simplicity of notation and the fact that the geometric considerations given below cannot be imagined for the two-sample case, which would have to deal with a dimension larger than three even in the simplest setup. However, the restriction to the one-sample case entails no loss of generality, as it is conceptually the same as the two-sample case. We will briefly comment on this further below. In addition, a blinded sample size review may also be of practical relevance in the one-sample situation, for example in cross-over trials. Assume a blinded sample size review after $n_1=2$ observations. If the total variance is small, we stop sampling and test with the $n_1=n=2$ observations we have obtained. If it is large, we take another sample element $x_3$, and do the test with $n=3$ observations. This rule implies that $n=2$ for $x_1^2+x_2^2\leq r^2$ and $n=3$ otherwise for some fixed scalar $r$. Geometrically, the rejection region of the (one-sided) $t$ test for $n=3$ is a spherical cone with the equiangular line $x_1=x_2=x_3$ as its central axis in the three-dimensional space. By definition, the probability mass of this cone is $\alpha$ under $H_0$. For the case of $n=2$, the rejection region is a segment of the circle $x_1^2+x_2^2\leq r^2$ around the equiangular line $x_1=x_2$. Hence, in three dimensions, the rejection region is a segment of the spherical cylinder $x_1^2+x_2^2\leq r^2, x_3$ arbitrary. The probability mass covered by this segment again is $\alpha$ inside the cylinder. The rejection region of the entire procedure is the segment of the cylinder plus the spherical cone minus the intersection of the cone with the cylinder. We now approximate the probability mass of these components.
For $r^2$ small, we approximately have $P(x_1^2+x_2^2\leq r^2)=\frac{r^2}{2\sigma^2}$. Hence, under $H_0$, the probability mass of this part of the rejection region is approximately $\frac{r^2}{2\sigma^2}\cdot \alpha$. The volume of the intersection of the cone with the cylinder can be approximated as follows: The central axis $x_1=x_2=x_3$ of the cone intersects with the cylinder in one of the points $\pm \left(\frac{r}{\sqrt{2}},\frac{r}{\sqrt{2}},\frac{r}{\sqrt{2}}\right)$. The distance of this point to the origin is thus $h=\sqrt{\frac{3}{2}}r$. The approximate volume of the intersection is $\frac{4\pi h^3}{3}=\sqrt{6}\pi r^3$. To conservatively approximate the probability mass of this intersection, we assume that every point in it has the same probability density as the origin (in reality, the density there is of course lower). Then the probability mass of the intersection is approximated by $\sqrt{6}\pi r^3\cdot\alpha \cdot (\sqrt{2\pi}\sigma)^{-3}$, where $(\sqrt{2\pi}\sigma)^{-3}$ is the value of the normal density $N_3\left(\mathbf{0},\sigma^2\mathbf{I}_3\right)$ at the point $\mathbf{0}$. Combining these results, a conservative approximation of the probability mass of the rejection region for the entire procedure is $$\label{f1} \alpha\left(1+\frac{r^2}{2\sigma^2}-\frac{\sqrt{6}\pi r^3} {(\sqrt{2\pi}\sigma)^{3}}\right)=\alpha\left(1+\frac{r^2}{2\sigma^2}-\frac{\sqrt{3} r^3} {2\sqrt{\pi}\sigma^{3}}\right).$$ Obviously, this is larger than $\alpha$ for small $r$. For the more general case of a stage-1-sample size of $n_1$, possibly followed by a stage 2 with $n_2$ further observations, the rejection region of the “sample size reviewed” $t$ test has an approximate null probability following the same basic principle as (\[f1\]): $\alpha\cdot\left(1+const_1 \cdot \left(\frac{r}{\sqrt{n_1}\sigma}\right)^{n_1}-const_2 \cdot \left(\frac{r}{\sqrt{n_1}\sigma}\right)^{n_1+n_2} \right)$ if $r, n_1$ and $n_2$ are small. Consequently, there must be situations with small $\frac{r}{\sqrt{n_1}\sigma}$ where the blinded review procedure cannot keep the type I error level $\alpha$ exactly. Due to symmetry of the rejection region, this statement holds for both the one- and the two-sided test of $H_0$. Note that in this example, the test keeps $\alpha$ exactly if $\sum_{i=1}^{n_1} x_i^2\leq r^2$. This is due to the sphericity of the conditional null distribution of $(x_1,\cdots, x_{n_1})$ given $\sum_{i=1}^{n_1} x_i^2\leq r^2$ (see [@Fang90], theorem 2.5.8). Type I error violation stems from the fact that the test does not keep $\alpha$ conditional on $\sum_{i=1}^{n_1} x_i^2> r^2$, i.e. if a second stage of sampling more observations is done. To investigate the magnitude of the ensuing type I error violation, we simulated 10’000’000 cases with $n_1=2$ initial observations and $n_2=2$ additional observations that are only taken if $x_1^2+x_2^2\geq 0.5$. The true type I error of the two-sided combined $t$ test turned out to be $0.0542$ for a nominal $\alpha=0.05$. As expected, this is caused by the situations where stage-2-data is obtained. Since $x_1^2+x_2^2 \sim \chi^2(2)$, we have $P(x_1^2+x_2^2\geq 0.5)=0.779$. This was also the value observed in the simulations. The rejection rate for these cases alone was $0.0553$. If $x_1^2+x_2^2< 0.5$, we know that conditionally the rejection rate is exactly $\alpha$. Accordingly, this conditional rejection rate in the simulations was $0.0500$. If $n_1$ and $n_2$ are increased, the true type I error rate converges rather quickly to $\alpha$.
For example, in case of $n_1=n_2=5$ and $r^2=2.5$, the simulated error rate is $0.0508$ with $77.6\%$ of cases leading to stage 2 and a conditional error rate of $0.0510$ in case stage 2 applies. We also performed some simulations where $n_2$ is determined with the algorithm suggested by [@Kieser03]. For this purpose, we generated $10'000'000$ simulation runs of a blinded sample size review after $n_1=2$ observations following the rule given in section 3 of [@Kieser03] with a very large assumed effect of $\delta=2.2$. This produces an average of $3.09$ additional observations $n_2$. The simulated type I error was $0.05077$. To see that the two-sample case is also covered by these investigations, note that the ordinary $t$-test statistic can be viewed as $X/\sqrt{Y/s}$ where $X\sim N(\delta,1)$ is stochastically independent of $Y \sim \chi^2\left(s\right)$. Regarding any investigation of the properties of this quantity, it obviously does not matter if the random variables $X$ and $Y$ arise as mean and variance estimate from a one-sample situation or as difference in means and common within-group variance estimate in the two-sample case. The same is true here: According to [@Kieser03], p. 3575, the “resampled” $t$-test statistic consists of the four components $D_1$, $V_1$, $D_2\left|\left(V_1,D_1\right)\right.$ and $V_2^*\left|\left(V_1,D_1\right)\right.$ (loosely speaking, these correspond to the differences in means and variance estimates of the two stages). Comparing the distributions of $D_1$ and $V_1$ and the conditional distributions of $D_2$ and $V_2^*$ given $D_1$ and $V_1$ (and hence $n_2$), one immediately sees that these are the same for the one- and the balanced two-sample case when we replace $n_i$ by $n_i/2$ and the means of the two stages by the corresponding two differences in means between the two treatment groups. For the conditional distribution of $V_2^*\left|\left(V_1,D_1\right)\right.$ see section \[sec4\]. Approaches that control the type I error ======================================== Permutation and rotation tests ------------------------------ If the considerations from the previous section are of concern, then a simple alternative is to do the test as a permutation test. In the one-sample case, one would generate all permutations (or a large number of random permutations) of the signs onto the absolute values of observations. For each permutation, the $t$ test would be calculated and the $(1-\alpha)$-quantile of the resulting empirical distribution of $t$-test values gives the critical value of an exact level $\alpha$-test of $H_0$. Alternatively, a $p$-value can be obtained by counting the percentage of values from the permutation distribution which are larger or equal to the actually observed value of the test statistic. After determining the additional sample size $n_2$ from the first $n_1$ observations, we apply the permutation method to all $n_1+n_2$ observations. The special case of $n_2=0$ is possible and then the parametric (non-permutation) $t$-test can also be used. This strategy keeps the $\alpha$-level exactly, because the total variance $\frac{1}{n_1}\sum_{i=1}^{n_1}x_i^2$ is invariant to the permutations. In the two-sample case, the approach would permute the treatment allocations of the observations. In order to preserve the observed total variance, the permutations have to be done separately for the $n_1$ observations of stage 1 and the $n_2$ observations of stage 2, respectively. 
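To make the one-sample version of this strategy concrete, the following sketch computes a permutation $p$-value by randomly reassigning signs to all $n_1+n_2$ observations after the sample size review. This is our own illustrative code: the function name is hypothetical, and it uses a large number of random sign assignments rather than the complete enumeration mentioned above.

```matlab
function p = sign_permutation_pvalue(x, B)
% One-sample sign-permutation test of H0: mu = 0 applied to ALL n1+n2
% observations after a blinded sample size review (illustrative sketch).
% x : vector of observations (stage 1 and stage 2 combined)
% B : number of random sign assignments, e.g. B = 10000
n    = numel(x);
tobs = mean(x)/(std(x)/sqrt(n));             % observed one-sample t statistic
tperm = zeros(B,1);
for b = 1:B
    s  = 2*(rand(n,1) > 0.5) - 1;            % random +/-1 signs
    xb = s.*abs(x(:));                       % signs reassigned to |x_i|
    tperm(b) = mean(xb)/(std(xb)/sqrt(n));   % t statistic for this assignment
end
% one-sided p-value: proportion of permutation statistics >= observed value
p = mean(tperm >= tobs);
end
```

Note that every sign assignment leaves $\sum_i x_i^2$, and hence the re-estimated sample size, unchanged, which is exactly why this strategy preserves the $\alpha$-level.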
If sample sizes are small, permutation tests suffer from the discreteness of the resampling distribution and the associated loss of power. In this case, rotation tests [@Langs05; @Lau05] offer an attractive alternative. These replace the random permutations of the sample units by random rotations. This renders the support of the corresponding empirical distribution continuous and thus avoids the discreteness problem of the permutation strategy. In order to facilitate this, rotation tests require the assumption of a spherical null distribution. This is the case in this context. Stage-1- and stage-2-data have to be rotated separately even in the one-sample case in order to keep the observed stage-1 value of the total variance fixed. Permutation and rotation strategies emulate the true distribution of the $t$ test including sample size review. Hence, they will “automatically” correct any type I error inflation as outlined in the previous section, but will otherwise have almost identical properties (e.g. with respect to power) to their “parametric” counterpart. We did some simulations of the permutation and rotation strategies under null and non-null scenarios. These, however, just backed up the statements made here and are thus not reported.

Combinations of test statistics from the two stages
---------------------------------------------------

Methods that use a combination of test statistics from the two stages are another alternative if one is looking for an exact test. For example, we might use Fisher’s $p$-value combination $-2\log(p_1\cdot p_2)$ [@Bauer94] where $p_j=P(T_j>t_j)$ with $T_j$ being the test statistic from stage-$j$-data only and $t_j$ its observation from the concrete data at hand. As $-2\log(p_1\cdot p_2) \sim \chi^2(4)$ for independent test statistics $T_1$ and $T_2$ under $H_0$, the combination $p$-value test rejects $H_0$ if $-2\log(p_1\cdot p_2)$ is larger than the $(1-\alpha)$-quantile from this distribution. In this application, we use the true null distributions of the test statistics $T_j$ to determine the $p$-values. For example, in case of the one-sample-$t$-test these are the $t$-distributions $T_1 \sim t(n_1-1)$ and $T_2 \sim t(n_2-1)$. The stage-2-sample size $n_2$ is uniquely determined by $\sum_{i=1}^{n_1}x_i^2$. Since $T_1$ is a test statistic for which Theorem 2.5.8 of [@Fang90] holds under $H_0$, the null distribution of $T_1$ is valid also conditionally on $\sum_{i=1}^{n_1}x_i^2$. As a consequence, $T_1 \sim t(n_1-1)$ and $T_2 \sim t(n_2-1)$ are stochastically independent under $H_0$ for given $\sum_{i=1}^{n_1}x_i^2$. Any combination of them can be used as the test statistic for $H_0$. Of course, one still has to find critical values of the null distribution for the selected combination. The statement about the conditional null distributions of the test statistics given the total variance $\sum_{i=1}^{n_1}x_i^2$ allows us to go beyond Fisher’s $p$-value combination and similar methods that combine $p$-values using fixed weights or calculate conditional error functions with an “intended” stage-2-sample size. The weights used to combine the two stages may also depend on the observed stage-1-data. For example, if the variance were known (and hence a $z$-test for $H_0$ could be done), then the optimal (standardized) weights for combining the $z$-statistics from the two stages would be $\sqrt{\frac{n_1}{n_1+n_2}}$ and $\sqrt{\frac{n_2}{n_1+n_2}}$ in the one-sample case.
Hence, $t_{comb}=\sqrt{\frac{n_1}{n_1+n_2}}t_1+\sqrt{\frac{n_2}{n_1+n_2}}t_2$ seems a promising candidate for a combination test statistic. The fact that $\left\{T_j\right\}, j=1,2$ retain their $t(n_j-1)$-null distributions if we condition on $s_1^2=\sum_{i=1}^{n_1}x_i^2$ means that critical values for this test can be obtained from the distribution of the weighted sum of two stochastically independent $t$-distributed random variables with $(n_1-1)$ and $(n_2-1)$ degrees of freedom, respectively. This is easily done with numerical integration or a simulation. Comparing $t_{comb}$ with these critical values (that depend only on $n_1$ and $n_2$) to decide about the rejection of $H_0$ gives an exact level-$\alpha$ test. To investigate the performance of the introduced suggestions, we did several simulations. The critical values for the one-sided one-sample test using $t_{comb}$ were obtained by simulating 1’000’000 values of two independent $t$-distributions with $n_1$ fixed and $n_2$ as determined by the SSR method in [@Kieser03]. We used the “total variance” for SSR, not the “adjusted variance” which subtracts a constant based on the putative effect size. Nevertheless, the re-estimated sample size of course depends on the “assumed effect” which may be different from the true, unknown effect size. In the simulations, we investigated various combinations of the true effect size $\mu$ and an assumed effect size $\delta$.

![Power of various tests after sample size re-estimation[]{data-label="fig1"}](sampsizeReest_power5.eps "fig:"){width="7cm" height="7cm"}

(The second panel of Figure \[fig1\], `sampsizeReest_power30.eps`, shares the caption above.)

Null simulations verified the claimed type-I-error control for the various adjustment methods described in this section and are thus not reported. Figure \[fig1\] shows the results of 1’000’000 simulation runs for sample sizes of $n_1=5$ and $n_1=30$, a true effect size of $\mu=0.2$ (the standardized true effect size, such that the non-centrality parameter of a standard-$t$-test with $n$ observations would be $\sqrt{n}\mu$) and varying values of $\delta$ on the $x$-axis. The unmodified $t$-test as suggested by [@Kieser03] is always best. In comparison, the weighted $t$-test combination $t_{comb}$ suffers from a small power loss which seems non-negligible only for very small stage-1-sample sizes below $n_1=10$ (where the type I error control of the “reviewed” $t$-test might be a concern). For all simulated scenarios with $n_1=30$, the difference in power was always below $1\%$. In contrast, Fisher’s $p$-value combination typically loses 3 to 4% of power when $t$-test power is less than 95% and up to 7% for some scenarios ($\mu=0.1, \delta=0.15$ with power $t$-test 76.4%, power $t$-combination 76.3%, power $p$-value combination 69.6%).
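For completeness, a minimal sketch of how critical values for $t_{comb}$ can be generated by simulation (as described earlier in this subsection) is given below. The function name is ours, `trnd` requires the MATLAB Statistics Toolbox, and in practice the critical value would be computed for the realized $n_2$ of each sample size review.

```matlab
function c = tcomb_critical_value(n1, n2, alpha, M)
% Critical value of t_comb = sqrt(n1/(n1+n2))*T1 + sqrt(n2/(n1+n2))*T2,
% where T1 ~ t(n1-1) and T2 ~ t(n2-1) are independent (conditional null
% distribution given the stage-1 total variance, i.e. given n2).
% Illustrative sketch; M is the number of Monte Carlo draws, e.g. 1e6.
t1 = trnd(n1-1, M, 1);                       % requires Statistics Toolbox
t2 = trnd(n2-1, M, 1);
w1 = sqrt(n1/(n1+n2));
w2 = sqrt(n2/(n1+n2));
tc = sort(w1*t1 + w2*t2);
c  = tc(ceil((1-alpha)*M));                  % empirical (1-alpha)-quantile
end
```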
The distribution of Kieser and Friede’s $t$-test statistic {#sec4} ========================================================== To investigate the type I error of the $t$-test after a blinded sample size review, Kieser and Friede [@Kieser03] write the $t$-test statistic as a function of four components $D_1$, $V_1$, $D_2$ and $V_2^*$ (see page 3575 of [@Kieser03]) for which they derive respective distributions. However, the distribution of $V_2^*$ given $(D_1,V_1)$ mentioned there is an approximation, not the exact distribution. Hence, the “actual” type I error rates in [@Kieser03] are also approximate, possibly masking a minor type I error level inflation. The following uses the notation from [@Kieser03]. It shows that the conditional distribution of $V_2^*|(V_1,D_1)$ is not $\chi^2 (2n_2)$. Without loss of generality it can be assumed that $\sigma^2=1$. We have $$V_2^*=V_2+\frac{n_1n_2}{n_1+n_2}\left(\left(\overline{X}_{11}-\overline{X}_{21}\right)^2+ \left(\overline{X}_{12}-\overline{X}_{22}\right)^2\right)$$ $V_2|(V_1,D_1)\sim \chi^2(2n_2-2)$ is obvious. It is also obvious that if we condition on $V_1$ only, and suppose that this determines sample size $n_2$ uniquely, we have $$D^*_i:=\sqrt{\frac{n_1n_2}{n_1+n_2}}\left(\overline{X}_{1i}-\overline{X}_{2i}\right)\sim N(0,1),$$ such that $D^*_1$ and $D^*_2$ are stochastically independent. Thus, in this case $D^{*2}_1+D^{*2}_2\sim \chi^2(2)$, so if $n_2$ is a function of $V_1$, but not $D_1$, the claim $V_2^*|V_1\sim \chi^2(2n_2)$ holds. This was noted by [@Gould92]. If we condition on both $V_1$ and $D_1$, $V_2$ and $\left(D^*_1,D^*_2\right)$ are still independent, but $D^*_1$ and $D^*_2$ are no longer. By applying a theorem on conditional normal distributions (see e.g. [@Sri02], page 35) and some well-known results on matrix decompositions, it can be shown that the true conditional distribution of $V_2^*$ is a mixture distribution: $$V_2^*|(V_1, D_1) =_d \chi^2_{2n_2-1}+ z_2^2,$$ where “$=_d"$ denotes ”equal in distribution“ and $z_2^2$ has the ”rescaled" non-central $\chi^2$-distribution $$z_2^2\sim \frac{n_1}{n_1+n_2}\cdot\chi^2\left(1;\frac{n_2}{n_1}\left(D_1-\sqrt{\frac{n_1}{2}}\Delta\right)^2\right).$$ The assumption $V_2^*|(V_1,D_1)\sim \chi^{2}(2n_2)$ will often very closely approximate this real distribution. Sample size reviews when testing for non-inferiority {#sec5} ==================================================== The preceding sections have dealt with the superiority test $H_0: \mu=0$. While type I error violations in this context are extremely small, it was noted by [@Friede03] that more serious violations arise in the case of non-inferiority and equivalence testing and that these are persistent with larger sample sizes. This section gives an intuitive explanation for this. Assume that in the two-sample case, it is intended to test the non-inferiority hypothesis $H_0:\mu_1-\mu_2\leq \delta$ on data $x_{ijk}\sim N\left(\mu_j,\sigma^2\right)$ where $i=1,2$ indexes stage, $j=1,2$ treatment group, $k=1,\ldots,n_i$ sample unit and $\delta$ is a fixed non-inferiority margin. 
Sample size reassessment after stage 1 determines the stage-2-sample size via $$\label{f50} n_2=4\cdot\frac{\left(u_{1-\alpha}+u_{1-\beta}\right)^2}{\left(\theta-\delta\right)^2}\cdot \tilde{\sigma}^2$$ (where $u_\alpha$ is the $\alpha$-quantile of $N(0,1)$, $\beta$ is the desired power of the ordinary two-sample $t$-test and $\theta$ is the assumed true effect difference between the treatments) as a function of the “total variance” $$\tilde{\sigma}^2=\frac{1}{2n_1-1}\sum_{j=1}^{2}\sum_{k=1}^{n_1}\left(x_{1jk}-\bar{x}_1\right)^2$$ with $\bar{x}_1=\frac{\sum_{j=1}^{2}\sum_{k=1}^{n_1} x_{1jk}}{2n_1}$. This, however, does not correspond to a “blinded” sample size review of the corresponding superiority test. To see this, notice that the described test can also be represented as a test of $H_0:\mu_1^*-\mu_2\leq 0$ on the “shifted” data $$\label{f51} x_{ijk}^*=\left\{\begin{array}{l} x_{ijk}-\delta \mbox{ if } j=1,\\ x_{ijk} \mbox{ if } j=2, \end{array} \right.$$ A blinded sample size review of $\left(x_{ijk}^*\right)$ would also use (\[f50\]), but with $$\hat{\sigma}^2=\frac{1}{2n_1-1}\sum_{j=1}^{2}\sum_{k=1}^{n_1}\left(x^*_{1jk}-\bar{x}^*_1\right)^2$$ instead of $\tilde{\sigma}^2$. It is easy to see that $$\tilde{\sigma}^2=\hat{\sigma}^2-\frac{n_1}{2(2n_1-1)}\delta^2+\frac{n_1}{2n_1-1}\delta(\bar{x}_{11}-\bar{x}_{12}).$$ This formula contains the quantity $\frac{n_1}{2n_1-1}\delta(\bar{x}_{11}-\bar{x}_{12})$ which links the realized difference in means $\bar{x}_{11}-\bar{x}_{12}$ with the true difference $\delta$ of means under $H_0$. If, for example, $\delta<0$, then $n_2$ decreases with increasing realized values of $\bar{x}_{11}-\bar{x}_{12}$. Relative to the blinded superiority sample size review, this means that fewer additional sample elements are taken when stage-1-evidence is in favor of the alternative and vice versa. Obviously, this must be associated with an increase of type I error under $H_0$. Conversely, the test gets conservative when $\delta>0$. These tendencies were also noticed by [@Friede03] in simulations. The “blinded” non-inferiority test is thus equivalent to an “unblinded” superiority test and hence subject to type I error biases that afflict an unmodified $t$-test applied after the sample size was modified using the observed difference in means. To be sure, the user of the blinded non-inferiority re-estimation does not get to see the realized value of $\bar{x}_{11}-\bar{x}_{12}$, but nevertheless it has the described impact on the modified sample size $n_2$. Discussion ========== This paper investigates a number of situations with normally distributed observations where blinded sample size review according to Kieser and Friede does not control the type I error rate. In superiority testing, the corresponding inflations are extremely small and occur with sample sizes that will rarely be of practical relevance. The method can thus safely be used in practice. As an alternative for which type I error control can be proved, it is also possible to combine the $t$-test statistics of the two stages directly using data-dependent weights. Regarding the outcome in practical applications, these two methods are virtually indistinguishable. In contrast, $p$-value-combination and related methods suffer from some power loss due to the fact that they have to work with a predetermined “intended” stage-2-sample size and lose power if one deviates from this intention in the sample size review. Non-inferiority testing is subject to much more severe type I error violations. 
This is due to its equivalence with unblinded superiority testing. As a consequence, blinded SSR is not an acceptable method for non-inferiority testing in confirmatory clinical trials.

[99]{} Kieser M, Friede T. Simple procedures for blinded sample size adjustment that do not affect the type I error rate. *Statistics in Medicine* 2003; **22**: 3571–3581.

Friede T, Kieser M. Sample size recalculation in internal pilot study designs: a review. *Biometrical Journal* 2006; **48**: 537–555.

Fang K-T, Zhang Y-T. *Generalized Multivariate Analysis*. Springer: Berlin, 1990.

Langsrud Ø. Rotation tests. *Statistics and Computing* 2005; **15**: 53–60.

Läuter J, Glimm E, Eszlinger M. Search for relevant sets of variables in a high-dimensional setup keeping the familywise error rate. *Statistica Neerlandica* 2005; **59**: 298–312.

Bauer P, Köhne K. Evaluation of experiments with adaptive interim analyses. *Biometrics* 1994; **50**: 1029–1041.

Gould AL, Shih WJ. Sample size re-estimation without unblinding for normally distributed outcomes with unknown variance. *Communications in Statistics - Theory and Methods* 1992; **21**: 2833–2853.

Srivastava MS. *Methods of Multivariate Statistics*. Wiley: New York, 2002.

Wittes J, Brittain E. The role of internal pilot studies in increasing the efficiency of clinical trials. *Statistics in Medicine* 1990; **9**: 65–72.

Wittes J, Schabenberger O, Zucker D, Brittain E, Proschan M. Internal pilot studies I: type I error rate of the naive t-test. *Statistics in Medicine* 1999; **18**: 3481–3491.

Gould AL, Shih WJ. Modifying the design of ongoing trials without unblinding. *Statistics in Medicine* 1998; **17**: 89–100.

Govindarajulu Z. Robustness of sample size re-estimation procedure in clinical trials (arbitrary populations). *Statistics in Medicine* 2003; **22**: 1819–1828.

Friede T, Kieser M. Blinded sample size reassessment in non-inferiority and equivalence trials. *Statistics in Medicine* 2003; **22**: 995–1007.

Technical Appendix
==================

This appendix shows that $V_2^*|(V_1, D_1)$ has the distribution given in section \[sec4\]. By applying the usual theorems on conditional normal distributions (see e.g. [@Sri02], page 35), we obtain the bivariate distribution $$\begin{aligned} \label{f3} &&\mathbf{D}^*:={D^*_1 \choose D^*_2}\left|(V_1,D_1)\right. ={D^*_1 \choose D^*_2}\left|(n_2,D_1)\right.\sim\\ &&N_2\left({\sqrt{\frac{n_2}{2(n_1+n_2)}} \left(D_1-\sqrt{\frac{n_1}{2}}\Delta\right) \choose {-\sqrt{\frac{n_2}{2(n_1+n_2)}}} \left(D_1-\sqrt{\frac{n_1}{2}}\Delta\right)}; \left( \begin{array}{cc} \frac{2n_1+n_2}{2(n_1+n_2)} & \frac{n_2}{2(n_1+n_2)} \\ \frac{n_2}{2(n_1+n_2)} & \frac{2n_1+n_2}{2(n_1+n_2)} \end{array} \right)\right)\nonumber\end{aligned}$$
Then $$\label{f10} \mathbf{x}'\mathbf{A}\mathbf{x}=_d \mathbf{y}'\mathbf{V}^{\frac{1}{2}}\mathbf{A}\mathbf{V}^{\frac{1}{2}}\mathbf{y}.$$ $\mathbf{V}^{\frac{1}{2}}\mathbf{A}\mathbf{V}^{\frac{1}{2}}$ can also be written as an eigenvalue decomposition $\mathbf{C}'\mathbf{\Lambda}\mathbf{C}$, where $\mathbf{\Lambda}=\left(\lambda_i\right)_{i=1,\ldots,p}$ is the diagonal matrix of eigenvalues and $\mathbf{C}$ is the matrix of the corresponding eigenvectors. Inserting this into (\[f10\]), we obtain $$\mathbf{x}'\mathbf{A}\mathbf{x}=_d \sum_{i} \lambda_i z_i^2$$ with $\mathbf{z}=(z_1,\ldots, z_p)'\sim N\left(\mathbf{C}'\mathbf{V}^{-\frac{1}{2}}{\bm\mu}, \mathbf{I}_p\right)$. Using this general result in the particular case by setting ${\bm\mu}$ and $\mathbf{V}$ to the mean and covariance matrix in ($\ref{f3}$) and $\mathbf{A}= \mathbf{I}_2$, it is easy to see that $\mathbf{V}$ has eigenvalues 1 and $\frac{n_1}{n_1+n_2}$ and eigenvectors $\frac{1}{\sqrt{2}}\cdot (1,1)'$ and $\frac{1}{\sqrt{2}}\cdot (1,-1)'$. Consequently, conditional on $(V_1,D_1)$, we obtain: $$D^{*2}_1+D^{*2}_2=_d z^{2}_1+z^{2}_2$$ where $z_1\sim N(0,1)$ and $z_2\sim N(\sqrt{\frac{n_2}{n_1+n_2}}\cdot \left(D_1-\sqrt{\frac{n_1}{2}}\Delta\right),\frac{n_1}{n_1+n_2})$. Hence, $V_2^*|(V_1, D_1) =_d \chi^2_{2n_2-1}+ z_2^2$, where $z_2^2$ has the “rescaled” non-central $\chi^2$-distribution $$z_2^2\sim \frac{n_1}{n_1+n_2}\cdot\chi^2\left(1;\frac{n_2}{n_1}\left(D_1-\sqrt{\frac{n_1}{2}}\Delta\right)^2\right).$$ We note in passing that if $\mathbf{D}^{*'}\mathbf{D}^*$ were $\chi^2(2)$-distributed, it would have $E(\mathbf{D}^{*'}\mathbf{D}^*)=2$. The true conditional expected value given $(V_1, D_1)$ can be obtained from $$\begin{aligned} E(\mathbf{D}^{*'}\mathbf{D}^*)=tr(E(\mathbf{D}^*\mathbf{D}^{*'}))=tr(\mathbf{\Sigma}_{\mathbf{D}^*}+{\bm\mu}_{\mathbf{D}^*}{\bm\mu}_{\mathbf{D}^*}')=\\ 1+\frac{n_1}{n_1+n_2}+\frac{n_2}{n_1+n_2}\left(D_1-\sqrt{\frac{n_1}{2}}\Delta\right)^2.\end{aligned}$$ Of course, this is not equal to $2$ in general. However, $E(\left(D_1-\sqrt{\frac{n_1}{2}}\Delta\right)^2)=1$ holds, since $D_1 \sim N\left(\sqrt{\frac{n_1}{2}}\Delta,1\right)$. If we then ignore that $n_2$ is a random variable as well, we obtain the approximate unconditional expected value $\left(1+\frac{n_1}{n_1+n_2}+\frac{n_2}{n_1+n_2}\right)=2$. [^1]: Novartis Pharma AG, CH-4002 Basel, Switzerland [^2]: Otto-von-Guericke-Universität Magdeburg, 39114 Magdeburg, Germany
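The quadratic-form step used in the appendix is easy to verify numerically. The following minimal Monte Carlo sketch (ours, not part of the original paper) checks the general result $\mathbf{x}'\mathbf{A}\mathbf{x} =_d \sum_i \lambda_i z_i^2$ for an arbitrary illustrative choice of ${\bm\mu}$ and $\mathbf{V}$; the values below are placeholders, not the conditional mean and covariance of (\[f3\]).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (made-up) mean and covariance; any positive definite V works.
mu = np.array([0.7, -0.3])
V = np.array([[1.0, 0.4],
              [0.4, 1.0]])
A = np.eye(2)                      # the case A = I_2 used in the appendix

# Symmetric square root V^{1/2} via the eigendecomposition of V.
w, U = np.linalg.eigh(V)
V_half = U @ np.diag(np.sqrt(w)) @ U.T
V_half_inv = U @ np.diag(1.0 / np.sqrt(w)) @ U.T

# Eigen-decomposition of V^{1/2} A V^{1/2}.
lam, C = np.linalg.eigh(V_half @ A @ V_half)

n = 200_000
# Left-hand side: x'Ax with x ~ N_p(mu, V).
x = rng.multivariate_normal(mu, V, size=n)
lhs = np.einsum("ij,jk,ik->i", x, A, x)

# Right-hand side: sum_i lambda_i z_i^2 with z ~ N(C' V^{-1/2} mu, I_p).
z_mean = C.T @ V_half_inv @ mu
z = rng.normal(loc=z_mean, scale=1.0, size=(n, 2))
rhs = (z**2 * lam).sum(axis=1)

print(lhs.mean(), rhs.mean())      # the two sample means should agree closely
```

Substituting the conditional mean and covariance of (\[f3\]) for `mu` and `V` (with $\mathbf{A}=\mathbf{I}_2$) reproduces the rescaled non-central $\chi^2$ representation derived above.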
--- abstract: | We report on the magnetocaloric properties of two molecule-based hexacyanochromate Prussian blue analogs, nominally CsNi$^{II}$\[Cr$^{III}$(CN)$_{6}]\cdot($H$_{2}$O) and Cr$^{II}_{3}$\[Cr$^{III}$(CN)$_{6}$\]$_{2}\cdot 12($H$_{2}$O). The former orders ferromagnetically below $T_{C}\simeq 90$ K, whereas the latter is a ferrimagnet below $T_{C}\simeq 230$ K. For both, we find significantly large magnetic entropy changes $\Delta S_{m}$ associated to the magnetic phase transitions. Notably, our studies represent the first attempt to look at molecule-based materials in terms of the magnetocaloric effect for temperatures well above the liquid helium range. author: - 'Espérança Manuel and Marco Evangelisti$^{*}$' - Marco Affronte - 'Masashi Okubo, Cyrille Train, and Michel Verdaguer' title: Magnetocaloric effect in hexacyanochromate Prussian blue analogs --- Recent experiments [@tejada00; @tejada03; @affronte04; @evange05a; @evange05b] have proven that molecule-based magnetic materials can manifest a significant magnetocaloric effect (MCE) that, in some cases, [@evange05a; @evange05b] is even larger than for intermetallic and lanthanide-alloys conventionally studied and employed for cooling applications. [@gschneidner05] This is certain for molecular compounds based on high-spin clusters with vanishing magnetic anisotropy for which the net cluster spins, resulting from strong intracluster superexchange interactions, are easily polarized by the applied-field providing large magnetic entropy changes $\Delta S_{m}$. Their maximum efficiency in terms of MCE takes place, however, at liquid-helium temperatures. A possible solution to increase the temperature of the maximum $\Delta S_{m}$ is by considering larger anisotropies, although the drawback is that the entropy change gets smaller with increasing anisotropy. [@zhang01] An alternative route to follow is to strengthen the intermolecular magnetic correlations which ultimately will give rise to long-range magnetic order (LRMO). The response to the application or removal of magnetic fields is indeed maximized near the magnetic ordering temperature. In this respect, extended molecule-based systems like Prussian blue analogs are particularly appealing since LRMO temperatures up to room temperature have been reported for this class of compounds. [@verdaguer04] In this Brief Report we report a study of MCE on two Prussian blue analogs, nominally CsNi$^{II}$\[Cr$^{III}$(CN)$_{6}]\cdot($H$_{2}$O) and Cr$^{II}_{3}$\[Cr$^{III}$(CN)$_{6}$\]$_{2}\cdot 12($H$_{2}$O) (hereafter denoted as NiCr and Cr$_{3}$Cr$_{2}$, respectively), showing different magnetic ordering at relatively high temperatures. Indeed, NiCr is known to undergo a transition to a long-range [*ferromagnetic*]{} ordered state at $\simeq 90$ K, [@gadet92] whereas Cr$_{3}$Cr$_{2}$ behaves as a long-range ordered [*ferrimagnet*]{} at temperatures below $\simeq 230$ K. [@mallah99] Both compounds crystalize in a cubic lattice, [@parameters] the conventional unit cell is depicted in Fig. 1. Exchange coupling between the two metallic center is ensured by the cyano-bridge. For NiCr, half of the tetrahedral interstitial sites are occupied by cesium atoms which maintain charge neutrality. Further information on the structure together with a description of the method of synthesis can be found in Refs. [@gadet92] and  [@mallah99] for NiCr and Cr$_{3}$Cr$_{2}$, respectively. 
Susceptibility, magnetization and heat capacity measurements down to 2 K were carried out in a Quantum Design PPMS set-up for the $0<H<7$ T magnetic field range. All data were collected on powdered samples of the compounds. ![(Color online). Sketch of a bimetallic Prussian blue analog. At the vertices, black spheres represent Cr$^{III}$ ($s=3/2$), whereas lighter-colored spheres represent either Ni$^{II}$ (spin $s=1$) or Cr$^{II}$ ($s=2$) atoms. Depicted as well are the Cs atoms at the interstitial sites (only relevant for the NiCr compound).[]{data-label="fig1"}](fig1.eps){width="4.5cm"} ![Top: Real component of the molar ac-susceptibility $\chi(T)$, collected at $f=1730$ Hz and ac-field $h_{ac}=10$ G, for NiCr and Cr$_{3}$Cr$_{2}$, as labeled. Bottom: Isothermal magnetization $M(H)$ of both NiCr and Cr$_{3}$Cr$_{2}$ for $T=5$ K.[]{data-label="fig2"}](fig2.eps){width="7.2cm"} For both compounds, Figure $\ref{fig2}$ shows the real component $\chi(T)$ of the complex susceptibility collected with an ac-field $h_{ac}=10$ G at $f=1730$ Hz. For NiCr, the abrupt change of $\chi(T)$ at $T_{C}\simeq 90$ K is ascribed to the transition to a ferromagnetically ordered state, in which demagnetization effects become important. In fact, the value of 73.5 emu/mol reached by $\chi(T)$ at $T_{C}$ is of the same order as expected for an isotropic NiCr sample at the maximum. [@note1] In the linear regime of the inverse susceptibility corrected for the demagnetizing field, the fit to the Curie-Weiss law provides the Curie constant $C=3.2$ emuK/mol and the Weiss constant $\theta=118$ K, in agreement with the observed ferromagnetic ordering. The constant $C$ is reasonably well accounted for by the expected value of randomly oriented spins that amounts to 3.1 emuK/mol, assuming $g=2.2$ for Ni$^{II}$ and $g=2$ for Cr$^{III}$ ions. In the ordered phase the saturation magnetization amounts to 4.6 $\mu_{B}$ (Fig. $\ref{fig2}$), which corroborates the tendency of the Ni$^{II}$ and Cr$^{III}$ spins to align parallel. For Cr$_{3}$Cr$_{2}$, the transition to the expected long-range ferrimagnetically ordered state is observed at $T_{C}\approx 230$ K [@note2] (Fig. $\ref{fig2}$). However, $\chi(T)$ does not reach its maximum at $T_{C}$ but only at much lower temperatures ($\approx 140$ K), and then it decreases down by lowering $T$. The molar magnetization of Cr$_{3}$Cr$_{2}$ collected for $T=5$ K increases sharply up to $\approx 1.9~\mu_{B}$ at $H=0.3$ T and then follows a linear dependence with increasing field, reaching $\approx 2.5~\mu_{B}$ at $H=7$ T (Fig. $\ref{fig2}$). As observed by Mallah [*et al.*]{}, [@mallah99] the magnetization fails to reach the saturation value ($6~\mu_{B}$) expected for a full antiparallel alignment of the Cr$^{II}$ and Cr$^{III}$ spins. There are two possible reasons for this behavior: (i) Cr$^{II}$ ions can also be in the low-spin state for which $s=1$, instead of $s=2$; (ii) the magnetic ordered structure is more complex than a simple ferrimagnet, and spin-canting or reorientation may take place in the ordered state accounting for the anomalous $\chi(T)$ dependence as well. The latter may be favored by the intrinsic Cr$^{II}$(CN)$_{6}$ vacancies. Likely both reasons coexists, a large fraction of low-spin Cr$^{II}$ would induce structural as well as magnetic disorder. We refrain from pushing any further the analysis since we do not have enough experimental information, because our measurements are performed on powder samples. ![(Color online). 
Field-cooled $M(T)$ curves measured at different applied-fields for NiCr (top) and Cr$_{3}$Cr$_{2}$ (bottom).[]{data-label="fig3"}](fig3.eps){width="7.2cm"} ![(Color online). Magnetic entropy change $\Delta S_{m}(T)$ as obtained from $M(T,H)$ data of Fig. 3 for NiCr (top) and Cr$_{3}$Cr$_{2}$ (bottom), for several field changes $\Delta H$, as labeled.[]{data-label="fig4"}](fig4.eps){width="7.2cm"} For a proper evaluation of the MCE of these compounds, [@pecharsky99JAP] we performed systematic magnetization $M(T,H)$ measurements as a function of temperature and field. Field-cooled $M(T,H)$ measurements for several applied-fields $H$ up to 7 T show spontaneous magnetization below the corresponding $T_{C}$’s (Fig. 3). In an isothermal process of magnetization, the magnetic entropy change $\Delta S_{m}$ can be derived from Maxwell relations by integrating over the magnetic field change $\Delta H=H_{f}-H_{i}$, that is: $$\Delta S_{m}(T)_{\Delta H}=\int_{H_{i}}^{H_{f}}\frac{\partial M(T,H)}{\partial T}~{\rm d}H.$$ From the $M(T,H)$ data of Fig. $\ref{fig3}$, the $\Delta S_{m}(T)$ curves obtained for several $\Delta H$ values [@note3] are depicted in Fig. $\ref{fig4}$. It can be seen that, for both compounds, $\Delta S_{m}$ has maxima at temperatures corresponding to the ordering temperatures. Interestingly, $\Delta S_{m}(T)$ of Cr$_{3}$Cr$_{2}$ retains relatively large values over a broad $T$-range, down to temperatures well below $T_{C}$. Moreover, we note that $\Delta S_{m}$ increases with increasing $\Delta H$, reaching for $\Delta H=7$ T the largest values of 2.8 J mol$^{-1}$K$^{-1}$ for NiCr and 0.70 J mol$^{-1}$K$^{-1}$ for Cr$_{3}$Cr$_{2}$, or equivalently 6.6 J kg$^{-1}$K$^{-1}$ and 0.93 J kg$^{-1}$K$^{-1}$ for the former and the latter, respectively. ![Temperature dependence of the specific heat of NiCr (top) and Cr$_{3}$Cr$_{2}$ (bottom) collected for zero-applied-field. Top inset: Magnification of the region around the ferromagnetic ordering temperature $T_{C}\simeq 90$ K, for $H=0$ and $H=7$ T.[]{data-label="fig5"}](fig5.eps){width="7cm"} The magnetocaloric effect of a given magnetic material can also be evaluated from specific heat $C/R$ measurements. [@pecharsky99JAP] The experimental specific heat of NiCr is displayed in Fig. 5 for $H=0$ and $H=7$ T. A careful look at the zero-field curve reveals the onset of the ferromagnetic phase transition at $T_{C}=90$ K, corroborating the magnetization experiments. It can also be noticed that this contribution disappears upon application of the magnetic field, proving its magnetic origin. The magnetic contribution is nevertheless hardly detectable; the reason is that, in the temperature region of the phase transition, the dominant contribution arises from thermal vibrations of the lattice. In order to evaluate the MCE from the $C(T,H)$ data of Fig. 5, we first determine the total entropies for $H=0$ and 7 T as functions of $T$, according to $$S(T)_{H}=\int_{0}^{T}\frac{C(T)_{H}}{T}~{\rm d}T.$$ Experimental entropies are obtained by integrating from the lowest achieved temperature, $T\approx 2$ K, rather than from $T=0$ K as in principle required. However, this does not represent an obstacle for our purposes because at these temperatures the system is fully magnetically ordered and the magnetic entropy is therefore vanishingly small. For $\Delta H=(7-0)$ T, we then calculate the magnetic entropy change $\Delta S_{m}$ as well as the adiabatic temperature change $\Delta T_{ad}$, [@pecharsky99JAP] both as functions of temperature.
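In practice, both routes reduce to elementary numerical integrations of gridded data. The sketch below is ours (not taken from the original analysis) and assumes hypothetical arrays `T`, `H`, `M` and `C_H` holding data of the kind shown in Figs. 3 and 5; it evaluates the Maxwell relation for $\Delta S_{m}$, the entropy integral $S(T)_{H}$, and the adiabatic temperature change as the horizontal separation of the two entropy curves.

```python
import numpy as np

# Hypothetical gridded data (placeholders):
#   T : temperatures in K, shape (nT,)
#   H : applied fields in T, shape (nH,), running from H_i to H_f
#   M : magnetization M[i, j] at (T[i], H[j]), in J T^-1 mol^-1
#   C_H : specific heat at fixed field, shape (nT,), in J mol^-1 K^-1

def delta_Sm_from_M(T, H, M):
    """Maxwell relation: Delta S_m(T) = int_{H_i}^{H_f} (dM/dT)_H dH."""
    dM_dT = np.gradient(M, T, axis=0)      # partial derivative at fixed H
    return np.trapz(dM_dT, H, axis=1)      # integrate over the field change

def entropy_from_C(T, C_H):
    """S(T)_H = int C_H/T' dT', with the lowest measured T as the lower limit."""
    integrand = C_H / T
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(T)
    return np.concatenate(([0.0], np.cumsum(steps)))

def delta_T_ad(T, S_zero_field, S_in_field):
    """Adiabatic temperature change: horizontal distance between the S(T) curves."""
    # For each T, find the temperature at which the in-field entropy equals S(T, H=0);
    # assumes S_in_field is monotonically increasing in T.
    T_at_S = np.interp(S_zero_field, S_in_field, T)
    return T_at_S - T
```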
Note that the estimation of the lattice contribution is irrelevant for our calculations, since we deal with differences between total entropies at different $H$. The results obtained for $\Delta S_{m}$ and $\Delta T_{ad}$ are displayed in Fig. 6. The largest changes are seen near $T_{C}$, where we get $-\Delta S_{m}=(2.5\pm 0.3)$ J mol$^{-1}$K$^{-1}$, or equivalently $(5.9\pm 0.7)$ J kg$^{-1}$K$^{-1}$, and $\Delta T_{ad}=(1.2\pm 0.1)$ K. It can be noticed that the obtained $\Delta S_{m}$ agrees with the previous estimate inferred from $M(T,H)$, suggesting that both independent procedures can be effectively used to characterize the NiCr compound with respect to its magnetocaloric properties. ![For NiCr. Top: $\Delta S_{m}(T)$ as obtained from $C$ (empty dots) and $M$ data (filled dots), both for $\Delta H=7$ T. Bottom: $\Delta T_{ad}(T)$ as obtained from $C$ data, for $\Delta H=7$ T.[]{data-label="fig6"}](fig6.eps){width="7cm"} As for NiCr and for the sake of completeness, we performed heat capacity experiments for Cr$_{3}$Cr$_{2}$ as well. However, one has to consider that for this system we expect ferrimagnetic order at $T_{C}\simeq 230$ K. The resulting magnetic moment per formula unit is rather small (Fig. $\ref{fig2}$), especially taking into account the large lattice contribution that takes place at these very high temperatures. It is therefore not surprising that no magnetic anomaly is detectable in the $C(T,H)$ curves. The lower panel of Fig. 5 shows indeed the measured curve, which essentially does not depend on the applied-field. Summing up the experimental results here presented, the magnetocaloric response for the molecular materials NiCr and Cr$_{3}$Cr$_{2}$ shows maximum entropy changes near the ordering temperatures, 90 and 230 K, respectively. This places the two Prussian blue analogs in a temperature range still below room temperature, which is dominated, with reference to MCE variations, by lanthanide-based materials. [@pecharsky99] As a matter of fact, a direct comparison reveals that the NiCr and Cr$_{3}$Cr$_{2}$ compounds have $\Delta S_{m}$ and $\Delta T_{ad}$ that are about an order of magnitude smaller. In spite of such a large gap, our results are encouraging since they represent a strong improvement with respect to the molecule-based materials investigated so far in terms of MCE, for which large MCE variations take place only below 10 K. [@tejada00; @tejada03; @affronte04; @evange05a; @evange05b] The relevant point is that, as for conventional materials investigated and proposed for cooling applications, [@gschneidner05] the mechanism of magnetic ordering can be efficiently exploited also in this class of materials to enhance the magnetocaloric effect. This work is partially funded by the Italian MIUR under FIRB project No. RBNE01YLKN and by the EC-Network of Excellence “MAGMANet” (No. 515767). E.M. was supported by a fellowship from the EC-Marie Curie network “QuEMolNa” (No. MRTN-CT-2003-504880). M.O was supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists. [99]{} Corresponding author.\ Electronic address: [email protected] F. Torres, J.M. Hernández, X. Bohigas, and J. Tejada, Appl. Phys. Lett. [**77**]{}, 3248 (2000). F. Torres, X. Bohigas, J.M. Hernández, and J. Tejada, J. Phys.: Condens. Matter [**15**]{}, L119 (2003). M. Affronte, A. Ghirri, S. Carretta, G. Amoretti, S. Piligkos, G.A. Timco, and R.E.P. Winpenny, Appl. Phys. Lett. [**84**]{}, 3468 (2004). M. Evangelisti, A. Candini, A. Ghirri, M. 
Affronte, E.K. Brechin, and E.J.L. McInnes, Appl. Phys. Lett. [**87**]{}, 072504 (2005). M. Evangelisti, A. Candini, A. Ghirri, M. Affronte, S. Piligkos, E.K. Brechin, and E.J.L. McInnes, Polyhedron [**24**]{}, 2573 (2005). see, i.e., K.A. Gschneidner Jr., V.K. Pecharsky and A.O. Tsokol, Rep. Prog. Phys. [**68**]{}, 1479 (2005), and references therein. X.X. Zhang, H.L. Wei, Z.Q. Zhang, and L.Y. Zhang, Phys. Rev. Lett. [**87**]{}, 157203 (2001). see, i.e., M. Verdaguer and G. Girolami, [*Magnetic Prussian blue analogs*]{} in [*Magnetism: Molecules to Materials V*]{}, Edited by J.S. Miller and M. Drillon (Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, 2004). V. Gadet, T. Mallah, I. Castro, and M. Verdaguer, J. Am. Chem. Soc. [**114**]{}, 9213 (1992). T. Mallah, S. Thiébaut, M. Verdaguer, and P. Veillet, Science [**262**]{}, 1554 (1993). For NiCr, the cell parameter is $a=10.50$ Å and the molecular weight is $m_{w}=424.4$ g. For Cr$_{3}$Cr$_{2}$, $a=10.39$ Å and $m_{w}=788.3$ g. An estimate of the demagnetizing factor ($N\approx 0.6$) gives the value of $\approx 91$ emu/mol for the susceptibility of an isotropic ferromagnetic NiCr sample at the maximum, assuming a cylindrical approximation of its shape. The observed ordering temperature of Cr$_{3}$Cr$_{2}$ slightly differs from that reported in Ref. [@mallah99]. The small discrepancy can be explained by the water content: 12(H$_{2}$O) here vs. 10(H$_{2}$O) in Ref. [@mallah99]. It is known indeed that the amount of water may (i) vary from sample to sample, (ii) affect the magnetic properties. [@verdaguer04] V.K. Pecharsky and K.A. Gschneidner Jr., J. Appl. Phys. [**86**]{}, 565 (1999). For practical reasons, the measurements at the lowest applied field were carried out for $H_{i}=10^{-3}$ T, which in our calculations was approximated to zero-applied-field. see, i.e., V.K. Pecharsky and K.A. Gschneidner Jr., J. Magn. Magn. Mater. [**200**]{}, 44 (1999), and references therein.
--- abstract: 'Adopting the Standard Halo Model (SHM) of an isotropic Maxwellian velocity distribution for dark matter (DM) particles in the Galaxy, the most stringent current constraints on their spin-dependent scattering cross-section with nucleons come from the IceCube neutrino observatory and the PICO-60 C$_3$F$_8$ superheated bubble chamber experiments. The former is sensitive to high energy neutrinos from the self-annihilation of DM particles captured in the Sun, while the latter looks for nuclear recoil events from DM scattering off nucleons. Although slower DM particles are more likely to be captured by the Sun, the faster ones are more likely to be detected by PICO. Recent N-body simulations suggest significant deviations from the SHM for the smooth halo component of the DM, while observations hint at a dominant fraction of the local DM being in substructures. We use the method of [@Ferrer:2015bta] to exploit the complementarity between the two approaches and derive conservative constraints on DM-nucleon scattering. Our results constrain $\sigma_{\mathrm{SD}} \lesssim 3 \times 10^{-39} \mathrm{cm}^2$ ($6 \times 10^{-38} \mathrm{cm}^2$) at $\gtrsim 90\%$ C.L. for a DM particle of mass 1 TeV annihilating into $\tau^+ \tau^-$ ($b\bar{b}$) with a local density of $\rho_{\mathrm{DM}} = 0.3~\mathrm{ GeV/cm}^3$. The constraints scale inversely with $\rho_{\mathrm{DM}}$ and are independent of the DM velocity distribution.' title: 'Velocity independent constraints on spin-dependent DM-nucleon interactions from IceCube and PICO.' --- Introduction ============ Based on inferences from observations of gravitational effects, it has long been believed that a significant fraction of the Universe is made up of dark matter (DM) (see [@vandenBergh:1999sa]). However, very little is known about its properties and interactions. A weakly interacting massive particle (WIMP), whose relic abundance from a state of thermal equilibrium can make up DM has been the subject of considerable theoretical attention and experimental focus (see [@Bertone:2004pz] for a comprehensive review). Various complementary approaches have been pursued to detect the WIMPs that may constitute the DM halo of our Galaxy. Terrestrial direct detection (DD) experiments search for nuclear recoil events from the elastic scattering of WIMPs with the target nuclei of their detectors. Neutrino and gamma ray telescopes search for directional excesses over astrophysical backgrounds that may indicate the pair-annihilation of WIMPs, while collider searches look for the signatures of WIMPs being created in high-energy interactions of Standard Model particles. Although the different search strategies have attained the sensitivity to probe the physically-motivated WIMP parameter space over the past few decades, they have failed to detect any signal. In the absence of a convincing detection, constraints have been derived on the interaction cross-sections of these hypothetical particles with Standard Model particles. Such an inference requires knowledge both of the density of DM $\rho_{\mathrm{DM}}$ and of its velocity distribution function (VDF) $f(\vec{v})$. In the Standard Halo Model (SHM) [@Drukier:1986tm], the DM of the halo is a collisionless gas in hydrostatic equilibrium with the stars, retaining the velocity distribution obtained during the formation of our Galaxy. An isotropic Maxwell-Boltzman velocity distribution in the Galactic rest frame is usually adopted. 
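For reference, the SHM velocity distribution just described amounts to a truncated Maxwellian. A minimal sketch of its speed distribution in the Galactic rest frame is given below; the most probable speed and the escape speed are conventional values inserted here as assumptions, since they are not specified at this point in the text.

```python
import numpy as np

# Conventional SHM parameters (assumed for illustration only):
v0    = 220.0   # km/s, most probable speed in the Galactic rest frame
v_esc = 544.0   # km/s, Galactic escape speed used to truncate the distribution

def shm_speed_pdf(v):
    """Speed distribution of an isotropic Maxwellian truncated at v_esc (Galactic frame)."""
    v = np.atleast_1d(v).astype(float)
    f = 4.0 * np.pi * v**2 * np.exp(-(v / v0) ** 2)
    f[v > v_esc] = 0.0
    # Normalize numerically so that the truncated distribution integrates to one.
    grid = np.linspace(0.0, v_esc, 2000)
    norm = np.trapz(4.0 * np.pi * grid**2 * np.exp(-(grid / v0) ** 2), grid)
    return f / norm

print(shm_speed_pdf(np.linspace(0.0, 700.0, 8)))
```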
Meanwhile, N-body simulations have hinted that a Maxwell-Boltzmann distribution does not accurately represent even the smooth component of the halo [@Kuhlen:2009vh; @Lisanti:2010qx; @Mao:2012hf]. Recent observations point to the possibility that a dominant fraction of the DM in the Solar neighbourhood [@Necib:2018iwb] may not yet have achieved dynamical equilibrium, perhaps due to the infalling tidal debris of a disrupted massive satellite galaxy of the Milky Way. New data also suggest that a substantial fraction of our stellar halo may lie in a strongly radially anisotropic population, the ‘Gaia sausage’ [@Evans:2018bqy]. If so, constraints on WIMP-nucleon interactions derived assuming the SHM (from both direct and indirect searches) may be weakened. Direct detection experiments are preferentially sensitive to nuclear recoils from high velocity DM particles, while capture in the Sun is more likely for the slower fraction of the DM population. In this work we use the method of  [@Ferrer:2015bta], which is independent of the velocity distribution of the halo model to exploit this complementarity and derive *conservative*, upper limits on the spin-dependent DM-nucleon scattering cross-section by combining the results from [@Aartsen:2016zhm] and [@Amole:2017dex]. Here the DM velocity distribution is taken to be a completely general superposition of individual ’streams’ (delta functions in velocity), similarly to the halo-independent analysis of direct detection experiments  [@Frandsen:2011gi]. Although constraints from individual searches will now be dependent on the stream velocity, by exploiting the complementarity of the IceCube and PICO searches, constraints independent of the stream velocity can be obtained. This method also improves on previous assessments of halo model uncertainties on indirect DM detection  [@Choi:2013eda], by allowing the velocity distribution to be anisotropic. The resulting constraints are a factor of 2 to 4 worse than the PICO SHM constraints at low DM masses and up to an order of magnitude worse at high DM masses, depending upon the annihilation channel, but are independent of the halo model. Detectors and data samples {#sec:data} ========================== IceCube 3 year Solar WIMP search -------------------------------- IceCube is a cubic-kilometer neutrino detector installed in the ice at the geographic South Pole between depths of 1450 and 2450 m. It relies on photomultiplier tubes housed in pressure vessels known as digital optical modules (DOM) for the optical detection of Cherenkov photons emitted by charged particles traversing the ice. The principal IceCube array is sensitive to neutrinos down to $\sim$100 GeV in energy [@Achterberg:2006md; @Abbasi:2008aa; @Aartsen:2016nxy]. The central region of the detector is an infill array known as DeepCore optimized in geometry and DOM density for the detection of neutrinos at lower energies, down to $\sim$10 GeV [@Collaboration:2011ym]. Over a detector uptime of 532 days corresponding to the austral winters between May 2011 and May 2014, two non-overlapping samples of upgoing track-like events, dominated by muons from charged current interactions of atmospheric $\nu_{\mu}$ and $\bar{\nu}_{\mu}$, were isolated [@Aartsen:2016zhm]. During austral summers, the Sun being above the horizon, is a source of downgoing neutrinos and the signal is overwhelmed by a background of muons originating in cosmic ray interactions in the upper atmosphere. 
The first sample, consisting of events that traverse the principal IceCube array, is sensitive to neutrinos in the 100 GeV – 1 TeV range in energy, while the second sample is dominated by events starting in and around the DeepCore infill array, and is sensitive down to neutrinos of $\sim$10 GeV in energy. An unbinned maximum likelihood ratio analysis of the directions and energies of the events that make up the two samples was unable to identify a statistically significant excess of neutrinos from the direction of the Sun. This enabled 90% C.L. upper limits on the DM annihilation induced neutrino flux to be computed according to the prescription of  [@Feldman:1997qc] as presented in [@Aartsen:2016zhm]. This can be interpreted as both a constraint on the annihilation rate of DM particles in the Sun, as well as on the scattering cross-section of DM with nucleons, although this has been usually done under the SHM assumption. In particle physics models where the DM couples to the spin of the nucleus and annihilates preferentially into SM particles that decay to produce a large number of high energy neutrinos (such as $\tau^+\tau^-$), the resultant constraints are the most stringent for DM mass above $\sim 80$ GeV [@Tanabashi:2018oca]. PICO ---- The PICO collaboration searches for WIMPs using superheated bubble chambers operated at temperature and pressure conditions which lead to being virtually insensitive to gamma and beta radiation [@Amole:2019scf]. Events in PICO consist of the transition from liquid to gas phase, signalled by the nucleation of a bubble in the target material. This phase change is imaged by the cameras surrounding the active area, which trigger upon detecting the formation of a pocket of gas. Additional background suppression is achieved through the measurement of the acoustic signal generated by the event, allowing alpha particles to be discriminated from nuclear recoils. Details of the apparatus are available in [@Amole:2015pla]. The data used in this study were obtained from the PICO-60 detector, consisting of a 52.2$\pm$0.5 kg C$_\mathrm{3}$ F$_\mathrm{8}$ target, operated roughly two kilometres underground at SNOLAB in Sudbury, Ontario, Canada. The results used here come from an efficiency-corrected exposure of 1167 kg-days taken between November 2016 and January 2017 [@Amole:2017dex]. The response of the detector to WIMPs is dependent on the thermodynamic conditions, and is calibrated using *in situ* nuclear and electronic recoil sources. Additionally, the Tandem Van de Graaff facility at the University of Montreal was used to determine the detector response, using well-defined resonances of the $^{51}$V(*p*,*n*)$^{51}$Cr reaction to produce mono energetic neutrons at 61 and 97 keV. The combination of these measurements is simulated using differential cross-sections for elastic scattering on fluorine to produce the detector response. DM velocity distributions and impact on constraints: The method {#sec:method} =============================================================== Following the method of [@Ferrer:2015bta], the velocity distribution of the DM (WIMP) population in the Solar system, $f(\vec{v})$ can be expressed as the superposition of streams with fixed velocity $\vec{v}_0$ with respect to the Solar frame. $$f(\vec{v}) = \int_{|\vec{v}_0| \leq v_{\mathrm{max}}} d^3 v_0 \delta^{(3)}(\vec{v} - \vec{v}_0)f(\vec{v}_0) \label{eq:streamdeco}$$ where $v_{\mathrm{max}}$ is the maximum velocity at which WIMPs can be found, typically the escape velocity of the Galaxy. 
For every stream with velocity $\vec{v}_0$ with respect to the Sun, upper limits can be derived from the null results of IceCube by requiring that the capture rate for the stream $C_{\vec{v}_0}$ be less than or equal to $ C_{\mathrm{max}}$, the upper limit on the capture rate from the results of the experiment. For a direct detection experiment, which sees the same stream with velocity $\vec{v}_0 - \vec{v}_{\mathrm{E}}(t)$ with respect to the Earth, similar constraints can be derived for each stream velocity by requiring that the event rate for the stream $R_{\vec{v}_0}$ be less than or equal to $R_{\mathrm{max}}$, the upper limit on the event rate from the results of the experiment. $C_{\vec{v}_0}$ and $R_{\vec{v}_0}$ are computed by evaluating the integrals of equations 2 and 3 of [@Ferrer:2015bta]. Since the PICO exposure period was too short for the Earth’s velocity $\vec{v}_{\mathrm{E}}(t)$ to average out to zero, velocities are conservatively shifted by 30.29 km s$^{-1}$ (the velocity of the Earth around the Sun at perihelion [@er:fa]) when computing $R_{\vec{v}_0}$. For the capture rates in the Sun, the integrals were evaluated using the density profile and nuclear abundances in the Sun for protons and nitrogen nuclei (the second most abundant species with nuclear spin) in the standard Solar model [@Bahcall:1987jc] as implemented in *sunpy* [@Sun:Py]. Nuclear form factors as implemented in *dmdd* [@Gluscevic:2015sqa] for spin-dependent scattering, corresponding to the $\Sigma'_{1M}$ (Axial transverse electric response) and $\Sigma''_{1M}$ (Axial longitudinal response), table 1 of  [@Fitzpatrick:2012ix] were employed for the event rate calculations in PICO. Figure \[fig:method2specdemo\] demonstrates the evolution of the constraints on the spin-dependent DM-proton scattering cross-section from both IceCube and PICO as $|v_0|$ is varied. The individual constraints on the cross section are computed from the constraints on the capture rate in the Sun already derived in [@Aartsen:2016zhm] as well as the constraint on the event rate within PICO presented in [@Amole:2017dex]. For a WIMP of mass $M$ scattering off a nucleus of mass $m$, the maximum stream velocity at which capture is allowed is given by [@Ferrer:2015bta]: $$v_{\mathrm{max}} = 2 v_{\mathrm{esc}} \sqrt{\frac{Mm}{|M-m|}} \label{eq:vmax}$$ where $v_{\mathrm{esc}}$ is the escape velocity. Consequently, above certain threshold values of the stream velocity, capture by scattering off protons is kinematically impossible and only nitrogen nuclei contribute to the capture rate. ![image](./Figures/40.0_5pCB.png){width="\columnwidth"} ![image](./Figures/700.0_5pCB.png){width="\columnwidth"} Subsequently, the largest value of the scattering cross-section allowed by both IceCube and PICO, $\sigma^{\mathrm{HI}}$, can be determined at the velocity of least constraint, $v_{\mathrm{LC}}$, where $\sigma^{\mathrm{PICO}}_{\mathrm{max}}(v_{\mathrm{LC}}) = \sigma^{\mathrm{IceCube}}_{\mathrm{max}}(v_{\mathrm{LC}})$. This procedure is illustrated in Figure \[fig:method2specdemo\] for two specific models, 40 GeV and 700 GeV WIMPs annihilating to $b\bar{b}$. Results and Conclusions {#sec:results} ======================= The resultant DM velocity independent constraints are illustrated in Figure \[fig:moneyplot\] and presented in Table \[tab:WIMPresults\]. 
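Operationally, the entries of Table \[tab:WIMPresults\] follow from the per-stream limits by the simple crossing construction described above. The sketch below is ours, with schematic placeholder curves standing in for the actual tabulated limits $\sigma^{\mathrm{PICO}}_{\mathrm{max}}(|\vec{v}_0|)$ and $\sigma^{\mathrm{IceCube}}_{\mathrm{max}}(|\vec{v}_0|)$; it only illustrates the logic of extracting $v_{\mathrm{LC}}$ and $\sigma^{\mathrm{HI}}$.

```python
import numpy as np

# Hypothetical per-stream 90% C.L. upper limits on a grid of stream speeds (placeholders):
v_grid = np.linspace(1.0, 760.0, 400)                 # |v_0| in km/s
sigma_pico    = 1e-5 * (300.0 / v_grid) ** 2          # schematic: tightens at high |v_0|
sigma_icecube = 1e-5 * (v_grid / 150.0) ** 3          # schematic: tightens at low |v_0|

# For a single stream, both experiments must allow sigma, so the per-stream bound is
# the tighter (smaller) of the two; the quoted velocity-independent limit is read off
# at the worst-case stream, i.e. at the crossing of the two curves.
per_stream = np.minimum(sigma_pico, sigma_icecube)
i_lc = np.argmax(per_stream)

v_lc     = v_grid[i_lc]       # velocity of least constraint
sigma_hi = per_stream[i_lc]   # velocity-independent upper limit sigma^HI

print(f"v_LC ~ {v_lc:.0f} km/s, sigma_HI ~ {sigma_hi:.2e} (arbitrary units)")
```

Because one limit tightens and the other loosens with increasing stream speed, the maximum of the per-stream minimum is attained where the two curves cross, which is exactly the condition $\sigma^{\mathrm{PICO}}_{\mathrm{max}}(v_{\mathrm{LC}}) = \sigma^{\mathrm{IceCube}}_{\mathrm{max}}(v_{\mathrm{LC}})$ stated above.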
For the “hard” channels ([$W^{+}W^{-}$]{} and [$\tau^{+}\tau^{-}$]{}), which produce a relatively large number of neutrinos at energies just below the DM mass, the DM-velocity-independent constraints are in general worse only by a factor of 2 to 4 compared to the PICO SHM constraints. However, at a DM mass of $\sim$250 GeV ($\sim$700 GeV for [$b\bar{b}$]{}), the constraints are significantly worse because the DM particle velocities just below the PICO threshold are still too high to be captured by scattering off protons in the Sun (see Figure \[fig:method2specdemo\]). At immediately higher masses, the constraints improve because the IceCube sensitivity improves with the DM mass in this range. The constraints are in agreement with the findings of [@Ibarra:2017mzt]. The IceCube constraints were recomputed with Monte-Carlo data sets under varying assumptions of all systematic uncertainties as described in [@Aartsen:2016zhm]. The dominant uncertainties were found to originate in the photodetection efficiency of the photomultiplier tubes that make up the DOMs, as well as the optical properties of the ice. Since these constraints correspond to the same annihilation rates of DM particles in the Sun reported in [@Aartsen:2016zhm], capture-annihilation equilibrium continues to be a valid assumption. The dominant uncertainties in the detector acceptance of PICO originate in the uncertainties of the neutron beam used in the calibration process. These are propagated to the final level and shown as shaded regions. Conservatively, the pessimistic efficiencies of PICO have been used to derive the constraints. While these constraints are robust with respect to any uncertainties in the velocity distribution of DM particles, they are still susceptible to uncertainties and/or fluctuations in the local density of DM; they are presented for the benchmark local density of $\rho_{\mathrm{DM}} = 0.3$ GeV cm$^{-3}$ and scale inversely with this quantity.

![image](./Figures/WSyst2HIandSMHCB.png){width="1.8\columnwidth"}

| $m_{\chi}$ (GeV) | annih. channel | $v_{\mathrm{LC}}$ (km s$^{-1}$) | PICO $\sigma_{\mathrm{SD}}^{\mathrm{SHM}}$ (pb) | IceCube $\sigma_{\mathrm{SD}}^{\mathrm{SHM}}$ (pb) | Combined $\sigma_{\mathrm{SD}}^{\mathrm{HI}}$ (pb) | Syst. unc. (%) |
|------|------|------|------|------|------|------|
| 20 | [$\tau^{+}\tau^{-}$]{} | 229.7 | 3.78$\times$10$^{-5}$ | 4.85$\times$10$^{-4}$ | 2.29$\times$10$^{-4}$ | 23.4 |
| 35 | [$b\bar{b}$]{} | 131.5 | 3.43$\times$10$^{-5}$ | 9.25$\times$10$^{-4}$ | 1.26$\times$10$^{-3}$ | 18.3 |
| 35 | [$\tau^{+}\tau^{-}$]{} | 236.8 | | 1.35$\times$10$^{-4}$ | 9.74$\times$10$^{-5}$ | 10.2 |
| 50 | [$b\bar{b}$]{} | 137.3 | 3.72$\times$10$^{-5}$ | 6.39$\times$10$^{-3}$ | 8.24$\times$10$^{-4}$ | 8.0 |
| 50 | [$\tau^{+}\tau^{-}$]{} | 222.5 | | 7.90$\times$10$^{-5}$ | 1.08$\times$10$^{-4}$ | 9.5 |
| 100 | [$b\bar{b}$]{} | 141.5 | | 3.29$\times$10$^{-4}$ | 7.23$\times$10$^{-4}$ | 9.7 |
| 100 | [$W^{+}W^{-}$]{} | 167.8 | 5.36$\times$10$^{-5}$ | 9.52$\times$10$^{-5}$ | 3.56$\times$10$^{-4}$ | 11.4 |
| 100 | [$\tau^{+}\tau^{-}$]{} | 170.3 | | 2.91$\times$10$^{-5}$ | 3.34$\times$10$^{-4}$ | 14.4 |
| 250 | [$b\bar{b}$]{} | 106.2 | | 2.80$\times$10$^{-3}$ | 5.15$\times$10$^{-3}$ | 26.7 |
| 250 | [$W^{+}W^{-}$]{} | 108.3 | 1.09$\times$10$^{-4}$ | 5.30$\times$10$^{-5}$ | 4.85$\times$10$^{-3}$ | 31.8 |
| 250 | [$\tau^{+}\tau^{-}$]{} | 108.4 | | 2.82$\times$10$^{-5}$ | 3.12$\times$10$^{-3}$ | 14.0 |
| 500 | [$b\bar{b}$]{} | 76.4 | | 3.06$\times$10$^{-3}$ | 4.99$\times$10$^{-2}$ | 54.1 |
| 500 | [$W^{+}W^{-}$]{} | 122.7 | 2.06$\times$10$^{-4}$ | 3.76$\times$10$^{-5}$ | 3.04$\times$10$^{-3}$ | 10.2 |
| 500 | [$\tau^{+}\tau^{-}$]{} | 142.5 | | 1.46$\times$10$^{-5}$ | 1.58$\times$10$^{-3}$ | 13.1 |
| 1000 | [$b\bar{b}$]{} | 72.07 | | 2.59$\times$10$^{-3}$ | 5.72$\times$10$^{-2}$ | 9.1 |
| 1000 | [$W^{+}W^{-}$]{} | 126.0 | 3.90$\times$10$^{-4}$ | 6.80$\times$10$^{-5}$ | 4.81$\times$10$^{-3}$ | 8.6 |
| 1000 | [$\tau^{+}\tau^{-}$]{} | 145.3 | | 2.07$\times$10$^{-5}$ | 2.57$\times$10$^{-3}$ | 10.8 |
| 3000 | [$b\bar{b}$]{} | 100.3 | | 6.76$\times$10$^{-3}$ | 1.61$\times$10$^{-1}$ | 19.8 |
| 3000 | [$W^{+}W^{-}$]{} | 76.09 | 1.14$\times$10$^{-3}$ | 5.42$\times$10$^{-4}$ | 1.59$\times$10$^{-1}$ | 21.4 |
| 3000 | [$\tau^{+}\tau^{-}$]{} | 49.52 | | 1.21$\times$10$^{-4}$ | 1.48$\times$10$^{-1}$ | 22.4 |
| 5000 | [$b\bar{b}$]{} | 89.23 | | 1.58$\times$10$^{-2}$ | 3.11 | 25.4 |
| 5000 | [$W^{+}W^{-}$]{} | 46.41 | 1.89$\times$10$^{-3}$ | 1.37$\times$10$^{-3}$ | 3.16 | 16.5 |
| 5000 | [$\tau^{+}\tau^{-}$]{} | 46.41 | | 3.28$\times$10$^{-4}$ | 2.66 | 19.1 |

F. Ferrer, A. Ibarra and S. Wild, JCAP [**1509**]{}, no. 09, 052 (2015) \[arXiv:1506.03386 \[hep-ph\]\]. A. Achterberg [*et al.*]{} \[IceCube Collaboration\], Astropart. Phys. [**26**]{}, 155 (2006) \[astro-ph/0604450\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Nucl. Instrum. Meth. A [**601**]{}, 294 (2009) \[arXiv:0810.4930 \[physics.ins-det\]\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], JINST [**12**]{}, no. 03, P03012 (2017) \[arXiv:1612.05093 \[astro-ph.IM\]\]. R. Abbasi [*et al.*]{} \[IceCube Collaboration\], Astropart. Phys. [**35**]{}, 615 (2012) \[arXiv:1109.6096 \[astro-ph.IM\]\]. S. van den Bergh, Publ. Astron. Soc. Pac. [**111**]{}, 657 (1999) \[astro-ph/9904251\]. G. Bertone, D. Hooper and J. Silk, Phys. Rept. [**405**]{}, 279 (2005) \[hep-ph/0404175\]. A. K. Drukier, K. Freese and D. N. Spergel, Phys. Rev. D [**33**]{}, 3495 (1986). N. W. Evans, C. A. J. O’Hare and C. McCabe, Phys. Rev. D [**99**]{}, no. 2, 023012 (2019) \[arXiv:1810.11468 \[astro-ph.GA\]\]. A. L. Fitzpatrick, W. Haxton, E. Katz, N. Lubbers and Y. Xu, JCAP [**1302**]{}, 004 (2013) doi:10.1088/1475-7516/2013/02/004 \[arXiv:1203.3542 \[hep-ph\]\]. M. Kuhlen, N. Weiner, J. Diemand, P. Madau, B. Moore, D. Potter, J. Stadel and M.
Zemp, JCAP [**1002**]{}, 030 (2010) \[arXiv:0912.2358 \[astro-ph.GA\]\]. M. Lisanti, L. E. Strigari, J. G. Wacker and R. H. Wechsler, Phys. Rev. D [**83**]{}, 023519 (2011) \[arXiv:1010.4300 \[astro-ph.CO\]\]. Y. Y. Mao, L. E. Strigari, R. H. Wechsler, H. Y. Wu and O. Hahn, Astrophys. J.  [**764**]{}, 35 (2013) \[arXiv:1210.2721 \[astro-ph.CO\]\]. L. Necib, M. Lisanti and V. Belokurov, arXiv:1807.02519 \[astro-ph.GA\]. K. Choi, C. Rott and Y. Itow, JCAP [**1405**]{}, 049 (2014) \[arXiv:1312.0273 \[astro-ph.HE\]\]. M. T. Frandsen, F. Kahlhoefer, C. McCabe, S. Sarkar and K. Schmidt-Hoberg, JCAP [**1201**]{}, 024 (2012) \[arXiv:1111.0292 \[hep-ph\]\]. M. G. Aartsen [*et al.*]{} \[IceCube Collaboration\], Eur. Phys. J. C [**77**]{}, no. 3, 146 (2017) \[arXiv:1612.05949 \[astro-ph.HE\]\]. G. J. Feldman and R. D. Cousins, Phys. Rev. D [**57**]{}, 3873 (1998) doi:10.1103/PhysRevD.57.3873 \[physics/9711021 \[physics.data-an\]\]. C. Amole [*et al.*]{} \[PICO Collaboration\], arXiv:1905.12522 \[physics.ins-det\]. C. Amole [*et al.*]{} \[PICO Collaboration\], Phys. Rev. Lett.  [**118**]{} (2017) no.25, 251301 \[arXiv:1702.07666 \[astro-ph.CO\]\]. C. Amole [*et al.*]{} \[PICO Collaboration\], Phys. Rev. D [**93**]{}, no. 5, 052014 (2016) \[arXiv:1510.07754 \[hep-ex\]\]. J. N. Bahcall and R. K. Ulrich, Rev. Mod. Phys.  [**60**]{}, 297 (1988). V. Gluscevic, M. I. Gresham, S. D. McDermott, A. H. G. Peter and K. M. Zurek, JCAP [**1512**]{}, no. 12, 057 (2015) \[arXiv:1506.04454 \[hep-ph\]\]. A. Ibarra and A. Rappelt, JCAP [**1708**]{}, no. 08, 039 (2017) \[arXiv:1703.09168 \[hep-ph\]\]. M. Tanabashi [*et al.*]{} \[Particle Data Group\], Phys. Rev. D [**98**]{}, no. 3, 030001 (2018). T. Mumford [*et al.*]{} \[SunPy Community\] Computational Science and Discovery, no 8, 1 (2015) \[arXiv:1505.02563 \[astro-ph\]\]. E. Tollerud [*et al.*]{} \[ERFA\] Computational Science and Discovery, no 8, 1 (2015) doi : 10.5281/zenodo.1021149
--- address: | Department of Physics, University of Washington\ Seattle, WA 98195-1560\ E-mail: [email protected] author: - 'Gerald A. Miller' title: 'The Electromagnetic Form Factors of the Proton and Neutron: Fundamental Indicators of Nucleon Structure' --- Introduction ============ An alternate title would be “Surprises in the Proton”. This talk owes its existence to the brilliant, precise, stunning and exciting recent experimental work on measuring $G_E/G_M$ (or $QF_2/F_1$) for the proton and $G_E$ for the neutron. My goal here is to interpret the data. Symmetries including Poincaré invariance and chiral symmetry will be the principal tool I’ll use. If, a few years ago, one had asked participants at a meeting like this about the $Q^2$ dependence of the proton’s $G_E/G_M$ or $QF_2/F_1$. Almost everyone one have answered that for large enough values of $Q^2$, $G_E/G_M$ would be flat and $QF_2/F_1$ would fall with increasing $Q^2$. The reason for the latter fall being conservation of hadron helicity. Indeed, the shapes of the curves have been obtained in the new measurements, except for the mis-labeling of the ordinate axes. The expected flatness of $G_E/G_M$ holds for $QF_2/F_1$, and the quantity $G_E/G_M$ falls rapidly and linearly with $Q^2$. This revolutionary behavior needs to be understood! Outline ======= I will begin with a brief discussion of Light Front Physics. Stan Brodsky has long been advocating this technique, I have become a convert. Then I will discuss a particular relativistic model of the nucleon, and proceed to apply it. The proton calculations will be discussed first, but recent high accuracy experiments make it necessary for us to compute observables for the neutron as well. Light Front ============ Light-front dynamics is a relativistic many-body dynamics in which fields are quantized at a “time”=$\tau=x^0+x^3\equiv x^+$. The $\tau$-development operator is then given by $P^0-P^3\equiv P^-$. These equations show the notation that a four-vector $A^\mu$ is expressed as $ A^\pm\equiv A^0\pm A^3.$ One quantizes at $x^+=0$ which is a light-front, hence the name “light front dynamics”. The canonical spatial variable must be orthogonal to the time variable, and this is given by $x^-=x^0-x^3$. The canonical momentum is then $P^+=P^0+P^3$. The other coordinates are ${\bf x}_\perp$ and ${\bf P}_\perp$. The most important consequence of this is that the relation between energy and momentum of a free particle is given by: $ p_\mu p^\mu=m^2=p^+p^--p_\perp^2\to p^-={p_\perp^2+m^2\over p^+},$ a relativistic kinetic energy which does not contain a square root operator. This allows the separation of center of mass and relative coordinates, so that the computed wave functions are frame independent. The use of the light front is particularly relevant for calculating form factors, which are probability amplitudes for an nucleon to absorb a four momentum $q$ and remain a nucleon. The initial and final nucleons have different total momenta. This means that the final nucleon is a boosted nucleon, with different wave function than the initial nucleon. In general, performing the boost is difficult for large values of $Q^2=-q^2$. However the light front technique allows one to set up the calculation so that the boosts are independent of interactions. Indeed, the wave functions are functions of relative variables and are independent of frame. Definitions ----------- Let us define the basic quantities concerning us here. 
These are the independent form factors defined by $$\left< N,\lambda ' p' \left| J^\mu \right| N,\lambda p\right> = \bar u_{\lambda '}(p') \left[ F_1(Q^2)\gamma^\mu + {\kappa F_2(Q^2) \over 2 M_N}i\sigma^{\mu\nu}(p'-p)_\nu \right] u_\lambda (p). \ee The Sachs form factors are defined by the equations: \bea G_E = F_1 - {Q^2 \over 4M_N^2}\kappa F_2,\; G_M = F_1 + \kappa F_2\label{sachsdefs}.\eea There is an alternate light front interpretation, based on field theory, in which one uses the ``good" component of the current, $J^+,$ to suppress the effects of quark-pair terms. Then, using nucleon light-cone spinors: \begin{eqnarray} F_1(Q^2) ={1 \over 2P^+} N,\langle\uparrow\left| J^+\right| N, \uparrow\rangle, Q\kappa F_2(Q^2) ={-2M_N \over 2P^+}\langle N,\uparrow\left| J^+\right| N,\downarrow\rangle. \end{eqnarray} \section{Why I am Giving This Talk} In 1996, Frank, Jennings \& I \cite{Frank:1995pv} examined the point-like-configuration idea of Frankfurt \& Strikman\cite{Frankfurt:cv}. We needed to start with a relativistic model of the free nucleon. The resulting form factors are shown in Figs.~10 and 11 of our early paper. The function $G_M$ was constrained\cite{Schlumpf:ce} by experimental data to define the parameters of the model, but we predicted a very strong decrease of $G_E/G_M$ as a function of $Q^2$. This decrease has now been measured as a real effect, but the task of explaining its meaning remained relevant. That was the purpose of our second paper\cite{Miller:2002qb} in which imposing Poincar\'{e} invariance was shown to lead to substantial violation of the helicity conservation rule as well as an analytic result that the ratio $QF_2/F_1$ is constant for the $Q^2$ range of the Jefferson Laboratory experiments. Although the second paper is new, the model is the same. Ralston {\em et al.}\cite{ralston} have been talking about non-conservation of helicity for a long time. \section { Three-Body Variables and Boost} We use light front coordinates for the momentum of each of the $i$ quarks, such that ${\bf p}_i = (p^+_i,{\bf p}_{i\perp}), p^-=(p_\perp^2+m^2)/p^+.$ The total (perp)-momentum is $\bf {P}= {\bf p}_1+{\bf p}_2+ {\bf p}_3,$ the plus components of the momenta are denoted as \be \xi={p_1^+\over p_1^++p_2^+}\;, \qquad \eta={p_1^++p_2^+\over P^+},\ee and the perpendicular relative coordinates are given by \be {\bf k}_\perp =(1-\xi){\bf p}_{1\perp}-\xi {\bf p}_{2\perp}\;, \quad {\bf K}_\perp =(1-\eta)({\bf p}_{1\perp} +{\bf p}_{2\perp})-\eta {\bf p}_{3\perp}.\ee In the center of mass frame we find: \be {\bf p}_{1\perp}={\bf k}_\perp+\xi {\bf K}_\perp,\;\;\quad {\bf p}_{2\perp}=-{\bf k}_\perp+(1-\xi){\bf K}_\perp\;,\quad {\bf p}_{3\perp}=-{\bf K}_\perp .\ee The coordinates $\xi,\eta, \bfk,\bfK$ are all relative coordinates so that one obtains a frame independent wave function $\Psi({\bf k}_\perp,{\bf K}_\perp,\xi,\eta).$ Now consider the computation of a form factor, taking quark 3 to be the one struck by the photon. One works in a special set of frames with $q^+=0$ and $Q^2=\bfq_\perp^2$, so that the value of $1-\eta$ is not changed by the photon. 
The coordinate $\bfp_{3\perp}$ is changed to $\bfp_{3\perp}+\bfq_\perp,$ so only one relative momentum, $\bfK_\perp$ is changed: \bea{\bfK'}_\perp =(1-\eta)({\bf p}_{1\perp} +{\bf p}_{2\perp})-\eta ({\bf p}_{3\perp}+{\bf q}_\perp)\; =\bfK_\perp-\eta\bfq_{\perp},\quad \bfk'_\perp=\bfk_\perp,\qquad \eea The arguments of the spatial wave function are taken as the mass-squared operator for a non-interacting system: \be M_0^2\equiv\sum_{i=1,3} p^-_i\;P^+-P_\perp^2= {K_\perp^2\over \eta(1-\eta)}+ {k_\perp^2+m^2\over \eta\xi(1-\xi)} +{m^2\over 1-\eta}. \ee This is a relativistic version of the square of a the relative three-momentum. Note that the absorption of a photon changes the value to: \be{{M_0}'}^2= {(K_\perp-\eta q_\perp)^2\over \eta(1-\eta)}+ {k_\perp^2+m^2\over \eta\xi(1-\xi)} +{m^2\over 1-\eta}.\ee \section { Wave function} Our wave function is based on symmetries. The wave function is anti-symmetric, a function of relative momenta, independent of reference frame, an eigenstate of the spin operator and rotationally invariant (in a specific well-defined sense). The use of symmetries is manifested in the construction of such wave functions, as originally described by Terent'ev \cite{bere76}, Coester\cite{chun91} and their collaborators. A schematic form of the wave functions is \be \Psi(p_i)=\Phi(M_0^2) u(p_1) u(p_2) u(p_3)\psi(p_1,p_2,p_3),\quad p_i=\bfp_i s_i,\tau_i\ee where $\psi$ is a spin-isospin color amplitude factor, the $p_i$ are expressed in terms of relative coordinates, the $u(p_i)$ are ordinary Dirac spinors and $\Phi$ is a spatial wave function. We take the the spatial wave function from Schlumpf\cite{Schlumpf:ce}: \bea \Phi(M_0)={N\over (M^2_0+\beta^2)^{\gamma}}\;, \beta =0.607\;{\rm GeV}, \; \gamma=3.5,\; m = 0.267\; {\rm GeV}. \label{params}\eea The value of $\gamma$ is chosen that $ Q^4G_M(Q^2) $ is approximately constant for $Q^2>4\; {\rm GeV}^2$ in accord with experimental data. The parameter $\beta$ helps govern the values of the perp-momenta allowed by the wave function $\Phi$ and is closely related to the rms charge radius, and $m$ is mainly determined by the magnetic moment of the proton. At this point the wave function and the calculation are completely defined. One could evaluate the form factors as $\langle \Psi\vert J^+\vert \Psi\rangle$ and obtain the previous numerical results\cite{Frank:1995pv}. \section { Simplify Calculation- Light Cone Spinors} The operator $J^+\sim \gamma^+$ acts its evaluation is simplified by using light cone spinors. These solutions of the free Dirac equation, related to ordinary Dirac spinors by a unitary transformation, conveniently satisfy: \bea \bar u_L(p^+,\bfp',\lambda')\gamma^+u_L(p^+,\bfp ,\lambda)= 2\delta_{\lambda\lambda'}p^+. \eea To take advantage of this, re-express the wave function in terms of light-front spinors using the completeness relation: $1= \sum_\lambda u_L(p,\lambda)\bar u_L(p,\lambda).$ We then find \bea &&\Psi(p_i)=u_L(p_1,\lambda_1) u_L(p_2,\lambda_2) u_L(p_3,\lambda_3) \psi_L(p_i,\lambda_i),\\ &&\psi_L(p_i,\lambda_i)\equiv [\bar u_L(\bfp_1,\lambda_1)u(\bfp_1, s_1)] [\bar u_L(\bfp_2,\lambda_2)u(\bfp_2, s_2)]\nonumber\\ \times &&[\bar u_L(\bfp_3,\lambda_3)u(\bfp_3, s_3)]\; \psi(p_1,p_2,p_3).\eea This is the very same $\Psi$ as before, it is just that now it is easy to compute the matrix elements of the $\gamma^+$ operator. The unitary transformation is also known as the Melosh rotation. 
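To make the preceding kinematics concrete, here is a small numerical sketch (ours, not from the talk) of the relative variables, the free mass-squared operator $M_0^2$, the Schlumpf wave function with the parameter values quoted in (\[params\]), and the shift ${\bf K}_\perp \to {\bf K}_\perp - \eta {\bf q}_\perp$ produced by photon absorption; the example momenta are arbitrary illustrative values.

```python
import numpy as np

# Model parameters quoted in the text (Schlumpf): beta, gamma, quark mass m (GeV).
beta, gamma, m = 0.607, 3.5, 0.267

def m0_squared(k_perp, K_perp, xi, eta, m=m):
    """Free mass-squared operator M_0^2 of the three-quark system (GeV^2)."""
    k2 = np.dot(k_perp, k_perp)
    K2 = np.dot(K_perp, K_perp)
    return (K2 / (eta * (1.0 - eta))
            + (k2 + m**2) / (eta * xi * (1.0 - xi))
            + m**2 / (1.0 - eta))

def phi(M0_sq, N=1.0):
    """Schlumpf spatial wave function Phi = N / (M_0^2 + beta^2)^gamma (N left unnormalized)."""
    return N / (M0_sq + beta**2) ** gamma

def struck_quark_shift(K_perp, q_perp, eta):
    """Photon absorption by quark 3 shifts K_perp -> K_perp - eta * q_perp."""
    return np.asarray(K_perp) - eta * np.asarray(q_perp)

# Example: one momentum configuration before and after absorbing q_perp with Q^2 = 4 GeV^2.
k_perp, K_perp = np.array([0.15, 0.0]), np.array([0.20, 0.0])
xi, eta = 0.45, 0.6
q_perp = np.array([2.0, 0.0])

M0sq  = m0_squared(k_perp, K_perp, xi, eta)
M0sqp = m0_squared(k_perp, struck_quark_shift(K_perp, q_perp, eta), xi, eta)
print(M0sq, M0sqp, phi(M0sqp) / phi(M0sq))   # M_0^2, M_0'^2, suppression of the wave function
```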
The basic point is that one may evaluate the coefficients in terms of Pauli spinors: $\vert \lambda_i\rangle,\vert s_i\rangle,$ with $\langle \lambda_i\vert { R}_M^\dagger(\bfp_i)\vert s_i\rangle \equiv \bar u_L(\bfp_i,\lambda_i)u(\bfp_i, s_i)$. It is easy to show that \be \langle \lambda_3\vert { R}_M^\dagger(\bfp_3)\vert s _3\rangle= \langle \lambda_3\vert \left[ {m+(1-\eta)M_0+i{{\mbox{\boldmath $\sigma$}}}\cdot({\bf n}\times {\bf p}_3)\over \sqrt{(m+(1-\eta)M_0)^2+p_{3\perp}^2}}\right]\vert s_3\rangle. \label{melosh}\ee $$ The important effect resides in the term $({\bf n}\times {\bf p}_3)$ which originates from the lower components of the Dirac spinors. This large relativistic spin effect can be summarized: the effects of relativity are to replace Pauli spinors by Melosh rotation operators acting on Pauli spinors. Thus \_i\_M\^(\_i) ,\_3. Proton $F_1,F_2$-Analytic Insight ================================== The analytic insight is based on Eq. (\[melosh\]). Consider high momentum transfer such that $Q=\sqrt{\bfq_\perp^2}\gg \beta=560$ MeV. [*Each*]{} of the quantities: $M_0\;,M_0'\;,\bfp_{3\perp},\;\bfp_{3\perp}$ can be of order $q_\perp,$ so the spin-flip term is as large as the non-spin flip term. In particular, ($s_3=+1/2$) may correspond to ($\lambda_3=-1/2)$, so the spin of the struck quark $\ne$ proton spin. This means that there is no hadron helicity selection rule[@ralston; @Braun:2001tj]. The effects of the lower components of Dirac spinors, which cause the spin flip term ${\mbox{\boldmath $\sigma$}}\times \bfp_3$, are the same as having a non-zero $L_z$, if the wave functions are expressed in the light-front basis. See Sect. 9. We may now qualitatively understand the numerical results, since $$\begin{aligned} F_1(Q^2) &=& \int\! {d^2\!q_\perp d\xi\over \xi(1-\xi)}{ d^2K_\perp d\eta\over\eta(1-\eta)}\; \cdots\;\langle\up\bfp_3'\vert \up(\bfp_3)\rangle \\ Q\kappa F_2(Q^2) &=& 2M_N \int\! {d^2\!q_\perp d\xi\over \xi(1-\xi)} {d^2K_\perp d\eta\over\eta(1-\eta)}\; \cdots\langle\up\bfp_3'\vert \down(\bfp_3)\rangle, \end{aligned}$$ where the $\cdots$ represents common factors. The term $F_1\sim\langle\up\bfp_3'\vert\up\bfp_3\rangle$ is a spin-non-flip term and $F_2\sim\langle\up\bfp_3'\vert\up\bfp_3\rangle$ depends on the spin-flip term. In doing the integral each of the momenta, and $M_0,M_0'$ can take the large value $Q$ for some regions of the integration. Thus in the integral \_3’\_3\~[QQ]{},\_3’\_3\~[QQ]{},so that $F_1$ and $ QF_2$ have the same $Q^2$ dependence. This is shown in Fig. 1. (10,8)(0,-8.) \[fig:1\] Relation between ordinary Dirac Spinors and $L_z$ ================================================= Our use of ordinary Dirac spinors corresponds to the use of a non-zero $L_z$ in the light front basis. We may represent Dirac spinors as Melosh rotated Pauli spinors, and this is sufficient to show $L_z\ne0$. It is worthwhile to consider the pion as an explicit example. Then our version of the light-front wave function $\chi_\pi$ would be[@cc]: \_(k\^+,\_,,’)i\_2 m-(k\_1-i\_3k\_2)’, while the Gousset-Pire-Ralston[@ralston] pion Bethe-Salpeter amplitude $\Phi$ is =P\_[0]{} \_+P\_[1]{}\[\_,\_\], where $p_\pi$ is the pion total momentum, $P_{i\pi}$ are scalar functions of relative momentum, and the term with $P_{1\pi}$ is the one which carries orbital angular momentum. The relation[@ls] between the Bethe-Salpeter amplitude and the light-front wave function $\phi_\pi$ is \_(k\^+,\_,,’)= |[u]{}\_L(k\^+,\_,)\^+\^+ v\_L(P\_\^+-k\^+,-\_,’). 
Doing the Dirac algebra and choosing suitable functions $P_{i\pi}$, leads to $ \chi_\pi=\phi_\pi.$ The Melosh transformed Pauli spinors, which account for the lower components of the ordinary Dirac spinors, contain the non-zero angular momentum of the wave function $\Phi$. Neutron Charge Form Factor ========================== The neutron has no charge, $ G_{En}(Q^2=0)=0$, and the square of its charge radius is determined from the low $Q^2$ limit as $G_{En}(Q^2)\to-Q^2R^2/6.$ The quantity $R^2$ is well-measured[@nrm] as $R^2=-0.113 \pm 0.005 \;$fm$^2$. The Galster parameterization[@galster] has been used to represent the data for $Q^2<0.7\; {\rm GeV}^2$. Our proton respects charge symmetry, the interchange of $u$ and $d$ quarks, so it contains a prediction for neutron form factors. This is shown in Fig. 2. (10,8)(0,-8) \[fig:2\] The resulting curve labeled relativistic quarks is both large and small. It is very small at low values of $Q^2$. Its slope at $Q^2=0$ is too small by a factor of five, if one compares with the straight line. But at larger values of $Q^2$ the prediction is relatively large. Our model gives $R^2_{\rm model}=-0.025$ fm$^2$, about five times smaller than the data. The small value can be understood in terms of $F_{1,2}$. (10,8)(0,-8.5) \[fig:ratio\] Taking the definition (\[sachsdefs\]) for small values of $Q^2$ gives -Q\^2R\^2/6=-Q\^2R\_1\^2/6 -\_n Q\^2/4M\^2=-Q\^2R\_1\^2/6 -Q\^2R\_F\^2/6,where the Foldy contribution, $R_F^2=6 \kappa_n/4M^2=-0.111\; {\rm fm}^2$, is in good agreement with the experimental data. That a point particle with a magnetic moment can explain the charge radius has led some to state that $G_E$ is not a measure of the structure of the neutron. However, one must include the $Q^2$ dependence of $F_1$ which gives $R_1^2$. In our model $R_1^2=+0.086\; {\rm fm}^2$ which nearly cancels the effects of $R_F^2$. Isgur[@Isgur:1998er] showed that this cancellation is a natural consequence of including the relativistic effects of the lower components of the Dirac spinors. Thus our relativistic effects are standard. We need another source of $R^2$. This is the pion cloud. Pion Cloud and the Light Front Cloudy Bag Model =============================================== The effects of chiral symmetry require that sometimes a physical nucleon can be a bare nucleon emersed in a pion cloud. An incident photon can interact electromagnetically with a bar nucleon, a pion in flight or with a nucleon while a pion is present. These effects were included in the cloudy bag model, and are especially pronounced for the neutron. Sometimes the neutron can be a proton plus a negatively charged pion. The tail of the pion distribution extends far out into space (see Figs. 10 and 11) of Ref.[@cbm], so that the square of the charge radius is negative. It is necessary to modernize the cloudy bag model, so as to make it relativistic. This involves using photon-nucleon form factors from our model, using a relativistic $\pi$-nucleon form factor, and treating the pionic contributions relativistically by doing a light front calculation. This has been done. The result is the light front cloudy bag model, and the preliminary results are shown in Fig. 3. We see that the pion cloud effects are important for small values of $Q^2$ and, when combined with those of the relativistic quarks coming from the bare nucleon, leads to a good description of the low $Q^2$ data. The total value of $G_E$ is substantial for large values of $Q^2$. 
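Before summarizing, a quick arithmetic check of the neutron charge-radius decomposition discussed above; this is a sketch using only the numbers quoted in the text, not an independent calculation.

```python
# Check of R^2 = R_1^2 + R_F^2 with the values quoted in the text (all in fm^2).
R1_sq      = +0.086   # model contribution from the Q^2 dependence of F_1
RF_sq      = -0.111   # Foldy contribution, 6*kappa_n/(4 M^2), as quoted above
R_sq_model = R1_sq + RF_sq
print(R_sq_model)     # -0.025 fm^2, the model value quoted in the text
```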
Summary
=======

Poincaré invariance is needed to describe the new exciting experimental results. Ordinary Dirac spinors carry light front orbital angular momentum. Including the effects of these spinors, in a way such that the proton is an eigenstate of spin, leads naturally to the result that $QF_2/F_1$ is constant for values of $Q^2$ between 2 and about 20 GeV$^2$. The prediction of hadron helicity conservation is that $Q^2F_2/F_1$ is constant; this is not respected by the present data, which extend to $Q^2\le 5.5$ GeV$^2$, so there is no need to expect it to hold for a variety of exclusive reactions occurring at high $Q^2$. Examples include the anomalies seen in $pp$ elastic scattering and the large spin effects seen in the reactions $\gamma d\to np$ and $\gamma p\to \pi^0 p$.

The results for the neutron $G_E$ can be concisely stated. At small values of $Q^2$ the effects of a pion cloud are needed to counteract the relativistic effects which cancel the effects of the Foldy term. At large values of $Q^2$ relativistic effects give a “large” value of $G_E$; large in the sense that this form factor is predicted to be larger than that of the Galster parameterization. At the time of this workshop, I had not yet used the light front cloudy bag model to compute proton form factors or the neutron’s $G_M$. Including the effects of the pion cloud (with a parameter to describe the pion-nucleon form factor) allows the use of different quark-model parameters. The result is an excellent description of all four nucleon electromagnetic form factors, and I plan to publish that soon.

Acknowledgments {#acknowledgments .unnumbered}
===============

This work is partially supported by the U.S. DOE. I thank R. Madey for encouraging me to compute the neutron form factors.

[99]{} M. R. Frank, B. K. Jennings and G. A. Miller, Phys. Rev. C [**54**]{}, 920 (1996). L. L. Frankfurt and M. I. Strikman, Nucl. Phys. B [**250**]{}, 143 (1985). F. Schlumpf, U. Zurich Ph. D. Thesis, hep-ph/9211255. G. A. Miller and M. R. Frank, nucl-th/0201021, to appear in Phys. Rev. C. P. Jain, B. Pire and J. P. Ralston, Phys. Rept. [**271**]{}, 67 (1996); T. Gousset, B. Pire and J. P. Ralston, Phys. Rev. D [**53**]{}, 1202 (1996). V. B. Berestetskii and M. V. Terent’ev, Sov. J. Nucl. Phys. [**25**]{}, 347 (1977). P. L. Chung and F. Coester, Phys. Rev. D [**44**]{}, 229 (1991). V. M. Braun, A. Lenz, N. Mahnke and E. Stein, Phys. Rev. D [**65**]{}, 074011 (2002). M. K. Jones [*et al.*]{}, Phys. Rev. Lett. [**84**]{}, 1398 (2000). O. Gayou [*et al.*]{}, Phys. Rev. Lett. [**88**]{}, 092301 (2002). P. L. Chung, F. Coester and W. N. Polyzou, Phys. Lett. B [**205**]{}, 545 (1988). H. H. Liu and D. E. Soper, Phys. Rev. D [**48**]{}, 1841 (1993). S. Kopecky [*et al.*]{}, Phys. Rev. Lett. [**74**]{}, 2427 (1995). S. Galster [*et al.*]{}, Nucl. Phys. B [**32**]{}, 221 (1971). T. Eden [*et al.*]{}, Phys. Rev. C [**50**]{}, 1749 (1994); M. Meyerhoff [*et al.*]{}, Phys. Lett. B [**327**]{}, 201 (1994); M. Ostrick [*et al.*]{}, Phys. Rev. Lett. [**83**]{}, 276 (1999). J. Becker [*et al.*]{}, Eur. Phys. J. A [**6**]{}, 329 (1999). I. Passchier [*et al.*]{}, Phys. Rev. Lett. [**82**]{}, 4988 (1999). D. Rohe [*et al.*]{}, Phys. Rev. Lett. [**83**]{}, 4257 (1999). H. Zhu [*et al.*]{}, Phys. Rev. Lett. [**87**]{}, 081801 (2001). Jefferson Laboratory Experiment 93-038, R. Madey, Spokesperson; R. Madey, for the JLab E93-038 collaboration, “Neutron Electric Form Factor Via Recoil Polarimetry”, contribution to Baryons 2002. N. Isgur, Phys. Rev. Lett. [**83**]{}, 272 (1999). S. Théberge, A. W. Thomas and G. A. Miller, Phys.
Rev. D [**22**]{}, 2838 (1980); [**23**]{}, 2106 (1981); A. W. Thomas, S. Théberge, and G. A. Miller, Phys. Rev. D [**24**]{}, 216 (1981); S. Théberge, G. A. Miller and A. W. Thomas, Can. J. Phys. [**60**]{}, 59 (1982). G. A. Miller, A. W. Thomas and S. Théberge, Phys. Lett. B [**91**]{}, 192 (1980).
--- abstract: | We present measurements of the branching fraction, the polarization parameters and $CP$-violating asymmetries in $\bz \to \dsp {D_{\rm SM}}$ decays using a 140 fb$^{-1}$ data sample collected at the $\Upsilon(4S)$ resonance with the Belle detector at the KEKB energy-asymmetric $e^+e^-$ collider. We obtain $\calb(\bztodspdsm) = {[{0.81}\pm{0.08}{\rm(stat)}\pm{0.11}{\rm(syst)}]\times10^{-3}},~ {R_{\perp}}= {{0.19}\pm{0.08}{\rm(stat)}\pm{0.01}{\rm(syst)}},~ {R_{0}}= {{0.57}\pm{0.08}{\rm(stat)}\pm{0.02}{\rm(syst)}},~ \sdsds = {{-0.75}\pm{ 0.56}{\rm(stat)}\pm{ 0.12}{\rm(syst)}}$ and $ \adsds = {{-0.26}\pm{ 0.26}{\rm(stat)}\pm{ 0.06}{\rm(syst)}}.$ Consistency with Standard Model expectations is also discussed. address: - 'Budker Institute of Nuclear Physics, Novosibirsk, Russia' - 'Chiba University, Chiba, Japan' - 'Chonnam National University, Kwangju, South Korea' - 'University of Cincinnati, Cincinnati, OH, USA' - 'Gyeongsang National University, Chinju, South Korea' - 'University of Hawaii, Honolulu, HI, USA' - 'High Energy Accelerator Research Organization (KEK), Tsukuba, Japan' - 'Hiroshima Institute of Technology, Hiroshima, Japan' - 'Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, PR China' - 'Institute of High Energy Physics, Vienna, Austria' - 'Institute for Theoretical and Experimental Physics, Moscow, Russia' - 'J. Stefan Institute, Ljubljana, Slovenia' - 'Kanagawa University, Yokohama, Japan' - 'Korea University, Seoul, South Korea' - 'Kyungpook National University, Taegu, South Korea' - 'Swiss Federal Institute of Technology of Lausanne, EPFL, Lausanne, Switzerland' - 'University of Ljubljana, Ljubljana, Slovenia' - 'University of Maribor, Maribor, Slovenia' - 'University of Melbourne, Victoria, Australia' - 'Nagoya University, Nagoya, Japan' - 'Nara Women’s University, Nara, Japan' - 'National Central University, Chung-li, Taiwan' - 'National United University, Miao Li, Taiwan' - 'Department of Physics, National Taiwan University, Taipei, Taiwan' - 'H. Niewodniczanski Institute of Nuclear Physics, Krakow, Poland' - 'Nihon Dental College, Niigata, Japan' - 'Niigata University, Niigata, Japan' - 'Osaka City University, Osaka, Japan' - 'Osaka University, Osaka, Japan' - 'Panjab University, Chandigarh, India' - 'Peking University, Beijing, PR China' - 'Princeton University, Princeton, NJ, USA' - 'Saga University, Saga, Japan' - 'University of Science and Technology of China, Hefei, PR China' - 'Seoul National University, Seoul, South Korea' - 'Sungkyunkwan University, Suwon, South Korea' - 'University of Sydney, Sydney, NSW, Australia' - 'Tata Institute of Fundamental Research, Bombay, India' - 'Toho University, Funabashi, Japan' - 'Tohoku Gakuin University, Tagajo, Japan' - 'Tohoku University, Sendai, Japan' - 'Department of Physics, University of Tokyo, Tokyo, Japan' - 'Tokyo Institute of Technology, Tokyo, Japan' - 'Tokyo Metropolitan University, Tokyo, Japan' - 'Tokyo University of Agriculture and Technology, Tokyo, Japan' - 'University of Tsukuba, Tsukuba, Japan' - 'Virginia Polytechnic Institute and State University, Blacksburg, VA, USA' - 'Yonsei University, Seoul, South Korea' author: - 'H. Miyake' - 'M. Hazumi' - 'K. Abe' - 'K. Abe' - 'H. Aihara' - 'Y. Asano' - 'V. Aulchenko' - 'T. Aushev' - 'T. Aziz' - 'S. Bahinipati' - 'A. M. Bakich' - 'V. Balagura' - 'Y. Ban' - 'S. Banerjee' - 'A. Bay' - 'I. Bedny' - 'U. Bitenc' - 'I. Bizjak' - 'S. Blyth' - 'A. Bondar' - 'A. Bozek' - 'M. Bračko' - 'J. Brodzicka' - 'T. E. Browder' - 'Y. Chao' - 'A. 
Chen' - 'K.-F. Chen' - 'B. G. Cheon' - 'R. Chistov' - 'S.-K. Choi' - 'Y. Choi' - 'Y. K. Choi' - 'A. Chuvikov' - 'J. Dalseno' - 'M. Danilov' - 'M. Dash' - 'L. Y. Dong' - 'J. Dragic' - 'A. Drutskoy' - 'S. Eidelman' - 'V. Eiges' - 'Y. Enari' - 'S. Fratina' - 'N. Gabyshev' - 'A. Garmash' - 'T. Gershon' - 'G. Gokhroo' - 'B. Golob' - 'J. Haba' - 'K. Hara' - 'T. Hara' - 'N. C. Hastings' - 'K. Hayasaka' - 'H. Hayashii' - 'L. Hinz' - 'T. Hokuue' - 'Y. Hoshi' - 'S. Hou' - 'W.-S. Hou' - 'T. Iijima' - 'A. Imoto' - 'K. Inami' - 'A. Ishikawa' - 'R. Itoh' - 'M. Iwasaki' - 'Y. Iwasaki' - 'J. H. Kang' - 'J. S. Kang' - 'P. Kapusta' - 'S. U. Kataoka' - 'N. Katayama' - 'H. Kawai' - 'T. Kawasaki' - 'H. R. Khan' - 'H. Kichimi' - 'H. J. Kim' - 'J. H. Kim' - 'S. K. Kim' - 'S. M. Kim' - 'K. Kinoshita' - 'P. Koppenburg' - 'S. Korpar' - 'P. Križan' - 'P. Krokovny' - 'R. Kulasiri' - 'C. C. Kuo' - 'A. Kuzmin' - 'Y.-J. Kwon' - 'G. Leder' - 'S. E. Lee' - 'T. Lesiak' - 'J. Li' - 'S.-W. Lin' - 'D. Liventsev' - 'G. Majumder' - 'F. Mandl' - 'T. Matsumoto' - 'A. Matyja' - 'W. Mitaroff' - 'K. Miyabayashi' - 'H. Miyata' - 'R. Mizuk' - 'D. Mohapatra' - 'T. Mori' - 'T. Nagamine' - 'Y. Nagasaka' - 'E. Nakano' - 'M. Nakao' - 'H. Nakazawa' - 'Z. Natkaniec' - 'S. Nishida' - 'O. Nitoh' - 'S. Ogawa' - 'T. Ohshima' - 'T. Okabe' - 'S. Okuno' - 'S. L. Olsen' - 'W. Ostrowicz' - 'H. Ozaki' - 'P. Pakhlov' - 'H. Palka' - 'H. Park' - 'N. Parslow' - 'L. S. Peak' - 'R. Pestotnik' - 'L. E. Piilonen' - 'M. Rozanska' - 'H. Sagawa' - 'Y. Sakai' - 'N. Sato' - 'T. Schietinger' - 'O. Schneider' - 'P. Schönmeier' - 'J. Schümann' - 'C. Schwanda' - 'A. J. Schwartz' - 'K. Senyo' - 'R. Seuster' - 'M. E. Sevior' - 'H. Shibuya' - 'J. B. Singh' - 'A. Somov' - 'N. Soni' - 'R. Stamen' - 'S. Stanič' - 'M. Starič' - 'T. Sumiyoshi' - 'S. Suzuki' - 'S. Y. Suzuki' - 'O. Tajima' - 'F. Takasaki' - 'K. Tamai' - 'N. Tamura' - 'M. Tanaka' - 'Y. Teramoto' - 'X. C. Tian' - 'K. Trabelsi' - 'T. Tsukamoto' - 'S. Uehara' - 'K. Ueno' - 'T. Uglov' - 'S. Uno' - 'Y. Ushiroda' - 'G. Varner' - 'K. E. Varvell' - 'S. Villa' - 'C. C. Wang' - 'C. H. Wang' - 'M.-Z. Wang' - 'M. Watanabe' - 'Y. Watanabe' - 'A. Yamaguchi' - 'H. Yamamoto' - 'T. Yamanaka' - 'Y. Yamashita' - 'M. Yamauchi' - 'J. Ying' - 'Y. Yusa' - 'J. Zhang' - 'L. M. Zhang' - 'Z. P. Zhang' - 'V. Zhilich' - 'D. Žontar' title: 'Branching Fraction, Polarization and $CP$-Violating Asymmetries in $\bz \to \dsp {D_{\rm SM}}$ Decays' --- Belle Preprint 2004-41\ KEK Preprint 2004-82 , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , and $B$ decay ,$CP$ violation ,${{\sin2\phi_1}}$ 11.30.Er ,12.15.Ff ,13.25.Hw Introduction {#sec:intro} ============ In the Standard Model (SM), $CP$ violation arises from an irreducible complex phase, the Kobayashi-Maskawa (KM) phase [@bib:KM], in the weak-interaction quark-mixing matrix. In particular, the SM predicts $CP$ asymmetries in the time-dependent rates for $\bz$ and $\bzb$ decays to a common $CP$ eigenstate $\fCP$ [@bib:sanda]. 
Recent measurements of the $CP$-violation parameter $\sin2\phi_1$ by the Belle [@bib:CP1_Belle; @bib:CP140PRD_Belle] and BaBar [@bib:CP1_BaBar] collaborations established $CP$ violation in $\bz \to J/\psi {K_{\rm S}^0}$ and related decay modes [@bib:CC], which are governed by the $b \to c\overline{c}s$ transition, at a level consistent with KM expectations. Here $\phi_1$ is one of the three interior angles of the Unitarity Triangle [@bib:CP1_Belle; @bib:CP140PRD_Belle]. Despite this success, many tests remain before it can be concluded that the KM phase is the only source of $CP$ violation. The $\bz\to \dsp{D_{\rm SM}}$ decay, which is dominated by the $\btoccd$ transition, provides an additional test of the SM. Within the SM, measurements of $CP$ violation in this mode should yield the $\sin 2\phi_1$ value to a good approximation if the contribution from the penguin diagram is neglected. The correction from the penguin diagram is expected to be small [@bib:Pham1999]. Thus, a significant deviation in the time-dependent $CP$ asymmetry in these modes from what is observed in ${b {\rightarrow}{c\bar{c}s}}$ decays would be evidence for a new $CP$-violating phase. In the decay chain $\Upsilon(4S)\to \bz\bzb \to f_{CP}f_{\rm tag}$, where one of the $B$ mesons decays at time $t_{CP}$ to a final state $f_{CP}$ and the other decays at time $t_{\rm tag}$ to a final state $f_{\rm tag}$ that distinguishes between $B^0$ and $\bzb$, the decay rate has a time dependence given by [@bib:sanda] $$\label{eq:psig} {\mathcal P}(\Delta{t}) = \frac{e^{-|\Delta{t}|/{\taubz}}}{4{\taubz}} \biggl\{1 + \fq \Bigl[ \cals\sin(\dmd\Delta{t}) + \cala\cos(\dmd\Delta{t}) \Bigr] \biggr\}.$$ Here $\cals$ and $\cala$ are $CP$-violation parameters, $\taubz$ is the $B^0$ lifetime, $\dmd$ is the mass difference between the two $B^0$ mass eigenstates, $\Delta{t}$ = $t_{CP}$ $-$ $t_{\rm tag}$, and $\fq$ = +1 ($-1$) when the tagging $B$ meson is a $B^0$ ($\bzb$). The parameter $\cals$ corresponds to the mixing-induced $CP$ violation and is related to $\sin 2\phi_1$, while $\cala$ represents direct $CP$ violation that normally arises from the interference between tree and penguin diagrams. In $\bz\to \dsp{D_{\rm SM}}$ decays the final state $D^*$ mesons may be in a state of $s$-, $p$- or $d$-wave relative orbital angular momentum. Since $s$- and $d$-waves are even under the $CP$ transformation while the $p$-wave is odd, the $CP$-violation parameters in Eq. (\[eq:psig\]) are diluted. In order to determine the dilution, one needs to measure the $CP$-odd fraction. This can be accomplished with a time-integrated angular analysis. The BaBar collaboration has measured the polarization and $CP$ asymmetries [@bib:DstarDstar_BaBar], and find the $CP$-odd contribution to be small, consistent with theoretical expectations [@bib:Pham1999]. The $CP$ asymmetries are found to differ slightly from the expectation that neglects the contribution from the penguin diagram. In this Letter we report measurements of the branching fraction, the polarization parameters and $CP$ asymmetries in $\bz \to \dsp {D_{\rm SM}}$ decays based on a 140 fb$^{-1}$ data sample, which corresponds to 152 million $B\overline{B}$ pairs. At the KEKB energy-asymmetric $e^+e^-$ (3.5 on 8.0 GeV) collider [@bib:KEKB], the $\Upsilon(4S)$ is produced with a Lorentz boost of $\beta\gamma=0.425$ antiparallel to the positron beamline ($z$). 
Since the $B^0$ and $\bzb$ mesons are approximately at rest in the $\Upsilon(4S)$ center-of-mass system (cms), $\Delta t$ can be determined from the displacement in $z$ between the $f_{CP}$ and $f_{\rm tag}$ decay vertices: $\Delta t \simeq (z_{CP} - z_{\rm tag})/(\beta\gamma c) \equiv \Delta z/(\beta\gamma c)$. The Belle detector [@bib:Belle] is a large-solid-angle spectrometer that includes a three-layer silicon vertex detector (SVD), a 50-layer central drift chamber (CDC), an array of aerogel threshold Cherenkov counters (ACC), time-of-flight (TOF) scintillation counters, and an electromagnetic calorimeter comprised of CsI(Tl) crystals (ECL) located inside a superconducting solenoid coil that provides a 1.5 T magnetic field. An iron flux-return located outside of the coil is instrumented with resistive plate chambers to detect $\kl$ mesons and to identify muons (KLM). Event Selection {#sec:evsel} =============== We reconstruct $\bztodspdsm$ decays in the following $D^*$ final states; $(\dzero\pi^+,\\ \dzerob\pi^-)$, $(\dzero\pi^+, D^-\pi^0)$ and $(D^+\pi^0, \dzerob\pi^-)$. For the $\dzero$ decays we use $\dzero \to K^-\pi^+$, $K^-\pi^+\pi^0$, $K^-\pi^+\pi^+\pi^-$, $K^+K^-$, ${K_{\rm S}^0}\pi^+\pi^-$ and ${K_{\rm S}^0}\pi^+\pi^-\pi^0$. For the $D^+$ decays we use $D^+ \to {K_{\rm S}^0}\pi^+$, ${K_{\rm S}^0}\pi^+\pi^0$, ${K_{\rm S}^0}K^+$, $K^-\pi^+\pi^+$ and $K^-K^+\pi^+$. We allow all combinations of $D$ decays except for cases where both $D$ decays include neutral kaons in the final state. Charged tracks from $D$ meson decays are required to be consistent with originating from the interaction point (IP). Charged kaons are separated from pions according to the likelihood ratio $P_{K/\pi} \equiv \call(K)/[\call(K)+\call(\pi)]$, where the likelihood function $\call$ is based on the combined information from the ACC, CDC $dE/dx$ and TOF measurements. We require $P_{K/\pi} > 0.1~(0.2)$ for kaons in 2-prong (4-prong) $D$ meson decays. The kaon identification efficiency is $96\%$, and $13\%$ of pions are misidentified as kaons. Candidate charged pions are required to satisfy $P_{K/\pi} < 0.9$, which provides a pion selection efficiency of $91\%$ with a kaon misidentification probability of $3\%$. Neutral pions are formed from two photons with invariant masses above 119 MeV/$c^2$ and below 146 MeV/$c^2$. To reduce the background from low-energy photons, we require $E_\gamma > 0.03$ GeV for each photon and $p_{\pi^0} > 0.1$ GeV/$c$, where $E_\gamma$ and $p_{\pi^0}$ are the photon energy and the $\pi^0$ momentum in the laboratory frame, respectively. Candidate ${K_{\rm S}^0}\to \pi^+\pi^-$ decays are reconstructed from oppositely charged track pairs that have invariant masses within 15 MeV/$c^2$ of the nominal ${K_{\rm S}^0}$ mass. A reconstructed ${K_{\rm S}^0}$ is required to have a displaced vertex and a flight direction consistent with that of a ${K_{\rm S}^0}$ originating from the IP. Candidate $D$ mesons are reconstructed from the selected kaons and pions, and are required to have invariant masses within 6$\sigma$ (3$\sigma$) of the $D$ meson mass for 2-prong (3- or 4-prong) decays, where $\sigma$ is the mass resolution that ranges from $5$ to $10$ MeV/$c^2$. In this selection $\sigma$ is obtained by fitting the Monte Carlo (MC) simulated $D$ meson mass. These $\dzero$ ($D^+$) candidates are then combined with $\pi^+$ ($\pi^0$) to form $\dsp$ candidates, where the IP and pion identification requirements are not used to select $\pi^+$ candidates. 
The mass difference between $\dsp$ and $\dzero$ ($D^+$) is required to be within 3.00 (2.25) MeV/$c^2$ of the nominal mass difference. We identify $B$ meson decays using the energy difference $\dE\equiv E_B^{\rm cms}-E_{\rm beam}^{\rm cms}$ and the beam-energy constrained mass $\mb\equiv\sqrt{(E_{\rm beam}^{\rm cms})^2- (p_B^{\rm cms})^2}$, where $E_{\rm beam}^{\rm cms}$ is the beam energy in the cms, and $E_B^{\rm cms}$ and $p_B^{\rm cms}$ are the cms energy and momentum, respectively, of the reconstructed $B$ candidate. The $B$ meson signal region is defined as $|\dE|<0.04$ GeV and $\mb$ within 3$\sigma$ of the $B$ meson mass, where $\sigma$ is $3.5$ MeV/$c^2$. In order to suppress background from the $e^+e^- \rightarrow u\overline{u},~d\overline{d},~s\overline{s}$, or $c\overline{c}$ continuum, we require $H_2/H_0 < 0.4$, where $H_2$ ($H_0$) is the second (zeroth) Fox-Wolfram moment [@bib:FW]. After applying this requirement, we find that the contributions to the background from ${B^+}{B^-}$, $\bz\bzb$ and continuum are approximately equal. Figure \[fig:mb\] shows the $\mb$ and $\dE$ distributions for the $\bztodspdsm$ candidates that are in the $\dE$ and $\mb$ signal regions, respectively. In the $\mb$ and $\dE$ signal regions there are 194 events. Branching Fraction {#sec:br} ================== To determine the signal yield, we perform a two-dimensional maximum likelihood fit to the $\mb$-$\dE$ distribution ($5.2$ GeV/$c^2 < \mb < 5.3$ GeV/$c^2$ and $|\dE| < 0.2$ GeV). We use a Gaussian signal distribution plus the ARGUS background function [@bib:ARGUS] for the $\mb$ distribution. The signal shape parameters are determined from MC. The background parameters are obtained simultaneously in the fit to data. The $\dE$ distribution is modeled by a double Gaussian signal function plus a linear background function. We obtain shape parameters separately for candidates with and without $\dsp \to D^+\pi^0$ decays to account for small differences between the two cases. The fit yields $130 \pm 13$ signal events, where 20% include $\dsp \to D^+\pi^0$ decays. To obtain the branching fraction $\calb(\bztodspdsm)$, we use the reconstruction efficiency and the known branching fraction for each subdecay mode. We obtain an effective efficiency of $[1.06\pm0.08] \times 10^{-3}$ from the sum of the products of MC reconstruction efficiencies and branching fractions for each of the subdecays. Small corrections are applied to the reconstruction efficiencies for charged tracks, neutral pions and ${K_{\rm S}^0}$ mesons to account for differences between data and MC. We obtain $$\calb(\bztodspdsm) = {[{0.81}\pm{0.08}{\rm(stat)}\pm{0.11}{\rm(syst)}]\times10^{-3}},$$ where the first error is statistical and the second is systematic. The result is consistent with the present world-average value [@bib:PDG2003]. The dominant sources of the systematic error are uncertainties in the tracking efficiency (11%) and in the subdecay branching fractions (7%). Other sources are uncertainties in the fit parameters and methods (1%), in the reconstruction efficiencies of $\pi^0~(2\%)$ and ${K_{\rm S}^0}~(1\%)$, particle identification $(1\%)$, polarization parameters $(2\%)$, the number of $B$ mesons $(1\%)$, and MC statistics $(1\%)$, where each value in parentheses is the total contribution. Polarization {#sec:pol} ============ The time-dependent $CP$ analysis requires knowledge of the $CP$-odd fraction. 
To obtain the $CP$-odd fraction without bias, we must take into account the efficiency difference between the two $CP$-even components. Therefore, we perform a time-integrated two-dimensional angular analysis to obtain the fraction of each polarization component. We use the transversity basis [@bib:Dunietz:1990cj] where three angles ${\theta_{1}}$, ${\theta_{\rm tr}}$ and ${\phi_{\rm tr}}$ are defined in Fig. 2. The angle ${\theta_{1}}$ is the angle between the momentum of the slow pion from the ${D_{\rm SM}}$ in the ${D_{\rm SM}}$ rest frame and the direction opposite to $B$ momentum in the ${D_{\rm SM}}$ rest frame. The angle ${\theta_{\rm tr}}$ is the polar angle between the normal to the ${D_{\rm SM}}$ decay plane and the direction of flight of the slow pion from the $\dsp$ in the $\dsp$ rest frame. The angle ${\phi_{\rm tr}}$ is the corresponding azimuthal angle, where ${\phi_{\rm tr}}=0$ is the direction antiparallel to the ${D_{\rm SM}}$ flight direction. Integrating over time and the angle ${\phi_{\rm tr}}$, the two-dimensional differential decay rate is $$\frac{1}{\Gamma}\frac{d^2\Gamma}{d\cos{\theta_{\rm tr}}d\cos{\theta_{1}}} = \frac{9}{16}\sum_{i=0,\|,\perp}{R_iH_i(\cos{\theta_{\rm tr}},\cos{\theta_{1}})}, \label{eq:root_angular}$$ where $i=0,\|,$ or $\perp$ denotes longitudinal, transverse parallel, or transverse perpendicular components, $R_i$ is its fraction that satisfies $$R_0+R_{\|}+R_{\perp}=1,$$ and $H_i$ is its angular distribution defined as $$\begin{aligned} \Hzero(\cos{\theta_{\rm tr}},\cos{\theta_{1}}) &= 2 \sin^2{\theta_{\rm tr}}\cos^2{\theta_{1}},&\nonumber \\ \Hpara(\cos{\theta_{\rm tr}},\cos{\theta_{1}}) &= \phantom1\sin^2{\theta_{\rm tr}}\sin^2{\theta_{1}},&\\ \Hperp(\cos{\theta_{\rm tr}},\cos{\theta_{1}}) &= 2 \cos^2{\theta_{\rm tr}}\sin^2{\theta_{1}}.&\nonumber\end{aligned}$$ The fraction $R_\perp$ corresponds to the $CP$-odd fraction. Eq. (\[eq:root\_angular\]) is affected by the detector efficiency, in particular due to the correlations between transversity angles and slow pion detection efficiencies. To take these effects into account, we replace $H_i(\cos{\theta_{\rm tr}},\cos{\theta_{1}})$ with distributions of reconstructed MC events $\calh_i(\cos{\theta_{\rm tr}},\cos{\theta_{1}})$, which are prepared separately for candidates with and without $\dsp \to D^+\pi^0$ decays as is done in the branching fraction measurement. We also introduce effective polarization parameters $R_i^\prime \equiv \epsilon_i R_i/(\epsilon_0 R_0 + \epsilon_\| R_\| + \epsilon_\perp R_\perp)$, where $\epsilon_i$ is a total reconstruction efficiency for each transversity amplitude. As a result, the signal probability density function (PDF) for the fit is defined as $$\calh_{\rm sig} = \sum_{i} R_i^\prime \calh_i(\cos{\theta_{\rm tr}},\cos{\theta_{1}}).$$ We determine the following likelihood value for each event: $${\mathcal L} = \fsig \calh_{\rm sig} + (1-\fsig)\calh_{\rm bg},$$ where $\fsig$ is the signal probability calculated on an event-by-event basis as a function of $\dE$ and $\mb$. The background PDF $\calh_{\rm bg}$ is determined from the sideband region ($5.20 $ GeV/$c^2 < \mb < 5.26$ GeV/$c^2$, $|\de| < 0.2$ GeV). A fit that maximizes the product of the likelihood values over all events yields $$\begin{aligned} {R_{\perp}}&=& {{0.19}\pm{0.08}{\rm(stat)}\pm{0.01}{\rm(syst)}}, \nonumber \\ {R_{0}}&=& {{0.57}\pm{0.08}{\rm(stat)}\pm{0.02}{\rm(syst)}}.\end{aligned}$$ Figure \[fig:pol\] shows the angular distributions with the results of the fit. 
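As a quick consistency check on Eq. (\[eq:root\_angular\]), one can verify numerically that the angular PDF integrates to $R_0+R_\|+R_\perp=1$ over the two transversity angles. The short sketch below (Python) does this with the fitted central values quoted above; $R_\|$ is obtained from the unitarity constraint, and the detector-efficiency corrections (the replacements $H_i\to\calh_i$ and $R_i\to R_i^\prime$) are deliberately omitted.

```python
import numpy as np

# Transversity-basis angular functions H_i(cos(theta_tr), cos(theta_1))
def H0(ct, c1):    return 2.0 * (1.0 - ct**2) * c1**2        # longitudinal
def Hpara(ct, c1): return (1.0 - ct**2) * (1.0 - c1**2)      # transverse parallel
def Hperp(ct, c1): return 2.0 * ct**2 * (1.0 - c1**2)        # transverse perpendicular (CP-odd)

R_perp, R_0 = 0.19, 0.57            # fitted fractions quoted in the text
R_para = 1.0 - R_0 - R_perp         # from R_0 + R_para + R_perp = 1

ct, c1 = np.meshgrid(np.linspace(-1, 1, 2001), np.linspace(-1, 1, 2001))
pdf = 9.0 / 16.0 * (R_0 * H0(ct, c1) + R_para * Hpara(ct, c1) + R_perp * Hperp(ct, c1))

# Trapezoidal integration over both transversity angles; the result should be ~1.
integral = np.trapz(np.trapz(pdf, ct[0], axis=1), c1[:, 0])
print(f"integral of the angular PDF = {integral:.4f}")
```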
We study the uncertainties of the following items to determine the systematic errors: background shape parameters, angular resolutions, and slow pion detection efficiencies. Also included are a possible fit bias, MC histogram bin size dependence and misreconstruction effects. These systematic errors are much smaller than the statistical errors. $CP$ Asymmetries {#sec:acp} ================ We perform an unbinned maximum likelihood fit to the three dimensional $\Dt$, $\cos{\theta_{\rm tr}}$ and $\cos{\theta_{1}}$ distributions for $\bztodspdsm$ candidates to measure the $CP$-violation parameters. The $\bz$ meson decay vertices are reconstructed using the $D$ meson trajectory and an IP constraint. We do not use slow pions from $\dsp$ decays. We require that at least one $D$ meson has two or more daughter tracks with a sufficient number of the SVD hits to precisely measure the $D$ meson trajectory. The ${f_{\rm tag}}$ vertex determination is the same as for other $CP$-violation measurements [@bib:CP140PRD_Belle]. The $b$-flavor of the accompanying $B$ meson is identified from inclusive properties of particles that are not associated with the reconstructed $\bz \to \fCP$ decay [@bib:CP1_Belle]. We use two parameters, $\fq$ and $r$, to represent the flavor tagging information. The first, $\fq$, is already defined in Eq. (\[eq:psig\]). The parameter $r$ is an event-by-event, MC-determined flavor-tagging dilution factor that ranges from $r=0$ for no flavor discrimination to $r=1$ for unambiguous flavor assignment. This assignment is used only to sort data into six $r$ intervals. The wrong tag fractions for the six $r$ intervals, $w_l~(l=1,6)$, and differences between $\bz$ and $\bzb$ decays, ${\ensuremath{{\Delta w_l}}}$, are determined from the data; we use the same values that were used for the $\sin 2\phi_1$ measurement [@bib:CP140PRD_Belle]. The signal PDF is given by $$\begin{aligned} {\cal P}_{\rm sig} &=& \frac{e^{-|\Delta t|/\taub}}{4\taub} \sum_{i=0,\|,\perp}R_i^\prime {\cal H}_i(\cos{\theta_{\rm tr}},\cos{\theta_{1}}) \nonumber \\ & &\times\biggl[1-q\Delta w +q(1-2w)( \adsds\cos\Delta m\Delta t +\xi_i\sdsds\sin\Delta m\Delta t)\biggr], \label{func:cp_pdf}\end{aligned}$$ where $CP$ parity $\xi_i$ is $+1$ for $i=0$ and $\|$, and $-1$ for $i=\perp$. We assume universal $CP$-violation parameters in Eq. (\[func:cp\_pdf\]), i.e. ${\sdsds}_{0} = {\sdsds}_{\|} = {\sdsds}_{\perp}$ and ${\adsds}_{0} = {\adsds}_{\|} = {\adsds}_{\perp}$. The distribution is convolved with the proper-time interval resolution function $R_{\rm sig}(\Dt)$ [@bib:CP140PRD_Belle], which takes into account the finite vertex resolution. We determine the following likelihood value for the $j$-th event: $$\begin{aligned} P_j &=& (1-\fol)\int \biggl[ \fsig{\mathcal P}_{\rm sig}(\Dt')R_{\rm sig}(\Dt_i-\Dt') \nonumber \\ &+&(1-\fsig){\mathcal P}_{\rm bkg}(\Dt')R_{\rm bkg}(\Dt_i-\Dt')\biggr] d(\Dt') + \fol P_{\rm ol}(\Dt_i),\end{aligned}$$ where $P_{\rm ol}(\Dt)$ is a broad Gaussian function that represents an outlier component [@bib:CP1_Belle] with a small fraction $\fol$. The $\fsig$ calculation is explained in the previous section. The PDF for background events, ${\mathcal P}_{\rm bkg}(\Dt)$, is expressed as a sum of exponential and prompt components, and is convolved with $R_{\rm bkg}$ that is a sum of two Gaussians. All parameters in ${\mathcal P}_{\rm bkg} (\Dt)$ and $R_{\rm bkg}$ are determined by a fit to the $\Dt$ distribution of a background-enhanced control sample; i.e. events outside of the $\dE$-$\mb$ signal region. 
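To see how the $CP$-odd fraction dilutes the observable asymmetry, the sketch below (Python) evaluates the angle-integrated time dependence of Eq. (\[func:cp\_pdf\]). The resolution convolution, the outlier term, and the background are omitted, the efficiency-corrected fractions $R_i^\prime$ are approximated by the fitted $R_i$, and the lifetime, mixing frequency and wrong-tag fraction used here are illustrative values rather than the ones used in the fit.

```python
import numpy as np

def p_sig(dt, q, S, A, R, w=0.1, dw=0.0, tau=1.53, dm=0.51):
    """Angle-integrated time dependence of Eq. (func:cp_pdf), summed over the
    transversity components i = 0, ||, perp with CP parity xi_i = +1, +1, -1.
    Each (9/16) H_i integrates to unity over the angles, so only the fractions
    R = (R0, Rpara, Rperp) enter.  tau in ps, dm in ps^-1 (illustrative values)."""
    xi = {"0": +1, "para": +1, "perp": -1}
    rate = 0.0
    for i, Ri in zip(("0", "para", "perp"), R):
        rate += Ri * (1.0 - q * dw
                      + q * (1.0 - 2.0 * w) * (A * np.cos(dm * dt)
                                               + xi[i] * S * np.sin(dm * dt)))
    return np.exp(-np.abs(dt) / tau) / (4.0 * tau) * rate

# Central values from the fit; R_para from the unitarity of the fractions.
S, A = -0.75, -0.26
R = (0.57, 1.0 - 0.57 - 0.19, 0.19)

dt = np.linspace(-7.5, 7.5, 301)
asym = (p_sig(dt, +1, S, A, R) - p_sig(dt, -1, S, A, R)) \
     / (p_sig(dt, +1, S, A, R) + p_sig(dt, -1, S, A, R))
# The coefficient of sin(dm*dt) in 'asym' is (1-2w)*(1-2*R[2])*S, i.e. the CP-odd
# fraction R_perp dilutes the measured mixing-induced asymmetry.
print(f"maximum raw asymmetry: {np.max(np.abs(asym)):.3f}")
```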
We fix $\tau_\bz$ and $\dmd$ to their world-average values [@bib:PDG2003]. The only free parameters in the final fit are $\cals$ and $\cala$, which are determined by maximizing the likelihood function $L = \prod_jP_j(\Dt_j,{\cos{\theta_{\rm tr}}}_j,{\cos{\theta_{1}}}_j;\cals,\cala)$, where the product is over all events. The fit yields $$\begin{aligned} \sdsds &=& {{-0.75}\pm{ 0.56}{\rm(stat)}\pm{ 0.12}{\rm(syst)}}, \nonumber \\ \adsds &=& {{-0.26}\pm{ 0.26}{\rm(stat)}\pm{ 0.06}{\rm(syst)}},\end{aligned}$$ where the first errors are statistical and the second errors are systematic. These results are consistent with the SM expectations for small penguin contributions. We define the raw asymmetry in each $\Dt$ bin by $(N_{q=+1}-N_{q=-1})/(N_{q=+1}+N_{q=-1})$, where $N_{q=+1(-1)}$ is the number of observed candidates with $q=+1(-1)$. Figure \[fig:asym\] shows the raw asymmetries in two regions of the flavor-tagging parameter $r$. While the numbers of events in the two regions are similar, the effective tagging efficiency is much larger and the background dilution is smaller in the region $0.5 < r \le 1.0$. Note that these projections onto the $\Delta t$ axis do not take into account event-by-event information (such as the signal fraction, the wrong tag fraction and the vertex resolution), which are used in the unbinned maximum-likelihood fit. The sources of the systematic errors include uncertainties in the vertex reconstruction (0.05 for $\cals$ and 0.03 for $\cala$), in the flavor tagging (0.04 for $\cals$ and 0.02 for $\cala$), in the resolution function (0.05 for $\cals$ and 0.01 for $\cala$), in the background fractions (0.04 for $\cals$ and 0.02 for $\cala$), in the tag-side interference [@bib:CP140PRD_Belle] (0.01 for $\cals$ and 0.03 for $\cala$), and in the polarization parameters (0.06 for $\cals$ and 0.01 for $\cala$). Other contributions for $\cals$ come from a possible fit bias (0.04) and from uncertainties in $\taubz$ and $\dmd$ (0.02). We add each contribution in quadrature to obtain the total systematic uncertainty. We perform various cross checks. A fit to the same sample with $\cala$ fixed at zero yields $\cals = -0.69\pm0.56$(stat). We check with an ensemble of MC pseudo-experiments that the fit has no sizable bias and the expected statistical errors are consistent with the measurement. We also select the following decay modes that have similar properties to the $\bztodspdsm$ decay: $B^0{\rightarrow}D^{*-}D_s^{*+}$, $D^{-}D_s^{*+}$, $D^{*-}D_s^{+}$, $D^{-}D_s^{+}$, and $B^+{\rightarrow}\dzerobstar D_s^{*+}$, $\dzerob D_s^{*+}$, $\dzerobstar D_s^{+}$ and $\dzerob D_s^{+}$. Fits to the control samples yield $\cals[B^0{\rightarrow}D^{(*)}D_s^{(*)}] = -0.12 \pm 0.08$, $\cala[B^0{\rightarrow}D^{(*)}D_s^{(*)}] = +0.02 \pm 0.05$, $\cals[B^+{\rightarrow}D^{(*)}D_s^{(*)}] = -0.10 \pm 0.07$, and $\cala[B^+{\rightarrow}D^{(*)}D_s^{(*)}] = -0.001 \pm 0.050$, where errors are statistical only. All results are consistent with zero. We also measure the $B$ meson lifetime using $\bztodspdsm$ candidates as well as the control samples. All results are consistent with the present world-average values. A fit to the $\Delta t$ distribution of the $\bztodspdsm$ without using polarization angle information yields $\sdsds = -0.57 \pm 0.45$, $\adsds = -0.29 \pm 0.26$; this result suggests that the $CP$-odd component is small, supporting our polarization measurement. 
Although the statistics are not sufficient to provide tight constraints, we also consider polarization-dependent values for $\cals$ and $\cala$, which may arise from possible differences in the contributions of the penguin diagrams. We assume that the $CP$ asymmetries for the $CP$-odd component are consistent with the SM expectations, and fix ${\sdsds}_{\perp}$ at the world-average value of ${{\sin2\phi_1}}$ [@bib:PDG2003] and ${\adsds}_{\perp}$ at zero. A fit with this assumption yields $\sdsds = -0.72 \pm 0.50$ and $\adsds = -0.42 \pm 0.30$ for the $CP$-even component, also consistent with the SM expectations. Conclusion {#sec:conclusion} ========== In summary, we have performed measurements of the branching fraction, the polarization parameters and the $CP$-violation parameters for $\bztodspdsm$ decays. The results are $$\begin{aligned} \calb(\bztodspdsm) &=& {[{0.81}\pm{0.08}{\rm(stat)}\pm{0.11}{\rm(syst)}]\times10^{-3}}, \nonumber \\ {R_{\perp}}&=& {{0.19}\pm{0.08}{\rm(stat)}\pm{0.01}{\rm(syst)}}, \nonumber \\ {R_{0}}&=& {{0.57}\pm{0.08}{\rm(stat)}\pm{0.02}{\rm(syst)}}, \nonumber \\ \sdsds &=& {{-0.75}\pm{ 0.56}{\rm(stat)}\pm{ 0.12}{\rm(syst)}}, \nonumber \\ \adsds &=& {{-0.26}\pm{ 0.26}{\rm(stat)}\pm{ 0.06}{\rm(syst)}}.\end{aligned}$$ The polarization parameters and $CP$-violation parameters are consistent with the SM expectations and theoretical predictions for small penguin contributions [@bib:sm_prediction]. Acknowledgments {#sec:acknowledgment .unnumbered} =============== We thank the KEKB group for the excellent operation of the accelerator, the KEK Cryogenics group for the efficient operation of the solenoid, and the KEK computer group and the NII for valuable computing and Super-SINET network support. We acknowledge support from MEXT and JSPS (Japan); ARC and DEST (Australia); NSFC (contract No. 10175071, China); DST (India); the BK21 program of MOEHRD and the CHEP SRC program of KOSEF (Korea); KBN (contract No. 2P03B 01324, Poland); MIST (Russia); MESS (Slovenia); NSC and MOE (Taiwan); and DOE (USA). [999]{} M. Kobayashi and T. Maskawa, Prog. Theor. Phys. [**49**]{}, 652 (1973). A. B. Carter and A. I. Sanda, Phys. Rev. D **23**, 1567 (1981); I. I. Bigi and A. I. Sanda, Nucl. Phys. **B193**, 85 (1981). Belle Collaboration, K. Abe *et al.*, Phys. Rev. Lett. **87**, 091802 (2001); Phys. Rev. D **66**, 032007 (2002); Phys. Rev. D [**66**]{}, 071102 (2002). Belle Collaboration, K. Abe [*et al.*]{}, hep-ex/0408111. BaBar Collaboration, B. Aubert *et al.*, Phys. Rev. Lett. **87**, 091801 (2001); Phys. Rev. D **66**, 032003 (2002); Phys. Rev. Lett.  [**89**]{}, 201802 (2002). Throughout this paper, the inclusion of the charge conjugate decay mode is implied unless otherwise stated. X. Y. Pham and Z. Z. Xing, Phys. Lett. B **458**, 375 (1999). BaBar Collaboration, B. Aubert *et al.*, Phys. Rev. Lett. **91**, 131801 (2003). S. Kurokawa and E. Kikutani, Nucl. Instrum. Methods A [**499**]{}, 1 (2003). Belle Collaboration, A. Abashian [*et al.*]{}, Nucl. Instrum. Methods A [**479**]{}, 117 (2002). G. C. Fox and S. Wolfram, Phys. Rev. Lett. [**41**]{}, 1581 (1978). ARGUS Collaboration, H. Albrecht *et al.*, Phys. Lett. B [**241**]{}, 278 (1990). Particle Data Group, K. Hagiwara [*et al.*]{}, Particle Listings in the 2003 Review of Particle Physics, http://www-pdg.lbl.gov/2003/contents\_listings.html. I. Dunietz, H. R. Quinn, A. Snyder, W. Toki and H. J. Lipkin, Phys. Rev. D [**43**]{}, 2193 (1991). 
If penguin contributions are small, the theoretical predictions within the SM are ${\mathcal A}\simeq 0$ and ${\mathcal S}\simeq -\sin 2\phi_1$, where $\sin 2\phi_1$ is measured in ${b {\rightarrow}{c\bar{c}s}}$ transitions to be $0.731\pm 0.056$ [@bib:PDG2003].
--- abstract: 'Coupled map lattices (CMLs) are often used to study emergent phenomena in nature. It is typically assumed (unrealistically) that each component is described by the same map, and it is important to relax this assumption. In this paper, we characterize periodic orbits and the laminar regime of type-I intermittency in *heterogeneous* weakly coupled map lattices (HWCMLs). We show that the period of a cycle in an HWCML is preserved for arbitrarily small coupling strengths even when an associated uncoupled oscillator would experience a period-doubling cascade. Our results characterize periodic orbits both near and far from saddle–node bifurcations, and we thereby provide a key step for examining the bifurcation structure of heterogeneous CMLs.' address: - 'Departamento de Matemática Aplicada, E.T.S.I.D.I., Universidad Politécnica de Madrid, Madrid, Spain' - 'Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, Oxford, United Kingdom' author: - 'M$^\mathrm{a}$ Dolores Sotelo Herrera' - Jesús San Martín - 'Mason A. Porter' title: 'Heterogeneous, Weakly Coupled Map Lattices' --- heterogeneous CML, intermittency, period preservation, synchronization Introduction ============ Numerous phenomena in nature — such as human waves in stadiums [@farkas03] and flocks of seagulls [@sumpter10] — result from the interaction of many individual elements, and they can exhibit fascinating emergent dynamics that cannot arise in individual or even small numbers of components [@nino]. In practice, however, a key assumption in most such studies is that each component is described by the same dynamical system. However, systems with heterogeneous elements are much more common than homogeneous systems. For example, a set of interacting cars on a highway that treats all cars as the same ignores different types of cars (e.g., their manufacturer, their age, different levels of intoxication among the drivers, etc.), and a dynamical system that governs the behavior of different cars could include different parameter values or even different functional forms entirely for different cars. Additionally, one needs to use different functional forms to address phenomena such as interactions among cars, traffic lights, and police officers. Unfortunately, because little is known about heterogeneous interacting systems [@Coca; @Pavlov], the assumption of homogeneity is an important simplification that allows scholars to apply a plethora of analytical tools. Nevertheless, it is important to depart from the usual assumption of homogeneity and examine coupled dynamical systems with heterogeneous components. The study of coupled map lattices (CMLs) [@librokaneko; @kaneko25] is one important way to study the emergent phenomena (e.g., cooperation, synchronization, and more) that can occur in interacting systems. CMLs have been used to model systems in numerous fields that range from physics and chemistry to sociology, economics, and computer science [@kaneko25; @Special1992; @Special1997; @Tang2010; @Wang2014]. In a CML, each component is a discrete dynamical system (i.e., a map). There are a wealth of both theoretical and computational studies of homogeneous CMLs [@librokaneko; @kaneko25; @Kaneko89; @He1996; @Franceschini02; @Cherati07; @Jakobsen08; @Herrera09; @Xu10], in which the interacting elements are each governed by the same map. Such investigations have yielded insights on a wide variety of phenomena. 
As we mentioned above, the assumption of homogeneity is a major simplification that often is not justifiable. Therefore, we focus on heterogeneous CMLs, in which the interacting elements are governed by different maps or by the same map with different parameter values. The temporal evolution of a heterogeneous coupled map lattice (CML) with $p$ components is given by $$\label{equ1} X_i(n+1)=f_{R_i}(X_{i}(n)) + \varepsilon {\displaystyle}\sum_{\begin{subarray}{c}h=1\\h\not=i \end{subarray}}^p f_{R_h}(X_{h}(n))\,, \qquad i\in \{1,\dots,p\}\,,$$ where $X_i(n)$ represents the state of the entity at instant $n$ at position $i$ of a lattice and $\varepsilon > 0$ weights the coupling between these entities. We consider entities in the form of oscillators, where the $i$th oscillator evolves according to the map $$\label{equb} X_i(n+1)=f_{R_i}(X_{i}(n))\,, \qquad i\in\{1,\dots,p\}\,,$$ where the $f_{R_i}$ are, in general, different functions that depend on a parameter $R_i$ (where $i \in \{1,\dots,p\}$). We assume that each $f_{R_i}$ is a $C^2$ unimodal function that depends continuously on the parameter $R_i$ with a critical point C at $R_i$. As usual, $f^m$ means that $f$ is composed with itself $m$ times. If an uncoupled oscillator $X_i(n)$ takes the value $x_{i,n}$, then the evolution of this value under the map is $x_{i,n+1} = f_{R_i}(x_{i,n})$. In this paper, we examine heterogeneous, weakly coupled map lattices (HWCMLs). Weakly coupled systems can exhibit phenomena (e.g., phase separation because of additive noise [@Angelini01]) that do not arise in strongly coupled systems, and one can even use weak coupling along with noise to fully synchronize nonidentical oscillators [@yiming]. Thus, it is important to examine HWCMLs, which are amenable to perturbative approaches. In our paper, we characterize periodic orbits both far away from and near saddle–node (SN) bifurcations. Understanding periodic orbits is interesting by itself and is also crucial for achieving an understanding of more complicated dynamics (such as chaos) [@scholarpedPO; @chaosbook]. We then characterize the laminar regime of type-I intermittency in our HWCMLs. Finally, we summarize our results and briefly comment on applications. Theoretical Results {#theory} =================== Before discussing our results, we need to define some notation. Let $x_{i,n|R_i}$ denote the points in a periodic orbit of the $i$th uncoupled oscillator with control parameter $R_i$. The parameter value $r_i$ is a bifurcation value of $R_i$ for the $i$th map, so $x_{i,n|r_i}$ denotes the points in a periodic orbit at this parameter value. Suppose that $R_i = r_i + \varepsilon^\alpha$, where $\varepsilon$ is the same as in the coupling term of the CML (\[equ1\]) and $\alpha \in (0,\infty)$ is a constant. We seek to derive results that are valid at size $O(\varepsilon)$. We need to consider the following situations: 1. In this case, when we expand to size $O(\varepsilon)$, the coupling term does not contribute at all. Therefore, the oscillators in (\[equ1\]) behave as if they were uncoupled at this order of the expansion. 2. In this case, the coupling term controls the $\varepsilon$ bifurcation terms. Thus, to size $O(\varepsilon)$, we cannot study the behavior of the bifurcation. 3. In this case, we are considering a perturbation of the same size as the coupling term, and we can simultaneously study the coupling and the bifurcation analytically. 
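Before specializing to orbits near bifurcation points, the following minimal simulation sketch (Python) shows how one can iterate the HWCML (\[equ1\]) directly. The choice of logistic maps $f_{R_i}(x)=R_ix(1-x)$ anticipates the concrete unimodal family discussed below; the lattice size, parameter values, and coupling strength are illustrative assumptions, and runs of this kind are a simple way to observe the period preservation described in the abstract.

```python
import numpy as np

def logistic(r, x):
    """Illustrative choice for the unimodal maps f_{R_i}: the logistic map."""
    return r * x * (1.0 - x)

def hwcml_step(x, R, eps):
    """One step of Eq. (equ1): X_i(n+1) = f_{R_i}(X_i(n)) + eps * sum_{h != i} f_{R_h}(X_h(n))."""
    fx = logistic(R, x)
    return fx + eps * (fx.sum() - fx)   # subtract the oscillator's own term from the global sum

# Small heterogeneous lattice: distinct parameters R_i (illustrative values chosen
# in the period-3, period-8 and period-2 windows of the uncoupled logistic map).
R = np.array([3.83, 3.55, 3.2])
eps = 1e-4                              # weak coupling
x = np.array([0.5, 0.5, 0.5])

for n in range(5000):                   # discard the transient
    x = hwcml_step(x, R, eps)

# lcm(3, 8, 2) = 24, so the coupled lattice should repeat, up to O(eps) corrections,
# every 24 steps; recording 48 steps shows two such periods.
orbit = []
for n in range(48):
    x = hwcml_step(x, R, eps)
    orbit.append(x.copy())
print(np.round(np.array(orbit), 4))
```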
To study orbits close to bifurcation points, we thus let $R_i = r_i + \varepsilon$, where $\varepsilon$ is the same as in the coupling term of the CML (\[equ1\]). In our numerical simulations (see Section \[sec:numerical\]), we will also briefly indicate the effects of considering $\alpha \neq 1$ (see Section \[alp\]). Study of the CML Far from and Close to Saddle–Node Bifurcations {#sub:PeriodicOrbits} --------------------------------------------------------------- In this section, we examine heterogeneous CMLs in which the uncoupled oscillators have periodic orbits either far from or near SN bifurcations. As periodic orbits exhibit different dynamics from each other depending on whether they are near or far from SN bifurcations [@Sotelo10; @Sotelo12], it is important to distinguish between these two situations. A period-$m$ SN orbit is a periodic orbit that is composed of $m$ “SN points” of the composite map $f_{r_i}^m$. Each of these $m$ SN points is a fixed point of $f_{r_i}^m$ at which $f_{r_i}^m$ undergoes an SN bifurcation. Period-$m$ SN orbits play an important role in a map’s bifurcation structure, because they occur at the beginning of periodic windows in bifurcation diagrams. Studying them is thus an important step towards examining the general bifurcation structure of a map. When $f_{r_i}^m$ undergoes an SN bifurcation, the map $f_{r_i}$ has two properties that we highlight. Let $\{x_{i,1|r_i},x_{i,2|r_i}\dots ,x_{i,m|r_i}\}$ be an period-$m$ SN orbit. It then follows that 1. $$\nonumber \frac{\partial f^{m}_{r_i}}{\partial x} (x_{i,j|r_i}) = 1={\displaystyle}\prod_{k=j}^{j+m-1}\frac{\partial f_{r_i}}{\partial x}(x_{i,k|r_i})$$ Consequently, orbits that are near an SN orbit satisfy $$\label{12A} 1 - \prod_{k=j}^{j+m-1} {\displaystyle}\frac{\partial f_{R_i}}{\partial x} (x_{i,k|R_i}) = o{\displaystyle}(1)\,.$$ By contrast, if $$\label{12B} 1 - \prod_{k=j}^{j+m-1} {\displaystyle}\frac{\partial f_{R_i}}{\partial x} (x_{i,k|R_i}) = O{\displaystyle}(1)\,.$$ we say that an orbit is “far from” a SN orbit. 2. [Because $f_{r_i}$ has a critical point at C, so does $f_{r_i}^m$. Suppose that $x_{i,n|r_i}$ is the point of the SN orbit that is closest to C. As ${\displaystyle}\frac{\partial f_{r_i}}{\partial x}(\mbox{C})=0$, for sufficiently large periods, we can find SN orbits with arbitrarily small $\left|{\displaystyle}\frac{\partial f_{r_i}}{\partial x}(x_{i,n|r_i})\right|$ (see Fig. \[fig:SN\]), and in particular we can find examples where $\left|{\displaystyle}\frac{\partial f_{r_i}}{\partial x}(x_{i,n|r_i})\right| < \varepsilon$. We use the term *small-derivative* SN orbits for such orbits. Additionally, a *small-derivative* SN orbit includes points that are not close to the critical point $\mathrm{C}$, so that $\frac{\partial f}{\partial x}(x_i) = O(1)$ in general, and the associated terms cannot be neglected. ]{} The overall bifurcation pattern in a typical unimodal map of the interval is topologically equivalent to the bifurcation pattern in any other typical unimodal map of the interval [@gucken], so it is sensible to focus on a particular such map. The standard choice for such a map is the logistic map. Orbits of any period occur in the logistic map, which contains infinitely many *small-derivative* SN orbits. 
In particular, such orbits include the period-$q$ SN orbits from which supercycles with symbol sequences $\mbox{CR}\mbox{L}^{q-2}$ originate.[^1] Given this fact and the broad applicability of results for the logistic map, we note that our results are relevant in numerous situations. Lemma 1 {#lemma-1 .unnumbered} ------- Let $|\varepsilon| < 1$ in the CML (\[equ1\]), and suppose that the map $f_{R_i}^{q_i}$ has an SN bifurcation at $R_i = r_i$, such that the associated SN orbit of $f_{r_i}$ is a [small-derivative]{} SN orbit. Additionally, suppose that $R_i=r_i + \varepsilon$ for $i\in\{1,\dots,s\}$, but that $R_i$ for $i\in\{s+1,\dots,p\}$ are far away from $r_i$. Consider the following initial conditions: - For $i\in\{1,\dots,s\}$, let $X_i (n) = x_{ i,n|r_i} + \varepsilon A_{i,n}+O(\varepsilon^2)$, where $x_{ i,n|r_i}$ is the point of the SN orbit closest to the critical point C of $f_{R_i}$ at $R_i=r_i$. - For $i\in\{s+1,\dots,p\}$, let $X_i (n) = x_{ i,n|R_i} + \varepsilon A_{i,n}+O(\varepsilon^2)$. The temporal evolution of the CML (\[equ1\]) is then given by 1. [For $i\in\{1,\dots,s\}$, $$\begin{aligned} \label{oneone} X_i(n+1) &= x_{i,n+1|r_i}+\varepsilon \left({\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n|r_i}) +{\displaystyle}\sum_{\substack{ h=1\\h\not=i}}^s x_{h,n+1|r_h}+\sum_{\substack{ h=s+1}}^p x_{h,n+1|R_h} \right)+O(\varepsilon^2)\,, \quad m = 1\,, \end{aligned}$$ $$\begin{aligned} \label{one} X_i(n+m) &= x_{i,n+m|r_i}+\left[ {\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+m-1|r_i})+{\displaystyle}\sum_{k=n}^{n+m-2} \frac{\partial f_{r_i}}{\partial r}(x_{i,k|r_i})\prod_{l=k+1}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})\right. \notag \\ &\qquad+{\displaystyle}\sum_{k=n+1}^{n+m-1} \left(\left( \sum_{\substack{ h=1\\h\not=i}}^s x_{h,k|r_h}+\sum_{\substack{ h=s+1}}^p x_{h,k|R_h} \right) \prod_{l=k}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i}) \right)\notag \\ &\qquad \left.+\sum_{\substack{ h=1\\h\not=i}}^s x_{h,n+m|r_h}+\sum_{\substack{ h=s+1}}^p x_{h,n+m|R_h} \right]\varepsilon+O(\varepsilon^2)\,, \quad m \in \{2,\dots,q\}.\end{aligned}$$ ]{} 2. [For $i\in\{s+1,\dots,p\}$, $$\label{twoone} X_i(n+1) =x_{i,n+1|R_i}+\varepsilon \left({\displaystyle}\frac{\partial f_{R_i}}{\partial x}(x_{i,n|R_i}) A_{i,n}+{\displaystyle}\sum_{\substack{ h=1}}^s x_{h,n+1|r_h}+\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,n+1|R_h}\right)+O(\varepsilon^2)\,, \quad m = 1\,,$$ $$\begin{aligned} \label{two} \nonumber X_i(n+m)&=x_{i,n+m|R_i}+\left[ {\displaystyle}\prod_{k=n}^{n+m-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,k|R_i})A_{i,n}+{\displaystyle}\sum_{k=n+1}^{n+m-1} \left( \left({\displaystyle}\sum_{\substack{h=1}}^s x_{h,k|r_h}\sum_{\substack{h=s+1\\h\not=i}}^p + x_{h,k|R_h}\right)\right.\right.\\ &\qquad\left. \left. {\displaystyle}\prod_{l=k}^{n+m-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,l|R_i})\right)+{\displaystyle}\sum_{\substack{h=1}}^s x_{h,n+m|r_h}+\sum_{\substack{h=s+1\\h\not=i}}^p x_{h,n+m|R_h}\right]\varepsilon+O(\varepsilon^2)\,, \quad m \in \{2,\dots,q\}.\end{aligned}$$ ]{} Proof of Lemma 1 {#proof-of-lemma-1 .unnumbered} ---------------- We proceed by induction. Substitute $X_i(n)=x_{i,n|r_i}+\varepsilon A_{i,n}+O(\varepsilon^2)$ and $X_i(n)=x_{i,n|R_i}+\varepsilon A_{i,n}+O(\varepsilon^2)$ into equation (\[equ1\]) and expand in powers of $\varepsilon$. Note that we need to consider $i\in\{1,\dots, s\}$ and $i\in \{s+1,\dots, p\}$ separately. 1. 
We initiate the iteration at the point $x_{i,n|r_i}$ of the SN orbit closest to the critical point C. Because we have a small-derivative SN orbit, $\frac{\partial f_{r_i}}{\partial x}(x_{i,n|r_i})$ is arbitrarily small, although this is not true for other points in the SN orbit. For $ i\in\{1,\dots, s\}$, we have $$\begin{aligned} \label{eq12} X_i(n+1) &= f_{r_i}(x_{i,n|r_i})+\varepsilon\frac{\partial f_{r_i}}{\partial x}(x_{i,n|r_i})A_{i,n}+\varepsilon \frac{\partial f_{r_i}}{\partial r}(x_{i,n|r_i}) \notag \\ & +\varepsilon {\displaystyle}\sum_{\substack{ h=1\\h\not=i}}^s f_{r_h}(x_{h,n|r_h}+O(\varepsilon))+ \sum_{\substack{ h=s+1}}^p f_{R_h}(x_{h,n|R_h}+O(\varepsilon))+O(\varepsilon^2) \notag \\ & = x_{i,n+1|r_i}+\varepsilon \left({\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n|r_i}) +{\displaystyle}\sum_{\substack{ h=1\\h\not=i}}^s x_{h,n+1|r_h}+\sum_{\substack{ h=s+1}}^p x_{h,n+1|R_h} \right)+O(\varepsilon^2)\,, \end{aligned}$$ In the last step, we have neglected terms that contain $\varepsilon {\displaystyle}\frac{\partial f_{r_i}}{\partial x}(x_{i,n|r_i})$ because ${\displaystyle}\frac{\partial f_{r_i}}{\partial x}(x_{i,n|r_i})$ is arbitrarily small. 2. [For $i\in \{s+1\,,\dots, p\}$, we have $$\label{eq121} X_i(n+1) =x_{i,n+1|R_i}+\varepsilon \left({\displaystyle}\frac{\partial f_{R_i}}{\partial x}(x_{i,n|R_i}) A_{i,n}+{\displaystyle}\sum_{\substack{ h=1}}^s x_{h,n+1|r_h}+\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,n+1|R_h}\right)+O(\varepsilon^2)\,.$$ ]{} When using the induction hypothesis, we need to distinguish the case $ i\in\{1,\dots, s\}$ from the case $i\in \{s+1\,,\dots, p\}$. For the CML (\[equ1\]), equations (\[eq12\],\[eq121\]) yield the following equations. 1. For $i\in \{1,\dots,s\}$, we write the induction hypothesis for $m \ge 2$ as $$\begin{aligned} \label{eq14} X_i(n+m) &= x_{i,n+m|r_i}+\left[ {\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+m-1|r_i})+{\displaystyle}\sum_{k=n}^{n+m-2} \frac{\partial f_{r_i}}{\partial r}(x_{i,k|r_i})\prod_{l=k+1}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})\right. \notag \\ &\qquad + \left.{\displaystyle}\sum_{k=n+1}^{n+m-1} \left(\left(\sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,k|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,k|R_h}\right)\prod_{l=k}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})\right)\right.\notag \\ &\qquad\left. +\sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,n+m|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,n+m|R_h}\right]\varepsilon+O(\varepsilon^2)\,,\end{aligned}$$ which implies that $$\begin{aligned} X_i(n+m+1) &= f_{R_i}\left[ x_{i,n+m|r_i}+\left( {\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+m-1|r_i})+{\displaystyle}\sum_{k=n}^{n+m-2} \frac{\partial f_{r_i}}{\partial r}(x_{i,k|r_i})\prod_{l=k+1}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i}) \right.\right. \notag \\ &\qquad + \left.\left.{\displaystyle}\sum_{k=n+1}^{n+m-1} \left(\left( \sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,k|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,k|R_h}\right)\prod_{l=k}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})\right) \right.\right. \notag \\ &\qquad + \left. 
\left.\sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,n+m|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,n+m|R_h}\right)\varepsilon\right] + \varepsilon {\displaystyle}\sum_{\begin{subarray}{c}h=1\\h\not=i \end{subarray}}^s f_{r_h}(x_{h,n+m|r_h}+O(\varepsilon))\notag \\ &\qquad+ \varepsilon {\displaystyle}\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p f_{R_h}(x_{h,n+m|R_h}+O(\varepsilon))\notag \\\end{aligned}$$ We Taylor expand all occurrences of $f$ and its derivatives to obtain $$\begin{aligned} \label{equ5bis} X_i(n+m+1) &=x_{i,n+m+1|r_i} + \left[{\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+m|r_i})+ {\displaystyle}\sum_{k=n}^{n+m-1} \frac{\partial f_{r_i}}{\partial r}(x_{i,k|r_i})\prod_{l=k+1}^{n+m} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i}) \right. \notag \\ &\qquad + \left.{\displaystyle}\sum_{k=n+1}^{n+m} \left(\left(\sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,k|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,k|R_h}\right)\prod_{l=k}^{n+m} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})\right)\right. \notag \\ &\qquad\left.+{\displaystyle}\sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,n+m+1|r_h}+{\displaystyle}\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,n+m+1|R_h}\right]\varepsilon +O(\varepsilon^2)\notag \,.\end{aligned}$$ 2. For $i\in \{s+1,\dots,p\}$, we write the induction hypothesis for $m \ge 2$ as $$\begin{aligned} \nonumber X_i&(n+m)=x_{i,n+m|R_i}\\ &+\left[ {\displaystyle}\prod_{k=n}^{n+m-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,k|R_i})A_{i,n}+{\displaystyle}\sum_{k=n+1}^{n+m-1} \left(\left({\displaystyle}\sum_{\begin{subarray}{c}h=1 \end{subarray}}^s x_{h,k|r_h}+ \sum_{\begin{subarray}{c}h=s+1\\h\not=i \end{subarray}}^p x_{h,k|R_h}\right) {\displaystyle}\prod_{l=k}^{n+m-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,l|R_i}) \right)\right. \notag \\ &\qquad\left. +{\displaystyle}\sum_{\begin{subarray}{c}h=1 \end{subarray}}^s x_{h,n+m|r_h}+\sum_{\begin{subarray}{c}h=s+1\\h\not=i \end{subarray}}^p x_{h,n+m|R_h}\right]\varepsilon+O(\varepsilon^2)\,,\end{aligned}$$ which implies that $$\begin{aligned} \label{equ51} X_i&(n+m+1) = \nonumber \\ &f_{R_i}\left( x_{i,n+m|R_i}+\left[ {\displaystyle}\prod_{k=n}^{n+m-1} {\displaystyle}\frac{\partial f_{R_i}}{\partial x} (x_{i,k|R_i})A_{i,n}+{\displaystyle}\sum_{k=n+1}^{n+m-1} \left(\left( {\displaystyle}\sum_{\begin{subarray}{c}h=1\end{subarray}}^s x_{h,k|r_h} +\sum_{\begin{subarray}{c}h=s+1\\h\not=i \end{subarray}}^p x_{h,k|R_h}\right) \right.\right. \right. 
\nonumber \\ &\qquad \left.\left.\left.{\displaystyle}\prod_{l=k}^{n+m-1} {\displaystyle}\frac{\partial f_{R_i}}{\partial x} (x_{i,l|R_i})\right)+ {\displaystyle}\sum_{\begin{subarray}{c}h=1 \end{subarray}}^s x_{h,n+m|r_h}+\sum_{\begin{subarray}{c}h=s+1\\h\not=i \end{subarray}}^p x_{h,n+m|R_h}\right]\varepsilon+O(\varepsilon^2)\right) \notag \\ &\qquad +\varepsilon {\displaystyle}\sum_{\begin{subarray}{c}h=1 \end{subarray}}^s f_{r_h}(x_{h,n+m|r_h}+O(\varepsilon))+\sum_{\begin{subarray}{c}h=s+1\\h\not=i \end{subarray}}^p f_{R_h}(x_{h,n+m|R_h}+O(\varepsilon)) \notag \\\end{aligned}$$ We Taylor expand of all occurrences of $f$ and its derivatives to obtain $$\begin{aligned} X_i(n+m+1) &= x_{i,n+m+1|R_i}\\ & +\left[ {\displaystyle}\prod_{k=n}^{n+m} {\displaystyle}\frac{\partial f_{R_i}}{\partial x} (x_{i,k|R_i})A_{i,n}+{\displaystyle}\sum_{k=n+1}^{n+m} \left( {\displaystyle}\left( \sum_{\begin{subarray}{c}h=1 \end{subarray}}^s x_{h,k|r_h}+\sum_{\begin{subarray}{c}h=s+1\\h\not=i \end{subarray}}^p x_{h,k|R_h}\right) {\displaystyle}\prod_{l=k}^{n+m} {\displaystyle}\frac{\partial f_{R_i}}{\partial x} (x_{i,l|R_i}) \right)\right.\\ &\left.+{\displaystyle}\sum_{\begin{subarray}{c}h=1 \end{subarray}}^s x_{h,n+m+1|r_h}+\sum_{\begin{subarray}{c}h=s+1\\h\not=i \end{subarray}}^p x_{h,n+m+1|R_h}\right]\varepsilon+O(\varepsilon^2)\end{aligned}$$ Theorem 1 {#theorem-1 .unnumbered} --------- Let $|\varepsilon| < 1$ in the CML (\[equ1\]), and suppose that the hypotheses of Lemma 1 are satisfied. That is, we assume that the map $f_{R_i}^{q_i}$ has an SN bifurcation at $R_i = r_i$, such that the associated SN orbit of $f_{r_i}$ is a [small-derivative]{} SN orbit, that $R_i=r_i + \varepsilon$ for $i\in\{1,\dots,s\}$, and that $R_i$ for $i\in\{s+1,\dots,p\}$ are far away from $r_i$. Let $\{x_{i,1|r_i},\,x_{i,2|r_i},\dots , x_{i, q{_i}|r_i}\}$ be a period-$q_i$ orbit for the uncoupled oscillator $X_i$ for $ i\in \{1,\dots,s\}$, and let $\{x_{i,1|R_i},\,x_{i,2|R_i},\dots x_{i, q{_i}|R_i}\}$ be a period-$q_i$ orbit for the uncoupled oscillator $X_i$ for $ i\in \{s+1,\dots,p\}$. Consider the following initial conditions: - For $i\in \{1,\dots,s\}$, let $$X_i(n) = x_{i,n|r_i}+\varepsilon A_{i,n} + O(\varepsilon^2)\,,\qquad$$ where $x_{i,n|r_i}$ is the point of the SN orbit closest to the critical point of $f_{R_i}$ at $R_i=r_i$, and $A_{i,n}$ is an arbitrary $O(1)$ value. 
- For $i\in \{s+1,\dots,p\}$, let $$X_i(n) = x_{i,n|R_i}+\varepsilon A_{i,n} + O(\varepsilon^2)\,,\qquad$$ where $x_{i,n|R_i}$ is a point of the orbit, and $$\begin{aligned} A_{i,n}&=\left[{\displaystyle}{\displaystyle}\sum_{k=n+1}^{n+q-1}\left(\left({\displaystyle}\sum_{\substack{ h=1}}^s x_{h,k|r_h}+\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,k|R_h}\right){\displaystyle}\prod_{l=k}^{n+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,l|R_i})\right)\right.\nonumber\\ &\left.+{\displaystyle}\sum_{\substack{ h=1}}^s x_{h,n+q|r_h}+{\displaystyle}\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,n+q|R_h}\right] \left(\frac{1}{1-{\displaystyle}\prod_{k=n}^{n+q-1}\frac{ \partial f_{R_i}}{\partial x}(x_{i,k|R_i})}\right)\,.\end{aligned}$$ The CML (\[equ1\]) has the solution $$\begin{aligned} X_i(n+m) = \begin{cases} x_{i,n+m|r_i}+\varepsilon A_{i,n+m}+O(\varepsilon^2)\,,\qquad i\in \{1,\dots,s\}\,, m \in \{1, \dots,q\} \\ x_{i,n+m|R_i}+\varepsilon A_{i,n+m}+O(\varepsilon^2)\,,\qquad i\in \{s+1,\dots,p\}\,, m \in \{1, \dots,q\}\,, \end{cases}\end{aligned}$$ where the coefficients $A_{i,n+m}$ are periodic with period $q=\mbox{lcm}(q_1,q_2,\dots,q_p)$ and satisfy the following formulas: 1. [For $ i\in \{1,\dots,s\}$, $$\begin{aligned} \label{eqN5one} A_{i,n+1} &= \frac{\partial f_{r_i}}{\partial r}(x_{i,n|r_i}) +{\displaystyle}\sum_{\substack{ h=1\\h\not=i}}^s x_{h,n+1|r_h}+\sum_{\substack{ h=s+1}}^p x_{h,n+1|R_h}\,, \quad m = 1 \\\end{aligned}$$ $$\begin{aligned} \label{eqN5} A_{i,n+m} &= {\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+m-1|r_i})+{\displaystyle}\sum_{k=n}^{n+m-2} \frac{\partial f_{r_i}}{\partial r}(x_{i,k|r_i})\prod_{l=k+1}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i}) \notag \\ &\qquad +{\displaystyle}\sum_{k=n+1}^{n+m-1} \left({\displaystyle}\left(\sum_{\begin{subarray}{c}h=1\\h\not=i \end{subarray}}^s x_{h,k|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,k|R_h}\right)\prod_{l=k}^{n+m-1} \frac{\partial f_{r_i} }{\partial x}(x_{i,l|r_i})\right) \notag \\ &\qquad +\sum_{\begin{subarray}{c}h=1\\h\not=i \end{subarray}}^s x_{h,n+m|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,n+m|R_h}\,, \qquad m \in \{2, \dots,q\}\,.\end{aligned}$$ ]{} 2. [For $i\in\{s+1,\dots,p\}$, $$\begin{aligned} \label{eqN6} A_{i,n+m}&=\left[{\displaystyle}{\displaystyle}\sum_{k=n+m+1}^{n+m+q-1}\left(\left({\displaystyle}\sum_{\substack{ h=1}}^s x_{h,k|r_h}+\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,k|R_h}\right){\displaystyle}\prod_{l=k}^{n+m+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,l|R_i})\right)\right.\nonumber\\ &\left.+{\displaystyle}\sum_{\substack{ h=1}}^s x_{h,n+m+q|r_h}+{\displaystyle}\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,n+m+q|R_h}\right] \left(\frac{1}{1-{\displaystyle}\prod_{k=n+m}^{n+m+q-1}\frac{ \partial f_{R_i}}{\partial x}(x_{i,k|R_i})}\right)\,,\nonumber\\ & \qquad m \in \{1, \dots,q\}\,.\end{aligned}$$ ]{} #### Remark {#remark .unnumbered} Although the initial conditions given in the statement of Theorem 1 may seem restrictive, our numerical computations demonstrate that — independently of the type of the orbit (i.e., either close to or far away from the SN) — it is sufficient to take as an initial condition any point of the unperturbed orbit plus a perturbation of size $O(\varepsilon)$. Proof of Theorem 1. {#proof-of-theorem-1. .unnumbered} ------------------- We need to consider $i \in \{ 1, \dots, s\}$ and $i \in \{ s+1, \dots, p\}$ separately. 1. 
Using Lemma 1, it follows from $X_i(n)=x_{i,n|r_i}+\varepsilon A_{i,n}+O(\varepsilon^2)$ that $$\begin{aligned} \label{equM1} X_i(n+1)=x_{i,n+1|r_i}+\varepsilon \left({\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n|r_i}) +{\displaystyle}\sum_{\substack{ h=1\\h\not=i}}^s x_{h,n+1|r_h} +\sum_{\substack{ h=s+1}}^p x_{h,n+1|R_h}\right)+O(\varepsilon^2) \end{aligned}$$ and $$\begin{aligned} \label{equN1} X_i(n+q+1) &=x_{i,n+q+1|r_i} + \left[{\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+q|r_i})+ {\displaystyle}\sum_{k=n}^{n+q-1} \frac{\partial f_{r_i}}{\partial r}(x_{i,k|r_i})\prod_{l=k+1}^{n+q} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i}) \right. \notag \\ &\qquad + \left.{\displaystyle}\sum_{k=n+1}^{n+q} \left(\left(\sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,k|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,k|R_h}\right)\prod_{l=k}^{n+q} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})\right)\right. \notag \\ &\qquad+\left.{\displaystyle}\sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,n+q+1|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,n+q+1|R_h}\right]\varepsilon +O(\varepsilon^2)\,.\end{aligned}$$ Because ${\displaystyle}\prod_{l=k}^{n+q} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})$ includes the arbitrarily small term $\left|{\displaystyle}\frac{\partial f_{r_i}(x_{i,n|r_i})}{\partial x}\right|$, it follows from (\[equN1\]) that $$X_i(n+q+1)=x_{i,n+q+1|r_i}+\left({\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+q|r_i})+{\displaystyle}\sum_{\begin{subarray}{c}h=1\\h\not=i\end{subarray}}^s x_{h,n+q+1|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,n+q+1|R_h}\right)\varepsilon+O(\varepsilon^2)\,.$$ With $q=\mbox{lcm}(q_1,q_2,\dots,q_p)$, we have $$\begin{aligned} x_{i,n+q+1|r_i} &= x_{i,n+1|r_i} \,, \\ x_{i,n+q+1|R_i} &= x_{i,n+1|R_i}\,, \\ {\displaystyle}\sum_{\begin{subarray}{c}h = 1\\h\not=i\end{subarray}}^s x_{h,n+q+1|r_h} +\sum_{\begin{subarray}{c}h=s+1\\h\not=i\end{subarray}}^p x_{h,n+q+1|R_h} &= {\displaystyle}\sum_{\substack{ h=1\\h\not=i}}^s x_{h,n+1|r_h}+\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,n+1|R_h}\,,\end{aligned}$$ because $x_{i,j}$ is a point of a $q_i$-period orbit. Consequently, equations (\[equM1\]) and (\[equN1\]) become the same equation. From equations (\[eq12\]) and (\[equM1\]), we can write equation (\[one\]) in Lemma 1 as $X_i(n+m)=x_{i,n+m|r_i}+\varepsilon A_{i,n+m}+O(\varepsilon^2)$ to obtain $$\begin{aligned} A_{i,n+m} &= {\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+m-1|r_i})+{\displaystyle}\sum_{k=n}^{n+m-2} \frac{\partial f_{r_i}}{\partial r}(x_{i,k|r_i})\prod_{l=k+1}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i}) \notag \\ &\qquad +{\displaystyle}\sum_{k=n+1}^{n+m-1} \left({\displaystyle}\left(\sum_{\begin{subarray}{c}h=1\\h\not=i \end{subarray}}^s x_{h,k|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,k|R_h}\right)\prod_{l=k}^{n+m-1} \frac{\partial f_{r_i} }{\partial x}(x_{i,l|r_i})\right) \notag \\ &+\sum_{\begin{subarray}{c}h=1\\h\not=i \end{subarray}}^s x_{h,n+m|r_h}+\sum_{\begin{subarray}{c}h=s+1\end{subarray}}^p x_{h,n+m|R_h}\,,\qquad m \in \{1, \dots,q\}\,.\end{aligned}$$ 2. With $X_i(n+m)=x_{i,n+m|R_i}+\varepsilon A_{i,n+m}+O(\varepsilon^2)$, Lemma 1 implies that $$\begin{aligned} X_i(n+m+q) &=x_{i,n+m+q|R_i}+\left[ {\displaystyle}\prod_{k=n+m}^{n+m+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,k|R_i})A_{i,n+m}\right. 
\nonumber \\ &\quad +\left.{\displaystyle}\sum_{k=n+m+1}^{n+m+q-1} \left(\left( {\displaystyle}\sum_{\substack{h=1}}^s x_{h,k|r_h}+ \sum_{\substack{h=s+1\\h\not=i}}^p x_{h,k|R_h}\right){\displaystyle}\prod_{l=k}^{n+m+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,l|R_i})\right) \right. \nonumber \\ &\left.+{\displaystyle}\sum_{\substack{h=1}}^s x_{h,n+m+q|r_h}+\sum_{\substack{h=s+1\\h\not=i}}^p x_{h,n+m+q|R_h}\right]\varepsilon+O(\varepsilon^2)\,. \nonumber \end{aligned}$$ By taking $q=\mbox{lcm}(q_1,q_2,\dots,q_p)$, we obtain $x_{i,n+m|r_i}=x_{i,n+m+q|r_i}$ and $x_{i,n+m|R_i}=x_{i,n+m+q|R_i}$ because $x_{i,j}$ is a point of a periodic orbit. Consequently, $X_i(n+m)-X_i(n+m+q)=O(\varepsilon)$ whenever $$\begin{aligned} \label{eqN4} A_{i,n+m}&= {\displaystyle}\prod_{k=n+m}^{n+m+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,k|R_i})A_{i,n+m}+{\displaystyle}\sum_{k=n+m+1}^{n+m+q-1} \left( {\displaystyle}\left(\sum_{\substack{ h=1}}^s x_{h,k|r_h} +\sum_{\substack{h=s+1\\h\not=i}}^p x_{h,k|R_h}\right) \right.\nonumber\\ &\left.{\displaystyle}\prod_{l=k}^{n+m+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,l|R_i})\right)+{\displaystyle}\sum_{\substack{ h=1}}^s x_{h,n+m+q|r_h}+\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,n+m+q|R_h}\,. \end{aligned}$$ Furthermore, $A_{i,n+m}$ is periodic. Equation (\[eqN4\]) now implies that $$\begin{aligned} A_{i,n+m}&=\left[{\displaystyle}{\displaystyle}\sum_{k=n+m+1}^{n+m+q-1}\left(\left({\displaystyle}\sum_{\substack{ h=1}}^s x_{h,k|r_h}+\sum_{\substack{h=s+1\\h\not=i}}^p x_{h,k|R_h}\right) {\displaystyle}\prod_{l=k}^{n+m+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,l|R_i})\right)\right.\nonumber\\ &\left.+{\displaystyle}\sum_{\substack{ h=1}}^s x_{h,n+m+q|r_h}+{\displaystyle}\sum_{\substack{ h=s+1\\h\not=i}}^p x_{h,n+m+q|R_h}\right] \left(\frac{1}{1-{\displaystyle}\prod_{k=n+m}^{n+m+q-1}\frac{ \partial f_{R_i}}{\partial x}(x_{i,k|R_i})}\right)\,,\nonumber\\ & \qquad m \in \{1, \dots,q\}\,.\end{aligned}$$ It follows that $A_{i,n+m}$ has period $q$ because it is given by sums and products of $q$-periodic functions evaluated at points of a $q$-periodic orbit. Observe that the formula for $A_{i,j}$ for $i\in \{1,\dots,s\}$ in equation (\[eqN5\]) does not contain the term $\left[1-{\displaystyle}\prod_{k=j}^{j+q-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,k|r_i})\right]$ in the denominator \[see equation (\[12A\])\]. Otherwise, $A_{i,j}$ would be of size $O({1}/{\varepsilon})$, and the expansion that we used to prove Theorem 1 would not be valid. By contrast, the formula for $A_{i,j}$ for $i \in \{s+1,\dots,p\}$ in equation (\[eqN6\]) includes the term $\left[1-{\displaystyle}\prod_{k=j}^{j+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,k|R_i})\right]$ in the denominator because the oscillators are far from SN bifurcations for $i\in\{s+1,\dots,p\}$. Therefore, $$1-{\displaystyle}\prod_{k=j}^{j+q-1} \frac{\partial f_{R_i}}{\partial x}(x_{i,k|R_i}) = O(1)\,,$$ and it follows that $A_{i,j}$ also has size $O(1)$. Type-I Intermittency Near Saddle–Node Bifurcations {#sub:TypeI} -------------------------------------------------- Theorem 1 concerns the behavior of the CML (\[equ1\]) with a mixture of periodic oscillators that are near the SN bifurcation with others that are far from the SN bifurcation. If an SN orbit takes place at $R_i=r_i$, then the oscillators with $R_i=r_i+\varepsilon$ are the ones that are close to the SN orbit. 
We now want to study the behavior of the CML (\[equ1\]) when an uncoupled oscillator has type-I intermittency [@pomeau] at $R_i=r_i-\varepsilon$ (i.e., just to the left of where it undergoes an SN bifurcation). Type-I intermittency is characterized by the alternation of an apparently periodic regime (a so-called “laminar phase”), whose mean duration follows the power law $\langle l \rangle \propto \varepsilon^{-\frac{1}{2}}$ (so the laminar region becomes longer as $\varepsilon$ becomes smaller), and chaotic bursts. As $R_i=r_i-\varepsilon$, we expand $f_{r_i-\varepsilon}$ in powers of $\varepsilon$ to obtain $$f_{r_i-\varepsilon}^{q_i}(x_{j|r_i})=f_{r_i}^{q_i}(x_{j|r_i})-\varepsilon \frac{\partial f_{r_i}^{q_i}}{\partial r}(x_{j|r_i}) + O(\varepsilon^2)\,,$$ where $x_j$ is a point of a period-$q_i$ SN orbit. Therefore, the laminar phase is driven by the period-$q_i$ SN orbit associated with the SN bifurcation. Thus, as $\varepsilon$ becomes smaller, the orbit spends more iterations in the laminar regime, and it more closely resembles the period-$q_i$ SN orbit. In particular, $\left|x_{j|r_i}-f_{r_i-\varepsilon}^{q_i}(x_{j|r_i})\right| = O(\varepsilon)\,$. To approximate the temporal evolution of the laminar regime using the period-$q_i$ SN orbit, we proceed in the same way as in Theorem 1, except that we replace $R_i=r_i+\varepsilon$ by $R_i=r_i-\varepsilon$. We thus write $$\begin{aligned} X_i(n+1) &= x_{i,n+1|r_i} + \left[ -{\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n|r_i}) + \sum_{\substack{ h=1\\h\not=i}}^s x_{h,n+1|r_h}+\sum_{\substack{ h=s+1}}^p x_{h,n+1|R_h}\right]\varepsilon+O(\varepsilon^2)\,, \quad m = 1\,,\end{aligned}$$ $$\begin{aligned} X_i(n+m) &= x_{i,n+m|r_i} + \left[ -{\displaystyle}\frac{\partial f_{r_i}}{\partial r}(x_{i,n+m-1|r_i})-{\displaystyle}\sum_{k=n}^{n+m-2} \frac{\partial f_{r_i}}{\partial r}(x_{i,k|r_i})\prod_{l=k+1}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})\right. \notag \\ &\qquad \left.+{\displaystyle}\sum_{k=n+1}^{n+m-1} \left(\left(\sum_{\substack{h=1\\h\not=i}}^s x_{h,k|r_h}+\sum_{\substack{h=s+1}}^p x_{h,k|R_h}\right)\prod_{l=k}^{n+m-1} \frac{\partial f_{r_i}}{\partial x}(x_{i,l|r_i})\right)\right.\\ &\qquad \left.+\sum_{\substack{ h=1\\h\not=i}}^s x_{h,n+m|r_h}+\sum_{\substack{ h=s+1}}^p x_{h,n+m|R_h}\right]\varepsilon+O(\varepsilon^2)\,, \quad m \in \{2,\dots, q\}\,\end{aligned}$$ which determines the temporal evolution of the CML (\[equ1\]) in the laminar regime. Numerical Computations {#sec:numerical} ====================== Theorem 1 proves the existence of an approximately periodic orbit. In principle, one can deduce the existence of a periodic orbit by using the Implicit Function Theorem (IFT). However, the IFT fails at the SN bifurcation (i.e., at $R_i=r_i$) for free oscillators and consequently fails near an SN bifurcation (i.e., for $R_i=r_i+\varepsilon$) of the HWCML (\[equ1\]), because the Jacobian determinant vanishes. Had we expanded all terms in Theorem 1, we would have obtained terms of size $O(\varepsilon^2)$ that depend on the coefficients of the terms of size $O(\varepsilon)$ (i.e., as functions of the $A_{i,n+m}$ terms in Theorem 1), so terms of size $O(\varepsilon^2)$ would have the same period as the $A_{i,n+m}$. We could then obtain terms of size $O(\varepsilon^3)$ as functions of the coefficients of lower-order terms. These terms would also have the same period as $A_{i,n+m}$, and the same is true for all higher-order terms if we continued expanding in powers of $\varepsilon$.
This reasoning suggests the existence of a periodic orbit of period $q=\mbox{lcm}(q_1, \dots ,q_p)$ (not just an approximate one), and our numerical simulations successfully illustrate the existence of such periodic orbits. For simplicity, we consider a pair of coupled oscillators, $$\begin{aligned} \label{equA} X(n+1) &= f(X(n))+\varepsilon g(Y(n)) \,, \notag \\ Y(n+1) &= g(Y(n))+\varepsilon f(X(n))\,,\end{aligned}$$ where $f(x)=R_{1}x(1-x)$ and $g(y)=\cos(R_{2}y)$. We initially fix the coupling to be $\varepsilon=0.0001$, though we will later consider $2\varepsilon$, $3\varepsilon$, and so on. The uncoupled oscillator $Y(n)$ has a fixed period of $4$ and is far away from an SN bifurcation for $R_{2}=1.9$. We use values of $R_1$ such that the uncoupled oscillator $X(n)$ is near an SN bifurcation, and we consider SN orbits with different periods. Uncoupled Oscillator $X(n)$ with a Period-3 Orbit {#per3} ------------------------------------------------- For the oscillator $X(n)$, we fix $R_1=r_1+2\varepsilon$, where $r_1 \approx 3.828427$ is an SN bifurcation point of $f$. When there is no coupling, the free oscillator $X(n)$ has a period-3 SN orbit, and the free oscillator $Y(n)$ has a period-$4$ orbit. When coupled, both $X(n)$ and $Y(n)$ have a periodic orbit with period $q={\mathrm{lcm}}(3,4)=12$ (see Fig. \[fig:SNab\]). At $R_1=r_1+\varepsilon$, the HWCML (\[equA\]) exhibits type-I intermittency associated with the SN bifurcation (see Fig. \[fig:SN2ab\]). However, for larger $R_1$ (e.g., $r_1+2\varepsilon$, $r_1+3\varepsilon$, $\dots$, $r_1+7\varepsilon$), the periods of the uncoupled oscillators $X(n)$ and $Y(n)$ are preserved because we are farther away from the bifurcation point. We observe type-I intermittency for $R_1 = r_1+0\varepsilon$, $R_1 = r_1-\varepsilon$, $R_1 = r_1-2\varepsilon$. #### Remark {#remark-1 .unnumbered} When $R_1 = r_1 + 2 \varepsilon$, we calculate $1-\prod_{k=j}^{j+m-1}\frac{\partial f_{r_i}}{\partial x}(x_{i,k|r_i}) \approx 0.24$ for $\varepsilon = 0.0001$. (For $R_1 = r_1 + \varepsilon$, we obtain a smaller value for the second quantity). Recall the quantifications of “far from” and “near” in Section \[sub:PeriodicOrbits\]. Although $\varepsilon$ can be very small, the periodic windows that are born with an SN orbit can be even smaller than $\varepsilon$. Thus, from the dynamical standpoint, a very small value of the coupling parameter $\varepsilon$ can nevertheless be large as a variation of a bifurcation parameter. In Section \[sub:TypeI\], we determined the temporal evolution of the oscillators in the laminar regime of type-I intermittency up to size $O(\varepsilon)$. By comparing Fig. \[fig:SNab\] (which depicts the dynamics for a parameter value slightly larger than the SN bifurcation point) and Fig. \[fig:SN2amp\] (which depicts the dynamics right before the bifurcation), we observe periodic behavior just after the bifurcation and laminar behavior just before it. Uncoupled Oscillator $X(n)$ with a Period-5 Orbit {#per5} ------------------------------------------------- We proceed as in Section \[per3\] and obtain similar results. For the oscillator $X(n)$, we fix $R_1=r_1+2\varepsilon$, where $r_1 \approx 3.738173$ is an SN bifurcation point of $f$. When there is no coupling, the free oscillator $X(n)$ has a period-5 SN orbit, and the free oscillator $Y(n)$ has a period-$4$ orbit. When coupled, both $X(n)$ and $Y(n)$ have a periodic orbit with period $q={\mathrm{lcm}}(5,4)=20$ (see Fig. \[fig:SNcd\]).
At $R_1=r_1+\varepsilon$, the HWCML (\[equA\]) exhibits type-I intermittency associated with the SN bifurcation (see Fig. \[fig:SN2cd\]). However, for larger $R_1$ (e.g., $r_1+2\varepsilon$, $r_1+3\varepsilon$, $r_1+4\varepsilon$ …), the periods of the uncoupled oscillators $X(n)$ and $Y(n)$ are preserved because we are farther away from the bifurcation point. We observe type-I intermittency for $R_1 = r_1+0\varepsilon$, $R_1 = r_1-\varepsilon$, $R_1 = r_1-2\varepsilon$. Summary of HWCML Dynamics {#alp} ------------------------- Our results allow us to deduce the dynamics of the HWCML (\[equA\]) when $R_i=r_i+\varepsilon^\alpha$. We worked with a coupling strength of $\varepsilon=0.0001$ and a control parameter of $R_i=r_i+k\varepsilon$. In our numerical computations, we observed the following behavior: (a) intermittency for $R_i \leq r_i + \varepsilon$; (b) periodic behavior for $R_i \geq r_i + 2 \varepsilon$. Therefore, the following occurs. 1. If we choose $R_i=r_i + \varepsilon^\alpha$ with $\alpha > 1$, then $R_i < r_i + \varepsilon$, and the HWCML exhibits intermittent behavior according to (a). 2. If we choose $R_i=r_i + \varepsilon^\alpha$ with $0 < \alpha < 1$, then $R_i > r_i + 2\varepsilon$; this holds even for $\alpha$ close to $1$, as long as $\varepsilon^\alpha > 2\varepsilon$ (e.g., $0 <\alpha \lessapprox 0.92$ for $\varepsilon = 0.0001$). Therefore, the HWCML exhibits periodic behavior according to (b). Based on our numerical computations, we can thus establish the following statement: “Under the hypotheses of Theorem 1, the oscillators in the CML (\[equ1\]) have periodic orbits that persist with the same period as in Theorem 1 for perturbations of size $O(\varepsilon)$. That is, higher-order terms do not change the period, as we heuristically stated at the beginning of Section \[sec:numerical\].” We now discuss the consequences of all oscillators in an HWCML having the same period $q=\mbox{lcm}(q_1,q_2,\dots,q_p)$, where $q_1,\dots, q_p$ are the periods of the free oscillators. One can adjust the parameters to obtain periods $q_1,\dots,q_p$ so that $q = {\mathrm{lcm}}(q_1, q_2 , \dots , q_p)$ remains constant. For example, if $q_1=3$ and $q_2=2^k$, then $q={\mathrm{lcm}}(3,2^k)=3 \times 2^k$ (for integers $k >0$). If the first oscillator undergoes a period-doubling cascade, then its period is $3$, $3 \times 2$, $3 \times 2^2$, and so on. However, the period of the HWCML is $q = {\mathrm{lcm}}(3,2^k)= {\mathrm{lcm}}(3 \times 2,2^k)= \dots = {\mathrm{lcm}}(3 \times 2^k,2^k)=3 \times 2^k$, so it does not change even after an arbitrary number of period-doubling bifurcations. That is, for arbitrarily small $\varepsilon \neq 0$, the HWCML period remains the same even amidst a period-doubling cascade. We illustrate the above phenomenon with a simple computation. Consider the HWCML (\[equA\]) and suppose that $R_{1}=3.83$ and $R_{2}=1.9$. When $\varepsilon=0$ (i.e., when there is no coupling), the free oscillator $X(n)$ has a period-3 orbit and the free oscillator $Y(n)$ has a period-$4$ orbit. However, when $\varepsilon=0.001$, both $X(n)$ and $Y(n)$ have a periodic orbit with period $q={\mathrm{lcm}}(3,4)=12$. As we show in Table \[tab:TABLA1\], the free oscillator $X(n)$ undergoes period-doubling bifurcations, but the HWCML exhibits synchronization and still has period-12 orbits for $\varepsilon=0.001$.
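For readers who wish to reproduce these computations, the following minimal Python sketch iterates the pair (\[equA\]) with $f(x)=R_{1}x(1-x)$ and $g(y)=\cos(R_{2}y)$ and estimates the period of the coupled orbit after discarding a transient. The parameter values $R_1=3.83$, $R_2=1.9$, and $\varepsilon=0.001$ are taken from Table \[tab:TABLA1\]; the function names, the transient length, and the detection tolerance are our own illustrative choices and are not part of the original computations. Under these assumptions the script should recover the period ${\mathrm{lcm}}(3,4)=12$.

```python
import numpy as np

def iterate_pair(R1, R2, eps, x0=0.5, y0=0.5, n_transient=100000, n_keep=200):
    """Iterate X(n+1) = f(X) + eps*g(Y), Y(n+1) = g(Y) + eps*f(X) with
    f(x) = R1*x*(1-x) and g(y) = cos(R2*y); return the last n_keep states."""
    x, y = x0, y0
    for _ in range(n_transient):  # discard the transient
        fx, gy = R1 * x * (1.0 - x), np.cos(R2 * y)
        x, y = fx + eps * gy, gy + eps * fx
    traj = np.empty((n_keep, 2))
    for k in range(n_keep):
        fx, gy = R1 * x * (1.0 - x), np.cos(R2 * y)
        x, y = fx + eps * gy, gy + eps * fx
        traj[k] = (x, y)
    return traj

def detect_period(traj, tol=1e-8, max_period=60):
    """Smallest p with |state(n+p) - state(n)| < tol along the sampled
    trajectory, or None if no such p <= max_period exists."""
    for p in range(1, max_period + 1):
        if np.max(np.abs(traj[p:] - traj[:-p])) < tol:
            return p
    return None

if __name__ == "__main__":
    # Parameters as in Table [tab:TABLA1]: X(n) in the period-3 window of the
    # logistic map, Y(n) far from an SN bifurcation, weak coupling eps = 0.001.
    traj = iterate_pair(R1=3.83, R2=1.9, eps=0.001)
    print("observed period of the coupled pair:", detect_period(traj))  # expect lcm(3,4) = 12
```

Because the period-12 orbit differs from the uncoupled period-3 and period-4 orbits only by terms of size $O(\varepsilon)$, the detection tolerance must be chosen well below $\varepsilon$ for the true period to be distinguished from its approximate divisors.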
Conclusions and Discussion {#discuss} ========================== We have examined heterogeneous weakly coupled map lattices (HWCMLs) and have given results to describe periodic orbits both near and far from saddle–node orbits and to describe the temporal evolution of the laminar regime in type-I intermittency. All periodic windows of the bifurcation diagram of unimodal maps originate from SN bifurcations, so it is important to explore the dynamics near such bifurcation points. An important implication of our results is that HWCMLs of oscillators need not behave approximately like their associated free-oscillator counterparts. In particular, they can have periodic-orbit solutions with completely different periods even for arbitrarily small coupling strengths $\varepsilon \neq 0$. Our numerical calculations illustrate an important result about period preservation when oscillator parameters change. Even when one varies the parameters $R_i$ of the functions $f_{R_i}$ such that the uncoupled oscillator $X_i$ undergoes a period-doubling cascade, the periods of each of the coupled oscillators are preserved as long as the least common multiple of the periods remains constant. That is, the oscillation period is resilient to changes. Period preservation is a rather generic phenomenon in CMLs. Suppose, for example, that one oscillator has a period of $q \times 2^n$, which can originate either from period doubling or from an SN bifurcation [@SanMartin2007]. One can then change parameters so that different individual oscillators (if uncoupled) would undergo a period-doubling cascade, whereas the least common multiple of the periods of those oscillators will remain constant until one oscillator (if uncoupled) has period $q \times 2^{n+1}$. In a CML, a very large number of oscillators can each undergo a period-doubling cascade, so the period of a CML can be very resilient even in situations when other conditions — in particular, the values of the parameters in the CML — change considerably. Moreover, one can adjust the parameters to obtain oscillations of arbitrary periods $q_1,\dots,q_p$ with $q = {\mathrm{lcm}}(q_1, q_2 , \dots , q_p) = \mathrm{constant}$. Consequently, period preservation is a very common phenomenon: it is not limited to the aforementioned period-doubling cascade but rather appears throughout a bifurcation diagram. Periodic orbits anticipated by Theorem 1 and confirmed in Section \[sec:numerical\] correspond to traveling waves in a one-dimensional HWCML and to periodic patterns in a multidimensional HWCML. Such patterns have been studied in homogeneous CMLs [@He1996; @Silva2014], and our results can help to describe such dynamics in heterogeneous CMLs both near and far from bifurcations. Our observation about period resilience implies that there will be many different patterns with the same period. Small changes in an HWCML can change the specific pattern, but the period itself is rather robust. Our results also have implications for applications. A toy macroscopic traffic flow model, governed by the logistic map, was proposed in [@shih]. The derivation of the model is based on very general assumptions involving speed and density. When these assumptions are satisfied, one can use the model to help examine the evolution of flows of pedestrians, flows in a factory, and so on. When such flows interact weakly, equations of the form that we discussed in Section \[sub:PeriodicOrbits\] can be useful for such applications.
For example, one could do a simple examination of the temporal evolution of two groups of football fans around a stadium (or of sheep around an obstacle [@sheep]). The two groups have different properties, so suppose that they are governed by an HWCML. From our results, if each group is regularly entering the stadium on its own (i.e., their behavior is periodic), then both groups considered together would continue to enter regularly at the same rate, provided that the interaction between the two groups is weak. This suggests that it would be interesting to explore a security strategy that models erecting a light fence to ensure that the interaction between the two groups remains weak. The model in Ref. [@shih] also admits chaotic traffic patterns. One can construe the intermittent traffic flow in a traffic jam as being formed by regular motions (i.e., a laminar regime) and a series of acceleration and braking (i.e., chaotic bursts). Our results give the temporal evolution of such a laminar regime in a chaotic intermittent flow if the interaction between entities is weak (i.e., when the laminar regime is long, as we discussed in Section \[sub:TypeI\]). Indeed, as has been demonstrated experimentally for the flow of sheep around an obstacle [@sheep], it is possible to preserve laminar behavior for a longer time through the addition of an obstacle. Acknowledgements {#acknowledgements .unnumbered} ================ We are grateful to the anonymous referee for his/her enlightening and detailed suggestions. We also thank Daniel Rodríguez Pérez for his help in the preparation of this manuscript. References {#references .unnumbered} ========== [36]{} Farkas I, Helbing D, Vicsek T. Human waves in stadiums. Physica A 2003; 330:18–24 Sumpter DJT. Collective Animal Behavior. Princeton: Princeton University Press; 2010. Boccara N. Modelling Complex Systems. Second Edition. Berlin: Springer-Verlag; 2010. Coca D, Billings SA. Analysis and reconstruction of stochastic coupled map lattice models. Phys. Lett. A 2003; 315:61–75 Pavlov EA, Osipov GV, Chan CK, Suykens JAK. Map-based model of the cardiac action potential. Phys. Lett. A 2011; 375:2894–2902. Kaneko K. Theory and Applications of Coupled Map Lattices. New York: John Wiley& Sons; 1993. Kaneko K. From globally coupled maps to complex-systems biology. Chaos 2015; 25:097608 Special issue. Chaos 1992; 2(3). Special issue. Physica D 1997; 103. Tang Y, Wang Z, Fang J. Image encryption using chaotic coupled map lattices with time-varying delays. Commun. Nonlinear Sci. Numer. Simulat. 2010; 15:2456–2468. Wang S, Hu G, Zhou H. A one-way coupled chaotic map lattice based self-synchronizing stream cipher. Commun. Nonlinear Sci. Numer. Simulat. 2014; 19:905–913. Kaneko K. Spatiotemporal chaos in one- and two-dimensional coupled map lattices. Physica D 1989; 37:60–82. He G. Travelling waves in one-dimensional coupled map lattices. Commun. Nonlinear Sci. Numer. Simulat. 1996; 1:16–20. Franceschini V, Giberti C, Vernia C. On quasiperiodic travelling waves in coupled map lattices. Physica D 2002; 164:28–44. Cherati ZR, Motlagh MRJ. Control of spatiotemporal chaos in coupled map lattice by discrete-time variable structure control. Phys. Lett. A 2007; 370:302–305. Jakobsen A. Symmetry breaking bifurcation in a circular chain of N coupled logistic maps. Physica D 2008; 237:3382–3390. Sotelo Herrera D, San Martín J. Analytical solutions of weakly coupled map lattices using recurrence relations. Phys. Lett. A 2009; 373:2704–2709. 
Xu L, Zhang G, Han B, Zhang L, Li MF, Han YT. Turing instability for a two-dimensional Logistic coupled map lattice. Phys. Lett. A 2010; 374:3447–3450. Angelini L, Pellicoro M, Stramaglia S. Phase ordering in chaotic map lattices with additive noise. Phys. Lett. A 2001; 285:293–300. Lai YM, Porter MA. Noise-induced synchronization, desynchronization, and clustering in globally coupled nonidentical oscillators. Phys. Rev. E 2013; 88:012905 Moehlis J, Josić K, Shea-Brown ET. Periodic orbit. Scholarpedia 2006; 1(7):1358 Cvitanović P, Artuso R, Mainieri R, Tanner G, Vattay G, Whelan N, Wirzba A. Chaos: Classical and Quantum. Version 14. (available at <http://chaosbook.org>) (2012). Sotelo Herrera D, San Martín J. Travelling waves associated with saddle–node bifurcation in weakly coupled CML. Phys. Lett. A 2010; 374:3292–3296. Sotelo Herrera D, San Martín J, Cerrada L. Saddle–node bifurcation cascades and associated travelling waves in weakly coupling CML. Int. J. Bif. Chaos. 2012; 22:1250172. Guckenheimer J, Holmes P. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields Berlin: Springer-Verlag; 1983. Pomeau Y, Manneville P. Intermittent Transition to Turbulence in Dissipative Dynamical Systems. Commun. Math. Phys. 1980; 74:189–197. San Martín J. Intermittency cascade. Chaos Solitons & Fractals 2007; 32:816–831. Milnor J. Self-similarity and hairiness in the Mandelbrot set. Lect. Notes Pure App. Math. 1989; 114:211–257. dos S. Silva FA, Viana RL, de L. Prado T, Lopes SR. Characterization of spatial patterns produced by a Turing instability in coupled dynamical systems. Commun. Nonlinear Sci. Numer. Simul. 2014; 19:1055–1071. Lo S-C, Cho H-J. Chaos and control of discrete dynamic traffic model. J. Franklin Inst. 2005; 342:839–851. Zurigel I et al. Clogging transition of many-particle systems flowing through bottlenecks. Sci. Rep. 2014; 4:7324. Figure Legends {#figure-legends .unnumbered} ============== ![\[fig:SN\] [**Left: The maps $f^3$ and $f^4$ (where $f$ is the logistic map) and the fixed points at which SN bifurcations occur**]{}. Observe that there are SN points far away from the critical point C. Because $4 > 3$, the extremum of $f^4$ near the critical point C is narrower than the extremum of $f^3$ near C. [**Right: Magnification of the extrema near the critical point C**]{}. We show the distances $d$ between the SN point and the critical point for both $f^3$ and $f^4$. Observe that the distance between this pair of points decreases as the period increases. ](SN1 "fig:"){width="50.00000%"}![\[fig:SN\] [**Left: The maps $f^3$ and $f^4$ (where $f$ is the logistic map) and the fixed points at which SN bifurcations occur**]{}. Observe that there are SN points far away from the critical point C. Because $4 > 3$, the extremum of $f^4$ near the critical point C is narrower than the extremum of $f^3$ near C. [**Right: Magnification of the extrema near the critical point C**]{}. We show the distances $d$ between the SN point and the critical point for both $f^3$ and $f^4$. Observe that the distance between this pair of points decreases as the period increases. ](SN1_amp "fig:"){width="50.00000%"} ![\[fig:SNab\] [**Temporal evolution of the HWCML (\[equA\]) for $R_1 = r_1 + 2\varepsilon$, where $r_1 \approx 3.828427$ (which is an SN bifurcation point) and $R_2 = 1.9$**]{}. The uncoupled oscillators have (a) period 3 and (b) period 4. 
When $\varepsilon=0.0001$ (i.e., weak coupling), the oscillators $X_\varepsilon(n)$ and $Y_\varepsilon(n)$ both have period ${\mathrm{lcm}}(3,4)=12$. In panels (c) and (d), we plot $X_\varepsilon(n) - X_0(n)$ and $Y_\varepsilon(n) - Y_0(n)$ (i.e., the solution in the coupled case minus the solution in the $\varepsilon = 0$ case) to better observe the period-12 dynamics. ](SNab){width="\textwidth"} ![\[fig:SN2ab\] [**Temporal evolution of the HWCML (\[equA\]) for $R_1 = r_1 + \varepsilon$, where $r_1 \approx 3.828427$ (which is an SN bifurcation point) and $R_2 = 1.9$.**]{} The uncoupled oscillators have (a) period 3 and (b) period 4. When $\varepsilon=0.0001$ (i.e., weak coupling), the oscillators $X_\varepsilon(n)$ and $Y_\varepsilon(n)$ exhibit type-I intermittency. In panels (c) and (d), we plot $X_\varepsilon(n) - X_0(n)$ and $Y_\varepsilon(n) - Y_0(n)$ (i.e., the solution in the coupled case minus the solution in the $\varepsilon = 0$ case) to better observe the intermittency dynamics. ](SN2ab){width="\textwidth"} ![\[fig:SN2amp\] [**Magnification of the laminar regime of type-I intermittency from Fig. \[fig:SN2ab\]d**]{}. We can clearly see the resemblance with the temporal evolution of the oscillator in the periodic regime (see Fig. \[fig:SNab\]d). ](SN2amp){width="50.00000%"} ![\[fig:SNcd\] [**Temporal evolution of the HWCML (\[equA\]) for $R_1 = r_1 + 2\varepsilon$, where $r_1\approx3.738173$ (which is an SN bifurcation point) and $R_2 = 1.9$.**]{} The uncoupled oscillators have (a) period 5 and (b) period 4. When $\varepsilon=0.0001$ (i.e., weak coupling), the oscillators $X_\varepsilon(n)$ and $Y_\varepsilon(n)$ both have period ${\mathrm{lcm}}(5,4)=20$. In panels (c) and (d), we plot $X_\varepsilon(n) - X_0(n)$ and $Y_\varepsilon(n) - Y_0(n)$ (i.e., the solution in the coupled case minus the solution in the $\varepsilon = 0$ case) to better observe the period-20 dynamics. ](SNcd){width="\textwidth"} ![\[fig:SN2cd\] [**Temporal evolution of the HWCML (\[equA\]) for $R_1 = r_1 + \varepsilon$, where $r_1$ is an SN bifurcation point and $R_2 = 1.9$**]{}. The uncoupled oscillators have (a) period 5 and (b) period 4. When $\varepsilon=0.0001$ (i.e., weak coupling), the oscillators $X_\varepsilon(n)$ and $Y_\varepsilon(n)$ show intermittency. In panels (c) and (d), we plot $X_\varepsilon(n) - X_0(n)$ and $Y_\varepsilon(n) - Y_0(n)$ (i.e., the solution in the coupled case minus the solution in the $\varepsilon = 0$ case) to better observe the intermittent dynamics. ](SN2cd){width="\textwidth"} Tables {#tables .unnumbered} ======

   $r_1$      Period of $X(n)$   Period of the CML
------------ ------------------ -------------------------------------
 $3.831874$   $3$                ${\mathrm{lcm}}(3,4)=12$
 $3.844568$   $3 \times 2$       ${\mathrm{lcm}}(3 \times 2,4)=12$
 $3.848344$   $3 \times 2^2$     ${\mathrm{lcm}}(3 \times 2^2,4)=12$

  : \[tab:TABLA1\] [**Period of the CML (\[equA\]) for $r_2=1.9$ and $\varepsilon=0.001$**]{}. The parameter $r_1$ indicates when the logistic map, which is satisfied by the free oscillator $X(n)$, exhibits orbits of various periods during a period-doubling cascade in the window of period-3 orbits in the bifurcation diagram. Although the period of $X(n)$ changes, the period of the CML remains the same.

[^1]: Recall that a “supercycle” is a periodic orbit that includes C; if its period is $q$ (i.e., if $f^q_R(C)=C$ for some parameter value $R$), then ${\displaystyle}\frac{\partial f^q_{R}}{\partial x}(x_k)=0$ for all points $x_k$ in the supercycle.
--- abstract: 'Networks of coupled resonators are an ubiquitous concept in physics, forming the basis of synchronization phenomena, [@Pikovsky.2001] metamaterial formation, [@Engheta.2006] nonreciprocal behavior [@Caloz.2016] and topological effects. [@Hasan.2010] Such systems are typically explored using optical or microwave resonators. [@Ozawa.2018] In recent years, mechanical resonators have entered the stage as universal building block for resonator networks, both for their well-controlled mechanical properties and for their eigenfrequencies conveniently located in the radio-frequency regime. Vertically oriented nanomechanical pillar resonators [@Paulitschke.2013] are ideally suited for the dense integration into large resonator networks. However, to realize the potential of these promising systems, an intrinsic coupling mechanism needs to be established. Here, we demonstrate strain-induced, strong coupling between two adjacent nanomechanical pillar resonators. The coupling is mediated through the strain distribution in the joint substrate caused by the flexural vibration of the pillars, such that the coupling strength can be controlled by the geometric properties of the nanopillars as well as their separation. Both, mode hybridization and the formation of an avoided level crossing in the response of the nanopillar pair are experimentally observed. The coupling mechanism is readily scalable to large arrays of nanopillars, enabling all-mechanical resonator networks for the investigation of a broad range of collective dynamical phenomena.' author: - 'J. Doster$^1$, S. Hoenl$^1$[^1], H. Lorenz$^2$, P. Paulitschke$^2$ & E.M. Weig$^{1}$' bibliography: - 'PillarPairs\_1.bib' title: 'Strong Strain-Induced Coupling between Nanomechanical Pillar Resonators' --- Department of Physics, University of Konstanz, Universit[ä]{}tsstraße 10, 78457 Konstanz, Germany Fakult[ä]{}t f[ü]{}r Physik and Center for NanoScience (CeNS), Ludwig-Maximilians-Universit[ä]{}t, Geschwister-Scholl-Platz 1, 80539 M[ü]{}nchen, Germany In recent years, arrays of coupled macroscopic mechanical resonators have been employed to demonstrate topologically protected transport of mechanical excitations. [@Susstrunk.2015; @Nash.2015; @He.2016; @SerraGarcia.2018] In principle, the underlying concept can be readily transferred to nanoscale implementations, [@Peano.2015; @Fosel.2017] offering the advantage of straightforward on-chip integration. However, sufficiently strong nearest-neighbor coupling is difficult to achieve with nanomechanical resonators, and nanoscale topologically protected transport has to date only been achieved using phononic crystal architectures [@Cha.20180627] or parametric coupling. [@Tian.20180826] Similarly, nanomechanical implementations of nonreciprocal signal transduction through resonator arrays rely on parametric coupling [@Huang.2016] or optomechanical interactions, [@Fang.2017; @Bernier.2017; @Peterson.2017] and the synchronization of nanomechanical resonators is either based on external feedback [@Matheny.2014] or mediated by an optical radiation field. [@Zhang.2015] In contrast, intrinsic mechanical coupling between adjacent nanomechanical beam or string resonators has been reported and relies on the strain distribution in a shared clamping point. [@Karabalin.2009; @Gajo.2017; @Pernpeintner.2018] In a related approach, the membranes constituting the building blocks of nanomechanical phonon waveguides exchange energy through physical connections. 
[@Hatanaka.2014; @Cha.2018] Here, we translate the concept of strain-induced intrinsic coupling to vertically oriented nanomechanical pillar resonators sharing the same substrate. [@Yeo.2014; @Anguiano.2017; @Lepinay.2017; @Rossi.2017] We consider a pair of inverted conical nanopillar resonators like the one displayed in . The pillars are etched into a (100) GaAs substrate using reactive ion etching [@Paulitschke.2013] and feature eigenfrequencies in the range of a few megahertz. The fundamental flexural eigenmode of each of the pillars exhibits two orthogonal polarization directions with slightly different eigenfrequencies as a result of fabrication imperfections. [@Rossi.2017] With respect to the indistinguishable <100> crystal directions, these modes will in the following be referred to as ‘horizontal’ (H) and ‘vertical’ (V) polarization (see Fig. S1). In total the pillar pair has four eigenmodes, which are correspondingly labelled LH, LV, RH, and RV for the left (L) and right (R) pillar, respectively. Measurements are performed in vacuum $<\SI{e-4}{\milli\bar}$ and at room temperature using piezo-actuation, while the response of the nanopillars is read out by scanning electron microscope imaging [@Janchen.2002] or by optical detection, [@Yeo.2014] focusing a laser with $\lambda=\SI{635}{\nano\metre}$ wavelength onto the head of one nanopillar and detecting the vibration-induced modulation of its reflection. To evaluate the stress distribution in the substrate, we perform finite element simulations as displayed in . suggests a significant stress overlap between the two pillars, indicating that the structural features in the interjacent area will affect their coupling. Therefore, the geometry of this part of our model reflects the realistic pillar geometry and features a narrowing mesh as highlighted in . Overlapping stress profiles are found for $d\lesssim 4r$. The pillar separations of the investigated samples have been chosen accordingly. We first investigate a nanopillar pair with bottom radius $r\approx\SI{310}{\nano\metre}$, height $H\approx\SI{7}{\micro\metre}$, and center-to-center distance $d\approx\SI{1.3}{\micro\metre}$. The taper angle of $\SI{1.1}{\degree}$ is the same for all nanopillars discussed in this work. The nanopillar pair is driven at frequency $f_{\text{drive}}$ in a scanning electron microscope, allowing us to image the resulting envelope of its mechanical vibration. In total, we find four well separated vibrational modes with no spectral overlap (see Fig. S1 of Supplementary Information), which are identified with the eigenmodes of the pillar pair. One of them (LV) is shown in , which display the vibrational envelope of the nanopillar pair driven at $f_{\text{drive}}=f_{\text{LV}}$, imaged from the top and in a tilted view. A more careful inspection of the mode reveals that in not only the left resonator vibrates with a large amplitude but also the right resonator exhibits a simultaneous vibration, albeit with a much smaller amplitude (see also movies M1 and M2). This is also apparent from which shows the vibrational amplitude of both pillars extracted from micrographs obtained at different drive frequencies $f_{\text{drive}}$ in the vicinity of the resonance $f_{\text{LV}}$. Clearly, the two amplitudes evolve simultaneously with the drive frequency $f_{\text{drive}}$, which indicates the existence of a hybridized mode arising from coupling between the two pillars mediated by the joint substrate.
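This hybridization can be illustrated with the textbook picture of two linearly coupled harmonic resonators, the same model that is used below to fit the avoided crossing (cf. [@Novotny.2010]). The short Python sketch that follows evaluates the two normal-mode branches $f_\pm=\tfrac{1}{2}(f_1+f_2)\pm\sqrt{\tfrac{1}{4}(f_1-f_2)^2+g^2}$ as one bare frequency is tuned through the other; the numerical values are illustrative placeholders of the same order as the frequencies and coupling rates reported for our devices, not fitted parameters.

```python
import numpy as np

def normal_modes(f1, f2, g):
    """Normal-mode frequencies of two linearly coupled resonators with bare
    frequencies f1, f2 and coupling rate g (all in Hz), in the standard
    weak-coupling approximation f+- = (f1+f2)/2 +- sqrt(((f1-f2)/2)**2 + g**2)."""
    mean, half_detuning = 0.5 * (f1 + f2), 0.5 * (f1 - f2)
    split = np.sqrt(half_detuning**2 + g**2)
    return mean - split, mean + split

if __name__ == "__main__":
    # Illustrative numbers only: one pillar fixed near 7.40 MHz, the other
    # tuned through it, and a coupling rate g/2pi of a few kHz.
    f_left = 7.401e6
    g = 8.3e3
    for f_right in np.linspace(7.36e6, 7.44e6, 9):
        f_minus, f_plus = normal_modes(f_left, f_right, g)
        print(f"f_right = {f_right/1e6:.4f} MHz ->"
              f" branches {f_minus/1e6:.4f} / {f_plus/1e6:.4f} MHz,"
              f" splitting {(f_plus - f_minus)/1e3:.1f} kHz")
    # On resonance the splitting approaches 2*g, producing the avoided crossing.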
To obtain a more thorough understanding of the observed inter-pillar coupling, we measure the response of a second pillar pair with $r\approx\SI{310}{\nano\metre}$, $H\approx\SI{7}{\micro\metre}$, and $d\approx\SI{1}{\micro\metre}$ using the optical detection setup. Thermal tuning, readily implemented by the laser used for optical detection, is employed to sweep the eigenfrequency of the higher-frequency pillar through that of the other pillar which remains largely unaffected (see Supplementary Information for details). shows an avoided crossing of a pair of nanopillar resonators, which is indicative of strong mechanical coupling. A fit of the data using the model of two linearly coupled harmonic oscillators is also included as a black solid line. [@Novotny.2010] It yields a coupling rate $g/2\pi=\SI{8.3\pm1.8}{\kilo\hertz}$ which exceeds the linewidth of the mechanical resonances $\Delta f\approx\SI{3.5}{\kilo\hertz}$. Hence, we demonstrate strong intrinsic coupling between the two nanopillar resonators, for this specific set of geometry parameters. The two modes to the left of the avoided crossing are assigned to the vertical vibration of each resonator (LV, RV) via scanning electron micrographs shown in the insets of , where the tuned pillar with the higher frequency corresponds to the right pillar. Further evidence for the inter-pillar coupling arises from the evolution of the vibration amplitudes in . Since the laser is focused on the frequency-tuned right pillar, mainly the vibration of this one resonator is detected and the vibration of the left pillar is only weakly resolved through the stray field of the laser. Clearly, the transition of the strong signal of the right pillar from the upper to the lower branch of the avoided crossing reflects the hybridization of the two pillars near their resonance at $\SI{425}{\micro\watt}$ and $f_r=\SI{7.401}{\mega\hertz}$. The observed strong coupling supports the conclusions drawn from , the data for which was acquired without frequency tuning the right pillar and with variable $f_{\text{drive}}$. An avoided crossing measured for this pair with $d=\SI{1.32}{\micro\metre}$ is included in Fig. S1. The untuned situation probed in the scanning electron microscope measurements corresponds to the state of the system towards the leftmost edge of the avoided crossing: Far from resonance, the individual eigenmodes of the two pillars dominate the response, however a slight hybridization is already apparent as a consequence of their coupling. Strain-mediated coupling through the substrate is expected to depend on geometrical parameters of the pillar pair. In particular, the bottom radius $r$ of the nanopillars, their height $H$ and their center-to-center distance $d$ have proven influential. In the following, we investigate the dependence of the nanopillar coupling strength on these geometry parameters in measurements and finite element simulations. Coupling rates are obtained from fits to the measured avoided crossings as described above. Finite element simulations () are performed to evaluate the eigenfrequencies of the two nanopillars. Frequency tuning is incorporated by a variation of Young’s Modulus $E$ of one of the nanopillars, mimicking thermal tuning. The level splitting $g/2\pi$ is then also obtained by fitting the model of the coupled oscillators to the simulation data (see Supplementary Information). We investigate the dependence of $g/2\pi$ on each of the parameters $d$, $r$ and $H$ while the other two parameters remain fixed. 
Only when sweeping the bottom radius $r$ is $d$ adapted accordingly to ensure a constant edge-to-edge distance. shows the measured level splittings of vertical modes and display the simulation results for the three different sweep parameters. The simulations in show increasing coupling when the center-to-center distance of the two nanopillars is decreased. This is a typical signature of strain coupling, [@Ovartchaiyapong.2014] and in agreement with the measurements in . Furthermore, our simulations predict an increasing coupling strength with the bottom radius of the two nanopillars (), as a consequence of the strain caused at the pillar foot upon its deflection, and the experimental data in clearly reflects this behavior. Finally, the simulations show a decrease of the coupling strength with the height of the pillars (), again in consequence of the resulting strain profile. The dependence on the height, however, cannot clearly be established in since the differences in coupling for the two investigated pillar heights are smaller than the error bars. The results in have been obtained for pillar pairs with different orientations on the substrate as indicated in the inset of (see supplementary Fig. S4 for details). It is expected that the angular dependence of Young’s modulus of the crystalline GaAs substrate should lead to an angular dependence of the strength of the strain-induced coupling (see supplementary Fig. S4). However, differences in coupling strength with pillar orientation are not experimentally resolved, likely because of the fabrication-induced disorder in the pillar geometry governing the uncertainty in our measurements. To validate the influence of the substrate’s Young’s modulus, further studies should address the vibration polarization of hybridized (symmetric or antisymmetric) modes for arbitrary orientation of the pillar pair. In conclusion, we reveal strain-induced coupling between two adjacent nanomechanical pillar resonators in the strong coupling regime, and show mode hybridization as well as the formation of an avoided level crossing under thermal tuning of one of the nanopillars. The coupling strength is found to depend on the center-to-center distance of the nanopillars, as well as their diameter and height. Numerical finite element simulations reproduce the observed scaling, confirming that the coupling between two nanopillar resonators is mediated by strain in the substrate which acts as a shared clamping point. Vertically oriented nanopillars are ideally suited for the integration into large arrays. Uncoupled arrays of nanopillars are employed for phonon guiding or cellular force tracking. [@Paulitschke.2018] In addition, III-V semiconductor-based nanopillars offer the possibility of integrating optically active quantum dots, [@Yeo.2014] enabling addressing and readout via their optical properties. The demonstrated coupling mechanism is readily scalable to large arrays of nanopillars with geometry-controlled nearest neighbor coupling. This opens the door to two-dimensional all-mechanical resonator networks for the investigation of a broad range of collective dynamical phenomena, including topological effects, [@Salerno.2017; @Yang.2015; @Fleury.2016; @Fosel.2017] and may pave the way towards nanomechanical computing [@Mahboob.2008] or nanomechanical implementations of neural networks. [@Vodenicarevic.2017] Financial support from the European Union’s Horizon 2020 programme for Research and Innovation under grant agreement No.
732894 (FET Proactive HOT), the Deutsche Forschungsgemeinschaft via the Collaborative Research Center SFB 767, the Volkswagen Foundation through grant Az I/85 099, as well as the German Federal Ministry of Education and Research through contract no. 13N14777 funded within the European QuantERA cofund project QuaSeRT and the programme “Validation of the Technological Innovation Potential of Scientific Research - VIP” is gratefully acknowledged. The authors declare no competing interests. Measurements were performed by J.D., while sample fabrication was done by J.D., S.H., H.L., and P.P.; measurement results and data interpretation were discussed by all authors. [^1]: Present address: IBM Research – Zurich, S[ä]{}umerstrasse 4, CH-8803 R[ü]{}schlikon, Switzerland
--- abstract: 'We give a transparent combinatorial characterization of the identities satisfied by the Kauffman monoid $\mathcal{K}_3$. Our characterization leads to a polynomial time algorithm to check whether a given identity holds in $\mathcal{K}_3$.' address: - '(Yuzhu Chen, Xun Hu, Yanfeng Luo) Department of Mathematics and Statistics, Lanzhou University, Lanzhou, Gansu, 730000, China' - '(Xun Hu) Department of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing, 400033, China' - '(N. V. Kitov, M. V. Volkov) Institute of Natural Sciences and Mathematics, Ural Federal University, Lenina 51, 620000 Ekaterinburg, Russia' author: - 'Yuzhu Chen, Xun Hu, N. V. Kitov, Yanfeng Luo, M. V. Volkov' title: 'Identities of the Kauffman Monoid $\mathcal{K}_3$' --- Introduction {#introduction .unnumbered} ============ The present paper is a follow-up of the article by @ACHLV15. In particular, the object we deal with here (the Kauffman monoid $\mathcal{K}_3$) belongs to the family of monoids studied in that article. We reproduce here the definition of this family, closely following [@ACHLV15]. @TL71, motivated by some graph-theoretical problems in statistical mechanics, introduced a family of associative linear algebras with 1 over the field $\mathbb{C}$. Given an integer $n\ge 2$ and a scalar $\delta\in\mathbb{C}$, the Temperley–Lieb algebra $\mathcal{TL}_n(\delta)$ has generators $h_1,\dots,h_{n-1}$ and relations $$\begin{aligned} &h_{i}h_{j}=h_{j}h_{i} &&\text{if } |i-j|\ge 2,\ i,j=1,\dots,n-1;\label{eq:TL1}\\ &h_{i}h_{j}h_{i}=h_{i} &&\text{if } |i-j|=1,\ i,j=1,\dots,n-1;\label{eq:TL2}\\ &h_{i}h_{i}=\delta h_{i} &&\text{for each } i=1,\dots,n-1.\label{eq:TL3}\end{aligned}$$ Since the relations – do not involve addition, the algebra $\mathcal{TL}_n(\delta)$ is spanned by its multiplicative submonoid generated by $h_1,\dots,h_{n-1}$. This suggests introducing the monoid $\mathcal{K}_n$ with $n$ generators $c,h_1,\dots,h_{n-1}$ subject to the relations , , and the relations $$\begin{aligned} &h_{i}h_{i}=ch_{i}=h_{i}c &&\text{for each } i=1,\dots,n-1,\label{eq:TL4}\end{aligned}$$ which both mimic and mean that $c$ behaves like the scalar $\delta$. The monoids $\mathcal{K}_n$ are called the *Kauffman monoids*[^1] after @Ka90 who independently invented these monoids as geometrical objects; see [@ACHLV15 Section 1] for a geometric definition of the monoids $\mathcal{K}_n$. Kauffman monoids play a role in knot theory, low-dimensional topology, topological quantum field theory, quantum groups, etc. As algebraic objects, these monoids belong to the family of so-called diagram or Brauer-type monoids that originally arose in representation theory [@Br37]. Various diagram monoids, including Kauffman ones, have gained much attention among semigroup theorists over the last two decades; see, e.g., [@Au12; @Au14; @ADV12; @ACHLV15; @DE17; @DE18; @Dea15; @DEG17; @Ea11a; @Ea11b; @Ea14a; @Ea14b; @EF12; @EG17; @Eea18; @FL11; @KMM06; @KM06; @KM07; @LF06; @MM07; @Ma98; @Ma02]. In particular, the finite basis problem for the identities satisfied by Kauffman monoids has been solved by @ACHLV15 who proved that, for each $n\ge 3$, the identities holding in the monoid $\mathcal{K}_n$ are not finitely based. 
The proof was based on a very ‘high-level’ sufficient condition for the absence of a finite identity basis; if a semigroup $\mathcal{S}$ satisfies this condition, one can conclude that $\mathcal{S}$ admits no finite identity basis, without writing down any concrete identity holding in $\mathcal{S}$! Thus, no information about the identities of $\mathcal{K}_n$ for $n\ge 3$ can be extracted from the proofs in [@ACHLV15], besides, of course, the mere fact that non-trivial identities in $\mathcal{K}_n$ do exist (since they have no finite basis). As mentioned in [@ACHLV15], an alternative approach for the finite basis problem for $\mathcal{K}_3$ was independently developed by three of the authors of the present paper (Chen, Hu, and Luo). That approach relied on purely syntactic techniques and required, as an intermediate step, a combinatorial characterization of the identities satisfied by $\mathcal{K}_3$. Even though the characterization appeared to be of independent interest, it remained unpublished for two reasons: first, its initial, calculation-based proof was rather bulky; second, its main application, that is, the absence of a finite identity basis for $\mathcal{K}_3$, was subsumed by a much more general result in [@ACHLV15]. Now that the two other authors (Kitov and Volkov) have joined the team, we have obtained a short, calculation-free proof of the characterization and, besides that, we have found a new application: namely, we have shown that the characterization leads to a polynomial time algorithm to check whether a given identity holds in $\mathcal{K}_3$. The short proof and the new application constitute the content of the present paper. The characterization of the identities of $\mathcal{K}_3$ and its algorithmic version are presented in Sections \[sec:wire\] and \[sec:nfb\], respectively. Identities of $\mathcal{K}_3$ {#sec:wire} ============================= Recall that for a semigroup $\mathcal{S}$, the notation $\mathcal{S}^1$ stands for the least monoid containing $\mathcal{S}$, that is[^2], $\mathcal{S}^1:=\mathcal{S}$ if $\mathcal{S}$ has an identity element and $\mathcal{S}^1:=\mathcal{S}\cup\{1\}$ if $\mathcal{S}$ has no identity element; in the latter case the multiplication in $\mathcal{S}$ is extended to $\mathcal{S}^1$ in a unique way such that the fresh symbol $1$ becomes the identity element in $\mathcal{S}^1$. We adopt the following notational convention: if $s$ is an element of a semigroup $\mathcal{S}$, then $s^0$ stands for the identity element of $\mathcal{S}^1$. We fix a countably infinite set $X$ which we refer to as an alphabet; elements of $X$ are referred to as *letters*. The set $X^+$ of non-empty finite sequences of letters forms a semigroup under concatenation which is called the *free semigroup over the alphabet $X$*. Elements of $X^+$ are called *words over $X$*. The monoid $X^*:=(X^+)^1$ is called the *free monoid* over $X$; its identity element is referred to as the *empty word*. We will often use the well-known universal property of the free monoid: if $\mathcal{M}$ is a monoid, any map $X\to\mathcal{M}$ can be uniquely extended to a homomorphism $X^*\to\mathcal{M}$ sending the empty word to the identity element of $\mathcal{M}$. If $w=a_1\cdots a_\ell$ with $a_1,\dots,a_\ell\in X$ is a word from $X^+$, the number $\ell$ is called the *length* of $w$, and $a_1$ and $a_\ell$ are said to be the *first letter* and, respectively, the *last letter* of $w$. The length of the empty word is 0, while the first and the last letter of the empty word are undefined.
We say that a word $v\in X^+$ *occurs* in a word $w\in X^+$ if $w$ can be factorized as $w=u_1vu_2$ for some words $u_1,u_2\in X^*$. In this situation, the words $u_1$ and $u_2$ are referred to as the *left context* and, respectively, the *right context* of the occurrence of $v$. Clearly, it may happen that $v$ has several occurrences in $w$; we order these occurrences according to the lengths of their left contexts so that the *first occurrence* is the one with the shortest left context, and so on. For a word $w\in X^*$, we denote by $\operatorname{alph}(w)$ the *content* of $w$, that is, the set of all letters that occur in $w$. Observe that $w$ is empty if and only if $\operatorname{alph}(w)$ is the empty set. If $Y\subseteq X$, we denote by $w_Y$ the word obtained from $w$ by removing all occurrences of the letters in $Y$. Then $w_Y$ is empty if and only if $\operatorname{alph}(w)\subseteq Y$. An *identity* is an expression of the form $u\bumpeq v$ with $u,v\in X^*$. If $\mathcal{M}$ is a monoid, we say that the identity $u\bumpeq v$ *holds* in $\mathcal{M}$ or, alternatively, that $\mathcal{M}$ *satisfies* the identity $u\bumpeq v$ if every homomorphism $\varphi\colon X^*\to \mathcal{M}$ *equalizes* $u$ and $v$, that is, $u\varphi=v\varphi$. Similarly, if $u,v\in X^+$ and $\mathcal{S}$ is a semigroup, we say that the identity $u\bumpeq v$ *holds* in $\mathcal{S}$ or that $\mathcal{S}$ *satisfies* the identity $u\bumpeq v$ if every homomorphism from $X^+$ into $\mathcal{S}$ equalizes $u$ and $v$. The following fact is a part of semigroup folklore but we include its proof for the sake of completeness. \[lem:monoid\] If $u,v\in X^*$ and the identity $u\bumpeq v$ holds in a monoid $\mathcal{M}$, then so does the identity $u_Y\bumpeq v_Y$ for each $Y\subseteq X$. We have to check that an arbitrary homomorphism $\varphi\colon X^*\to\mathcal{M}$ equalizes $u_Y$ and $v_Y$. Consider the homomorphism $\varphi_Y\colon X^*\to\mathcal{M}$ that extends the following map $X\to\mathcal{M}$: $$x\mapsto\begin{cases} x\varphi & \text{if } x\notin Y,\\ 1 & \text{if } x\in Y. \end{cases}$$ Then $w\varphi_Y=w_Y\varphi$ for every $w\in X^*$, whence $u_Y\varphi=u\varphi_Y=v\varphi_Y=v_Y\varphi$ since $\varphi_Y$ equalizes $u$ and $v$. We also need a normal form for the elements of the Kauffman monoid $\mathcal{K}_n$; this form was suggested by @Jo83. By the definition, the elements of $\mathcal{K}_n$ can be represented as words over the alphabet $\{c,h_1,\dots,h_{n-1}\}$. For all $a,b$ such that $1\le a<b\le n-1$, let $h_{[b,a]}:=h_bh_{b-1}\cdots h_{a+1}h_a$; for the sake of uniformity, we also let $h_{[a,a]}:=h_a$. A word from $\{c,h_1,\dots,h_{n-1}\}^*$ is said to be in *Jones’s normal form* if it is either of the form $c^{\ell}h_{[b_1, a_1]}\cdots h_{[b_k,a_k]}$ for some $\ell\ge0$ and some $a_1 < \dots <a_k$, $b_1 < \dots < b_k$, or of the form $c^{\ell}$ for some $\ell\ge0$. The proofs of the next statement can be found in [@BDP02] and [@BL05]. \[lem:jones\] Every element of the Kauffman monoid $\mathcal{K}_{n}$ has a unique representation as a word in Jones’s normal form over $\{c,h_1,\dots,h_{n-1}\}$. 
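To make these notions concrete, the following Python sketch (an illustration with our own naming, not the polynomial-time algorithm developed later in the paper) implements $\operatorname{alph}(w)$, the deletion $w_Y$, and the counting of occurrences of length-2 factors, and combines them into a direct test of the characterization stated in the theorem below. Checking every proper subset $Y\subset\operatorname{alph}(w)$ in this naive way is exponential in the number of distinct letters; the algorithmic section of the paper shows how the same test can be carried out in polynomial time.

```python
from itertools import combinations

def alph(w):
    """Content of w: the set of letters occurring in w."""
    return set(w)

def delete(w, Y):
    """w_Y: the word obtained from w by removing all occurrences of letters in Y."""
    return "".join(a for a in w if a not in Y)

def factor_counts(w):
    """Number of occurrences of each word of length 2 in w."""
    counts = {}
    for i in range(len(w) - 1):
        counts[w[i:i + 2]] = counts.get(w[i:i + 2], 0) + 1
    return counts

def conditions_abc(u, v):
    """Conditions (a)-(c) of the theorem below: same first letter, same last
    letter, and the same number of occurrences of every length-2 factor."""
    return (u[:1], u[-1:], factor_counts(u)) == (v[:1], v[-1:], factor_counts(v))

def holds_in_K3(w, w_prime):
    """Naive test of the characterization; exponential in |alph(w)|."""
    letters = alph(w)
    if letters != alph(w_prime):
        return False
    return all(
        conditions_abc(delete(w, set(Y)), delete(w_prime, set(Y)))
        for k in range(len(letters))                 # proper subsets Y of alph(w)
        for Y in combinations(sorted(letters), k)
    )

# Example: the identity xyxy = yxyx fails condition (a) already for Y = {} (the
# first letters differ), so this identity does not hold in K_3.
print(holds_in_K3("xyxy", "yxyx"))  # False
```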
\[thm:description\] An identity $w\bumpeq w'$ holds in the Kauffman monoid $\mathcal{K}_3$ if and only if $\operatorname{alph}(w)=\operatorname{alph}(w')$ and, for each $Y\subset\operatorname{alph}(w)$, the words $u:=w_Y$ and $u':=w'_Y$ satisfy the following three conditions:

- (a) the first letter of $u$ is the same as the first letter of $u'$;

- (b) the last letter of $u$ is the same as the last letter of $u'$;

- (c) for each word of length $2$, the number of its occurrences in $u$ is the same as the number of its occurrences in $u'$.

We start with a closer look at the monoid $\mathcal{K}_3$. Specializing the definition of the Kauffman monoids given in the introduction, one gets the following monoid presentation for $\mathcal{K}_3$:
$$\mathcal{K}_3=\left\langle h_1,h_2,c \;\middle|\; h_1h_2h_1=h_1,\ h_2h_1h_2=h_2,\ h_1^2=ch_1=h_1c,\ h_2^2=ch_2=h_2c\right\rangle.$$
Lemma \[lem:jones\] readily implies that every element in $\mathcal{K}_3$ is equal to a unique element of one of the following 5 sets:
$$\begin{aligned} C &:=\{c^k\mid k=0,1,\dots\},&&\\ H_{11} &:=\{c^\ell h_1\mid \ell=0,1,\dots\}, & H_{12} &:=\{c^m h_1h_2\mid m=0,1,\dots\},\\ H_{21} &:=\{c^n h_2h_1\mid n=0,1,\dots\}, & H_{22} &:=\{c^r h_2\mid r=0,1,\dots\}.\end{aligned}$$

We turn to the proof of the ‘only if’ part of our theorem. Let $w\bumpeq w'$ be an arbitrary identity that holds in $\mathcal{K}_3$. Given a letter $x_0\in X$, consider the homomorphism $\chi_0\colon X^*\to\mathcal{K}_3$ that extends the following map $X\to\mathcal{K}_3$:
$$x\mapsto\begin{cases} c & \text{if } x=x_0,\\ 1 & \text{if } x\ne x_0. \end{cases}$$
Then $w\chi_0=c^t$, where $t$ is the number of occurrences of $x_0$ in $w$, and similarly, $w'\chi_0=c^{t'}$, where $t'$ is the number of occurrences of $x_0$ in $w'$. Since $\chi_0$ must equalize $w$ and $w'$, we conclude that $t=t'$; in particular, $x_0$ occurs in $w$ if and only if it occurs in $w'$. Thus, $\operatorname{alph}(w)=\operatorname{alph}(w')$. If the word $w$ is empty, $\operatorname{alph}(w)=\varnothing$ has no proper subsets and nothing remains to prove. Therefore, for the rest of the proof of the ‘only if’ part, we assume that neither $w$ nor $w'$ is empty.

Let $\mathcal{S}_2$ stand for the semigroup presented by $\langle e,f \mid e^2=e,\ f^2=f\rangle$, that is, $\mathcal{S}_2$ is the free product of two trivial semigroups. Clearly, in $\mathcal{S}_2$ each element is uniquely represented as an alternating product of the generators $e$ and $f$. Hence, $\mathcal{S}_2$ is a disjoint union of the following 4 sets:
$$\begin{aligned} &\{(ef)^\ell\mid \ell=1,2,\dots\},&& \{(fe)^nf\mid n=0,1,\dots\},\\ &\{(ef)^me\mid m=0,1,\dots\},&&\{(fe)^r\mid r=1,2,\dots\}.\end{aligned}$$
We define a map $\psi\colon\mathcal{S}_2\to\mathcal{K}_3$ as follows:
$$\begin{aligned} (ef)^\ell &\mapsto c^{2\ell-1}h_1&&\text{for each $\ell>0$},\\ (ef)^me &\mapsto c^{2m}h_1h_2 &&\text{for each $m\ge0$},\\ (fe)^nf &\mapsto c^{2n}h_2h_1 &&\text{for each $n\ge0$},\\ (fe)^r &\mapsto c^{2r-1}h_2 &&\text{for each $r>0$}.\end{aligned}$$
Clearly, $\psi$ is 1-1, and a straightforward verification shows that $\psi$ is a homomorphism. Indeed, it suffices to compare Table \[tb:S2\], which shows how typical elements of the semigroup $\mathcal{S}_2$ multiply, and Table \[tb:K3\], which shows how the images of these elements under $\psi$ multiply.
$\begin{array}{c|cccc} & (ef)^{\ell'} & (ef)^{m'}e & (fe)^{n'}f & (fe)^{r'} \\ \hline (ef)^\ell & (ef)^{\ell+\ell'} & (ef)^{\ell+m'}e & (ef)^{\ell+n'} & (ef)^{\ell+r'-1}e\rule{0pt}{14pt} \\ (ef)^me & (ef)^{m+\ell'} & (ef)^{m+m'}e & (ef)^{m+n'+1} & (ef)^{m+r'}e \rule{0pt}{14pt}\\ (fe)^nf & (fe)^{n+\ell'}f & (fe)^{n+m'+1} & (fe)^{n+n'}f & (fe)^{n+r'} \rule{0pt}{14pt}\\ (fe)^r & (fe)^{r+\ell'-1}f & (fe)^{r+m'} & (fe)^{r+n'}f & (fe)^{r+r'}\rule{0pt}{14pt} \end{array}$ $\begin{array}{c|cccc} & c^{2\ell'-1}h_1 & c^{2m'}h_1h_2 & c^{2n'}h_2h_1 & c^{2r'-1}h_2 \\ \hline c^{2\ell-1}h_1 & c^{2(\ell+\ell')-1}h_1 & c^{2(\ell+m')}h_1h_2 & c^{2(\ell+n')-1}h_1 & c^{2(\ell+r'-1)}h_1h_2\rule{0pt}{14pt} \\ c^{2m}h_1h_2 & c^{2(m+\ell')-1}h_1 & c^{2(m+m')}h_1h_2 & c^{2(m+n'+1)-1}h_1 & c^{2(m+r')}h_1h_2\rule{0pt}{14pt} \\ c^{2n}h_2h_1 & c^{2(n+\ell')}h_2h_1 & c^{2(n+m'+1)-1}h_2 & c^{2(n+n')}h_2h_1 & c^{2(n+r')-1}h_2\rule{0pt}{14pt} \\ c^{2r-1}h_2 & c^{2(r+\ell'-1)}h_2h_1 & c^{2(r+m')-1}h_2 & c^{2(r+n')}h_2h_1 & c^{2(r+r')-1}h_2\rule{0pt}{14pt} \end{array}$ Thus, $\mathcal{S}_2$ is isomorphic to a subsemigroup in $\mathcal{K}_3$, whence $\mathcal{S}_2$ satisfies every identity $u\bumpeq v$ with $u,v\in X^+$ that holds in $\mathcal{K}_3$. By Lemma \[lem:monoid\], for each proper subset $Y\subset\operatorname{alph}(w)$, the identity $u\bumpeq u'$, where $u:=w_Y\in X^+$ and $u':=w'_Y\in X^+$, holds in $\mathcal{K}_3$. We conclude that $u\bumpeq u'$ holds in $\mathcal{S}_2$ as well, and by [@SV17 Theorem 3], the words $u$ and $u'$ satisfy conditions (a)–(c). This completes the proof of the ‘only if’ part of the theorem. For the ‘if’ part, consider any identity $w\bumpeq w'$ satisfying the conditions of our theorem. If $\operatorname{alph}(w)=\varnothing$, then the condition $\operatorname{alph}(w)=\operatorname{alph}(w')$ implies that both $w$ and $w'$ are empty words, and the identity $w\bumpeq w'$ holds in every monoid. Thus, we may assume that $\operatorname{alph}(w)\ne\varnothing$. Take an arbitrary letter $x\in\operatorname{alph}(w)$ and let $Y:=\operatorname{alph}(w)\setminus\{x\}$. Then the words $u:=w_Y$ and $u':=w'_Y$ are certain powers of the letter $x$, namely, $u=x^t$ and $u'=x^{t'}$, where $t$ is the number of occurrences of $x$ in $w$ and $t'$ is the number of occurrences of $x$ in $w'$. Clearly, the word $x^2$ occurs $t-1$ times in the word $x^t$ and $t'-1$ times in the word $x^{t'}$, and since $u$ and $u'$ must satisfy the condition (c), we conclude that $t-1=t'-1$, whence $t=t'$. We have to check that an arbitrary homomorphism $\varphi\colon X^*\to\mathcal{K}_3$ equalizes $w$ and $w'$. Recall that $\mathcal{K}_3$ is the disjoint union of the set $C$, which is a submonoid in $\mathcal{K}_3$, and the set $H:=\mathcal{K}_3\setminus C=H_{11}\cup H_{12}\cup H_{21}\cup H_{22}$, which is the ideal of $\mathcal{K}_3$ generated by $h_1$ and $h_2$. Let $Y:=\{y\in\operatorname{alph}(w)\mid y\varphi\in C\}$. For each $y\in Y$, let $t_y$ stand for the number of occurrences of $y$ in $w$ (which, as shown in the preceding paragraph, is equal to the number of occurrences of $y$ in $w'$), and let $k_y\in\{0,1,\dots\}$ be such that $y\varphi=c^{k_y}$. We denote the sum $\sum_{y\in Y}t_yk_y$ by $N_Y$. If $Y=\operatorname{alph}(w)$, we have $w\varphi=c^{N_Y}=w'\varphi$, and we are done. Consider the situation where $Y\subset\operatorname{alph}(w)$. 
Using the fact that the generator $c$ commutes with the generators $h_1,h_2$, we can represent $w\varphi$ and $w'\varphi$ as $c^{N_Y}w_Y\varphi$ and $c^{N_Y}w'_Y\varphi$ respectively. Therefore it remains to verify that $w_Y\varphi=w'_Y\varphi$, and for this, it suffices to show that the identity $u\bumpeq u'$ with $u:=w_Y$ and $u':=w'_Y$ holds in the semigroup $H$.

We prove that $H$ satisfies $u\bumpeq u'$, using the Rees matrix construction (cf. [@CP61 Chapter 3]). Let $\mathbb{Z}$ stand for the additive group of integers and let $\Delta:=\begin{pmatrix}1&0\\0&1\end{pmatrix}$ be the identity $2\times2$-matrix over $\mathbb{Z}$. It is convenient for us to represent the matrix using Kronecker’s delta notation so that $\Delta=\begin{pmatrix}\delta_{11}&\delta_{12}\\\delta_{21}&\delta_{22}\end{pmatrix}$. Denote by $\mathrm{M}(\mathbb{Z};\Delta)$ the set of triples
$$\{(\eta,k,\lambda)\mid \eta,\lambda\in\{1,2\},\ k\in\mathbb{Z}\},$$
endowed with the multiplication
$$(\eta,k,\lambda)(\iota,\ell,\mu):=(\eta,k+\delta_{\lambda\,\iota}+\ell,\mu).$$
The semigroup $\mathrm{M}(\mathbb{Z};\Delta)$ is an instance of the family of the *Rees matrix semigroups* over $\mathbb{Z}$. Define a map $\xi\colon H\to\mathrm{M}(\mathbb{Z};\Delta)$ as follows:
$$\begin{aligned} c^\ell h_1&\mapsto (1,\ell,1)&&\text{for each $\ell\ge0$},\\ c^m h_1h_2&\mapsto (1,m,2)&&\text{for each $m\ge0$},\\ c^n h_2h_1&\mapsto (2,n,1)&&\text{for each $n\ge0$},\\ c^r h_2 &\mapsto (2,r,2)&&\text{for each $r\ge0$}.\end{aligned}$$
Obviously, $\xi$ is 1-1, and one can readily verify that $\xi$ is a homomorphism. Thus, $H$ is isomorphic to a subsemigroup in $\mathrm{M}(\mathbb{Z};\Delta)$. It is known (see, e.g., @KR79 [Theorem 9]) and easy to verify that every identity $u\bumpeq u'$ with $u$ and $u'$ satisfying (a)–(c) holds in each Rees matrix semigroup over an abelian group. Hence, every such identity holds in $\mathrm{M}(\mathbb{Z};\Delta)$, and thus, in $H$. This completes the proof of the ‘if’ part of the theorem.

Recognizing identities of $\mathcal{K}_3$ in polynomial time {#sec:nfb}
============================================================

Given a semigroup $\mathcal{S}$, its *identity checking problem*[^3] is a combinatorial decision problem whose instance is an arbitrary pair $(w,w')$ of words; the answer to the instance $(w,w')$ of the problem is ‘YES’ or ‘NO’ depending on whether or not the identity $w\bumpeq w'$ holds in $\mathcal{S}$. For a finite semigroup, the identity checking problem is always decidable, and moreover, belongs to the complexity class $\mathsf{coNP}$: if for some pair $(w,w')$ of words that together involve $m$ letters, the identity $w\bumpeq w'$ fails in the semigroup $\mathcal{S}$, then a nondeterministic polynomial algorithm can guess an $m$-tuple of elements in $\mathcal{S}$ witnessing the failure and then confirm the guess by computing the values of the words $w$ and $w'$ at this $m$-tuple. There exist many examples of finite semigroups whose identity checking problem is $\mathsf{coNP}$-complete; see, e.g., [@AVG09; @HLMS; @JM06; @Ki04; @Kl09; @Kl12; @PV06; @Se05; @SS06] and the references therein. However, the task of classifying finite semigroups according to the computational complexity of identity checking appears to be far from feasible, as it has not yet been accomplished even in the case of finite groups. For infinite semigroups, results on the identity checking problem are sparse.
The reason for this is that infinite semigroups usually arise in mathematics as semigroups of transformations of an infinite set, or semigroups of relations on an infinite domain, or semigroups of matrices over an infinite ring, and as a rule all these semigroups are ‘too big’ to satisfy any nontrivial identity. If, however, an infinite semigroup satisfies a nontrivial identity, its identity checking problem may constitute a challenge: @Mu68 constructed an infinite semigroup with undecidable identity checking problem. On the ‘positive’ side, we mention a recent result by @DJK18 who have shown that checking identities in the famous bicyclic monoid $\mathcal{B}:=\langle a,b\mid ba=1\rangle$ can be done in polynomial time via a rather non-trivial algorithm based on linear programming.

Observe that even though Theorem \[thm:description\] gives an algorithm to verify whether or not a given identity $w\bumpeq w'$ holds in the Kauffman monoid $\mathcal{K}_3$, the algorithm is not polynomial in the number of letters occurring in the words $w$ and $w'$ because one has to check conditions (a)–(c) for every proper subset of the set $\operatorname{alph}(w)$. We will ‘unfold’ this algorithm so that the unfolded version admits a polynomial-time implementation; our approach is inspired by a method developed by @SS06 for checking identities in certain finite semigroups.

Given a word $w\in X^+$, its *first* (*last*) *occurrence word* is obtained from $w$ by retaining only the first (respectively, the last) occurrence of each letter that occurs in $w$. A *jump* is a triple $(x,G,y)$, where $x$ and $y$ are (not necessarily distinct) letters and $G$ is a (possibly empty) set of letters that contains neither $x$ nor $y$. The jump $(x,G,y)$ *occurs* in a word $w$ if $w$ can be factorized as $w=v_1xv_2yv_3$ where $v_1,v_2,v_3\in X^*$ and $G=\operatorname{alph}(v_2)$. For instance, each of the jumps $(x,\{y,z\},x)$ and $(y,\varnothing,y)$ occurs twice in the word $xy^2zxzy^2x$, while each of the jumps $(x,\{y\},z)$ and $(z,\{y\},x)$ occurs just once.

The following result is in fact a reformulation of Theorem \[thm:description\] in a form amenable to algorithmic analysis.

\[thm:jump\] An identity $w\bumpeq w'$ holds in the monoid $\mathcal{K}_3$ if and only if either both $w$ and $w'$ are empty or $w$ and $w'$ have the same first occurrence word and the same last occurrence word, and every jump occurs the same number of times in $w$ and $w'$.

For the ‘only if’ claim, we use the ‘only if’ part of Theorem \[thm:description\]. In view of the latter, $\operatorname{alph}(w)=\operatorname{alph}(w')$, whence $w$ is empty whenever $w'$ is, and vice versa. So we may assume that $w,w'\in X^+$. Since $w$ and $w'$ satisfy condition (a) of Theorem \[thm:description\], they start with the same letter, say, $x_1$. If $\operatorname{alph}(w)=\{x_1\}$, the first occurrence word of both $w$ and $w'$ is just $x_1$, and we are done. Otherwise $\{x_1\}$ is a proper subset of $\operatorname{alph}(w)$, and therefore, condition (a) must be satisfied by the words $w_{\{x_1\}}$ and $w'_{\{x_1\}}$. Hence the first letter of $w_{\{x_1\}}$ is the same as the first letter of $w'_{\{x_1\}}$; let us denote this common letter by $x_2$. Observe that $x_2\ne x_1$ since $x_1$ does not occur in $w_{\{x_1\}}$ by the very definition of this word. If $\operatorname{alph}(w)=\{x_1,x_2\}$, the first occurrence word of both $w$ and $w'$ is $x_1x_2$, and we are done again.
Otherwise $\{x_1,x_2\}$ is a proper subset of $\operatorname{alph}(w)$, and we can repeat the argument until we exhaust the set $\operatorname{alph}(w)$. At the $i$-th step of the procedure, we append the common first letter $x_i$ of the words $w_{\{x_1,\dots,x_{i-1}\}}$ and $w'_{\{x_1,\dots,x_{i-1}\}}$ to the already constructed word $x_1\cdots x_{i-1}$; observe that $x_i\notin\{x_1,\dots,x_{i-1}\}$ by the definition of the word $w_{\{x_1,\dots,x_{i-1}\}}$. Clearly, the word we get at the end of the procedure is the common first occurrence word of $w$ and $w'$. In the dual way, we deduce that $w$ and $w'$ have the same last occurrence word.

It remains to show that an arbitrary jump $(x,G,y)$ occurs the same number of times in $w$ and $w'$. We fix the letters $x$ and $y$ and induct on the cardinality of $G$. If this cardinality is $0$, that is, $G=\varnothing$, each occurrence of the jump $(x,\varnothing,y)$ in a word is nothing but an occurrence of $xy$ in this word. Since $w$ and $w'$ satisfy condition (c) of Theorem \[thm:description\], the word $xy$ must occur the same number of times in $w$ and $w'$, and so does the jump $(x,\varnothing,y)$. The induction step relies on the following observation, which will be useful also in the proof of the ‘if’ claim.

\[lem:jump\] Let $x$ and $y$ be (not necessarily distinct) letters, $v\in X^+$ a word, and $G\subseteq\operatorname{alph}(v)$ a set of letters that includes neither $x$ nor $y$. The factor $xy$ occurs in the word $v_G$ as many times as jumps of the form $(x,H,y)$, where $H$ runs over the set of all subsets of $G$, occur in the word $v$.

For $xy$ to occur in $v_G$, the word $v$ must contain factors of the form $xsy$ where $\operatorname{alph}(s)\subseteq G$ so that the ‘streak’ $s$ disappears when the letters from $G$ get removed from $v$. In terms of jumps, this means that the occurrences of $xy$ in the word $v_G$ are in a 1-1 correspondence with the occurrences of jumps of the form $(x,H,y)$ with $H\subseteq G$ in the word $v$.

Now consider a jump $(x,G,y)$ with $G\ne\varnothing$. Of course, we may assume that $x,y\in\operatorname{alph}(w)$ and $G\subseteq\operatorname{alph}(w)$. Then $G$ is a proper subset of $\operatorname{alph}(w)$ since $x\notin G$. Consider the words $u:=w_G$ and $u':=w'_G$. They satisfy condition (c) of Theorem \[thm:description\]. Hence, if $m$ and $m'$ denote the numbers of occurrences of the word $xy$ in $u$ and $u'$, respectively, we have $m=m'$. For any subset $H\subseteq G$, let $n_H$ and $n'_H$ stand for the numbers of occurrences of the jump $(x,H,y)$ in $w$ and $w'$, respectively. By Lemma \[lem:jump\] we have
$$\label{eq:jump} m=\sum_{H\subseteq G}n_H=n_G+ \sum_{H\subset G}n_H\quad\text{and}\quad m'=\sum_{H\subseteq G}n'_H=n'_G+ \sum_{H\subset G}n'_H.$$
We have $m=m'$ and, by the induction assumption, $n_H=n'_H$ for each proper subset $H$ of $G$. Hence the equalities \eqref{eq:jump} imply that $n_G=n'_G$, as required. This completes the proof of the ‘only if’ claim.

For the ‘if’ claim, consider any words $w$ and $w'$ satisfying the conditions of our theorem. If both $w$ and $w'$ are empty, the identity $w\bumpeq w'$ holds in every monoid. Thus, we may assume that $w$ and $w'$ are not both empty; then $w$ and $w'$ have the same first occurrence word, whence $\operatorname{alph}(w)=\operatorname{alph}(w')\ne\varnothing$. Take an arbitrary proper subset $G$ of $\operatorname{alph}(w)$. We aim to show that the words $u:=w_G$ and $u':=w'_G$ satisfy conditions (a)–(c) of Theorem \[thm:description\]; our claim then follows from the ‘if’ part of the latter theorem. Let $v$ be the first occurrence word of both $w$ and $w'$.
Then it is easy to see that the word $v_G$ is the first occurrence word of both $u$ and $u'$. In particular, the first letter of $v_G$ is the first letter of both $u$ and $u'$. Thus, $u$ and $u'$ satisfy condition (a). In the dual way, we obtain that $u$ and $u'$ satisfy condition (b). In order to verify condition (c), take an arbitrary word $xy$ of length 2, where $x$ and $y$ are (not necessarily distinct) letters. If $x$ or $y$ belongs to $G$, then $xy$ occurs neither in $u$ nor in $u'$, and condition (c) holds for $xy$ trivially; thus, we may assume that $G$ contains neither $x$ nor $y$. Re-using the notation $m,m',n_H,n'_H$ introduced in the last paragraph of the proof of the ‘only if’ claim and applying Lemma \[lem:jump\], we get
$$m=\sum_{H\subseteq G}n_H\quad\text{and}\quad m'=\sum_{H\subseteq G}n'_H.$$
Since $n_H=n'_H$ for each $H$, we conclude that $m=m'$, thus completing the proof of the ‘if’ claim.

It remains to show that, given an identity $w\bumpeq w'$, one can check whether or not the words $w$ and $w'$ satisfy the conditions of Theorem \[thm:jump\] in time polynomial in the sum of the lengths of $w$ and $w'$. For this, it suffices to exhibit algorithms that, given a word $v\in X^+$ of length $n$, find its first occurrence word, its last occurrence word, and its jumps with their multiplicities in time polynomial in $n$. In fact, the first two algorithms require only $O(kn)$ time, where $k$ is the number of letters in $\operatorname{alph}(v)$, and the third algorithm requires $O(kn\log(kn))$ time.

The algorithms for constructing the first and last occurrence words are straightforward. For the first occurrence word, we initialize $\overrightarrow{v}$ with the empty word and then scan the input word $v$ letter-by-letter from left to right. Each time we read a letter of $v$, we check whether the letter occurs in $\overrightarrow{v}$, and if it does not, we append the letter to $\overrightarrow{v}$. Then we pass to the next letter if it exists or stop if the current letter is the last letter of $v$. Clearly, at the end of the process, $\overrightarrow{v}$ becomes the first occurrence word of $v$. The algorithm, which we call FOW, makes $n$ steps, and at each step it operates on the word $\overrightarrow{v}$, whose length does not exceed $k$. Hence, the time spent by FOW is linear in $kn$.

For the last occurrence word, we could apply FOW to the mirror image of the input and return the mirror image of the output of FOW. Alternatively, we suggest the following algorithm, which, like FOW, operates in an online manner, that is, processes its input word $v$ letter-by-letter from left to right. We initialize $\overleftarrow{v}$ with the empty word. Each time a letter of $v$ is read, we check whether the letter occurs in $\overleftarrow{v}$, and if it does, we remove that occurrence from $\overleftarrow{v}$. Then we append the current letter to $\overleftarrow{v}$ and pass to the next letter if it exists or stop if we have reached the last letter of $v$. At the end of the process, $\overleftarrow{v}$ becomes the last occurrence word of $v$, and again, the working time of the algorithm is linear in $kn$.
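For concreteness, here is a minimal Python rendering of the two scanning procedures just described (function names are ours; words are modelled as strings). Both functions process the input strictly left to right, and the membership test against the partial word costs $O(k)$ per step, in line with the $O(kn)$ bounds above.

```python
def first_occurrence_word(v):
    """FOW: scan v left to right, appending each letter the first time it is seen."""
    fow = []
    for letter in v:
        if letter not in fow:
            fow.append(letter)
    return "".join(fow)


def last_occurrence_word(v):
    """Online variant: keep each letter only at (what is currently) its last occurrence."""
    low = []
    for letter in v:
        if letter in low:
            low.remove(letter)   # discard the earlier occurrence ...
        low.append(letter)       # ... and record the current one
    return "".join(low)


# For the word v = x^3 y x y z^4 x y z used in the worked example below:
v = "xxxyxyzzzzxyz"
assert first_occurrence_word(v) == "xyz"
assert last_occurrence_word(v) == "xyz"
```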
The algorithm that constructs the multiset of all jumps of $v$ is slightly more involved. We initialize $J$ with the empty multiset; besides that, for each letter $x\in\operatorname{alph}(v)$, we introduce an integer variable denoted $\operatorname{lop}(x)$ (the *last observed position of* $x$) and initialize it with 0. For each positive integer $i\le n$, we denote by $v[i]$ the letter in the $i$-th position of the input word $v$. For integers $i,j\le n$, we let
$$v[i,j]:=\begin{cases} v[i]\cdots v[j] &\text{if }\ i\le j,\\ \text{the empty word} & \text{if }\ i>j. \end{cases}$$

Our algorithm scans $v$ letter-by-letter from left to right. Suppose that the current position is $i$ and $v[i]=y$. For each letter $x\in\operatorname{alph}(v)$ such that $\operatorname{lop}(x)>0$, we check if $\operatorname{lop}(y)\le\operatorname{lop}(x)$. If the inequality holds, then neither $x$ nor $y$ occurs in the factor $v[\operatorname{lop}(x)+1,i-1]$ of $v$ and we add the jump $(x,G,y)$ with $G:=\operatorname{alph}(v[\operatorname{lop}(x)+1,i-1])$ to the multiset $J$. (Recall that adding an element $e$ to a multiset $M$ means including $e$ in $M$ with multiplicity 1 if $e$ has not yet appeared in $M$ or increasing the multiplicity of $e$ in $M$ by 1 if $e$ has already appeared in $M$. By storing $M$ as an appropriate data structure, say, a self-balancing binary search tree, one can perform each such operation in $O(\log|M|)$ time. See @St15 for a description of advanced techniques for handling multisets.) Then we stop if $i=n$; otherwise we update the variable $\operatorname{lop}(y)$ by assigning it the value $i$ and pass to the position $i+1$. Thus, the algorithm makes $n$ steps, at each step at most $k$ jumps are added to $J$, and the time needed for adding each jump is bounded by $O(\log(kn))$. Hence the overall time spent is $O(kn\log(kn))$.
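As a companion to this description, the following minimal Python sketch (function names are ours) collects the jumps of a word and then checks the criterion of Theorem \[thm:jump\]. For clarity it recomputes the content of each gap directly instead of maintaining it incrementally, so it does not attain the $O(kn\log(kn))$ bound; a `Counter` plays the role of the multiset $J$.

```python
from collections import Counter


def jumps(v):
    """Multiset of all jumps (x, G, y) occurring in v; G is stored as a frozenset."""
    J = Counter()
    lop = {}                                   # last observed position (1-based) of each letter
    for i, y in enumerate(v, start=1):
        for x, p in lop.items():               # letters x with lop(x) = p > 0
            if lop.get(y, 0) <= p:             # neither x nor y occurs strictly between p and i
                G = frozenset(v[p:i - 1])      # content of the factor v[p+1 .. i-1]
                J[(x, G, y)] += 1
        lop[y] = i                             # update lop(y) only after the jumps are recorded
    return J


def occurrence_words(w):
    """Pair (first occurrence word, last occurrence word) of w."""
    first, last = {}, {}
    for i, a in enumerate(w):
        first.setdefault(a, i)
        last[a] = i
    return ("".join(sorted(first, key=first.get)),
            "".join(sorted(last, key=last.get)))


def holds_in_K3(w, w_prime):
    """Criterion of Theorem [thm:jump] for the identity w = w' to hold in K_3."""
    return (occurrence_words(w) == occurrence_words(w_prime)
            and jumps(w) == jumps(w_prime))


# The identity x^2 y x = x y x^2 holds in K_3, while x y = y x does not.
assert holds_in_K3("xxyx", "xyxx") and not holds_in_K3("xy", "yx")
```

Since two `Counter` objects compare equal exactly when every key has the same multiplicity in both, the last conjunct is precisely the requirement that every jump occur the same number of times in $w$ and $w'$; the case of two empty words needs no separate treatment.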
The following table demonstrates how the algorithm runs on the word $v=x^3yxyz^4xyz$. We have raised the entries in the columns containing the values of the variables $\operatorname{lop}(x)$, $\operatorname{lop}(y)$, and $\operatorname{lop}(z)$ in order to stress that every non-final step of the algorithm consists of two phases. Namely, when processing the letter $v[i]$, we first add jumps to the multiset $J$ using the values of $\operatorname{lop}(x)$, $\operatorname{lop}(y)$, and $\operatorname{lop}(z)$ inherited from the previous step, and only after that we update one of these values.

$$\begin{array}{c|c|c|c|c|c}
i & v[i] & \operatorname{lop}(x) & \operatorname{lop}(y) & \operatorname{lop}(z) & \text{Jumps added to $J$}\\
\hline
1 & x & \raisebox{4pt}{0} & \raisebox{4pt}{0} & \raisebox{4pt}{0} & -\rule{0pt}{16pt} \\
2 & x & \raisebox{4pt}{1} & \raisebox{4pt}{0} & \raisebox{4pt}{0} & (x, \varnothing, x) \\
3 & x & \raisebox{4pt}{2} & \raisebox{4pt}{0} & \raisebox{4pt}{0} & (x, \varnothing, x) \\
4 & y & \raisebox{4pt}{3} & \raisebox{4pt}{0} & \raisebox{4pt}{0} & (x, \varnothing, y) \\
5 & x & \raisebox{4pt}{3} & \raisebox{4pt}{4} & \raisebox{4pt}{0} & (x, \{y\}, x),\, (y, \varnothing, x)\\
6 & y & \raisebox{4pt}{5} & \raisebox{4pt}{4} & \raisebox{4pt}{0} & (y, \{x\}, y),\, (x, \varnothing, y) \\
7 & z & \raisebox{4pt}{5} & \raisebox{4pt}{6} & \raisebox{4pt}{0} & (x, \{y\}, z),\, (y, \varnothing, z) \\
8 & z & \raisebox{4pt}{5} & \raisebox{4pt}{6} & \raisebox{4pt}{7} & (z, \varnothing, z) \\
9 & z & \raisebox{4pt}{5} & \raisebox{4pt}{6} & \raisebox{4pt}{8} & (z, \varnothing, z) \\
10 & z & \raisebox{4pt}{5} & \raisebox{4pt}{6} & \raisebox{4pt}{9} & (z, \varnothing, z) \\
11 & x & \raisebox{4pt}{5} & \raisebox{4pt}{6} & \raisebox{4pt}{10} & (x, \{y, z\}, x),\, (y, \{z\}, x),\, (z, \varnothing, x)\\
12 & y & \raisebox{4pt}{11} & \raisebox{4pt}{6} & \raisebox{4pt}{10} & (y, \{z, x\}, y),\, (z, \{x\}, y),\, (x, \varnothing, y)\\
13 & z & \raisebox{4pt}{11} & \raisebox{4pt}{12} & \raisebox{4pt}{10} & (z, \{x, y\}, z),\, (x, \{y\}, z),\, (y, \varnothing, z)
\end{array}$$

Conclusion {#sec:applications}
==========

Future work
-----------

Obviously, the next natural step in studying identities of Kauffman monoids is to characterize the identities of $\mathcal{K}_n$ for $n>3$. Recently, two of the present authors (Kitov and Volkov) have found a description of the identities of $\mathcal{K}_4$. It turns out that $\mathcal{K}_4$ satisfies precisely the same identities as $\mathcal{K}_3$, which came as a surprise. The proof of this result is quite involved and relies on a geometric representation of Kauffman monoids rather than their presentation via generators and relations; therefore, the proof will be the subject of a separate paper.

One can ask whether or not the coincidence of the identities of $\mathcal{K}_3$ and $\mathcal{K}_4$ extends further, say, to the identities of the monoid $\mathcal{K}_5$. The answer is negative: for instance, the identity $x^2yx\bumpeq xyx^2$, which holds in $\mathcal{K}_3$ (and hence, in $\mathcal{K}_4$) by Theorem \[thm:description\], does not hold in $\mathcal{K}_5$, as the next proposition shows.

\[prop:K5\] If a homomorphism $\varphi\colon X^*\to\mathcal{K}_5$ extends the map $$\begin{cases}x\mapsto h_1h_2h_3\\ y\mapsto h_4\end{cases},$$ then $(x^2yx)\varphi\ne(xyx^2)\varphi$.
First observe that $$\begin{aligned} (h_1h_2h_3)^2=h_1h_2h_3h_1h_2h_3&=h_1h_2h_1h_3h_2h_3 &&\text{by~\eqref{eq:TL1}}\\ &=h_1h_3 &&\text{by~\eqref{eq:TL2}}.\end{aligned}$$ Therefore, $$\begin{aligned} (x^2yx)\varphi=(h_1h_2h_3)^2h_4h_1h_2h_3&=h_1h_3h_4h_1h_2h_3 &&\text{as $(h_1h_2h_3)^2=h_1h_3$}\\ &=h_1^2h_3h_2h_4h_3&&\text{by~\eqref{eq:TL1}}\\ &=ch_1h_3h_2h_4h_3&&\text{by~\eqref{eq:TL4}},\end{aligned}$$ while $$\begin{aligned} (xyx^2)\varphi=h_1h_2h_3h_4(h_1h_2h_3)^2&=h_1h_2h_3h_4h_1h_3 &&\text{as $(h_1h_2h_3)^2=h_1h_3$}\\ &=h_1h_2h_1h_3h_4h_3&&\text{by~\eqref{eq:TL1}}\\ &=h_1h_3&&\text{by~\eqref{eq:TL2}}.\end{aligned}$$ Since the words $ch_1h_3h_2h_4h_3=ch_{[1]}h_{[3,2]}h_{[4,3]}$ and $h_1h_3=h_{[1]}h_{[3]}$ are in Jones’s normal form, Lemma \[lem:jones\] implies that they represent different elements of $\mathcal{K}_5$. At the moment, we possess no characterization of the identities of the monoid $\mathcal{K}_n$ for any $n>4$. Clustering phenomenon --------------------- Here we discuss an unexpected phenomenon revealed by the studies of identities of ‘interesting’ semigroups: it turns out that semigroups coming from different parts of mathematics and having seemingly different nature tend to cluster with respect to their identities. For instance, comparing Theorem \[thm:description\] with the results by @SV17, we observe that the Kauffman monoid $\mathcal{K}_3$ shares the identities with the monoid $\mathcal{S}_2^1$, where $\mathcal{S}_2=\langle e,f \mid e^2=e,\ f^2=f\rangle$ is the free product of two trivial semigroups. Recall that in the proof of the ‘only if’ part of Theorem \[thm:description\], we exhibited an embedding of the semigroup $\mathcal{S}_2$ into $\mathcal{K}_3$. Clearly, this embedding extends to an isomorphism between the monoid $\mathcal{S}_2^1$ and a certain submonoid of the monoid $\mathcal{K}_3$, and therefore, every identity of the latter holds in $\mathcal{S}_2^1$. However, the fact that every identity of the submonoid isomorphic to $\mathcal{S}_2^1$ must hold in the whole monoid $\mathcal{K}_3$ appears to be somewhat amazing. Comparing Theorem \[thm:description\] with the results by @KR79, one can also observe that the monoid $\mathcal{K}_3$ satisfies the same identities as the least monoid containing the semigroup of adjacency patterns of words that was introduced and studied in [@KR79]. Yet another interesting example has been found by @DJK18: the bicyclic monoid $\mathcal{B}:=\langle a,b\mid ba=1\rangle$ shares the identities with the monoid $UT_2(\mathbb{T})$ of all upper triangular $2\times 2$-matrices over the tropical semiring[^4]. Similarly to the situation discussed in the preceding paragraph, $\mathcal{B}$ can be shown to be isomorphic to a submonoid of $UT_2(\mathbb{T})$, cf. [@IM10 Corollary 4.2], whence every identity of the latter holds in $\mathcal{B}$. Again, it was unexpected that every identity of the submonoid isomorphic to $\mathcal{B}$ extends to the whole monoid $UT_2(\mathbb{T})$. @Sh15 has provided a family of further interesting examples of semigroups which satisfy the same identities as the bicyclic monoid. We mention in passing that the same clustering phenomenon occurs in the realm of finite monoids. For quite a representative example, the reader may compare the results of the papers [@AVZ15; @JF; @Vo04]. Each of these papers studies identities of certain finite monoids that belong to several natural series parameterized by positive integers: Straubing monoids, Catalan monoids, Kiselman monoids, gossip monoids, etc. 
These monoids (whose definitions we do not reproduce here) arise in the literature due to completely unrelated reasons and consist of elements of very different nature. Nevertheless, it turns out that the $n$-th monoids in each of the series satisfy the same identities! **Acknowledgements.** Yuzhu Chen, Xun Hu, Yanfeng Luo have been partially supported by the Natural Science Foundation of China (projects no. 10971086, 11371177). M. V. Volkov acknowledges support from the Ministry of Education and Science of the Russian Federation, project no. 1.3253.2017, the Competitiveness Program of Ural Federal University, and from the Russian Foundation for Basic Research, project no. 17-01-00551. Almeida, J., Volkov, M.V., and Goldberg, S.V. \[2009\]: *Complexity of the identity checking problem for finite semigroups*, J. Math. Sci. **158**(5), 605–614. Ashikhmin, D.N., Volkov, M.V., and Zhang, Wen Ting \[2015\]: *The finite basis problem for Kiselman monoids*, Demonstratio Mathematica **48**(4), 475–492. Auinger, K. \[2012\]: *Krohn–Rhodes complexity of Brauer type semigroups*, Port. Math. **69**(4), 341–360. Auinger, K. \[2014\]: *Pseudovarieties generated by Brauer-type monoids*, Forum Mathematicum **26**, 1–24. Auinger, K., Chen, Yuzhu, Hu, Xun, Luo, Yanfeng, and Volkov, M.V. \[2015\]: *The finite basis problem for Kauffman monoids*, Algebra Universalis **74**(3–4), 333–350. Auinger, K., Dolinka, I., and Volkov, M.V. \[2012\]: *Equational theories of semigroups with involution*, J. Algebra **369**, 203–225. Brauer, R. \[1937\] *On algebras which are connected with the semisimple continuous groups*, Ann. Math. **38**, 857–872. Bokut’, L.A., and Lee, D.V. \[2005\]: *A Gröbner–Shirshov basis for the Temperley–Lieb–Kauffman monoid*, Izvestija Ural’skogo Gosudarstvennogo Universiteta **36**, 47–66 \[Russian\]. Borisavljević, M., Došen, K., and Petrić, Z. \[2002\]: *Kauffman monoids*, J. Knot Theory Ramifications **11**, 127–143. Clifford, A.H., and Preston, G.B. \[1961\]: , Amer. Math. Soc., Providence, R.I. Daviaud, L., Johnson, M., and Kambites, M. \[2018\]: *Identities in upper triangular tropical matrix semigroups and the bicyclic monoid*, J. Algebra **501**, 503–525. Dolinka, I., and East, J. \[2017\]: *The idempotent-generated subsemigroup of the Kauffman monoid*, Glasg. Math. J., **59**(3), 673–683. Dolinka, I., and East, J. \[2018\]: *Twisted Brauer monoids*, Proc. Royal Soc. Edinburgh, Ser. A **148A**, 731–750. Dolinka, I., East, J., Evangelou, A., FitzGerald, D., Ham, N., Hyde, J., and Loughlin, N. \[2015\]: *Enumeration of idempotents in diagram semigroups and algebras*, J. Comb. Theory, Ser. A **131**, 119–152. Dolinka, I., East, J., and Gray, R. \[2017\]: *Motzkin monoids and partial Brauer monoids*, J. Algebra **471**, 251–298. East, J. \[2011a\]: *Generators and relations for partition monoids and algebras*, J. Algebra **339**, 1–26. East, J. \[2011b\]: *On the singular part of the partition monoid*, Internat. J. Algebra Comput. **21**(1-2), 147–178. East, J. \[2014a\]: *Partition monoids and embeddings in 2-generator regular $*$-semigroups*, Period. Math. Hungar. **69**(2), 211–221. East, J. \[2014b\]: *Infinite partition monoids*, Internat. J. Algebra Comput. **24**(4), 429–460. East, J., and FitzGerald D.G. \[2012\]: *The semigroup generated by the idempotents of a partition monoid*, J. Algebra **372**, 108–133. East, J., and Gray, R. \[2017\]: *Diagram monoids and Graham–Houghton graphs: Idempotents and generating sets of ideals*, J. Comb. Theory, Ser. A **146**, 63–128. 
East, J., Mitchell, J.D., Ruškuc, N., and Torpey, M. \[2018\]: *Congruence lattices of finite diagram monoids*, Adv. Math. **333**, 931–1003.

FitzGerald, D.G., and Lau, K.W. \[2011\]: *On the partition monoid and some related semigroups*, Bull. Aust. Math. Soc. **83**(2), 273–288.

Horváth, G., Lawrence, J., Mérai, L., and Szabó, Cs. \[2007\]: *The complexity of the equivalence problem for nonsolvable groups*, Bull. London Math. Soc. **39**(3), 433–438.

Izhakian, Z., and Margolis, S. \[2010\]: *Semigroup identities in the monoid of two-by-two tropical matrices*, Semigroup Forum **80**(2), 191–218.

Jackson, M., and McKenzie, R. \[2006\]: *Interpreting graph colorability in finite semigroups*, Internat. J. Algebra Comput. **16**(1), 119–140.

Johnson, M., and Fenner, P. \[20??\]: *Identities in unitriangular and gossip monoids*, Semigroup Forum (to appear).

Jones, V.F.R. \[1983\]: *Index for subfactors*, Inventiones Mathematicae **72**, 1–25.

Kauffman, L. \[1990\]: *An invariant of regular isotopy*, Trans. Amer. Math. Soc. **318**, 417–471.

Kim, K.H., and Roush, F. \[1979\]: *The semigroup of adjacency patterns of words*, in: Colloq. Math. Soc. János Bolyai **20**, North Holland, Amsterdam, 281–297.

Kisielewicz, A. \[2004\]: *Complexity of semigroup identity checking*, Internat. J. Algebra Comput. **14**(4), 455–464.

Klíma, O. \[2009\]: *Complexity issues of checking identities in finite monoids*, Semigroup Forum **79**(3), 435–444.

Klíma, O. \[2012\]: *Identity checking problem for transformation monoids*, Semigroup Forum **84**(3), 487–498.

Kudryavtseva, G., Maltcev, V., and Mazorchuk, V. \[2006\]: *$\mathcal{L}$- and $\mathcal{R}$-cross-sections in the Brauer semigroup*, Semigroup Forum **72**(2), 223–248.

Kudryavtseva, G., and Mazorchuk, V. \[2006\]: *On presentations of Brauer-type monoids*, Cent. Eur. J. Math. **4**(3), 413–434.

Kudryavtseva, G., and Mazorchuk, V. \[2007\]: *On conjugation in some transformation and Brauer-type semigroups*, Publ. Math. Debrecen **70**(1-2), 19–43.

Lau, K.W., and FitzGerald, D.G. \[2006\]: *Ideal structure of the Kauffman and related monoids*, Comm. Algebra **34**, 2617–2629.

Maltcev, V., and Mazorchuk, V. \[2007\]: *Presentation of the singular part of the Brauer monoid*, Math. Bohem. **132**(3), 297–323.

Mazorchuk, V. \[1998\]: *On the structure of Brauer semigroup and its partial analogue*, Problems in Algebra **13**, 29–45.

Mazorchuk, V. \[2002\]: *Endomorphisms of $\mathfrak{B}_n$, $P\mathfrak{B}_n$, and $\mathfrak{C}_n$*, Comm. Algebra **30**(7), 3489–3513.

Murskiǐ, V.L. \[1968\]: *Examples of varieties of semigroups*, Math. Notes **3**(6), 423–427.

Plescheva, S.V., and Vértesi, V. \[2006\]: *Complexity of the identity checking problem in a $0$-simple semigroup*, Izvestija Ural’skogo Gosudarstvennogo Universiteta **43**, 72–102 \[Russian\].

Seif, S. \[2005\]: *The Perkins semigroup has co-NP-complete term-equivalence problem*, Internat. J. Algebra Comput. **15**(2), 317–326.

Seif, S., and Szabó, Cs. \[2006\]: *Computational complexity of checking identities in $0$-simple semigroups and matrix semigroups over finite fields*, Semigroup Forum **72**(2), 207–222.

Shneerson, L.M. \[2015\]: *On growth, identities and free subsemigroups for inverse semigroups of deficiency one*, Internat. J. Algebra Comput. **25**(1-2), 233–258.

Shneerson, L.M., and Volkov, M.V. \[2017\]: *The identities of the free product of two trivial semigroups*, Semigroup Forum **95**(1), 245–250.

Steinruecken, Ch. \[2015\]: *Compressing sets and multisets of sequences*, IEEE Trans. Information Theory **61**(3), 1485–1490.
Temperley, H.N.V., and Lieb, E.H. \[1971\]: *Relations between the ‘percolation’ and ‘colouring’ problem and other graph-theoretical problems associated with regular planar lattices: Some exact results for the percolation problem*, Proc. Roy. Soc. London, Ser. A **322**, 251–280.

Volkov, M.V. \[2004\]: *Reflexive relations, extensive transformations and piecewise testable languages of a given height*, Internat. J. Algebra Comput. **14**(5-6), 817–827.

[^1]: The name comes from [@BDP02]; in the literature one also encounters the name *Temperley–Lieb–Kauffman monoids* [see, e.g., @BL05]. Kauffman himself used the term *connection monoids*.

[^2]: Here and throughout expressions like $A:=B$ emphasize that $A$ is defined to be $B$.

[^3]: Also called the ‘*term equivalence problem*’ in the literature.

[^4]: Recall that the tropical semiring $\mathbb{T}$ is formed by the real numbers augmented with the symbol $-\infty$ under the operations $a\oplus b:=\max\{a,b\}$ and $a\otimes b:=a+b$, for which $-\infty$ plays the role of zero: $a\oplus-\infty=-\infty\oplus a=a$ and $a\otimes-\infty=-\infty\otimes a=-\infty$. A square matrix over $\mathbb{T}$ is said to be upper triangular if its entries below the main diagonal are all $-\infty$.